Roger Clarke's 'Easier Security'

The Prospects of Easier Security for SMEs and Consumers

PrePrint of 31 December 2014

Published in Computer Law & Security Review 31, 4 (August 2015) 538-552

2nd presentation at a seminar for the Cyberspace Security and Privacy Laboratory (CySPri)
of the UNSW School of Computer Science and Engineering,
Sydney, 14 August 2014

First presentation at a Cyber Security Law Seminar,
Intercontinental Hotel, Sydney, 13 November 2013

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2013-14

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/EC/SSACS.html


Abstract

Safeguards exist that provide at least a reasonable degree of protection for IT security. With so much accumulated knowledge in existence, why are small organisations and consumers hung out to dry? Why aren't consumer devices delivered with convenient security facilities? This paper identifies the reasons why neither users nor technology providers take responsibility for security. It presents a framework within which simple baseline security can be established for small organisations, and consumers can also achieve lower levels of insecurity than currently prevail. It then investigates the scope for interventions to achieve 'easy security' for small organisations and individuals.




A Scene We'd Like to Be Seen

Nell picked up her new handheld, excited at the prospect of telling her friends about its new features. An avatar appeared on her screen, and introduced itself as Segard. Segard chatted with Nell about the main uses she wanted to make of her handheld, and what address-book she wanted to be loaded onto the device. Segard offered to set a number of defaults on the device that would balance convenience and security about right for Nell. Segard outlined how Nell could change those settings later, and how she could override them.

For Nell, Segard needed to take account of a couple of sensitivities about personal data, particularly health data, and who was to have access to her current location. Nell also wanted to not only keep apart her family and social networks, but also to segregate two incompatible groups of friends. The interactions were just interesting enough that Nell's patience hadn't quite run out before Segard completed the configuration process and relinquished control of the device.

[With thanks to Neal for the loan of one of his characters in 'The Diamond Age' (Stephenson 1995).]


1. Introduction

Large organisations should be capable of undertaking a rational approach to the security of their data and of their information technology (IT) artefacts. In Australia, for example, there are about 6,000 large business enterprises (LBEs) and 6,000 government agencies that are subject to legal requirements in relation to risk management and that are subject to frequent cyber-attacks. In addition, perhaps 25,000 medium-sized business enterprises (MBEs), 50,000 small-to-medium enterprises (SMEs), and even some micro-enterprises (uEs), have adequate security expertise and reasonable safeguards in place.

Many other organisations, however, despite having considerable dependence on information and IT, have at best a hazy understanding of IT security. In Australia, these number perhaps 50,000 MBEs, 700,000 SMEs, and 250,000 uEs, or about 1 million organisations. Comparable figures for the USA and the EU are 15-20 times those for Australia.

Meanwhile, many millions of individuals use IT artefacts. Particularly since the explosion in smartphone usage following the launch of the iPhone in 2007, and in tablet adoption following the launch of the iPad in mid-2010, a lot of people operate multiple IT devices, are greatly attached to them for social purposes, conduct transactions on them that have financial implications, and generate, store and disseminate a considerable amount of data, some of it sensitive.

At any given time, some 2-5% of the population are 'persons-at-risk', whose physical safety is dependent on their location not being apparent to one or more other individuals or organisations that bear a serious grudge against them (Clarke 2001b, UKICO 2009 p.19, GFW 2011). A proportion of these individuals are aware of security risks and take at least some steps to address them. But the large majority of individuals, even of those at risk, are ill-informed, ill-prepared, and exposed.

Individuals' devices now also play a major role within corporations and government agencies, because, as employees and contractors, many individuals use their devices in their workplaces, subject to more or less official Bring Your Own Device (BYOD) arrangements. This has the effect of extending the scope of each organisation's security risks well beyond its own devices to encompass those of its staff-members.

This paper investigates the various ways in which the serious shortfalls in IT security might be overcome. It commences by identifying the reasons why individuals and small organisations fail to protect their own interests. It then outlines the shape that a suitable IT security framework might take, and provides specific proposals relating firstly to a baseline level of security for small organisations, and secondly to three levels of security profile for consumers. The later sections consider the prospects of IT providers addressing the problem, and identify alternative interventions to achieve security outcomes that are effective, efficient, and Nell-friendly.


2. Individual Responsibility for Security

In mature societies, self-protection is an element of functional literacy. People know to take precautions relating to the value of their home, the contents of their home, and their car. Small organisations also understand that it is their responsibility to look after their assets, and that without safeguards they will lose a lot of money. There is also widespread understanding that, to share some kinds of risk around, insurance is needed. The personal and business processes imposed by the insurance industry have the effect of reinforcing the message that property security matters.

Over many decades, the workings of the insurance industry have been adapted to deal with some of the blind-spots in individual self-responsibility. A great many individuals and even many small organisations fail to appreciate that a range of contingent liabilities exists in relation to harm to other people and their property, and that these liabilities may be sufficiently large to lead to bankruptcy. To cope with this, parliaments commonly make third-party personal cover obligatory for car-owners, and public liability cover is incorporated within home and contents insurance.

Whereas security safeguards for the home, its contents and cars are common, the same cannot be said in relation to data and IT artefacts. Many organisations and some individuals have sufficient assets, and are subject to sufficient threats, that considerable care is warranted. An organisation and its directors, if they take no precautions, are readily argued to have failed to fulfil their legal responsibilities. Nonetheless, some organisations and many individuals assume that the risks that they face are sufficiently limited and/or unlikely that they may take quite limited precautions. Meanwhile, some organisations and most individuals simply do not, and will not, even think about security.

IT has always been a mystery to most consumers and indeed to many small organisations. The core of each device is microscopic, and the working parts are complex, intangible and ephemeral. The technologies involved are difficult for most people to even conceive, and few grasp them in sufficient depth to enable them to conduct risk assessment and to design and implement risk management plans. Hence, even in the current, fourth decade of 'personal computing', IT remains mysterious. Moreover, the IT mystery is deepening still further, as general-purpose computing devices are swamped by limited-function appliances, and data and processing disappear into the cloud (Clarke 2011). This results in a lack of awareness among consumers about the security risks that arise from their use of IT. Small organisations also lack expertise in relation to the hardware, networks, systems software and applications software that they use, and even in relation to the associated personal and business processes.

Further barriers exist. Many users have strong tendencies towards hedonism and away from considered, reflective and responsible attitudes towards the use of their devices. Security features intrude into users' enjoyment of their devices, because they require a considerable degree of understanding and concentration in order to approve the installation of software, changes to terms of service, and changes to settings, and the explanations provided are commonly incomprehensible to most users. In addition, it is entirely rational for users to place a high value on convenience - because they experience it continually - and a very low value on security - because they experience the impacts of insecurity only occasionally and are largely unaware of the security incidents that affect their devices, their transactions and their communications.

Given that safeguards involve certain costs, but unseen and uncertain benefits, it is unsurprising that individuals and small organisations under-spend on security. For individual responsibility to become a significant factor in addressing the problem of inadequate IT security, a large number of conditions would need to be fulfilled. IT would need to be more transparent. There would need to be widespread awareness, education and training. Enough individuals and small organisations would need to incur liabilities, such that the public generally would come to appreciate the need for self-protection. And IT security safeguards would need to become much more transparent, would have to be inexpensive, and would have to be easy to understand, install, configure, use and update.

This constellation of conditions is unlikely to be satisfied even in the long term, so solutions to the problem must be sought elsewhere. The following section provides a basis for the remainder of the paper, by outlining the shape of a future world in which reasonable levels of IT security are achieved.


3. The Shape of User-Friendly Security Solutions

This paper adopts the conventional security model, using the terminology explained in Appendix 1. Under this model, security cannot be an absolute, because there are too many threats and vulnerabilities, and ensuring protection against them costs resources. A working definition of the objective is:

To avoid, prevent or minimise harm arising from environmental incidents, accidents and attacks, avoiding harm where practicable, and coping with harm when it arises, by balancing the reasonably predictable financial costs and other disbenefits of safeguards against the less predictable and contingent financial costs and other disbenefits arising from security incidents
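
By way of illustration only (the paper does not prescribe a formula), this balancing can be expressed in conventional annualised-loss terms: a safeguard is worthwhile when the expected loss it avoids exceeds its cost. The following minimal sketch uses wholly hypothetical figures.

```python
# Illustrative only: conventional annualised-loss arithmetic, not a formula
# taken from this paper. All figures are hypothetical.

def expected_annual_loss(incidents_per_year, loss_per_incident):
    """Annualised loss expectancy: frequency x impact."""
    return incidents_per_year * loss_per_incident

loss_without = expected_annual_loss(0.5, 40_000)   # no safeguard in place
loss_with = expected_annual_loss(0.1, 40_000)      # safeguard in place
safeguard_cost_per_year = 5_000

net_benefit = (loss_without - loss_with) - safeguard_cost_per_year
print(f"Expected loss avoided: ${loss_without - loss_with:,.0f} per year")
print(f"Net benefit of the safeguard: ${net_benefit:,.0f} per year")
# A safeguard is economically justified only when the avoided loss exceeds
# its cost -- the 'balancing' referred to in the definition above.
```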

This section first considers how the challenges facing small organisations can be addressed, and then switches the attention to individuals.

3.1 Simple Baseline Security for Small Organisations

This section considers approaches whereby the objective declared immediately above can be achieved by SMEs that have limited IT understanding and expertise. In the remainder of this paper, the term 'small organisations' is used, in order to encompass both for-profit enterprises and not-for-profit associations.

Some security safeguards are so fundamental that any organisation is at serious risk of being found negligent if it does not implement them. An effective means of causing small organisations to take action is to define a minimum set of measures, and create a requirement that they be deployed. The term 'baseline' is commonly used to refer to such a minimum set of measures.

Guidance is available in a variety of computer science text-books, in International Standard ISO 27002:2013, in the payments industry standard for organisations that handle credit cards (PCI-DSS 2013) and in publications by government agencies. In Australia, for example, the primary resources are the Defence/Australian Signals Directorate (DSD/ASD)'s Information Security Manual (ISM 2013), DSD/ASD's Mitigation Strategies (ASD 2013), DBCDE (2013a) and OAIC (2013, pp. 15-23). Generally, these documents are substantial, terse, and addressed to organisations that have staff with professional security expertise. The challenge is to scale the requirements to the realities of small organisations, and to present them in a form understandable by their non-specialist staff. A rare instance of valuable guidance from a professional association is RACGP (2013).

In the USA, FIPS Publication 200 stipulates 'Minimum Security Requirements for Federal Information and Information Systems' (NIST 2006). This requires that executive agencies implement "minimum security requirements" in "seventeen security-related areas" - effectively a set of baseline security safeguards defined in a somewhat abstract manner - with detailed "guidelines for selecting and specifying security controls for information systems" provided in SP 800-53 (NIST 2013). An alternative set of 'critical security controls' is defined in (SANS 2014).

In the EU, the Cybersecurity Directive passed in March 2014 requires each EU country to empower and resource a government agency and to develop a national cybersecurity strategy (EU 2014). This will take several years to be implemented, and falls a long way short of imposing specific security requirements. Although the Directive includes a statement that "Minimum security requirements should also apply to at least certain market operators of [critical] information infrastructure" (Recital 4), the document fails to provide any indication of what those requirements might be. A guidance document published by the European Union Agency for Network and Information Security (ENISA 2014) is addressed only to "undertakings providing public communications networks", and it offers a checklist of less specificity than those of NIST (2013) and ISM (2013), and remains too vague to constitute a baseline.

The relevant UK agency, the CESG division of Government Communications Headquarters (GCHQ), appears not to have published any useful guidance. Meanwhile, a government survey confirmed that, in the cybersecurity arena as in so many others, there was a plethora of standards to choose from (DBIS 2013). A cut-down version of risk assessment guidelines exists, in the form of IASME (2013), but this is not specific about security safeguards. In Germany, a catalogue of safeguards is available, comparable with the US NIST and Australian ASD publications (BSI 2013), but no baseline is defined.

These documents provide a wealth of information for researchers, and for security specialists within large organisations; but they are too voluminous and too technically detailed to be of any use to small organisations. As part of this project, a list of safeguards was prepared and is proposed in Table 1 as a working draft of a baseline set of safeguards. The selection incorporates items that are widely regarded in the IT industry as being fundamental, and reflects priorities indicated by commercial and governmental sources, e.g. it incorporates several of the requirements in ISM (2013) and NIST (2006, pp. 2-4), a number of the top 20 security controls (SANS 2014), and three of the top four cyber-intrusion mitigation strategies recommended by ASD (2013). Because the primary sources are oriented towards large organisations, care was needed to lower the expectation levels and produce a practicable sub-set that achieves reasonable levels of assurance.

Table 1:
An Absolute-Minimum Set of Information Security Safeguards

After Clarke (2013b)

  1. PHYSICAL SAFEGUARDS for all processing, storage and access devices
  2. ACCESS CONTROL, including:
  3. MALWARE DETECTION AND ERADICATION
    [Malware is used here as a comprehensive generic term,
    encompassing viruses, worms, spyware, bots, rootkits, etc. (Clarke 2010)]
  4. PATCHING PROCEDURES, to ensure the frequent application of all security-relevant updates and patches to all systems software and application software
  5. FIREWALLS, in order to limit the scope for unauthorised individuals to gain access to and control over devices within the organisation
  6. INCIDENT MANAGEMENT PROCESSES
  7. LOGGING
  8. BACKUP AND RECOVERY
  9. TRAINING, including:
  10. RESPONSIBILITY for the security of information and IT infrastructure allocated to a sufficiently senior staff-member, who has the authority and the resources to fulfil that responsibility

Some of the safeguards in Table 1, in particular items 2-7, require that IT products and services be delivered with particular technical functionality installed, configured, and default-on. All involve (reasonably straightforward) organisational policies plus business processes, which explain how to use the technical features to achieve a baseline level of protection against many of the most common accidents and exploitations of vulnerabilities. Item 9 also requires the provision of access to basic training materials.
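
As a minimal illustration of how such a baseline could be operationalised, the following sketch represents the Table 1 items as an auditable checklist. The safeguard names and the distinction between technically delivered items (2-7) and organisational items follow the description above; the data structure and status values are illustrative assumptions, not part of the paper.

```python
# A sketch of the Table 1 baseline expressed as a checklist that a small
# organisation (or its IT supplier) could audit against.

from dataclasses import dataclass

@dataclass
class Safeguard:
    name: str
    technical: bool        # requires product/service functionality (items 2-7)
    implemented: bool = False

BASELINE = [
    Safeguard("Physical safeguards", technical=False),
    Safeguard("Access control", technical=True),
    Safeguard("Malware detection and eradication", technical=True),
    Safeguard("Patching procedures", technical=True),
    Safeguard("Firewalls", technical=True),
    Safeguard("Incident management processes", technical=True),
    Safeguard("Logging", technical=True),
    Safeguard("Backup and recovery", technical=False),
    Safeguard("Training", technical=False),
    Safeguard("Allocated responsibility", technical=False),
]

def gaps(checklist):
    """Return the baseline items not yet in place."""
    return [s.name for s in checklist if not s.implemented]

print("Outstanding baseline safeguards:", gaps(BASELINE))
```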

The flavour of the particular set of safeguards in Table 1 reflects its author's long background in the use of desktop and portable devices. Organisational norms now reflect the 'untethered' nature of wireless communications. An alternative approach that is more consistent with the era of smartphones and tablets can be developed from an examination of BYOD policies, and Mobile Device Management / Mobile Application Management (MDM/MAM) tools. The features implemented by leading, IT-savvy organisations could be studied, in order to check and improve the draft baseline feature-set.

Another approach that could be adopted is to undertake generic risk assessment, establish generic risk management strategies, and apply them directly to small organisations. However, given the diversity of circumstances, some segmentation of the market for security guidance would be essential. A possible framework for such a study is provided by Clarke (2008), which addressed the context of mobile payments. This catalogued threats and vulnerabilities relevant in the context of mobile payments, and analysed risk within the following framework:

The analysis presented in this section demonstrates the feasibility of establishing a framework within which baseline security can be specified, and can be imposed on, and implemented by, small organisations.

3.2 A Comprehensive Security Approach for Consumers

There are considerably greater difficulties in achieving a result for individuals similar to that proposed above for small organisations. Consumer devices are massively insecure. The situation was summarised in Clarke & Maurushat (2007) as follows:

The subsequent explosion in smartphones and tablets has given rise to widespread use of 'apps' for functions previously performed using browsers, and hence the expression requires adaptation. The vulnerabilities, on the other hand, are little-reduced and in some cases they have been exacerbated.

As noted in Clarke & Maurushat (2007), "safeguards are available that address some of the threats and vulnerabilities. However, these safeguards:

"In order to take advantage of each particular safeguard, the user must do the following:

"Worse still, after the consumer has gone to all of that trouble, the safeguards are of limited effectiveness, because:

The proposals for IT-ignorant organisations in the preceding section are in principle relevant to consumer contexts, but the solutions need to be scaled, and the deployment process considerably simplified. In addition to the references cited in earlier sections, a modest literature exists written in terms intended to be accessible to consumers, e.g. DBCDE (2013b), Kissell (2014), EPIC (2014), Zhong (2014).

Consumers generally prioritise pleasure and convenience well above security. For safeguards to be adopted - or, if they are default-on, then left in place - attention needs to be paid to usability. A considerable 'usable security' literature exists, e.g. Nielsen (1993), Whitten & Tygar (1999), Garfinkel & Miller (2005), Clark et al. (2007), Camp (2013). A summary is in Clarke (2014a). The principles enunciated in that literature need to be applied.

The sheer number of security-relevant features that require configuration is a particularly significant barrier. It is untenable for consumers to have to understand a large number of concepts, and to manually select a large number of settings. The difficulties are compounded by consumer-hostile designs, such as settings unnecessarily scattered across many locations and levels, and by frequent changes by providers to settings, and to the implications of settings. It is therefore necessary for devices to be configured automatically, and for the number and complexity of user-interactions to be minimised. An appropriate approach to achieve that end is to define a small set of Security Profiles, comprising software installations, configurations and settings. Three such profiles are outlined in Table 2.

Table 2: A Minimalist Set of Security Profiles
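
By way of illustration, a minimal sketch of how three such Profiles might be represented follows. The tier names (Low Security / High Convenience, and so on) and the safeguard categories follow Appendices 2-4; the individual settings shown are illustrative assumptions only, not the paper's specification.

```python
# A sketch of a three-tier Security Profile structure.

from enum import Enum

class Profile(Enum):
    LOW_SECURITY_HIGH_CONVENIENCE = 1
    MEDIUM_SECURITY_MEDIUM_CONVENIENCE = 2
    HIGH_SECURITY_LOW_CONVENIENCE = 3

# Each profile bundles settings under the categories used in Appendices 2-4.
# Higher tiers add safeguards to (rather than replace) the tier below.
PROFILE_SETTINGS = {
    Profile.LOW_SECURITY_HIGH_CONVENIENCE: {
        "user_accounts":     {"device_passcode": True},
        "internet_traffic":  {"firewall_default_on": True},
        "executables":       {"malware_scanning": True},
        "storage":           {"automatic_backup": True},
        "settings_controls": {"auto_update": True},
    },
    Profile.MEDIUM_SECURITY_MEDIUM_CONVENIENCE: {
        "executables":       {"install_from_trusted_sources_only": True},
        "storage":           {"storage_encryption": True},
    },
    Profile.HIGH_SECURITY_LOW_CONVENIENCE: {
        "internet_traffic":  {"vpn_required": True},
        "security_assurance": {"periodic_audit": True},
    },
}
```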

However, the notion of a set of Security Profiles is insufficient by itself. It needs to be complemented by four further features. Firstly, the choice of a particular Profile must be associated not with a device, nor even with a user-account, but rather with a combination of user-account and particular categories of use. Hence, for example, Nell needs to have low-level security configured for casual social media, but mid-level security for financial transactions.

The second feature needed is support for multiple identities, each of which can have different Profiles defined for particular categories of sensitive transactions and communications. Examples of circumstances in which individuals want to have distinct identities include networked games, discussion fora for games, personal social media, enterprise social media, and transactions with associations, corporations and government agencies.

The third of the four complementary features is a comprehensive security configuration management regime. The consumer needs the ability to establish an initial configuration when they acquire a new device. Structured menus of settings, supported by defaults and explanations, will be suitable for a proportion of users, and demanded by some. For most, however, a 'user-friendly wizard', as depicted in the introductory vignette, would be essential. In addition, the user's profile needs to be sustained when updates occur, and the user's adjusted profile needs to be mirrored, and recovered when needed.

Finally, it would be impracticable to fully lock down a consumer device, because consumers would treat the security features as foe rather than as friend, and find ways to subvert them. Hence many settings need to be able to be overridden, both generally and, in particular, for specific transactions or communications.
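
A minimal sketch of how these four features might fit together follows: Profiles are bound to combinations of identity and usage category rather than to the device, and can be overridden for specific transactions. The identifiers and categories shown are illustrative assumptions.

```python
# A sketch of profile assignment per (identity, usage category), with
# per-transaction overrides. All names and categories are illustrative.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SecurityConfig:
    # Maps (identity, usage_category) -> profile name, e.g. "LOW", "MEDIUM", "HIGH"
    assignments: dict = field(default_factory=dict)

    def assign(self, identity: str, category: str, profile: str) -> None:
        self.assignments[(identity, category)] = profile

    def profile_for(self, identity: str, category: str,
                    override: Optional[str] = None) -> str:
        # An explicit override (e.g. for one sensitive transaction) wins;
        # otherwise fall back to the configured assignment, then to a default.
        if override is not None:
            return override
        return self.assignments.get((identity, category), "MEDIUM")

# Nell's configuration, per the example in the text: low-level security for
# casual social media, mid-level security for financial transactions,
# using two distinct identities.
config = SecurityConfig()
config.assign("nell-social", "social_media", "LOW")
config.assign("nell-banking", "financial_transactions", "MEDIUM")

print(config.profile_for("nell-social", "social_media"))                   # LOW
print(config.profile_for("nell-social", "social_media", override="HIGH"))  # HIGH
```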

Declaring Profiles to be Low, Medium or High Security is meaningless without an indication of the safeguards that are to be associated with each of them. In order to offer a proposal, a comprehensive list of mainstream safeguards was first prepared, drawing in particular on ISM (2013, pp. vi-ix), but also reflecting NIST (2013, pp. D-2 to D-8) and SANS (2014).

The list of safeguards was then split into three groups corresponding to the three Profiles, as indicated in Table 3.

Table 3: Safeguards associated with the Minimalist Set of Security Profiles

The analysis in this section suggests that practicable baseline-security and enhanced-security are much more challenging in the consumer than in the organisational contexts, but that stratifying safeguards into Security Profiles offers a way forward.


4. Technology Provider Responsibility for Security

Individual responsibility cannot be relied upon, in respect of either people or small organisations. On the other hand, a practicable framework for user-friendly solutions is not out of reach. This section considers whether and how IT providers can deliver such capabilities.

Business organisations don't lightly take on a responsibility to help the infirm. Their reason for existing is to make money for their investors. Measures that they invest in must contribute and be seen to contribute to making money, or at least be perceived as 'a cost of being in the game'. If an argument is to be mounted for IT providers to make security easy for their customers, then it is incumbent on the proponent to explain why they should do so. Considerable challenges are involved (Anderson 2001), a summary of which is provided in Table 4. For these reasons, a technology provider's business case for investing in security needs to be strong.

Table 4: Challenges Confronting Technology Providers

Several implementation flavours of 'Enterprise Mobility' solutions are possible, each of which endeavours to balance security and useability for corporate users (Winterford 2013, pp. 8-13):

Could such Enterprise Mobility solutions mature into effective solutions for IT-savvy organisations, and then be productised for IT-amateur organisations, and gradually become available and even mainstream for consumers? Unfortunately, the history of computing suggests mixed experiences with dependence on such a 'trickle-down effect'. For example, although 'anti-virus software' is available, and has (slowly) expanded its scope to address additional forms of malware, even that sub-set of threats is not fully, conveniently and all-but-automatically addressed on all devices. After well over 30 years of desktop computing, and almost 30 years of portable/laptop computing, broader 'defensive computing' tools haven't arrived even in those segments, let alone in the still rapidly evolving smartphone and tablet environments.

Major challenges arise from the facts of ongoing technological developments, and highly competitive marketplaces, which together ensure that change is very rapid. Moreover, IT providers' business models are predicated on short product-lives of at most 2-3 years, and rapid re-cycling of customers. In this environment, the technical contexts for which solutions are devised frequently disappear soon after the solutions are deployed. If and when IT reaches a relatively stable plateau, a market-driven solution may emerge. On the other hand, security issues are already serious. It is untenable for millions of organisations and individuals to await that eventuality.

Can the market provide productised solutions that implement something like the features and services outlined in the preceding sections and the Appendices? And, if so, will they be affordable and accessible by the millions of small organisations and individuals that are the focus of this article? As a result of the challenges identified in Table 4, providers of IT products and services face considerable costs in achieving reasonable degrees of security and of security-friendliness. Moreover, it's far from clear that enough customers will recognise the value of using a security-conscious vendor. Without such an appreciation, they are unlikely to pay a sufficient margin to enable providers to recoup the investment that is needed to deliver safer computing environments.

A gap therefore exists between what's needed and what exists. Even conservative economists agree that such market failures need to be addressed through interventions of some kind, preferably of a stimulatory nature, but where necessary through regulatory measures. The following section considers the scope for targeted interventions to give rise to the delivery of user-friendly security solutions for small organisations and consumers.


5. Interventions to Address Market Failure

Market failure is evident, and a significant problem is not being addressed. The necessary conditions therefore exist to justify intervention. This section briefly assesses the several possible forms that intervention can take, commencing with stimulatory approaches and working through various ways in which requirements may be imposed on technology providers.

5.1 Stimulatory Measures

Some government agencies provide useful guidance for user organisations (e.g. DBCDE 2013a) and for consumers (e.g. DBCDE 2013b); but the context within which they are published creates no expectation that IT providers need take any responsibility for even facilitating, let alone automating, the vital security safeguards described in the documents.

A first approach available to all governments is to apply 'moral suasion', which conveys the prospect of firmer measures should industry fail to respond to the nation's needs. Separately, or in conjunction with moral suasion, joint government-industry programs can be developed, with government agencies making 'contributions in kind', e.g. by funding the State's educational institutions to run particular courses. Stronger 'sweeteners' may be offered, such as targeted project funding for research, industrial research and development, awareness, education and/or training.

Alternatively, governments may perceive the need to be of sufficient gravity that at least some aspects may be stimulated by means of subsidies to IT providers, or commissions to deliver specified features and services. While none of these approaches is by itself sufficient to achieve the objective declared earlier in this paper, a judiciously selected bundle of them may provide the catalyst needed to overcome industry and user inertia.

5.2 Industry 'Self-Regulation'

A further possibility for intervention to cope with market failure is recognition by industry as a whole, by one or more particular industry associations, or by one or more professional associations, that an initiative along the lines outlined in this paper is essential, followed by action to bring it into being.

To date, standardisation activities have been largely limited to 'process standards' of the 'quality seal-of-approval' variety, in particular the ISO 31000 series on generic risk management processes, the ISO 27000 series on IT Risk Management processes, scaled business processes such as IASME (2014), and to some extent NIST (2011, 2012). One possibility would be for Standards Associations to move beyond process aspects and specify technical requirements. ISO 27002:2013 is the closest that international standards have come to meeting this need, and it neither reflects the realities of small organisations nor stipulates a baseline. In any case, industry is generally not effective in engaging with other stakeholders as part of standards formation processes. If representatives of small organisations and of consumers are absent, or have limited influence, it is unlikely that requirements will be specified that would tend to intrude into providers' freedom to deliver insecure products.

Similarly, no signs of momentum towards facilities like those discussed in this paper are apparent in professional associations internationally, e.g. the Association for Computing Machinery (ACM), the Institute of Electrical and Electronics Engineers (IEEE), the Information Systems Security Association (ISSA) and the League of Professional System Administrators (LOPSA), nor in Australia (e.g. the Australian Computer Society (ACS), the Internet Society of Australia (ISOC-AU), the System Administrators Guild of Australia (SAGE-AU), and the Australian Information Security Association (AISA)).

5.3 'Co-Regulation'

A regulatory model that has been much-discussed, but seldom effectively implemented, is referred to as 'co-regulation' (Clarke 1999). It involves a 'light touch' legislative framework that creates the scope for enforceable Codes to be established. In practice, however, such schemes as exist have generally been developed by industry sectors rather than negotiated among all stakeholders. Consumers are seldom well-represented in the development of such Codes, due to the lack of funding for analysis, preparation of submissions, and participation in events. Moreover, even when they are present, they have limited market power to achieve their objectives in relation to the Codes' nature, structure and content.

A variety of regulatory and oversight agencies have the legal capacity to engage with industry and other stakeholders in order to negotiate effective, enforceable Codes. Unfortunately, the track-record of such agencies is very disappointing. One such oversight agency is the Office of the Australian Privacy Commissioner (OAPC) - which, during 2011-2014, was incorporated within the Office of the Australian Information Commissioner (OAIC) - and its activities in the information security area are indicative of the problems.

In 2001, as the private sector provisions of the Privacy Act (Cth) came into effect, the OAPC published a 'Guide to Information Security'. This was meant to assist organisations to comply with the security safeguards Principles within the country's data protection law. With exceptions in the health care sector, all organisations to which the Guide was addressed were large organisations in the sense used in this paper. The relevant Principles were Information Privacy Principle 4 (affecting the public sector 1988-2014) and National Privacy Principle 4 (affecting most of the private sector 2000-14). With effect from March 2014, these Principles were superseded by Australian Privacy Principle 11 (applying to both the public and private sectors).

In 2013-14, OAPC undertook a revision of the Guide. It appeared that the opportunity existed to reflect the substantial changes in technologies, threats and vulnerabilities that had occurred during the intervening decade. Submissions (APF 2012, Clarke 2013a, APF 2013, APF 2014) proposed that the revised edition of the Guide should:

Instead, the revised version contained only quite modest amendments to the 2001 document (OAIC 2013). The document remains highly vague, with many uses of 'appropriate' (34 occurrences) and 'reasonable' (74 occurrences). It provides no indication of mandatory requirements, but merely discusses some 'steps and strategies which may be reasonable to take'. It includes brief mentions of a number of specific measures (such as access control, firewalls and vulnerability scanning), but all are merely factors to consider. The opportunity for the Privacy Commissioner to have a material impact on the demonstrably low standards of data security in Australia was spurned.

Even where data protection oversight agencies have the capacity to approve industry Codes, the mechanism is not achieving the objective of ensuring that adequate IT security safeguards are implemented. In the Australian case, it appears highly unlikely that the additional powers that the Commissioner gained in March 2014 will make any difference, because there is little incentive for industry associations to initiate Codes, and the Commissioner has no track-record of forcing the issue with industry sectors. The notion of co-regulation appears far less promising than it once did, because its potential has never yet been realised.

5.4 Formal Law

Inadequate IT security has negative impacts in a range of contexts. In many jurisdictions, statutory obligations have been enacted relating to particular industry sectors, or particular categories of data. For example, specific requirements are commonly imposed on the financial services industry and on organisations responsible for 'critical infrastructure' (such as ports, airports, energy production and transmission, and telecommunications). Since 1970, most countries in the world have also legislated provisions in respect of data relating to an identifiable person. Reviews of security requirements are in Conradi (2007), Turle (2009), Dayarathna (2009) and Guerkaynak et al. (2014).

All data protection laws contain a security principle, and in some jurisdictions the regulatory or oversight agency has some capacity to force organisations to implement safeguards. In almost all cases, however, the Principle is expressed as a vague prescription (e.g., in the words applicable to the Australian federal public sector 1989-2014, "[there must be] security safeguards ... against loss, against unauthorised access, use, modification or disclosure, and against other misuse"), and the Principle is subject to a qualification of highly uncertain scope (e.g. "such security safeguards as it is reasonable in the circumstances to take"). The Australian oversight agency provides modest guidance, e.g. indicating that what is 'reasonable' depends on a variety of factors, including the amount and sensitivity of the personal information, the nature of the entity, the possible adverse consequences for an individual, cost, and whether a security measure is in itself privacy invasive. As indicated in the previous section, despite the gathering of a quarter-century of experience and the overwhelming evidence that organisational security practices are seriously deficient, oversight agencies continue to avoid articulating the vague principles into actionable advice.

Similar failure is evident within the EU. The 1995 EU Directive, at Article 17 (Security of processing), requires "appropriate technical and organizational measures to protect personal data against accidental or unlawful destruction or accidental loss, alteration, unauthorized disclosure or access ...", with the following guidance as to the meaning of "appropriate": "having regard to the state of the art and the cost of their implementation, such measures shall ensure a level of security appropriate to the risks represented by the processing and the nature of the data to be protected". However, neither the Article 29 Working Party (Art. 29 WP) of the European Commission nor the European Data Protection Supervisor has provided interpretations of the security safeguards principle that have the effect of even articulating the vague principle, let alone establishing a baseline set of measures.

An indicator of failure is the ongoing obsession in public discourse with 'data breach notification' laws. The movement began in California in 2003 and swept across virtually all US States. In a country that has steadfastly prioritised the freedom of corporations over the privacy rights of individuals, the intentions of these laws were to provide individuals with the opportunity to be informed and perhaps take steps to ameliorate personal harm arising from the breach, and to render transparent the poor performance of organisations in relation to the protection of personal data, and hence embarrass them into improving safeguards. Rather than being a form of data protection law, data breach notification was, and remains, an excuse to not enact data protection law, or to not enforce existing law. Burdon et al. (2012) argued that, particularly when teleported outside the US context, such laws are conceptually incoherent, limited in scope, and inconsistent with data protection laws, as well as being demonstrably ineffective.

A decade later, proposals for data breach notification laws appear even more vacuous than before, because it is abundantly clear both that organisations do not implement adequate security safeguards and that embarrassing them makes little difference. Even successions of instances in which breaches have cost corporations tens to hundreds of millions of dollars appear to have resulted in little improvement in corporate performance (Clarke 2014b). The accumulation of embarrassments has, however, spurred some parliaments into action. For example, since 2007 the UK Information Commissioner has had the capacity to levy moderate fines on organisations that "deliberately or recklessly" breach data protection law. It has done so: its web-site discloses that, during 2014, it levied Stg 870,000 in fines on 8 organisations. In many countries, however, meaningful sanctions for lax security are discussed rather than enacted, and in some, the regulatory agency fails to exercise the powers available to it. Given the laxness of the regulatory frameworks affecting large organisations, it is unsurprising that data protection laws have little impact on the security decisions made by technology providers and by small organisations, still less on consumers.

A range of other laws have the capacity to influence organisations' attitudes to security, such as the tort of negligence and consumer protection laws. However, although the IT industry is over 60 years old, the legal framework that applies to it remains immature. Hardware is subject to laws regarding merchantability and product liability, but software generally is not subject to those laws, unless it is intrinsic to the hardware.

Most computers have in the past been sold as general-purpose devices. Providers have thereby generally escaped liability for even seriously inadequate software, unless it is so severe as to represent negligence, or is in breach of terms of contract. There has been a rapid shift during the current decade towards the sale of computer-based appliances whose functions are restricted to a very few specific applications. This has the democratically undesirable effect of denying consumers access to general-purpose computing devices. However, it may also have the positive effect of removing the manufacturer's exemption from product liability laws. If so, then class actions against IT providers relating to device insecurity might force them to take much greater care with their design of features and default settings. Alternatively, IT providers may continue to escape that responsibility on the basis that the devices have at least some degree of extensibility through the download of 'apps', and/or because they have vastly greater market power than consumers, or avoid having a local footprint in jurisdictions with strong consumer rights laws, strong regulators and active litigants.

Law reform was considered a quarter-century ago (e.g. Clarke 1989), and has been considered from time to time since, but with virtually no outcomes. It is entirely feasible for parliaments to enact specific organisational responsibilities and associated sanctions, in order to achieve the objective declared earlier in this paper. However, such proposals have seldom come before legislatures, and information and IT security appear not to be perceived by governments and public servants as being sufficiently important to warrant action.


6. Conclusions

Security isn't easier for small organisations and consumers because the drivers for individual responsibility are too weak to overcome the impediments, and this problem is matched by market failure, and compounded by regulatory failure. Devices used, and depended on, by millions of small organisations and consumers are seriously insecure.

This paper has reviewed the ways in which longstanding market failure can be overcome, including stimulatory measures, industry 'self-regulation', 'co-regulation' and formal law. A number of scenarios can be imagined which would provide the impetus needed for progress to be made towards easier security, in particular by means of baseline security for small organisations, and suites of inbuilt security features in consumer products.

The power of the nation-state is on the wane, and corporations are increasingly large, powerful and transnational. Individual States are limited in their capacity to impose consumer protective measures on corporations whose primary business activities are elsewhere, and are timid in doing so. The USA is the primary country of domicile of supranationals, and it consistently favours corporate freedoms over the protection of consumers' privacy. The US Administration has been seeking to impose its world-view on other countries by means of provisions in international trade agreements whose effect is to greatly reduce national sovereignty and preclude most countries from enacting laws to the detriment of supranational corporations. Meanwhile, although the EU occasionally rattles its sabre, it continually falls short of its stated intentions, as occurred with its meek acceptance of the US 'Safe Harbor' scheme and of transborder flows of financial data and passenger data. On the one hand, it would seem that the frequency and seriousness of data breaches is building the momentum needed for change, and that the hands of legislators and regulatory and oversight agencies will be forced. On the other hand, the scope for meaningful legislative action is progressively diminishing.

The more probable scenario is that the level of harm to organisations' own interests may become so great that they may discover the need to take action themselves, and to fund technology providers to improve their offerings. The current vogue among large corporations is to outsource substantial proportions of economic activity to smaller organisations whose profit-margins they are able to squeeze. One result of this is that their own security risk profile becomes to at least some extent dependent on those of their contractors. In particular, organisational vulnerabilities arising from BYOD practices may cause organisations to fund security features for consumer devices, and security training for their staff in the use of consumer devices. If some variant of this scenario does emerge, how long will the trickle-down effect take, and when will Nell's device gain a reasonable level of protection?


Appendix 1: The Conventional Security Model

Security is a condition in which harm does not arise, because threats and vulnerabilities are countered by safeguards. The conventional computer security model is adopted in this paper (e.g. Clarke 2001a, OECD 2002, ISO 2005). Under this model:

Glossary


Appendix 2: Baseline Security Features
Low Security / High Convenience

User Accounts

Internet Traffic Controls

Executables Controls

Storage Controls

Settings Controls

Backup


Appendix 3: Additional Security Features
Medium Security / Medium Convenience

User Accounts

Internet Traffic Controls

Executables Controls

Storage Controls

Settings Controls

Backup


Appendix 4: Further Secure Features
High Security / Low Convenience

User Accounts

Internet Traffic Controls

Executables Controls

Storage Controls

Backup

Security Assurance


References

Anderson R. (2001) 'Why Information Security is Hard - An Economic Perspective' Proc 17th Annual Computer Security Applications Conference, (ACSAC), New Orleans, 10 to 14 December 2001, at https://www.acsac.org/2001/papers/110.pdf

APF (2012) 'Information Security' Policy Statement, Australian Privacy Foundation, December 2012, at http://www.privacy.org.au/Papers/PS-Secy.html

APF (2013) 'Revised Guide to Information Security' Submission to the Privacy Commissioner, Australian Privacy Foundation, January 2013, at http://www.privacy.org.au/Papers/OAIC-InfoSecy-1301.pdf

APF (2014) 'Revised Guide to Information Security' Submission to the Privacy Commissioner, Australian Privacy Foundation, August 2014, at http://www.privacy.org.au/Papers/OAIC-InfoSecy-1408.pdf

ASD (2013) 'Strategies to Mitigate Targeted Cyber Intrusions' Australian Signals Directorate, April 2013, at http://www.dsd.gov.au/infosec/top35mitigationstrategies.htm

BSI (2013) 'IT-Grundschutz-Kataloge' Bundesamt für Sicherheit in der Informationstechnik, 2005-13, at https://www.bsi.bund.de/DE/Themen/ITGrundschutz/ITGrundschutzKataloge/Inhalt/_content/kataloge.html

Burdon M., Lane B. & von Nessen P. (2012) 'Data breach notification law in the EU and Australia - Where to now?' Computer Law & Security Review 28, 3 (2012) 296-307

Camp L.J. (2013) 'Beyond usability: Security Interactions as Risk Perceptions' Proc. Workshop on Risk Perception in IT Security and Privacy, July 24-26, 2013, Newcastle, UK, at http://cups.cs.cmu.edu/soups/2013/risk/RiskWksp_Translucent.pdf

Clark J., van Oorschot P.C. & Adams C. (2007) 'Usability of Anonymous Web Browsing: An Examination of Tor Interfaces and Deployability' Proc. Symposium On Usable Privacy and Security, July 18-20, 2007 Pittsburgh, at http://cups.cs.cmu.edu/soups/2007/proceedings/p41_clark.pdf

Clarke R. (1989) 'Who Is Liable for Software Errors? Proposed New Product Liability Law in Australia' Computer Law & Security Report 5, 1 (May-June 1989) 28-32, at http://www.rogerclarke.com/SOS/PaperLiaby.html

Clarke R. (1999) 'Internet Privacy Concerns Confirm the Case for Intervention' Communications of the ACM 42, 2 (February 1999) 60-67, at http://www.rogerclarke.com/DV/CACM99.html

Clarke R. (2001a) 'Introduction to Information Security' Xamax Consultancy Pty Ltd, February 2001, at http://www.rogerclarke.com/EC/IntroSecy.html

Clarke R. (2001b) 'Research Challenges in Emergent e-Health Technologies' Xamax Consultancy Pty Ltd, July 2001, at http://www.rogerclarke.com/EC/eHlthRes.html#PAR

Clarke R. (2008) 'A Risk Assessment Framework for Mobile Payments' Proc. 21st Bled eCommerce Conf., June 2008, pp. 63-77, at http://www.rogerclarke.com/EC/MP-RAF.html

Clarke R. (2010) 'Re-Conceptualising Malware' Xamax Consultancy Pty Ltd, February 2010, at http://www.rogerclarke.com/II/RCMal.html

Clarke R. (2011) 'The Cloudy Future of Consumer Computing' Proc. 24th Bled eConference, June 2011, at http://www.rogerclarke.com/EC/CCC.html

Clarke R. (2013a) 'Submission re the OAIC Guide to Information Security' Xamax Consultancy Pty Ltd, January 2013, at http://www.rogerclarke.com/DV/OAIC-ISGuide-130104.pdf

Clarke R. (2013b) 'Information Security for Small and Medium-Sized Organisations' Xamax Consultancy Pty Ltd, January 2013, at http://www.xamax.com.au/EC/ISInfo.pdf

Clarke R. (2013c) 'eConsumer Insecurity: Five Sensationalist Headlines, and Why They're True' Presentation to the Wirtschaftsinformatik Forum, University of Koblenz-Landau, Xamax Consultancy Pty Ltd, January 2013, at http://www.rogerclarke.com/EC/eCIS.html

Clarke R. (2014a) 'Key Factors in the Limited Adoption of End-User PETs' Xamax Consultancy Pty Ltd, April 2014, at http://www.rogerclarke.com/DV/UPETs-1405.html#PU

Clarke R. (2014b) 'Vignettes of Corporate Privacy Disasters' Xamax Consultancy Pty Ltd, December 2014, at http://www.rogerclarke.com/DV/PrivCorp.html

Clarke R. & Maurushat A. (2007) 'The Feasibility of Consumer Device Security' J. of Law, Information and Science 18 (2007), PrePrint at http://www.rogerclarke.com/II/ConsDevSecy.html

Conradi M. (2007) 'Legal developments in IT security' Computer Law & Security Report 23, 4 (2007) 265-369

Dayarathna R. (2009) 'The principle of security safeguards: Unauthorized Activities' Computer Law & Security Review 25, 2 (2009) 165-172

DBCDE (2013a) 'Stay Smart Online - Business' Dept of Broadband Communications and the Digital Economy, 2013, at http://www.staysmartonline.gov.au/business

DBCDE (2013b) 'Stay Smart Online - Home Users' Dept of Broadband Communications and the Digital Economy, 2013, at http://www.staysmartonline.gov.au/home_users

DBIS (2013) 'UK Cyber Security Standards' UK Department for Business Innovation & Skills, Research Report, November 2013, at https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/261681/bis-13-1294-uk-cyber-security-standards-research-report.pdf

ENISA (2014) 'Technical Guideline on Minimum Security Measures' European Union Agency for Network and Information Security, version 2, 24 October 2014, at http://www.enisa.europa.eu/activities/Resilience-and-CIIP/Incidents-reporting/technical-guideline-on-minimum-security-measures/technical-guideline-on-minimum-security-measures/at_download/fullReport

EPIC (2014) 'EPIC Online Guide to Practical Privacy Tools' Electronic Privacy Information Center, 2014, at https://epic.org/privacy/tools.html

EU (2014) 'Directive of the European Parliament and of the Council concerning measures to ensure a high common level of network and information security across the Union' (2013/0027(COD)), 13 March 2014, at http://www.europarl.europa.eu/sides/getDoc.do?type=TA&language=EN&reference=P7-TA-2014-0244

Garfinkel S. & Miller R.C. (2005) 'Johnny 2: A User Test of Key Continuity Management with S/MIME and Outlook Express' Symposium On Usable Privacy and Security, July 6-8, 2005 Pittsburgh, at http://cups.cs.cmu.edu/soups/2005/2005proceedings/p13-garfinkel.pdf

GFW (2011) 'Who is harmed by a "Real Names" policy?' Geek Feminism Wiki, undated, apparently of 2011, at http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F

Guerkaynak G., Yilmaz I. & Taskiran N.P. (2014) 'Protecting the communication: Data protection and security measures under telecommunications regulations in the digital age' Computer Law & Security Review 30, 2 (2014) 179-189

IASME (2013) 'Information Assurance For Small And Medium Sized Enterprises' IASME Standard v. 2.3, March 2013, at https://www.iasme.co.uk/images/docs/IASME%20Standard%202.3.pdf

ISM (2013) 'Information Security Manual' Defence Signals Directorate, August 2013, at http://www.dsd.gov.au/infosec/ism/index.htm

ISO (2005) 'Information Technology - Code of practice for information security management', International Standards Organisation, ISO/IEC 27002:2005

Kissell J. (2014) 'Take Control of Your Online Privacy' AgileBits, March 2014, at http://email.agilebits.com/t/r-l-ckltdlt-kjiuxtlh-t/

Nielsen J. (1993) 'Usability engineering' Morgan Kaufmann, 1993

NIST (2006) 'Minimum Security Requirements for Federal Information and Information Systems' National Institute of Standards and Technology, Federal Information Processing Standard FIPS 200, March 2006, at http://csrc.nist.gov/publications/fips/fips200/FIPS-200-final-march.pdf

NIST (2011) 'Managing Information Security Risk: Organization, Mission, and Information System View' National Institute of Standards and Technology, Special Publication SP 800-39, March 2011, at http://csrc.nist.gov/publications/nistpubs/800-39/SP800-39-final.pdf

NIST (2012) 'Guide for Conducting Risk Assessments' National Institute of Standards and Technology, Special Publication SP 800-30 Rev. 1, September 2012, at http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf

NIST (2013) 'Security and Privacy Controls for Federal Information Systems and Organizations' National Institute of Standards and Technology, Special Publication 800-53, Revision 4, April 2013, at http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf

OAIC (2013) 'Guide to Information Security' Office of the Australian Information Commissioner, April 2013, at http://www.oaic.gov.au/privacy/privacy-resources/privacy-guides/guide-to-information-security

OECD (2002) 'OECD Guidelines for the Security of Information Systems and Networks: Towards A Culture Of Security' Organisation For Economic Co-Operation And Development, July 2002, at http://www.oecd.org/dataoecd/16/22/15582260.pdf

PCI-DSS (2013) 'Payment Card Industry Data Security Standard' PCI Security Standards Council, v.3.0, November 2013, at https://www.pcisecuritystandards.org/security_standards/pcidss_agreement.php?association=pcidss

RACGP (2013) 'Computer and Information Security Standards' Royal Australian College of General Practitioners, 2nd Edition, June 2013, at http://www.racgp.org.au/your-practice/standards/ciss/

Stephenson N. (1995) 'The Diamond Age' Bantam, 1995

SANS (2014) 'Critical Security Controls: Guidelines' SANS Institute, 2014, at https://www.sans.org/critical-security-controls/guidelines

Turle M. (2009) 'Data security: Past, present and future' Computer Law & Security Review 25, 1 (2009) 51-58

UKICO (2009) 'Privacy impact assessment (PIA) - handbook' Information Commissioner's Office, United Kingdom, June 2009, at http://www.ico.org.uk/pia_handbook_html_v2/files/PIAhandbookV2.pdf

Whitten A. & Tygar J. (1999) 'Why Johnny Can't Encrypt: A Usability Evaluation of PGP 5.0' Proc. 8th USENIX Security Symposium, 1999, at http://gaudior.net/alma/johnny.pdf

Winterford B. (2013) 'The True Cost of BYOD' itNews, August 2013, at http://www.itnews.com.au/Resource/358142,the-true-cost-of-byod.aspx

Zhong P. (2014) 'Prism-Break' Prism-Break.org, 2014, at http://prism-break.org/en/


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.


Acknowledgements

This work has benefited from feedback from delegates at two presentations, and from collaborative work with Arash Shaghaghi, a PhD candidate in UNSW CSE, in particular in relation to a Working Paper on 'Key Factors in the Limited Adoption of End-User PETs'.


