Version of 4 March 2016
Published in the Australasian Journal of Information Systems, 20 (September 2016), at http://dx.doi.org/10.3127/ajis.v20i0.1250
© Xamax Consultancy Pty Ltd, 2014-16
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.rogerclarke.com/EC/PBAR.html
The last thirty years of computing have resulted in many people being heavily dependent on digital data. Meanwhile, there has been a significant change in the patterns of data storage and processing. Despite the many risks involved in data management, there is a dearth of guidance and support for individuals and small organisations that reflects contemporary patterns. A generic risk assessment is presented, resulting in practicable backup plans applicable to the needs of those categories of IT user.
Large and medium-sized organisations manage their backup and recovery mechanisms within the context of broader disaster recovery and business continuity planning. So do some proportion of small and micro-organisations and even individuals, where they recognise the importance of their data, and have access to sufficient technical and professional competence. However, in the fourth decade of widespread computer usage, in Australia alone, more than half-a-million small organisations and some millions of consumers are highly dependent upon data, and remain at risk if their data suffers harm.
Yet straightforward guidance on how to address those risks is surprisingly difficult to find. Only a minority of text-books contain relevant segments, in most cases brief ones, e.g. Boyle & Panko (2013, pp. 487-502). Backup appears to be too prosaic a topic to attract attention from IS researchers. For example, of over 500 articles published in AJIS, not one has the word 'backup' in the title and only 17 (3.5%) even contain the word. In the entire AIS eLibrary of more than 30,000 articles, just 2 have the word in the title, and a further 4 in the Abstract. None of those papers offered any important contribution to the research reported in this paper.
Meanwhile, significant changes have occurred in a variety of areas. Organisational hierarchies have been giving way to networks of smaller entities, with a great deal of activity outsourced. Many workforces have been subject to casualisation. Desktops and laptops have been giving way to handhelds. Organisation-provided devices have been complemented and then to some extent replaced by Bring Your Own Device (BYOD) arrangements. The location of data and application software has switched from the desktop and nearby servers to distant service-providers, initially in a defined physical place but now 'in the cloud' (Clarke 2011).
These changes have brought with them an increased range of threats (e.g. phishing, ransomware) and increased intensity of the threats. The low quality of software has brought with it an increased range of vulnerabilities. Devices are more opaque than before, particularly smartphones and tablets, where Apple has driven the industry away from general-purpose computing devices and towards supplier-controlled and -limited 'appliances'. Most users have little interest in understanding the complexities involved, and limited capacity to comprehend them. Human-computer interfaces increasingly reflect a strong emphasis on hedonism, with convenience and excitement as the primary objectives, and little attention paid to risks. In the new context, a re-visit to the topic of backup is essential.
Large and medium-sized organisations have access to specialist expertise. A proportion of small and micro-organisations also employ information technology (IT) professionals or contract them on an intensive basis. The focus of this research is on individuals, and on micro-organisations and small organisations, that (rightly or wrongly) do not perceive IT and computer-readable data as being central to their work, and that have modest or very limited competence in IT matters.
A particular focus is on individuals who make relatively sophisticated use of computing facilities for such purposes as management of personal finance, tax, pension fund, correspondence, databases of images, videos or audio, or family-trees. The notion of the 'prosumer', coined by Toffler (1970, 1980), has progressively matured (Tapscott & Williams 2006, Clarke 2008). A prosumer is a consumer who is proactive (e.g. is demanding, and expects interactivity with the producer) and/or a producer as well. In the context of computer usage, a third attribute of relevance is professionalism, to some extent of the person themselves but also in relation to their expectation of the quality of the facilities and services they use. A second focus in this work is on very small organisations that involve one or two individuals. Such micro-organisations are within-scope whether they are incorporated or not, whether their activities are stimulated by economic or social motivations, and whether they are for-profit or otherwise. Some categories of small organisation with up to 20 staff or contractors have similar characteristics and needs to micro-organisations.
The relevance of this work extends further, however. During the last two centuries, workers were mostly engaged full-time by organisations under 'contracts of service', and a large body of employment law developed. The last few decades have seen increasing casualisation of workforces, with large numbers of individuals engaged through 'contracts for services'. This requires them to take a far greater degree of self-responsibility. To the extent that large organisations depend on sub-contractors' use of computing and management of data, the sub-contractors' security risks impact upon the organisations that engage them.
The scope of the work is further constrained to 'backup and recovery'. This excludes broader security issues such as firewalls and the encryption of communications, except to the extent that they have a potential impact on the values that the individual perceives in data. The prosaic-sounding and somewhat dated terms 'backup and recovery' have been used intentionally. Larger organisations may benefit from applying the broader concepts of business continuity planning and disaster recovery strategies, whereas the horizon of the majority of small organisations and individuals is far less likely to extend that far.
Considerable changes have occurred in the forms and uses of IT during the last three or four decades. In order to provide guidance in relation to backup arrangements, it is therefore necessary to identify a set of patterns and analyse users' backup requirements under each of them.
As Table 1 depicts, the original pattern was for processing to be performed locally, with the data stored within the device, or on a nearby computing device. Soon afterwards, the alternative emerged of using a storage-device attached to the local area network and commonly referred to as network-attached storage (NAS). This cluster of patterns is typified as 'Self-Sufficiency', in order to convey the substantially independent nature of the person or organisation. The need to maintain 'off-site' or 'fire' backups was one particular aspect of the approach that posed a challenge to the Self-Sufficiency model.
Short Description | Indicative Timeframe | Location of the Primary Copy | Location of the Backup Copy | User Experience |
Self-Sufficiency | 1980- | Local | Local | Demanding |
Backup Service | 1990- | Local | Remote | A Little Easier |
File-Hosting | 2000- | Remote | Local | Easier Still |
A second pattern quickly emerged, whereby a third party provided support for the functions of creating and managing off-site backups. This is identified in Table 1 using the term 'Backup Service'. A third pattern became common, referred to in Table 1 as 'File-Hosting'. This arose as the capacity of wide area networks increased and transmission costs decreased. Another important factor was the proliferation of device-types, with desktops and laptops (variously personal and/or employer-provided) complemented by PCs borrowed in Internet cafés, airport lounges and customers' premises, and more recently joined by smartphones and tablets. Increasingly, individuals used multiple devices, and in multiple locations. The use of file-hosting services represents outsourcing of the functions of the NAS to a third party. Users draw copies down when they need them, onto whichever device they are using at the time. Changes made to the users' copies need to be uploaded to the file-host. Where copies of files are maintained permanently on users' devices (e.g. address-books and diaries / calendars), processes variously referred to as replication, mirroring and synchronisation need to be implemented.
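The synchronisation decision described above reduces, for each file, to comparing the local copy against the file-host's record and choosing a transfer direction. The following sketch illustrates that decision, assuming modification-times are trustworthy (real synchronisation tools must also handle clock skew and concurrent edits); the function name and return values are illustrative, not drawn from any particular product:

```python
import os

def sync_action(local_path, remote_mtime):
    """Decide the transfer direction for one file by comparing the
    local copy's modification time against the file-host's record.
    Returns 'upload', 'download' or 'in-sync'."""
    if not os.path.exists(local_path):
        return "download"          # no local copy yet on this device
    local_mtime = os.path.getmtime(local_path)
    if local_mtime > remote_mtime:
        return "upload"            # local changes not yet on the file-host
    if local_mtime < remote_mtime:
        return "download"          # another device updated the hosted copy
    return "in-sync"
```
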
A fourth pattern has become increasingly common, indicatively from about 2010 onwards. With the 'File-Hosting' approach, processing continues to be performed locally to the user. The fourth pattern involves the service-provider not only hosting the user's data, but also performing much or even all of the processing. Currently conventional terms for this include cloud computing and (Application) Software as a Service (SaaS) (Armbrust et al. 2009, Clarke 2011).
The scope of the research project extended to all three of the patterns in Table 1, and to the subsequent fourth pattern involving cloud computing. This paper reports on the outcomes of the research in relation to the first three patterns, with a companion paper (Clarke 2016) developing appropriate backup approaches in the context of SaaS.
The purpose of the research was defined as:
to develop guidance on how small organisations and individuals can use backup techniques to address data risks
Reflecting that purpose, this paper is not addressed exclusively to researchers. The emphasis is on practicality, and the expression is intended to be accessible to professionals as well, with unnecessary intellectualisation of the issues avoided.
The work adopted the design science approach to research (Hevner et al. 2004, Hevner 2007). In terms of the research method described by Peffers et al. (2007), the research's entry-point is 'problem-centred'. The process commences by applying risk assessment techniques in order to develop an articulated definition of the problem and of the objectives. An artefact is then designed - in this case a set of backup plans applicable to three patterns of IT use. In terms of Hevner (2007), the article's important contributions are to the requirements phase of the Relevance Cycle, and the Design Cycle, drawing on existing theory in the areas of risk assessment, data management, and data security. It makes more modest contributions to the evaluation phase of the Relevance Cycle, but lays firm foundations for application and field testing.
In this section, the existing literature is applied in order to define terms and specify a process of sufficient simplicity to support the analysis. Within the scope declared in the preceding paragraphs, a target-segment is defined that is both realistic and sufficiently rich to provide both a test of the method and an outcome that is useful in its own right. In section 4, because no suitable risk assessment was located in the literature, the risk assessment process is applied to the test-case, to produce a sufficiently deep understanding of the needs of that category of users. In section 5, practicable backup plans are presented, for each of the three patterns.
The assessment of risk, and the development of guidance for backups, needed to be based on a model of security, and on a set of terms with sufficiently clear definitions. A substantial literature provides the framework for the analysis. This includes OECD (2002), Firesmith (2004), ISO 27002:2005, IETF (2007), CC (2012, pp. 38-39) and Clarke (2015). Appendix 1 provides a depiction of the conventional computer security model, and a glossary of terms. A brief summary of the model is that:
A suitable backup and recovery plan can only be established if alternative designs are outlined and compared with a set of requirements. A process is needed, comprising a series of steps that apply the conventional security model in order to unfold an understanding of the needs of the entities within the study's scope. The steps declared in Table 2 draw on the conventional security model in Appendix 1 and the discipline and practice of risk assessment and risk management (e.g. ISO 27001:2005, NIST 2012, IASME 2013, Clarke 2013 and Clarke 2015). However, the purpose here is to avoid subtleties and complexities in order to scale the process to the contexts of the target organisations and individuals.
Analyse
(1) Define the Objectives and Constraints
(2) Identify the relevant Stakeholders, Assets, Values and categories of Harm
(3) Analyse Threats and Vulnerabilities
(4) Identify existing Safeguards
(5) Identify and Prioritise the Residual Risks

Design
(1) Identify alternative Backup and Recovery Designs
(2) Evaluate the alternatives against the Objectives and Constraints
(3) Select a Design (or adapt / refine the alternatives to achieve an acceptable Design)

Do
(1) Plan the implementation
(2) Implement
(3) Review the implementation
In principle, this process needs to be applied to the specific context facing a particular organisation or individual. In practice, most of the intended clientele would still find the process far too demanding. A practical compromise is to define a small set of categories of client, apply the process to each of these generic categories, test the resulting recommendations in the field, and publish the outcomes to organisations and individuals in the target markets.
Stratification of the broad domains of organisations and individual users could be performed in a variety of ways. Industry sectors have varying needs, and hence analyses could be undertaken for a garden designer, a ceramic artist, a technical writer, a structural engineer, a motor vehicle repairer, a marriage-celebrant, a genealogist, and a cause-based advocate. However, many of these have similar assets and values, are afflicted with similar vulnerabilities, face similar threats, and trade off various factors in similar ways. It may therefore be feasible to define a smaller number of categories than would arise with a sector-based analysis, by focussing on the nature and intensity of the risks that the client faces.
This paper presents a single such test-case. The criteria used in devising it were:
The selected case is a person who is a moderately sophisticated user of computing devices, but has limited professional expertise in information technology matters. They use their computing devices for personal activities and/or in support of one or more organisations. The functions performed are primarily:
The person operates out of a home-office that is equipped with a desktop device. When travelling, the person carries a portable / laptop / clam-shell device. The person has a handheld, and uses this to access messages and send messages using a variety of channels (voice, SMS, email, IM), and to access web-sites. The laptop and handheld may also be used within the home-office.
The person copies files between the desktop and the other devices as needed. The person occasionally remembers to copy the contents of the disk in their desktop out to another disk attached to the same device and plugged into the same power-socket.
Many of the files that the person creates are sent to other people, so if a file is accidentally deleted or damaged, it may be possible to rescue a copy from somewhere else. But the person has experienced several instances in which important files were simply lost, and needs to avoid that happening.
Use of services offered by third parties (Internet Service Providers, ISPs) is within-scope for such mainstream activities as the hosting of email and web-sites. Use of cloud computing, on the other hand, has been excluded from the case, because of the many additional factors this gives rise to. For example, the person might use cloud services for messaging, for storage of the primary copies of photographs, for storage of their address-book and appointments diary, for their log of past dealings with each contact in their address-book, or for their accounting records. Analyses of the risks involved in consumer uses of cloud computing, and approaches to dealing with them, are in Clarke (2011, 2013, 2015).
The test-case excludes circumstances in which the individual is likely to be a specific target for attackers, as distinct from being just another entity subjected to random unguided attacks by malware and social engineering techniques. Hence the trade-offs selected during this analysis are unlikely to be appropriate to, for example, private detectives, and social and political activists who are likely to be directly targeted by opponents and by government agencies.
This section applies the process outlined in Table 2 to the test-case defined immediately above. The discussion in the sections below reflects relevant sources on risk assessment and risk management. Because the test-case excludes individuals likely to be subject to targeted attacks, the analysis pays little attention to countermeasures that may be adopted by attackers to circumvent the individual's safeguards.
This section follows the steps specified in Table 2.
As a reference-point, the following definition is proposed of the individual's purpose and the constraints within which the design needs to work (Clarke 2015):
To avoid, prevent or minimise harm arising from environmental incidents, attacks and accidents, avoiding harm where practicable, and coping with harm when it arises, by balancing the reasonably predictable financial costs and other disbenefits of safeguards against the less predictable and contingent financial costs and other disbenefits arising from security incidents
On the other hand, the target audience needs a simpler formulation, such as:
To achieve reasonable levels of security for reasonable cost
In the case of an individual, the stakeholders comprise the individual themselves, the individual's family, any employees and sub-contractors, and any clients, whether of an economic or a social nature. In the case of small organisations, there may be additional stakeholders, such as employees, customers, suppliers, and perhaps an advisory committee. Also within-scope are some categories of association with membership and committee structures, and small, multi-member enterprises such as investment clubs. In some contexts, regulatory agencies may also rank as stakeholders, e.g. for accountants, financial planners, marriage celebrants and health care professionals.
The assets on which this study is focussed are data. A useful resource in this area is ISO 27005 (2012, Annex B). IT equipment and services on which the individual depends are only within-scope to the extent that they play a role in the protection of data assets. Relevant categories of assets are listed in Table 3.
The values that stakeholders attribute to Assets derive from a variety of sources (Clarke 2013), in particular:
Values associated with data involve a considerable set of attributes referred to in the literature using various terms, such as 'properties'. One concept of long standing is the 'CIA' list, which stands for Confidentiality, Integrity and Availability (Saltzer & Schroeder 1975). This convention is much-criticised, and many alternatives and adjuncts have been offered. For example, Parker (1998) added Possession, Authenticity and Utility; and Cherdantseva & Hilton (2013) added instead Accountability, Auditability, Authenticity, Trustworthiness, Non-repudiation and Privacy.
However, such lists lack clarity because they confound properties of data with properties of the infrastructure used to achieve access to the data. Particularly for such purposes as backup and recovery strategies, the following areas are only indirectly relevant:
This analysis is concerned specifically with the value attached by stakeholders to data. It accordingly applies the set of factors in Table 4. The primary three values encompass the relevant aspects of the lists referred to in the previous paragraph, but separate out the confusing effects of multiple purposes. The third value is then disaggregated into its constituent values. This reflects sources on data quality and integrity in the information systems and related literatures, including OECD (1980), Huh et al. (1990), van der Pijl (1994), Clarke (1995, pp. 601-605), Wang & Strong (1996), Müller & Freytag (2003, pp. 8-10), English (2006), Piprani & Ernst (2008) and the ISO 8000 series emergent since 2009.
Harm to values in data needs to be considered at two levels. A useful resource in this area is ISO 27005 (2012, Annex B, pp. 39-40). Categories of harm to data itself are listed in Table 5, and the forms of consequential harm to stakeholders' values are listed in Table 6.
This section follows convention by identifying distinct lists of threats and vulnerabilities, although the distinctions can be challenging to make, and hence it is often more practicable to consider threat-vulnerability combinations. Catalogues of threats and vulnerabilities are available from a variety of sources, most usefully ISO 27005 (2012, pp. 42-49) and NIST (2012, pp. 65-76). These are reflected in Table 7 and Table 8. All elements within these Tables can of course be analysed in greater detail. For example, a deeper treatment of social engineering is in Mitnick & Simon (2003), and of malware is in Clarke (2009).
Environmental Event
Attack
Accident, i.e. Unintentional Error
A means is needed to invoke the range of Threats but in a simplified and memorable manner. One such approach is to adopt a single instance of each category of Threats as being representative of the category, and to contrive the first letter of each to build a mnemonic. In this case:
F - Fire (for Environmental Events)
A - Attack
T - Training (for Accidents caused by Humans)
E - Equipment (for Accidents within Infrastructure)
Infrastructural Vulnerabilities
Human Vulnerabilities
Before risks can be assessed, it is necessary to take into account factors that already exist that intentionally or incidentally mitigate risks. Common patterns of human behaviour such as habit, caution and loyalty exist, and can be reinforced through training and reminders. Longstanding practices in relation to physical security help as well, such as locks and smoke alarms. Aspects of infrastructure design assist, such as those resulting from contractual terms and 'fitness for use' conditions imposed by the laws of contract and consumer rights. Suppliers have a self-interest in delivering goods and services of reasonable quality, in order to sustain their reputation. Logical security precautions are widely used, particularly in the form of accounting controls. Insurance provides monetary recompense for financial losses, but also imposes requirements for some level of safeguards to be established and maintained.
The final step in the assessment process is the identification and prioritisation of the 'residual risks', i.e. those that are not satisfactorily addressed by existing safeguards. The conventional approach to prioritisation is to assign to each residual risk a severity rating and a probability rating, and to then sort the residual risks into descending order, showing extreme ratings in either category first, followed by high ratings in both, etc. This is most comprehensively presented in NIST (2012, Appendices G, H and I). In principle, these are context-specific judgements that need to be made by the responsible individual, or by someone closely familiar with the individual's needs and circumstances. The analysis conducted here, however, assigns severity and probability ratings on the basis of the test-case described in an earlier section. The results are summarised in Table 9.
Risk | Severity Rating (E, H, M, L) | Probability Rating (H, M, L) |
Storage-Media Failure denying access to all files | Extreme | High |
Environmental Event, Destruction, Theft or Seizure denying access to the Storage-Medium | Extreme | Medium |
Malware or Hacking Attack denying access to all of the data | Extreme | Medium |
Malware or Hacking Attack resulting in inability to access a file | High | Medium |
Mistaken Amendment, Deletion or Overwriting of a file | High | Medium |
Individual File-corruption discovered within-cycle | High | Medium |
Individual File-corruption discovered after more backups have been run | High | Medium |
Environmental Event resulting in inability to access a file | High | Medium |
Software Error resulting in inability to access a file | Medium | Medium |
Unavailability of Networking Facilities resulting in inability to access a file | Medium | Medium |
Technological Change causing a Storage-Medium to be unreadable | Low | Low |
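The prioritisation step amounts to a descending sort over (severity, probability) pairs. As a sketch, using a subset of the rows from Table 9; the numeric ranking of the rating labels is an illustrative assumption, one plausible reading of the conventional approach rather than a prescription:

```python
# Ratings drawn from Table 9; the numeric ranking is an assumption.
RANK = {"Extreme": 3, "High": 2, "Medium": 1, "Low": 0}

residual_risks = [
    ("Software Error resulting in inability to access a file", "Medium", "Medium"),
    ("Storage-Media Failure denying access to all files", "Extreme", "High"),
    ("Technological Change causing a Storage-Medium to be unreadable", "Low", "Low"),
    ("Malware or Hacking Attack denying access to all of the data", "Extreme", "Medium"),
]

# Sort into descending order of severity, then probability
prioritised = sorted(
    residual_risks,
    key=lambda risk: (RANK[risk[1]], RANK[risk[2]]),
    reverse=True,
)

for name, severity, probability in prioritised:
    print(f"{severity:<8} {probability:<8} {name}")
```
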
This section considers alternative approaches and then evaluates the possibilities against the requirements defined in the previous section.
A preliminary decision of relevance is what means are used to achieve sufficient synchronisation among the individual's multiple platforms. In a multi-platform environment - in the test-case, desktop, laptop and handheld - there are likely to be multiple copies of at least some files. Clarity is needed as to which file is the primary copy.
A basic arrangement involves the designation of one computing device as the master - typically the desktop - with the other two managed as slaves. The primary copy of the data is on the master-device, and this is mirrored forward from the master to the slaves at convenient times. This is generally done when they have a suitable connection to the master, and there is little or no conflict with other activities, e.g. when all are on the local network overnight. Where one of the slave devices is used to create a new file or amend an existing file (e.g. while the user is away from the office), special care is needed to achieve disciplined backloading to the master-device.
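The forward-mirroring described above can be sketched as follows. This is a simplification that handles a single directory level only, and, as the text warns, it silently overwrites any changes on the slave that have not been backloaded to the master first:

```python
import filecmp
import os
import shutil

def mirror(master_dir, slave_dir):
    """One-way mirror: make slave_dir match master_dir exactly.
    Changes made on the slave are overwritten, which is why
    disciplined backloading to the master must happen first.
    Handles a single directory level only, for brevity."""
    os.makedirs(slave_dir, exist_ok=True)
    master_files = {n for n in os.listdir(master_dir)
                    if os.path.isfile(os.path.join(master_dir, n))}
    # Copy new or changed files forward from the master
    for name in master_files:
        src = os.path.join(master_dir, name)
        dst = os.path.join(slave_dir, name)
        if not os.path.exists(dst) or not filecmp.cmp(src, dst, shallow=False):
            shutil.copy2(src, dst)
    # Remove slave files that no longer exist on the master
    for name in set(os.listdir(slave_dir)) - master_files:
        path = os.path.join(slave_dir, name)
        if os.path.isfile(path):
            os.remove(path)
```
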
A second arrangement utilises as the primary storage-medium a network-attached storage (NAS) device on the individual's local area network. Any of the computing devices may create new files and amend the primary data, with a file-locking mechanism used to ensure that at any time only one of the computing devices has the capability to amend each file. Where a new or amended file is stored on one of the devices (e.g. because a network connection to the master storage-medium cannot be achieved at the time), special care is needed to achieve disciplined backloading to the master-device. To achieve secure communications from the laptop and handheld while away from the home-office, it is desirable to implement a Virtual Private Network (VPN).
A step beyond NAS is a Redundant Array of Independent Disks Level 1 (RAID1). This is essentially a NAS containing two disks, with all disk activities occurring on both disks. It addresses the risk that one disk will be lost (in particular, because of a disk crash). However, it does not address others among the FATEful risks listed in Table 7.
A further development beyond the use of local NAS is reliance on storage-services provided by another party. Such arrangements are usefully referred to as 'remote file-hosting'. This enables access from multiple devices in multiple locations, and delegates the device management to a specialist organisation, but brings with it a heavy dependency on reliable telecommunications links. Care is still needed to deal with the risks of multiple, inconsistent copies of files. Additional data risks arise in the forms of exposure to second-party access (i.e. by the service-provider), and of more circumstances in which third-party access to or corruption of the data may occur, in particular because of the increased extent of file-transmission over external networks, and the greater attractiveness of service-providers to hackers (the 'honeypot effect').
A master-slave arrangement may be easier for a small organisation to understand and manage, whereas storing the primary copy on a NAS or RAID1 device requires additional understanding and infrastructure. The use of a remote file-hosting service requires further understanding, may increase costs, and creates a dependency on a provider, but if the service is well-designed and communicated, reliable and cost-effective, it can relieve the user of a considerable amount of effort.
A range of alternative approaches to backups exists. Drawing on the literature, most usefully Chervenak et al. (1998), plus Lennon (2001), Gallagher (2002), Preston (2007), Strom (2010), TOB (2012) and Cole (2013), Appendix 2 identifies relevant characteristics of backup data, and of backup processes, and Appendix 3 describes each of the various categories of backup procedure.
Key considerations include the frequency with which full backups are performed, whether incremental backups are performed and if so how frequently, whether copies are kept online or offline, and whether second-level archives are kept and if so whether they are later over-written or archived. One of the most critical choices, however, is whether the first-level backup is stored locally or remotely.
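An incremental run of the kind discussed above can be sketched as copying only the files modified since the last full backup. Modification-time comparison here stands in for the archive-bit and catalogue mechanisms that production backup tools rely on; the function name is illustrative:

```python
import os
import shutil

def incremental_backup(source_dir, backup_dir, last_full_time):
    """Copy only the files modified since the last full backup
    (identified by its completion time, as a Unix timestamp).
    Returns the names of the files copied."""
    os.makedirs(backup_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(source_dir)):
        src = os.path.join(source_dir, name)
        if os.path.isfile(src) and os.path.getmtime(src) > last_full_time:
            # copy2 preserves timestamps, aiding later recovery
            shutil.copy2(src, os.path.join(backup_dir, name))
            copied.append(name)
    return copied
```
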
In Appendix 4 a summary is provided of the extent to which the various backup techniques address the various risks that afflict both individual files and the primary storage-medium as a whole.
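One within-cycle safeguard against the individual file-corruption risks is a checksum manifest, recorded at each backup run and checked before older backup generations are overwritten. A minimal sketch, with illustrative function names (real backup products embed equivalent verification in their catalogues):

```python
import hashlib
import json
import os

def _sha256(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_checksums(source_dir, manifest_path):
    """Record a SHA-256 digest for each file at backup time."""
    digests = {name: _sha256(os.path.join(source_dir, name))
               for name in sorted(os.listdir(source_dir))
               if os.path.isfile(os.path.join(source_dir, name))}
    with open(manifest_path, "w") as f:
        json.dump(digests, f)

def changed_files(source_dir, manifest_path):
    """Return files whose content no longer matches the manifest:
    either legitimate edits, or corruption to investigate before
    the change is propagated into the next backup generation."""
    with open(manifest_path) as f:
        digests = json.load(f)
    return [name for name, digest in digests.items()
            if not os.path.isfile(os.path.join(source_dir, name))
            or _sha256(os.path.join(source_dir, name)) != digest]
```
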
Three patterns of use were outlined in Table 1. To support the design of backup plans, it is necessary to articulate those patterns in somewhat greater detail, as follows:
In order to assess the appropriateness of the various alternative approaches to managing multiple platforms and conducting backups, clarity is needed about the extent to which each alternative addresses the residual risks, and the factors that need to be traded-off against one another when choosing among alternative backup approaches. The key factors are listed in Table 10.
Risk Management
• The Risks that are safeguarded against

Equipment
• Operational Storage Size
• Backup Storage Size
• Processor, Bus and Local Network Capacity
• External Network Connection and Capacity

Operation
• Batch Backup Run-Times
• Recovery Run-Time
• Speed of Recovery from a Security Incident
• Complexity of strategy, plan, policies and procedures
• Concentration and Effort needed to implement the plan

Cost
• One-Time Costs of Safeguards
• Recurrent Costs of Safeguards
• Costs of each kind of Security Incident
The information generated by the preceding sections enables a judgement to be made about what combination of approaches to platform-management and backups is most appropriate. A scheme that would cover every possible eventuality is highly likely to be too complex and too costly for a small organisation or individual. Some degree of compromise is inevitable, trading off primarily cost and complexity, on the one hand, against protections against the lower-priority Threat-Vulnerability Combinations in Table 9. Specific proposals are presented in section 5 below.
An individual or small organisation may be able to directly utilise the outcomes from a generic analysis and design process such as that presented above. The responsibility for converting a risk management strategy to a reality, on the other hand, rests on the individual or organisation concerned. Broadly, the following steps are necessary:
The discussion of analysis, design and implementation in this section has been framed in a sufficiently general manner that individuals and small organisations confronted by a wide variety of circumstances could apply it. The following section presents the outcome applicable to the test-case defined earlier in the paper.
In Table 1 and s.4.2, three patterns of use were distinguished. This section applies the assessment conducted above to each of those three patterns. It presents three Backup Plans that address all of the high-priority threat-vulnerability combinations identified in Table 9, in a manner that is not unduly complex or expensive.
The first Backup Plan, presented in Table 11, applies to circumstances in which the storage-medium to which the backup is performed is on the premises, i.e. co-located with the primary copy. This may be by direct connection to the master-device, typically the desktop, or over a local area network. A review of Wintel-oriented backup software appears in Mendelson & Muchmore (2013). An example of a product that satisfies a significant proportion of the requirements is Acronis.
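As an indication of what such a product automates, the core of a local backup run of this kind can be sketched in a few lines. This is an illustration only, not a description of any particular product; the paths and the change-detection rule (file size and modification-time) are assumptions:

```python
# A minimal sketch of a local backup run of the kind Table 11 assumes:
# files under `source` are copied to `backup` (e.g. an attached drive or
# a LAN share) when they are new or appear to have changed.
import os
import shutil

def backup_run(source: str, backup: str) -> list:
    """Copy new or modified files from source to backup; return what was copied."""
    copied = []
    for dirpath, _, filenames in os.walk(source):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source)
            dst = os.path.join(backup, rel)
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            # Copy when absent on the backup medium, or when size differs,
            # or when the backup copy is older than the source copy.
            if (not os.path.exists(dst)
                    or os.path.getsize(dst) != os.path.getsize(src)
                    or os.path.getmtime(dst) < os.path.getmtime(src)):
                shutil.copy2(src, dst)   # copy2 preserves timestamps
                copied.append(rel)
    return copied
```

A real product adds the elements the sketch omits: scheduling, versioning of superseded copies, verification of what was written, and logging of each run.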
To address on-site risks (typified earlier as FATE - Fire, Attack, Training and Equipment), it is necessary that a second-level backup be maintained at a sufficiently remote location.
Because the test-case encompasses some diversity of needs, a list of essential elements is provided, supplemented by a further set of recommended actions. The actions are further sub-divided into Infrastructure Features, File-Precautions, Backup Runs and Business Processes.
ESSENTIAL
Infrastructure Features
File-Precautions
Backup Runs
Business Processes
RECOMMENDED
Infrastructure Features – Additional Measures
File-Precautions – Additional Measures
Backup Runs – Additional Measures
The second Backup Plan, in Table 12, is for circumstances in which the storage-medium to which the backup is performed is located remotely from the primary copy, and the transfer occurs over an Internet connection. The connection preferably uses channel encryption and performs authentication of the remote device. The process can be driven either by a device on the local network - typically the desktop - or by the remote device that has direct access to the backup storage-medium.
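One common way of realising such a transfer is rsync tunnelled through ssh, which provides both channel encryption and authentication of the remote host. The sketch below merely constructs the command; the host, user and paths are hypothetical, and executing it (e.g. via `subprocess.run(cmd, check=True)`) is left aside:

```python
# A sketch of how a Table 12-style remote backup might be driven from the
# local device: rsync over ssh. Host, user and paths are hypothetical.
import shlex

def remote_backup_cmd(source_dir: str, user: str, host: str, dest_dir: str) -> list:
    """Build an rsync-over-ssh command for an incremental remote backup."""
    return [
        "rsync",
        "-az",                 # archive mode (permissions, times), compressed
        "--delete-after",      # mirror deletions, but only after transfer completes
        "-e", "ssh",           # tunnel the transfer through ssh
        source_dir.rstrip("/") + "/",
        f"{user}@{host}:{shlex.quote(dest_dir)}",
    ]

cmd = remote_backup_cmd("/home/alice/work", "alice",
                        "backup.example.org", "/srv/backups/alice")
```

Note that `--delete-after` mirrors deletions to the remote copy, which is appropriate for synchronisation but, as discussed later in relation to Incremental Backup variants, means that mistaken deletions may not be recoverable from that copy alone.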
The remote backup device may be hosted by someone the individual or organisation has associations with (e.g. a business colleague or a relative). Alternatively, the hosting may be performed by a service-provider, such as an accountant, a local provider of Internet services, a specialist backup provider, or a cloud operator. A commercial catalogue of offerings is in Muchmore (2013). A service that scores well on many aspects of the requirements is SpiderOak.
The majority of the Plan in Table 12 is the same as that for the Self-Sufficiency approach in s.4.1 above. The differences are as follows:
It is possible to use major service-providers as a Backup Service. Evaluating their offerings against these requirements is difficult, however, because reliable information is hard to find. For example, it appears that the Apple iCloud service synchronises only daily, that recovery may fail if an interruption occurs during the restoration process, and that virtually no warranties or indemnities are provided. It may therefore not be appropriate for either a small organisation or a prosumer to rely on iCloud as a Backup Service in the manner defined here.
ESSENTIAL
Infrastructure Features
File-Precautions
Backup Runs
Business Processes
RECOMMENDED
Infrastructure Features – Additional Measures
File-Precautions – Additional Measures
Backup Runs – Additional Measures
The third Backup Plan, in Table 13, applies where the Primary copy of the files is held by another party. This has some similarities to the use of a Backup Service, addressed in the previous section. Key differences are, however, that the use of a Backup Service, by its nature:
In the case of a File-Hosting service, on the other hand:
A great deal of the Plan in Table 13 is the same as that for the Self-Sufficiency approach in s.4.1 above. The differences are as follows:
File-Hosting services have gone through several generations. Initially, services were offered by Internet Access Providers as a form of value-add. Then came consumer-oriented products, typified by Dropbox (since 2007). A further round has been cloud-based services, typified by Apple iCloud (since 2011) and Google Drive (since 2012). Some are primarily outsourced data-storage services. Others focus on providing their customers with access to their files from multiple devices and from any location, and are sometimes described as 'file-synchronisation' services. Others are intended primarily to enable files provided by one user to be made available to others. Yet others are intended to support documents developed collaboratively by multiple people. Some support files generally, and are agnostic about the formats the files are in. Some, however, may use proprietary file-formats, which is hostile to the purpose considered here.
It is a matter of serious concern that large corporations that offer File-Hosting Services generally make very little information available, making it very difficult to perform a satisfactory evaluation against the requirements expressed in Table 13. Given that even large organisations generally have far less market power than Apple, Microsoft and Google, small organisations and prosumers that value their data, and that use File-Hosting Services from such corporations, are subject to unmanaged risk exposures.
ESSENTIAL
Infrastructure Features
File-Precautions
Backup Runs
Business Processes
RECOMMENDED
Infrastructure Features – Additional Measures
File-Precautions – Additional Measures
Backup Runs – Additional Measures
This paper has presented an analysis of the backup requirements of small organisations and individuals. It has focussed on a test-case, in order to not merely provide general guidance, but also deliver a specification that fulfils the declared objective, including balancing among multiple inherently conflicting needs.
The Peffers et al. (2007) research method was successfully applied, commencing with 'problem-centred initiation', through the problem definition, objectives formulation and articulation phases, and into the design phase, resulting in three sets of specifications. Tables 3-9, which declare Assets, forms of Harm, Data Threats, Vulnerabilities and Priority Threat-Vulnerability Combinations, all represent templates or exemplars that can be applied to similar studies of somewhat different contexts.
The project's contributions in relation to the evaluation phase are less substantial. A limited evaluation has been conducted on one of the three Backup Plans. The processes applied by the author for the last decade have been very similar to those derived for the Self-Sufficiency pattern in Table 11. The backup procedures have been exercised several hundred times, and the recovery procedures on a modest number of occasions. The author has suffered very few losses of datafiles. The rare exceptions are of two kinds. A few very old files have been discovered (by runs of disk utility software) to be corrupted, but only after all still-readable backups were similarly corrupted. A somewhat larger number of files (from the period 1984-1992) are no longer accessible, because no device is available that can read the storage-medium and/or no application software is available that can read the data-format. A review of the author's procedures in light of the analysis reported here highlighted the need for refinements to those procedures, and for more assiduous application of them, particularly in relation to the periodic rehearsal of recovery processes and the migration of copies forward from obsolescent to contemporary storage-media.
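The corruption losses just described suggest one refinement worth sketching: a manifest of cryptographic digests, recorded at backup time, against which files can be periodically re-verified, so that silent corruption is detected before every remaining backup copy inherits it. The manifest format here is of our own devising, for illustration only:

```python
# A minimal integrity-check sketch: record a SHA-256 digest per file at
# backup time, and later report any file whose content has silently changed.
import hashlib
import os

def make_manifest(root: str) -> dict:
    """Map each file's relative path to the SHA-256 digest of its content."""
    manifest = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            manifest[os.path.relpath(path, root)] = digest
    return manifest

def verify(root: str, manifest: dict) -> list:
    """Return the relative paths whose current digest no longer matches."""
    current = make_manifest(root)
    return sorted(rel for rel, d in manifest.items() if current.get(rel) != d)
```

A mismatch does not say which copy is correct, only that the copies have diverged since the manifest was made; the point is that the divergence is noticed while an uncorrupted backup still exists.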
The research has laid a firm foundation for IS professionals to better address the needs of small organisations and individuals. The specific Backup Plans proposed above can be used as a basis for evaluating the capabilities of software products that support local backup management, and for evaluating backup services offered by ISPs. The Plans, together with variants of and successors to them, are capable of being productised by providers, including corporations that sell hardware, operating systems, pre-configured hardware and software, value-added hardware and software installations, storage-devices, and storage services. A further opportunity is for guidance based on them to be distributed by industry associations, user associations and clubs, to assist those organisations' members.
The research has also contributed to the accumulated body of theory about data management and data security. In order for the outcomes to be exploited, it is necessary for the analysis to be subjected to review by peers, and the feedback reflected in the artefacts through the publication of revised versions. The analysis may require adaptation, at least of terminology, in order to be readily applied to specific technology-contexts, such as Microsoft, Apple OSX and iOS, and Linux and Android operating environments, and particularly where the individual uses multiple such platforms. The analysis needs to be applied to additional test-cases, reflecting the needs of small organisations and individuals whose characteristics are materially different from those addressed by this paper.
Beyond analytical review, the three specific Backup Plans derived from the analysis need to be applied, and their effectiveness and practicality evaluated empirically. The analysis also needs to be applied in circumstances in which the individual accepts (but manages) the additional risks involved in relying entirely on networks and remote services - with all the uncertainties of format-compatibility and geographical and jurisdictional location that the cloud entails. Those circumstances are addressed in a companion paper (Clarke 2016). Both analyses may require further adaptation if and when the target market-segment's usage of general-purpose computing devices (such as desktops and laptops) declines, and datafile creation and amendment comes to be undertaken almost entirely on locked-down appliances (such as smartphones and handhelds).
As individuals increasingly act as prosumers, they become more demanding, and more aware of the benefits of effective but practical backup arrangements. Meanwhile, many large organisations are becoming concerned about importing subcontractors' security risks. They can be expected to bring pressure to bear on small organisations and individuals to demonstrate the appropriateness of their backup plans, and to provide warranties and indemnities in relation to them. The work reported here accordingly lays a foundation for significant improvements in key aspects of the data security not only of individuals and small organisations, but also of the larger organisations that depend on them.
In order to address the residual risks confronting an entity, it is necessary to devise a Backup Plan that comprises a suitable selection from among the following procedures. For each procedure, its Context, its Process (undertaken prior to any amendment, periodically, or continuously, as appropriate) and its Attributes need to be specified.

One variant of an Incremental Backup deletes from the Full Backup those files that have been deleted since the last Full Backup was performed. This saves space on the storage-medium, but mistaken deletions may not be recoverable.
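The variant of an Incremental Backup described above, which also purges files deleted from the source since the last Full Backup, can be sketched as follows. Paths and names are illustrative only, and the change-detection rule (modification-time) is an assumption:

```python
# Sketch of an Incremental Backup that propagates deletions: changed files
# are copied into the Full Backup, and files no longer present in the
# source are removed from it, trading storage space against the
# recoverability of mistaken deletions.
import os
import shutil

def _relative_files(root: str) -> set:
    """Collect the relative paths of all files under root."""
    rels = set()
    for dirpath, _, names in os.walk(root):
        for n in names:
            rels.add(os.path.relpath(os.path.join(dirpath, n), root))
    return rels

def incremental_with_deletions(source: str, full_backup: str) -> tuple:
    """Bring full_backup up to date with source; return (copied, deleted)."""
    src_files = _relative_files(source)
    bak_files = _relative_files(full_backup)

    copied = []
    for rel in sorted(src_files):
        src, dst = os.path.join(source, rel), os.path.join(full_backup, rel)
        if not os.path.exists(dst) or os.path.getmtime(dst) < os.path.getmtime(src):
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.copy2(src, dst)
            copied.append(rel)

    deleted = sorted(bak_files - src_files)   # gone from source: purge from backup
    for rel in deleted:
        os.remove(os.path.join(full_backup, rel))
    return copied, deleted
```

The deletion step is exactly where the risk lies: once a mistakenly-deleted file has been purged from the Full Backup, only an older generation of backup, if one is retained, can recover it.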
Risk | Relevant Backup Techniques
Risks relating to Individual Files |
Risks relating to Storage-Media |
Armbrust M., Fox A., Griffith R., Joseph A.D., Katz R., Konwinski A., Lee H., Patterson D., Rabkin A., Stoica I. & Zaharia M. (2009) 'Above the Clouds: A Berkeley View of Cloud Computing' Technical Report No. UCB/EECS-2009-28, UC Berkeley Reliable Adaptive Distributed Systems Laboratory, February, 2009, at http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf
Boyle R.J. & Panko R.R. (2013) 'Corporate Computer Security' Pearson, 3rd Ed., 2013
CC (2012) 'Common Criteria for Information Technology Security Evaluation - Part 1: Introduction and general model' Common Criteria, CCMB-2012-09-001, Version 3.1, Revision 4, September 2012, at http://www.commoncriteriaportal.org/files/ccfiles/CCPART1V3.1R4.pdf
Cherdantseva Y. & Hilton J. (2013) 'A Reference Model of Information Assurance & Security' Proc. IEEE ARES 2013 SecOnt workshop, 2-6 September 2013, Regensburg, at http://users.cs.cf.ac.uk/Y.V.Cherdantseva/RMIAS.pdf
Chervenak A. L., Vellanki V. & Kurmas Z. (1998) 'Protecting file systems: A survey of backup techniques' Proc. Joint NASA and IEEE Mass Storage Conference, March 1998, at http://www.storageconference.us/1998/papers/a1-2-CHERVE.pdf
Clarke R. (1995) 'A Normative Regulatory Framework for Computer Matching' J. of Computer & Info. L. 13,3 (June 1995), PrePrint at http://www.rogerclarke.com/DV/MatchFrame.html
Clarke R. (2008) 'B2C Distrust Factors in the Prosumer Era' Invited Keynote, Proc. CollECTeR Iberoamerica, Madrid, 25-28 June 2008, pp. 1-12, at http://www.rogerclarke.com/EC/Collecter08.html
Clarke R. (2009) 'Categories of Malware' Xamax Consultancy Pty Ltd, September 2009, at http://www.rogerclarke.com/II/MalCat-0909.html
Clarke R. (2011) 'The Cloudy Future of Consumer Computing' Proc. 24th Bled eConference, June 2011, PrePrint at http://www.rogerclarke.com/EC/CCC.html
Clarke R. (2013) 'Data Risks in the Cloud' Journal of Theoretical and Applied Electronic Commerce Research (JTAER) 8, 3 (December 2013) 59-73, Preprint at http://www.rogerclarke.com/II/DRC.html
Clarke R. (2015) 'The Prospects of Easier Security for SMEs and Consumers' Computer Law & Security Review 31, 4 (August 2015) 538-552, PrePrint at http://www.rogerclarke.com/EC/SSACS.html
Clarke R. (2016) 'Backup and the Cloud: Survival Strategies for Users Dependent on Service-Providers' Xamax Consultancy Pty Ltd, February 2016, at http://www.rogerclarke.com/EC/PBAR-SP.html
Cole E. (2013) 'Personal Backup and Recovery' Sans Institute, September 2013, at http://www.securingthehuman.org/newsletters/ouch/issues/OUCH-201309_en.pdf
English L.P. (2006) 'To a High IQ! Information Content Quality: Assessing the Quality of the Information Product' IDQ Newsletter 2, 3, July 2006, at http://iaidq.org/publications/doc2/english-2006-07.shtml
Firesmith D. (2004) 'Specifying Reusable Security Requirements' Journal of Object Technology 3, 1 (Jan-Feb 2004) 61-75, at http://www.jot.fm/issues/issue_2004_01/column6
Gallagher M.J. (2001) 'Centralized Backups' SANS Institute, July 2001, at http://www.sans.org/reading-room/whitepapers/backup/centralized-backups-513
Hevner A.R. (2007) 'A Three Cycle View of Design Science Research' Scandinavian Journal of Information Systems, 2007, 19(2):87-92
Hevner A.R., March S.T. & Park, J. (2004) 'Design research in information systems research' MIS Quarterly, 28, 1 (2004), 75-105
Huh Y.U., Keller F.R., Redman T.C. & Watkins A.R. (1990) 'Data Quality' Information and Software Technology 32, 8 (1990) 559-565
IASME (2013) 'Information Assurance For Small And Medium Sized Enterprises' IASME Standard v. 2.3, March 2013, at https://www.iasme.co.uk/images/docs/IASME%20Standard%202.3.pdf
IETF (2007) 'Internet Security Glossary' Internet Engineering Task Force, RFC 4949, Version 2, August 2007, at https://tools.ietf.org/html/rfc4949
ISO 27005 (2012) 'Information Technology - Security Techniques - Information Security Risk Management' International Standards Organisation, 2012
Lennon S. (2001) 'Backup Rotations - A Final Defense' SANS Institute, August 2001, at http://www.sans.org/reading-room/whitepapers/sysadmin/backup-rotations-final-defense-305
Mendelson E. & Muchmore M. (2013) 'The Best Backup Software' PCMag Australia, 28 March 2013, at http://au.pcmag.com/backup-products/9607/feature/the-best-backup-software
Mitnick K.D. & Simon W.L. (2003) 'The Art of Deception: Controlling the Human Element of Security' Wiley, 2003
Muchmore M. (2013) 'Disaster-Proof Your Data with Online Backup' PCMag Australia, 30 March 2013, at http://au.pcmag.com/backup-products-1/9603/feature/disaster-proof-your-data-with-online-backup
Müller H. & Freytag J.-C. (2003) 'Problems, Methods and Challenges in Comprehensive Data Cleansing' Technical Report HUB-IB-164, Humboldt-Universität zu Berlin, Institut für Informatik, 2003, at http://www.informatik.uni-jena.de/dbis/lehre/ss2005/sem_dwh/lit/MuFr03.pdf
NIST (2012) 'Guide for Conducting Risk Assessments' National Institute of Standards and Technology, Special Publication SP 800-30 Rev. 1, September 2012, at http://csrc.nist.gov/publications/nistpubs/800-30-rev1/sp800_30_r1.pdf
OECD (1980) 'Guidelines on the Protection of Privacy and Transborder Flows of Personal Data' OECD, Paris, 1980, mirrored at http://www.rogerclarke.com/DV/OECDPs.html
OECD (2002) 'OECD Guidelines for the Security of Information Systems and Networks: Towards A Culture Of Security' Organisation For Economic Co-Operation And Development, July 2002, at http://www.oecd.org/dataoecd/16/22/15582260.pdf
Parker D.B. (1998) 'Fighting Computer Crime' John Wiley & Sons, 1998
Peffers K., Tuunanen T., Rothenberger M.A. & Chatterjee S. (2007) 'A Design Science Research Methodology for Information Systems Research' Journal of Management Information Systems 24, 3 (Winter 2007-8) 45-77
van der Pijl G. (1994) 'Measuring the strategic dimensions of the quality of information' Journal of Strategic Information Systems 3, 3 (1994) 179-190
Piprani B. & Ernst D. (2008) 'A Model for Data Quality Assessment' Proc. OTM Workshops (5333) 2008, pp 750-759
Preston W.C. (2007) 'Backup & Recovery' O'Reilly Media, 2007
Saltzer J. & Schroeder M. (1975) 'The protection of information in computer systems' Proc. IEEE 63, 9 (1975), pp. 1278-1308
Strom S. (2010) 'Online Backup: Worth the Risk?' SANS Institute, May 2010, at http://www.sans.org/reading-room/whitepapers/backup/online-backup-worth-risk-33363
Tapscott D. & Williams A.D. (2006) 'Wikinomics: How Mass Collaboration Changes Everything' Portfolio, 2006
TOB (2012) 'Types of Backup' typesofbackup.com, June 2012, at typesofbackup.com
Toffler A. (1970) 'Future Shock' Pan, 1970
Toffler A. (1980) 'The Third Wave' Pan, 1980
Wang R.Y. & Strong D.M. (1996) 'Beyond Accuracy: What Data Quality Means to Data Consumers' Journal of Management Information Systems 12, 4 (Spring, 1996) 5-33
The assistance of Russell Clarke is gratefully acknowledged, in relation to conception, detailed design and implementation of backup and recovery arrangements for the author's business and personal needs, and for review of a draft of this paper.
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.