Roger Clarke
Principal, Xamax Consultancy Pty Ltd, Canberra
Visiting Fellow, Department of Computer Science, Australian National University
Version of 9 June 2001
This paper appeared in a Special Issue of the UNSW L. J. 24, 1 (July 2001) 290-297
© Xamax Consultancy Pty Ltd, 2001
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.anu.edu.au/people/Roger.Clarke/DV/eTrust.html
Electronic relationships are only effective, and electronic transactions are only conducted, if the requisite degree of trust exists among the parties. The focus of this article is on the role of privacy in generating trust in cyberspace, and primarily on economic relationships in cyberspace rather than those of a familial or social nature.
For inter-personal communications on the net, trust is achievable. This article commences by reviewing first the Internet and its use, and then the nature of trust online, concluding that privacy is a factor necessary to trust in cyberspace. A brisk analysis of privacy risks in cyberspace, and of the methods available for dealing with them, leads into an argument that, while individuals can devise means of protecting their privacy and encouraging trust in their online dealings with one another, the minimalist `fair information practices' movement of the last thirty years is utterly inadequate as a basis for providing net-consumers with the privacy they need. The current situation, which has resulted from that movement, therefore prevents individuals from trusting organisations and seriously constrains their preparedness to deal with them electronically. This article concludes that, unless organisations establish their trustworthiness with consumers and citizens, the sluggishness in the growth of e-commerce will continue for years to come.
The Internet is a telecommunications network that links other telecommunication networks; its purpose is to enable computers that are attached to any Internet-connected network to communicate with one another. Thus the Internet represents an infrastructure that provides a basis upon which valuable services can be built. Important among these services are those that enable people to send one another messages (for example, via email), or to store information that other people can retrieve (for example, on the World Wide Web).[1]
These and other services together create an `experience space', in which people have a `shared hallucination'. While there is nothing physically `there', if the parties suspend their disbelief, and perceive themselves as having a sufficiently common understanding based on the information they are exchanging, then it seems as if there is, in fact, `something there'. The science fiction novelist William Gibson coined the term `cyberspace' to refer not to the underlying inter-networking arrangements of the Internet, nor to the services built upon that infrastructure, but to the virtual experience that users share.
Trust can be defined in many ways. In my ongoing research and consultancy in this area, I use the following working definition: trust is confident reliance by one party on the behaviour of other parties.
Trust differs depending on the relationship between the parties. Economic relationships may be direct, as in principal-agent and contractual relationships. In many cases, however, a party may rely on another party despite having no formal relationship with them, or even much knowledge about them. (Examples from cyberspace include unthinking acceptance of the veracity of the contents of an email message or a web page.)
Trust may be relatively unimportant where the risks to which the parties are exposed are limited and the period of exposure is short, or where the risks are well understood and insurance against them is factored into the costs. Where such factors do not exist, trust tends to be crucial for transactions to take place and relationships to develop.[2]
When business interests finally discovered the Internet in the mid-1990s, it was assumed that electronic commerce would explode. It hasn't. The primary reason is that cyberspace evidences many characteristics that render trust very important, and business has signally failed to address the trust gap.
A key reason for trust being a substantially different challenge in cyberspace (in comparison with the physical world) is that the parties have little knowledge about one another, and cannot depend on such confidence-engendering measures as physical proximity, handshakes, body language, a common legal jurisdiction, or even necessarily any definable jurisdiction.[3] A range of measures is needed to inculcate sufficient confidence in Internet users that economic transactions can proceed and relationships can be built online. These measures include the availability of information that can be authenticated, recommendations from trusted parties (as distinct from ersatz, engineered proxies for reputation, such as brand names and `seals of approval'), message and data security, limitation of risk exposure, and other general safeguards against risk. This article focuses on a particularly important factor relevant to encouraging trust online: ensuring users' privacy.
There are many sources of risk in cyberspace, including the other individuals with whom one deals, the providers from whom services are acquired, the government agencies and corporations with which one transacts, and government regulatory agencies. The risks faced include monitoring of a person's communications, leakage of information to parties that may seek to exploit it, psychological pressure arising from aggressive communications, the construction of a digital persona that represents an individual, and use of that persona by others to predict and manipulate that individual's behaviour.[4]
However, a variety of risk management approaches are available to users. A principal proactive strategy is avoidance, which can involve declining to use particularly threatening Internet services (such as Microsoft products generally), avoiding central storage of personal data, not divulging sensitive personal data such as contact points and credit card details, and storing sensitive data and performing sensitive procedures on equipment that is not connected to the Internet. Other proactive strategies include deterrence (for example, providing notice to marketing organisations that are suspected of gathering personal data that consent is explicitly denied), and prevention (for example, by implementing counter-measures, such as `cookie' managers and personal `firewalls').[5] Additional approaches are reactive in nature: detection strategies include virus detection software and monitoring of the traffic leaving one's own machine; recovery strategies include virus removal routines; and insurance strategies include backup of personal data complemented by clear plans as to how to recover from an invasion by harmful software.
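As a concrete illustration of the detection strategy, the short Python sketch below lists the established network connections currently leaving one's own machine and the programs responsible for them. It assumes the third-party psutil package is installed, may require elevated privileges on some platforms to see every process, and is an illustrative sketch rather than a complete monitoring tool.

```python
# Sketch of the 'detection' strategy: show which programs on this machine
# currently hold established connections to remote hosts.
# Assumes the third-party psutil package (pip install psutil).
import psutil

for conn in psutil.net_connections(kind="inet"):
    # Only report live, outward-facing connections.
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    try:
        program = psutil.Process(conn.pid).name() if conn.pid else "unknown"
    except psutil.Error:
        program = "unknown"
    print(f"{program:<20} -> {conn.raddr.ip}:{conn.raddr.port}")
```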
In some circumstances it may be rational to rely on the non-reactive strategy of risk tolerance: `I don't have the time to consider it, or the money to address it, and if the worst happens, I'll worry about it then'.
Given the privacy risks confronting people in cyberspace, however, caution is generally advisable. A tendency therefore arises among experienced players to adopt a proactive avoidance strategy that includes denying other parties knowledge of one's identity, denying other parties information about oneself generally, and perhaps even falsifying information about oneself.[6] The following sections consider the efficacy of some approaches taken by individuals and by organisations (including governments and corporations) to privacy protection in fostering trust in cyberspace.
A fundamental approach adopted by individuals to managing privacy risk in cyberspace is to prevent other parties gaining knowledge of one's identity. Since most online transactions involve a succession of messages, it is essential that the anonymous participant be able to be reached by other parties, and be able to associate new messages with old ones. This requires a consistent identity, referred to as a `persistent nym'.[7]
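One simple way of realising a persistent nym is sketched below in Python, under illustrative assumptions (the master secret and forum names are hypothetical, and real nym services add further safeguards): a stable identifier is derived from a secret known only to the individual, so that messages within one forum can be linked to the same persona, while the nym itself discloses nothing about the person's civil identity and is unlinkable across fora.

```python
# Sketch only: derive a stable, unlinkable pseudonym per forum from a
# private master secret, using nothing but the Python standard library.
import hashlib
import hmac

def persistent_nym(master_secret: bytes, forum: str) -> str:
    """Same (secret, forum) pair -> same nym every time; different fora ->
    different, unlinkable nyms; the nym reveals nothing about the holder."""
    digest = hmac.new(master_secret, forum.encode("utf-8"), hashlib.sha256)
    return "nym-" + digest.hexdigest()[:16]

secret = b"known only to the individual"          # hypothetical secret
print(persistent_nym(secret, "alt.privacy"))      # consistent within a forum
print(persistent_nym(secret, "a-trading-forum"))  # different, unlinkable nym
```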
It is also likely that information about the entity behind the nym will be disclosed to other participants, at the very least through the nym's behaviour. Indeed, many social relationships in electronic fora involve what is sometimes referred to as `performance-based reputation'. Nothing is known about the person other than their `track-record' or history in that particular context, yet other members of the forum may be quite trusting of the person behind the nym, unless and until they destroy their own credibility through behaviour or expression inconsistent with the persona they have developed.
In many cases, however, denial of identity protects one party while preventing the other party from developing trust through shared information. An approach that is riskier for the first party, but more conducive to the development of trust, is to use a nym that is traceable, but not readily so. Other parties can then have some confidence that serious misbehaviour by a person (for example, criminal acts like harassment and fraud, and civil wrongs like failure to perform contractual obligations and insolvency) can be addressed by breaking through the protections surrounding the nym and identifying the individual.
The challenge is to find means whereby legal, organisational and technical protections can be breached when conditions demand it, but not breached casually, even by a powerful organisation (such as a government or a large corporation) simply when the organisation believes its interests have been harmed.
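One technical arrangement of this kind is a simple form of identity escrow: the nym-holder's civil identity is lodged, encrypted under a trusted third party's public key, so that it can be recovered only with that trustee's co-operation (for example, under a court order). The Python sketch below illustrates the idea under stated assumptions: it uses the third-party cryptography package, the trustee and the identity string are hypothetical, and a real scheme would add contractual and procedural controls around the unmasking step.

```python
# Sketch of identity escrow for a traceable-but-not-readily-traceable nym.
# Assumes the third-party 'cryptography' package (pip install cryptography).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# In practice the trustee generates and guards this key; it is created
# here only so that the sketch is self-contained.
trustee_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Lodged with the forum operator alongside the persistent nym: neither the
# operator nor other users can read it.
escrowed_identity = trustee_key.public_key().encrypt(
    b"Jane Citizen, 1 Example Street, Canberra", OAEP)  # hypothetical identity

# Only the trustee can unmask the nym, and only when the agreed conditions
# (e.g. a court order) are satisfied.
print(trustee_key.decrypt(escrowed_identity, OAEP))
```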
The mainstream approach to engendering trust by ensuring privacy has been through the protection of personal data. For three decades, the presumption has been made that the right to privacy, or at least that sub-set of the right to privacy reasonably described as `information privacy' or `data privacy', can be suitably addressed by requiring that practices in relation to the handling of personal information be `fair'.
The `fair information practices' movement originated in American business and government circles in the late 1960s, but flowered in Europe during the 1970s. Substantial bodies of so-called `data protection' laws have developed as a result and are still being refined. The model has been adopted and adapted in many non-European countries, resisted by the United States Federal Government, and bastardised by the Australian Government.
The notion of `fair information practices' has proven to be utterly inadequate, with inadequate scope, manifold exemptions and exceptions, and missing control mechanisms.[8] It has become so engrained, however, that the focus of public policy is very difficult to shift away from the protection of mere data, back to the protection of people's privacy.
In the meantime, organisations continue to enthusiastically develop and implement inherently privacy-invasive technologies, for example by seeking to impose intrusive online identification and identity-authentication mechanisms,[9] person location and tracking technologies,[10] and controversial `digital signature' schemes.[11]
For privacy protections to be of any consequence, miscreants must be subject to sanctions. The sanctions need to be commensurate with the gravity of the action taken and the harm caused, and applied in such a manner as to cause a change in behaviour. There is an expectation that `watchdog' agencies will assume this enforcement function, and will audit for compliance, address deficiencies and misbehaviour, and prosecute breaches.
However, many privacy-abusive activities are subject only to organisational self-restraint and industry association codes. So-called `self-regulation' is regarded by the public as completely lacking in credibility. Measures like meta-brands (for example, the `seals of approval' provided by TRUSTe and WebTrust) and privacy statements are repeatedly breached (and seen to be breached) without any action being taken: the undertakings made are therefore nominal, unenforced, and in most cases unenforceable. Self-regulation is seen by the public for what it is: supervision of the sheep by the wolves, for the benefit of the wolves, and a means for business to establish a pretence of regulation in order to hold off actual regulation.
European countries at least have a regulatory framework in place, even though its scope is quite inadequate for the `information age' that was already very much in evidence late last century. Australia, however, is very different. Federal Privacy Commissioners seem to regard their role as restricted to that of a mere administrator of legislation. They talk pleasantly with the organisations that the public expects them to regulate, and they issue guidelines in relation to Internet usage that actively encourage organisations to invade their employees' privacy in ways that would be illegal if applied to person-to-person conversations and the telephone.
Far from enhancing trust between individuals, and between individuals and organisations, recent Australian legislation (in the form of the Privacy Amendment (Private Sector) Act 2000 (Cth)) has subverted the principles of privacy protection outlined in the 1980 OECD Guidelines Governing the Protection of Privacy and Transborder Flows of Personal Data in order to legitimise a wide variety of privacy-intrusive practices by private sector corporations. This law is an actively `anti-privacy' statute. The Privacy Commissioner's laudable attempt to interpret the statute broadly enough to overcome some of its weaknesses is unlikely to succeed. The Act is a serious setback to hopes for privacy generally, and for trust in Internet commerce in particular, and it demonstrates the inadequacy of the `fair information practices' movement.[12] At a time when substantial new initiatives are needed, Australia is thirty years behind and going backwards.
Contrary to popular mythology, the United States is the country with the highest level of privacy regulation in the world. However, the relevant legislation comprises large numbers of highly specific statutes, created as `knee-jerk' reactions to particular issues and public concerns. Comprehensive legislation is still being resisted, and, when it comes, will be subject to massive subversion by the corporate interests that fund American politicians. Yet the desperate need for measures that encourage trust in economic uses of cyberspace will eventually force the hand of the U.S. Congress and the President.[13]
Legislatures throughout the world have failed their citizens by providing weak protections for data instead of strong protections for people. In any case, the scope of privacy is far greater than just information privacy. Other dimensions that are of great significance in cyberspace dealings are the privacy of personal behaviour and the privacy of personal communications.
Organisations, and some individuals, are using the potential that Internet technologies provide to abuse these aspects of privacy by subjecting users to privacy-invasive surveillance measures. Technical devices such as `click-trails', `cookies' and single-pixel images (referred to in the popular literature as `web-bugs') are used to complement simpler techniques, such as cajoling net-consumers into providing large quantities of personal data in return for very little recompense, and the pooling of behaviour-related data among companies.
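The mechanics are straightforward. The Python sketch below, which uses only the standard library and entirely hypothetical host and parameter names, shows how a single-pixel image embedded in a web page causes each viewer's browser to report the visit, the referring page and any previously set cookie to a tracking server.

```python
# Sketch of a single-pixel 'web bug'. A page embeds a tag such as
#   <img src="http://tracker.example.com:8000/pixel.gif?page=pricing&user=12345">
# and this server records who fetched it and from where. Hypothetical names;
# Python standard library only.
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal 1x1 transparent GIF.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        # The 'observation' is the request itself: address, query string,
        # referring page, and any cookie set on an earlier visit.
        print("hit:", self.client_address[0], self.path,
              "| referer:", self.headers.get("Referer"),
              "| cookie:", self.headers.get("Cookie"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("", 8000), TrackingPixel).serve_forever()
```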
Social relationships in cyberspace are modestly constrained by trust concerns. Economic relationships between individuals and organisations in cyberspace, on the other hand, are at crisis point. The behaviour of marketers during the closing years of the last century and the beginning of the new one has been so irresponsible that people place very limited trust in the actions of companies on the Internet. It will take a long time for trust to be re-built, and many factors will be relevant. But unless the framework within which it is re-built features strong and comprehensive privacy laws, and systematic enforcement of those laws, corporations and government agencies will not succeed in stimulating trust in cyberspace commerce.
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in Computer Science at the Australian National University.