Principal, Xamax Consultancy Pty Ltd, Canberra
Visiting Fellow, Department of Computer Science, Australian National University
Version of 30 April 1999
© Xamax Consultancy Pty Ltd, 1999
This article was prepared for presentation at the User Identification & Privacy Protection Conference, Stockholm, 14-15 June 1999
An Extended Abstract of this paper is at http://www.rogerclarke.com/DV/UIPP99EA.html
This document is at http://www.rogerclarke.com/DV/UIPP99.html
When designing their information systems, organisations are commonly assumed to be confronted with a stark choice between identification of their clients on the one hand and anonymity on the other. Identification assures accountability but threatens privacy, whereas anonymity protects privacy but undermines accountability.
This article identifies and explains the spectrum of intermediate alternatives, which involve pseudonymity and varying degrees of authentication. The selection of appropriate alternatives, in most circumstances intermediate between the extremes of identification and anonymity, is argued to be critical to the establishment of trust in electronic commerce, and to the maintenance of social and democratic freedoms.
This paper addresses several related questions. Under what circumstances is it appropriate for a party to a transaction to know the identity of the other party? What degree of confidence does a party need as to whether that identity is correct? What is the scope for transactions and relationships to be anonymous or pseudonymous, rather than identified?
Discussions about this topic frequently embody the presumption that a stark choice exists between accountable identification and irresponsible anonymity. In fact, a rich spectrum of alternatives exists. This paper's purpose is to present that range of alternatives, and to argue for much better informed and more careful selection among them.
This article commences by arguing that this issue is vital to the survival of civic freedoms. It introduces and defines the relevant concepts, and describes common techniques. It then identifies the spectrum of approaches that is available to organisations when they design their information systems, and suggests how organisations can determine which of the spectrum of possibilities is appropriate in their particular circumstances. The concepts are then applied to the specific context of the Internet.
The power of modern information technology, combined with the intensity of personal data held by organisations, represents a dire threat to personal freedoms. Clarke (1988) argued that the sole remaining barrier against control by the State was the difficulty of consistently identifying people. During the decade since then, many attempts have been made to establish national identification schemes, to extend existing multiple-purpose schemes, and to correlate among multiple identification schemes. As the public has increasingly appreciated the dangers, these attempts have met considerable resistance (PI 1997).
The intensity of data collections about people has been increasing significantly (Clarke 1996a). There has also been an increasing tendency for organisations to try to convert anonymous transactions (such as visits to counters, telephone enquiries and low-value payments) into identified transactions. The reasons for this include:
The privacy interest runs emphatically counter to these attempts to increase data collections, to convert anonymous into identified transactions, to correlate identifiers, to impose general-purpose identifiers, and generally to apply industrial control techniques to the control of people.
Since 1970, legislatures have responded to the threats to privacy by creating data protection regimes. These are, however, based on a model developed in the 1960s and 1970s, which is commonly referred to as 'Fair Information Practices'. This approach has maximised convenience to government and business, and has signally failed to provide protection against rampant information-intensive procedures, data-collection and data-sharing (Clarke 1999b). Of the many aspects of privacy protection that are at present inadequately addressed, I argued in Clarke (1999c) that the most critical policy imperatives for the coming century are the following:
The first of these imperatives denies the increasingly common assumption by the State and corporations that individuals are to be subjected to the same efficiency-engendering technologies as are appropriate to manufactured objects. The second denies the State and corporations alike the ability to consolidate personal data through the use of common identifiers. It further stresses the central importance of pseudonymity as a primary means of achieving the necessary balance between the needs for privacy and for accountability.
This section identifies and defines each of the concepts used in the analysis, and relates this author's definitions to those used by other writers.
Identification is a process whereby a real-world entity is recognised, and its 'identity' established. Real-world entities include industrial equipment (like lifts and elevators, trucks and cranes); consumer durables (like ovens and refrigerators); packages, parcels and pallets; livestock and pets; organisations; and people. The focus in this article is on organisations and people.
Dictionary definitions of the concept of identity are awkward (e.g. "the condition of being a specified person" - Oxford, and "the condition of being oneself ... and not another" - Macquarie). The notion is operationalised in the abstract world of information systems as a set of information about an entity that differentiates it from other, similar entities. The set of information may be as small as a single code, specifically designed as an identifier, or may be a compound of such data as given and family name, date-of-birth and postcode of residence. An organisation's identification process comprises the acquisition of the relevant identifying information.
Contrary to the presumptions made in many information systems, an entity does not necessarily have a single identity, but may have multiple identities. For example, a company may have many business units, divisions, branches, trading-names, trademarks and brandnames. And many people are known by different names in different contexts. In some cases, the intention is dishonourable or criminal; but in most cases the adoption of multiple personae is neither, but rather reflects the multiple roles people play in such contexts as their family, their workplace(s), their profession, community service and art (Clarke 1994a). The laws of countries such as Britain and Australia in no way preclude such multiple identities (or aliases, or aka's - for 'also known as'). An act of fraud that depends on the use of multiple or situation-specific identities is, on the other hand, a criminal offence.
The identification of humans is comprehensively addressed in Clarke (1994b), and that of organisations in Clarke (1999f).
Authentication is a general term referring to the process whereby a degree of confidence is established about the truth of an assertion.
A common application of the idea is to the authentication of identity (Clarke 1995, 1996e). This is the process whereby an organisation establishes that a party it is dealing with is:
The nature of identity, identifiers, and identification processes is such that authentication is never perfect, but is more or less reliable. It is useful to distinguish degrees of assurance about identity. High-reliability authentication processes are generally costly to all parties concerned, in terms of money, time, convenience and intrusiveness. Organisations generally select a trade-off among the various costs, reliability, and acceptability to the affected individuals.
Although the term 'authentication' is often used as though it related only to identity, it is a much more general concept. There are many circumstances in which organisations undertake authentication of value, e.g. by checking a banknote for forgery-resistant features like metal wires or holograms, or by seeking pre-authorisation of credit-card payments.
Another approach is the authentication of attributes, credentials or eligibility. In this case, it is not the person's identity that is in focus, but rather the capacity of that person to perform some function, such as being granted a discount applicable only to tradesmen or club-members, or a concessional fee only available to senior citizens or school-children, or entry to premises that are restricted to adults only.
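The distinction can be illustrated with a small sketch (in Python, with entirely hypothetical names): the verifier checks a signed assertion about the holder's eligibility, and at no point learns an identity. An issuer-held HMAC key stands in here for the digital signature that a real credential scheme would use.

```python
# Sketch of attribute (credential) authentication: eligibility is
# established without any identity being disclosed. The issuer's HMAC
# key stands in for a digital signature; all names are hypothetical.

import hmac, hashlib

issuer_key = b"licensing-authority-key"   # held by the credential issuer

def issue_credential(attribute):
    """The issuer attests to an attribute, not to an identity."""
    tag = hmac.new(issuer_key, attribute.encode(), hashlib.sha256).hexdigest()
    return attribute, tag

def check(attribute, tag, required):
    """The verifier confirms the attestation and the required eligibility."""
    expected = hmac.new(issuer_key, attribute.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and attribute == required

cred = issue_credential("age-over-18")
assert check(*cred, required="age-over-18")        # entry granted, no name given
assert not check("age-over-18", "forged", required="age-over-18")
```

The design choice worth noting is that the verifier's decision depends only on the attested attribute, so the same credential can be presented anonymously or under a nym.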
A further, emergent challenge is the delegation of powers to artificial intelligences or software agents. This already occurs in automated telephone, fax and email response; automated re-ordering; and 'program trading'. Subject to some qualifications, legislatures and courts may be becoming willing to accept these acts as being binding on the entity concerned, at least under some circumstances. (A counter-example is, however, the requirement embodied in Article 15 of the European Directive of 1995 that decisions adverse to a person must be reviewed by a human agent prior to being implemented).
This section builds on the concepts introduced above, in order to distinguish between identified, anonymous and pseudonymous transactions. Based on the above discussion:
An identified record or transaction is one in which the data can be readily related to a particular individual. This may be because it carries a direct identifier of the person concerned, or because it contains data which, in combination with other available data, links the data to a particular person.
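The point about indirect identification can be illustrated with a small, entirely hypothetical sketch: a transaction log that carries no direct identifier is nonetheless linkable to named individuals by joining its quasi-identifiers (here date-of-birth and postcode) against some other available data collection.

```python
# Illustrative sketch: a record with no direct identifier can still be
# 'identified' when its quasi-identifiers match other available data.
# All names and values here are hypothetical.

transactions = [  # a nominally de-identified transaction log
    {"dob": "1961-03-14", "postcode": "2600", "item": "medication-X"},
    {"dob": "1975-11-02", "postcode": "2913", "item": "groceries"},
]

public_register = [  # e.g. an electoral roll or directory
    {"name": "J. Citizen", "dob": "1961-03-14", "postcode": "2600"},
    {"name": "A. Resident", "dob": "1980-01-01", "postcode": "2600"},
]

def link(transactions, register):
    """Re-identify transactions whose (dob, postcode) pair matches
    exactly one register entry."""
    linked = []
    for t in transactions:
        matches = [p for p in register
                   if (p["dob"], p["postcode"]) == (t["dob"], t["postcode"])]
        if len(matches) == 1:  # a unique match identifies the record
            linked.append((matches[0]["name"], t["item"]))
    return linked

print(link(transactions, public_register))
```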
At the other extreme from identification is the negation of identification in transaction details:
An anonymous record or transaction is one whose data cannot be associated with a particular individual, either from the data itself, or by combining the transaction with other data.
A great many transactions that people undertake are anonymous, including:
Some of the reasons that people use anonymity are of dubious social value, such as avoiding detection of their whereabouts in order to escape responsibilities. Other reasons are of arguably significant social value, such as avoiding physical harm, enabling 'whistle-blowing', avoiding unwanted and unjustified public exposure, and keeping personal data out of the hands of intrusive marketers and governments.
Some categories of transactions, however, are difficult to conduct on an anonymous basis, without one or perhaps both of the parties being known to the other. Examples of transactions where an argument for identification may be tenable include:
Even in some of these circumstances, however, designs, prototypes and even operational schemes exist, that enable protection of the parties' interests without disclosure of the person's identity. See, for example, the anonymous credit-card scheme of Low et al. (1996).
Beyond anonymous and identified transactions, an additional alternative exists:
A pseudonymous record or transaction is one that cannot, in the normal course of events, be associated with a particular individual.
Hence a transaction is pseudonymous in relation to a particular party if the transaction data contains no direct identifier for that party, and can only be related to them in the event that a very specific piece of additional data is associated with it. The data may, however, be indirectly associated with the person, if particular procedures are followed, e.g. the issuing of a search warrant authorising access to an otherwise closed index.
To be effective, pseudonymous mechanisms must involve legal, organisational and technical protections, such that the link can only be made (e.g. the index can only be accessed) under appropriate circumstances.
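Such an arrangement might be sketched as follows. The design, including the 'warrant' token, is hypothetical, and the legal and organisational protections are of course not reducible to code; the point is only that ordinary processing sees the nym alone, while the closed index is consulted solely under an authorising condition.

```python
# Sketch of a pseudonymity-supporting index (hypothetical design):
# transaction data carries only the nym; the nym-to-identity index is
# held separately and can be read only when an authorising condition
# (modelled here as a single-use warrant token) is presented.

import secrets

class PseudonymIndex:
    def __init__(self):
        self._index = {}        # nym -> identity (the 'closed index')
        self._warrants = set()  # tokens issued by an authorising body

    def register(self, identity):
        nym = secrets.token_hex(8)   # opaque identifier used in transactions
        self._index[nym] = identity
        return nym

    def issue_warrant(self):
        """Stands in for the legal/organisational authorisation step."""
        token = secrets.token_hex(16)
        self._warrants.add(token)
        return token

    def reveal(self, nym, warrant):
        """Lift the veil only if a valid, unused warrant is presented."""
        if warrant not in self._warrants:
            raise PermissionError("no valid authorisation to access the index")
        self._warrants.discard(warrant)   # single-use
        return self._index[nym]

idx = PseudonymIndex()
nym = idx.register("J. Citizen")
# Ordinary processing sees only `nym`; identity is recoverable solely via:
warrant = idx.issue_warrant()
assert idx.reveal(nym, warrant) == "J. Citizen"
```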
Pseudonymity is used in some situations to enable conflicting interests to be satisfied; for example in collections of highly sensitive personal data such as that used in research into HIV/AIDS transmission. It is capable of being applied in a great many more situations than it is at present.
Generalising from this example, pseudonymity is used to enable the protection of individuals who are at risk of undue embarrassment or physical harm. Categories of such people include 'celebrities' and 'VIPs' (who are subject to widespread but excessive interest among sections of the media and the general public, including 'stalkers'), 'battered wives', protected witnesses, and people in security-sensitive occupations.
Another application of pseudonymity is to reflect the various roles that people play. For example, on any one day, a person may act as their private selves, as an employee of an organisation, as an officer of a professional association, and as an officer of a community organisation. In addition, a person may have multiple organisational roles (e.g. substantive position, acting position, various roles on projects and cross-organisational committees, bank signatory, first-aid officer and fire warden), and multiple personal roles (e.g. parent, child, spouse, scoutmaster, sporting team-coach, participant in professional and community committees, writer of letters-to-the-newspaper-editor, chess-player, and participant in newsgroups, e-lists, chat-channels).
In the case of identified transactions, the parties are known persons. With anonymous and pseudonymous transactions, a party has an unknown, and even unknowable, person behind it. A term is needed to refer to such parties.
One possibility is 'pseudo-identifier'. In Clarke (1993), the term 'digital persona' was coined for a purpose very similar to this, and in Clarke (1994a) it was defined as "a model of an individual's public personality, based on data, maintained by transactions, and intended for use as a proxy for the individual". At about the same time, the term 'e-pers' (an abbreviation of electronic persona) was suggested. During the last couple of years, the term 'nym' has been in use to refer to a pseudo-identity that arises from anonymous and pseudonymous dealings (e.g. McCullagh 1998-).
The relationship between a nym and the underlying entity may be 1:1, or an entity may have many nyms (1:n). In addition, allowance must be made for many entities to use the same nym (n:1).
Technologies exist and are being developed to support multiple digital personae for a single entity. For example, a very primitive one is the concept of a 'profile' supported by Version 4 of Netscape Navigator. More sophisticated technologies are identified at Clarke (1999c), Clarke (1999d) and Clarke (1999e). Under such arrangements, a person sustains separate relationships with multiple organisations, using separate identifiers, and generating separate data trails. These can be designed to be very difficult to link; or they can be designed to provide a mechanism whereby the veil of pseudonymity can be lifted, subject to appropriate conditions being fulfilled.
In addition, a person may be able to establish multiple relationships with the same organisation, with a separate digital persona for each relationship. This may be to reflect the various roles the person plays when it interacts with that organisation (e.g. contractor, beneficiary, share-holder, customer, lobbyist, debtor, creditor). Alternatively, it may merely be to put at rest the minds of people who are highly nervous about the power of organisations to bring pressure to bear on them.
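One such mechanism (sketched here hypothetically, in the spirit of what have since come to be called 'pairwise' identifiers) derives a different, stable nym for each organisation from a master secret held by the person, so that the resulting data trails share no common identifier:

```python
# Sketch of per-relationship pseudonyms: each organisation sees a
# different, stable nym derived from the person's master secret, so
# separate data trails cannot be joined on a common identifier.
# All names here are hypothetical.

import hashlib, hmac

master_secret = b"known only to the person (or their agent)"

def nym_for(org):
    # Keyed hash: stable per organisation, unlinkable across organisations
    return hmac.new(master_secret, org.encode(),
                    hashlib.sha256).hexdigest()[:16]

bank_nym = nym_for("example-bank")
shop_nym = nym_for("example-shop")
assert bank_nym != shop_nym                 # trails share no identifier
assert bank_nym == nym_for("example-bank")  # each relationship is stable
```

Whether the veil can later be lifted then depends on who, if anyone, holds the master secret in escrow, which is precisely the design choice between privacy-enhancing and pseudonymity-supporting arrangements.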
In the contemporary contexts of highly data-intensive relationships, and Internet-mediated communications, pseudonymity and multiple nyms are especially important facets of human identification and information privacy.
The definitions provided in this section have been used by this author (with variations) since 1993. Many other authors are non-specific in their usage of the various terms, sometimes appearing not to appreciate the distinction drawn by this author between anonymity and pseudonymity. In other cases, the term 'anonymity' is used generically to cover both what is called here 'anonymity' and what is called here 'pseudonymity' (leaving no term that distinguishes anonymity in the strict sense).
Some other authors are specific about their meanings, but their usage differs from that adopted by this author. Froomkin (1995) uses 'untraceable anonymity' for what I define as 'anonymity', and distinguishes 'traceable anonymity', where a person's identity is known but their profile is not (which I would call 'pseudonymity without an associated profile'). In addition, Froomkin uses 'traceable pseudonymity' for what I call 'pseudonymity', and coins 'untraceable pseudonymity' for what I would refer to as 'anonymity using a persistent nym'. Neumann (1996) uses the term 'pseudo-anonymity' for what I call 'pseudonymity'. He applies 'pseudonymity' to a situation in which multiple nyms may be used by the same person, where each nym may be (in my terms) either anonymous or pseudonymous.
I suggest that the Froomkin and Neumann terminologies are less straightforward and useful than those in this paper and its predecessors.
This section outlines methods that are commonly used in authentication, anonymity and pseudonymity. A later part of the paper focuses on the Internet specifically. This section, on the other hand, is generic. It is divided into segments dealing with each of the circumstances discussed above:
Historically, a person has been identified in a variety of ways, and authentication has involved some additional procedures. Some examples of practical approaches are:
It is important to appreciate that a foolproof scheme cannot be devised, that all schemes depend on tokens that are subject to the 'entry-point paradox' (Clarke 1994b), and that the term 'proof of identification' is seriously misleading and should be avoided in favour of the term 'evidence of identity'.
Hand-written signatures are a low quality basis for authentication, because they are very easily 'spoofed' (i.e. a falsified signature used in order to create the impression of a valid one) and very easily 'repudiated' (i.e. the validity of a signature unreasonably denied). The generic concept of an 'electronic signature' may be capable of delivering far greater quality than hand-written signatures (though it also is likely to be imperfect). The strongest such contender at present is a so-called 'digital signature', based on public key cryptographic techniques.
For a digital signature to be of high quality (i.e. not readily subject to spoofing and repudiation), it needs to be generated using a 'private key' that is held under highly secure conditions by the person concerned, with trustworthy 'certification authorities' issuing certificates vouching for the association between the person and their 'public key', and with the onus of proof, and the assignment of risk in the event of error, made clear through statute law and/or contract. In order to support this complex arrangement, Public Key Infrastructure (PKI) is having to be established (Clarke 1996b).
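The sign-and-verify mechanics can be demonstrated with 'textbook' RSA using deliberately tiny numbers. This is an illustration only: real digital signatures use keys thousands of bits long, padding schemes, and vetted library implementations, never the bare arithmetic shown here.

```python
# Toy 'textbook RSA' signature with deliberately tiny numbers, showing
# only the sign/verify mechanics -- wholly insecure as written.

import hashlib

# Key generation (tiny primes, illustration only)
p, q = 61, 53
n = p * q                  # 3233, the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent
d = pow(e, -1, phi)        # private exponent, 2753 (Python 3.8+)

def digest(message):
    # Reduce the hash into the key's range; a real scheme never does this.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message):
    return pow(digest(message), d, n)        # uses the private key

def verify(message, signature):
    return pow(signature, e, n) == digest(message)   # uses the public key

sig = sign(b"pay 100 to Alice")
assert verify(b"pay 100 to Alice", sig)          # genuine signature passes
assert not verify(b"pay 100 to Alice", (sig + 1) % n)   # tampered sig fails
```

In practice the private exponent would be held on secure hardware, which is precisely the storage problem taken up below.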
The private key is long, and it is impractical for it to be memorised in the way that passwords and PINs are meant to be memorised. There therefore has to be a means whereby the private key is stored; and it has to be secure.
The currently most practical device to support storage of a private key is a chip-card. The private key needs to be protected in some manner, such that only the owner can use it. One approach is to protect it with a PIN or password (which would, of course, undermine the presumed security of the long private key). An emergent approach is for the card itself to refuse access to the private key, except when the card measures some aspect of the holder's physical person, and is satisfied that it corresponds sufficiently closely (using 'fuzzy matching') to the measure pre-stored in the card. Examples of such 'biometrics' include the patterns formed by rods and cones on the retina, and the geometry of the thumb.
The imposition of requirements in relation to authentication by digital signature involves substantial privacy-invasiveness. This was analysed in Greenleaf & Clarke (1997). Policy requirements for privacy-sensitive smart-card applications are expressed in Clarke (1997e) and Clarke (1998b), and for privacy-sensitive PKI in Clarke (1998c).
In conventional business, a range of techniques has been used to check that an act that purports to have been performed by a particular business entity actually has been. Letterhead, and call-back to a number acquired from some other source, are common approaches. The nominally highest-quality authentication of a corporation's identity and actions has been the affixing of the company's seal to a document, over-signed by authorised officers. This is actually of very low quality, because both the seal and the signatures are easy to spoof, and difficult to verify. With the emergence of electronic commerce during the last quarter-century, the requirements for use of a company seal are in the process of being rescinded (and the requisite amendments have already been made to the Australian corporations law).
Electronic signatures in general, and digital signatures in particular, offer the prospect of far higher levels of confidence. A significant difficulty that has to be addressed, however, is that, because a business entity cannot itself act, it is dependent on the actions of one or more humans acting on its behalf. In addition to the security measures needed in respect of a person's own digital signature, further measures are needed, in order to reduce the likelihood of error or fraud by one or more persons, involving misapplication of the business entity's private key.
A particular challenge that organisations need to cope with is agents acting on behalf of a principal. Agents may be organisations or individuals. In many cases there will be a straightforward principal-agent relationship, but in some circumstances there may be a chain of agency relationships, such as a legal firm acting for a regional distributor representing the interests of a remote manufacturer; or an independent assessor acting for an insurer representing a claimant. The final step in the chain of relationships will generally be a natural person, and the scope of that person's powers may be in doubt. For example, a particular employee of a customs agency may be authorised to act on behalf of many of that company's clients, but not the particular client in question.
The claims of a business intermediary to be acting on behalf of another intermediary need to be subjected to testing. Moreover, the claims of a person to be acting on behalf of a business entity (which may itself be acting as an intermediary for another business entity) also need to be tested. Authentication needs to be undertaken of a particular attribute or credential that reflects the agency relationship, such as a power of attorney, or some other form of delegation of power to sign contracts.
Analogous arrangements have been envisaged for the electronic context, applying cryptographic techniques. One approach that can be used is to authenticate the identity of the individual and/or business entity (as discussed in the preceding sub-sections), and then check some kind of register of identities authorised to act on behalf of the relevant business entity. This presumes that a sufficiently secure infrastructure exists whereby that authorisation register can be established, maintained, and accessed. However, the approach has the advantage of building on the (currently emergent) public key authentication infrastructure. The register might even be implemented in distributed fashion, by setting an indicator within the person's own signature chip-card.
Another approach is direct authentication of an authorisation. For example, a business entity's private key could be used to digitally sign a particular kind of instrument, which a recipient could confirm (using the business entity's widely available public key). This would be a more direct mechanism, and would avoid unnecessary declaration and authentication of the identity of the agent. It would, on the other hand, involve some risk of appropriation or theft of what amounts to a bearer instrument.
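A sketch of such an instrument follows. The field names are hypothetical, and an HMAC under the entity's key stands in for the public-key signature a real scheme would use (which understates the 'widely available public key' aspect, but preserves the essential point: the recipient checks the instrument itself, not the agent's identity).

```python
# Sketch of a directly-authenticated delegation instrument. The agent is
# referred to only by a nym; the recipient verifies the entity's
# attestation of the delegated powers. HMAC stands in for a public-key
# signature; all field names are hypothetical.

import hmac, hashlib, json

company_key = b"acme-signing-key"   # held under the business entity's control

def issue_instrument(agent, powers):
    """The entity attests which powers it has delegated to the agent."""
    body = json.dumps({"agent": agent, "powers": sorted(powers)})
    tag = hmac.new(company_key, body.encode(), hashlib.sha256).hexdigest()
    return body, tag

def accept(body, tag, required_power):
    """A recipient checks the instrument, not the agent's identity."""
    expected = hmac.new(company_key, body.encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False                      # forged or altered instrument
    return required_power in json.loads(body)["powers"]

body, tag = issue_instrument("nym:7f3a", ["sign-contracts"])
assert accept(body, tag, "sign-contracts")
assert not accept(body, tag, "operate-bank-account")   # power not delegated
assert not accept(body.replace("sign", "void"), tag, "void-contracts")
```

The bearer-instrument risk noted above is visible in the sketch: anyone holding (body, tag) can present it, so a real design would bind the instrument to the agent's own key or add expiry.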
It is often presumed that an organisation faces a simple choice between a transaction in which the parties are identified, or one in which the parties are anonymous. Recently, an increasing number of authors have begun to recognise that pseudonymity is feasible.
In fact, there is a substantial spectrum of possibilities that arise from the combination of identification, anonymity and pseudonymity on the one hand, and authentication on the other. The spectrum comprises:
A variety of techniques is available to supplement or substitute for assurance in relation to identification. In particular, the organisation can protect its interests through authenticating value, or authenticating the party's attributes or credentials. These can be performed in addition to, or instead of, authentication of the identity or nym.
In addition, authentication of a persistent nym can be an effective means of establishing confidence that a series of communications are with the same person, even though the identity of the person is not reliably known.
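One mechanism capable of supporting this (sketched here hypothetically, in the style of Lamport's one-time-password hash chains) is for the nym-holder to commit to the end of a hash chain and reveal successive preimages: each message demonstrably comes from whoever sent the previous one, while saying nothing about who that is.

```python
# Sketch of persistent-nym authentication via a hash chain: the holder
# publishes the chain's end as a commitment; each later message reveals
# the next preimage, which anyone can check against the previous value.
# Continuity of sender is established without any identity.

import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def make_chain(seed, length):
    chain = [seed]
    for _ in range(length):
        chain.append(h(chain[-1]))
    return chain            # holder keeps this; publishes only chain[-1]

class NymVerifier:
    """Holds only the nym's current commitment, never an identity."""
    def __init__(self, commitment):
        self.commitment = commitment

    def accept(self, preimage):
        if h(preimage) == self.commitment:
            self.commitment = preimage   # the next message must precede this
            return True
        return False

chain = make_chain(b"secret seed", 3)
verifier = NymVerifier(commitment=chain[3])
assert verifier.accept(chain[2])     # message 1: same nym
assert verifier.accept(chain[1])     # message 2: same nym
assert not verifier.accept(b"junk")  # an impostor cannot continue the chain
```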
Technical means that are designed to support anonymity are referred to as Privacy-Enhancing Technologies (PETs). See IPCR (1995) and Burkert (1997). By design, PETs are privacy-protective, but deny the most direct form of accountability: the knowledge that the perpetrator's identity can become known, and hence that retribution is feasible. Pseudonymity-supporting technical measures are referred to by this author as Privacy-Sympathetic Technologies (PSTs). They protect privacy, but also support the direct form of accountability, because the veil can be lifted. These aspects are discussed in greater depth at Clarke (1999c).
Organisations have for too long blundered into identified schemes when this did not reflect the real needs. They need to appreciate the range of alternatives available to them. In order to make a rational choice among them, they need to consider a number of factors.
A first matter of importance is the setting within which the particular transactions take place, and the functional requirements of the particular information system. Within the context defined by the requirements and setting, a risk analysis needs to be conducted, to determine what the harm will be if identities are not gathered, or are inaccurate. It is important to distinguish between military-style notions of 'absolute assurance' and the conventional approach adopted in business and government. The latter treats absolute assurance as unattainable, and in any case extraordinarily expensive and intrusive. Businesses apply risk-management techniques, identifying categories of risk that justify incurring costs, and others that can be satisfactorily addressed merely by monitoring, tolerating and insuring against them. Authentication needs to provide a degree of confidence about identity and/or authorisation commensurate with the risks involved in their being incorrect or falsified, and balanced against the costs, time, convenience and intrusiveness involved.
Some key factors that need to be considered in undertaking this kind of risk analysis are:
It is all too common for organisations to perform risk analyses only from the perspective of the organisation itself. In most circumstances, however, the effectiveness of the system is heavily dependent on the behaviour of others, and especially of the parties to the transactions. It is therefore vital that the interests of all stakeholders be factored into the analysis (Clarke 1992). Identification and authentication are inherently invasive of people's privacy, in all its dimensions (Clarke 1988, 1997c, 1997e). It is especially crucial that the privacy concerns be investigated, appreciated, and taken into account in designing information systems (Clarke 1996d).
The argument pursued above should not be misinterpreted as denying that identification may be appropriate in a few, particular circumstances, such as:
Given the explosion in Internet usage during the second half of the 1990s, it is essential that particular attention be paid to the application of identification, anonymity and pseudonymity in electronic transactions. Email From: addresses provide only a limited amount of information about the sender, and are in any case 'spoofable'; IP-addresses provided by web-browsers do not directly identify the user; and the data stored and transmitted in cookies is not reliable. Moreover, netizens have shown themselves to be very concerned about freedoms and privacy (Clarke 1996a, 1996f, 1997a, 1997b, 1998a, 1998d, 1999); and attempts by an organisation to achieve authentication of an identifier or pseudonym are subject to an array of countermeasures (e.g. CACM 1999, Clarke et al. 1998, Clarke 1998d).
The result is that net transactions are:
For some purposes, the low level of identification and authentication inherent in the Internet creates difficulties. Examples include government agencies that are seeking a formal undertaking by the other party, e.g. a declaration by a taxpayer; and businesses that are seeking to establish a relationship with a customer, perhaps to protect against default by the customer, and perhaps to enable promotional and selling activities to be directed at that customer in the future. There are a great many circumstances, on the other hand, in which the existence of effective anonymity on the Internet is entirely consistent with the purpose of the interaction, and entirely workable.
The Internet creates the opportunity for far greater reach for both seller and buyer (and hence greater 'market depth', greater contestability and competition, reduced scope for localised monopolies, and hence narrower margins and decreased stability of supply). With greater reach comes increased distance between buyer and seller, and hence the loss of conventional assurances of bona fides.
The growth in consumer electronic commerce has been slow in comparison with other Internet growth metrics, and many commentators have explained this as resulting from a lack of trust (Clarke 1999a). It has been argued by some that the creation of trust is dependent on transactions being identified; and by others that the imposition of identification and authentication requirements, far from engendering trust, will undermine it (Clarke 1997d).
For the majority of the twentieth century, new technologies have tended to be privacy-invasive in nature. During the 1990s, Privacy-Enhancing Technologies (PETs) have emerged. This author has argued for the need also for Privacy-Sympathetic Technologies (PSTs), as a better means of balancing the interests of privacy and accountability.
Organisations need to appreciate the spectrum of alternatives that is available in relation to identification and authentication, to assess their own requirements and the interests of other stakeholders, to take particular care concerning the privacy interests of the individuals concerned, and to select an approach that is appropriate to the circumstances.
Organisations that do so will find that they pay much more attention to intermediate options such as unauthenticated pseudonymity and authenticated pseudonymity, in some cases combined with value authentication or attribute authentication.
This document builds on a large body of prior work by the author, and accordingly involves a large amount of self-reference. Many of the papers cited provide access to the wider literature, in some cases copiously so.
Burkert H. (1997) 'Privacy-Enhancing Technologies: Typology, Critique, Vision' in Agre P.E. & Rotenberg M. (Eds.) (1997) 'Technology and Privacy: The New Landscape' MIT Press, 1997
CACM (1999) 'Internet Privacy: The Quest for Anonymity' Special Section of Commun. ACM 42, 2 (February 1999), at http://www.research.att.com/~lorrie/pubs/cacm-privacy.html
Clarke R. (1988) 'Information Technology and Dataveillance' Comm. ACM 31,5 (May 1988) Re-published in C. Dunlop and R. Kling (Eds.), 'Controversies in Computing', Academic Press, 1991, at http://www.rogerclarke.com/DV/CACM88.html
Clarke R. (1992) 'Extra-Organisational Systems: A Challenge to the Software Engineering Paradigm', Proc. IFIP World Congress, Madrid, September 1992, at http://www.rogerclarke.com/SOS/PaperExtraOrgSys.html
Clarke R. (1993) 'Computer Matching and Digital Identity', Proc. Conf. Computers, Freedom & Privacy, San Francisco, March 1993, at http://www.rogerclarke.com/DV/CFP93.html
Clarke R. (1994a) 'The Digital Persona and its Application to Data Surveillance', The Information Society 10, 2 (June 1994), at http://www.rogerclarke.com/DV/DigPersona.html
Clarke R. (1994b) 'Human Identification in Information Systems: Management Challenges and Public Policy Issues', Information Technology & People 7,4 (December 1994) 6-37, at http://www.rogerclarke.com/DV/HumanID.html
Clarke R. (1995) 'When Do They Need to Know 'Whodunnit?' The Justification for Transaction Identification: The Scope for Transaction Anonymity and Pseudonymity' Proc. Conf. Computers, Freedom & Privacy, San Francisco, 31 March 1995, at http://www.rogerclarke.com/DV/PaperCFP95.html
Clarke R. (1996a) 'Trails in the Sand', May 1996, at http://www.rogerclarke.com/DV/Trails.html
Clarke R. (1996b) 'Cryptography in Plain Text', Privacy Law & Policy Reporter 3, 4 (May 1996), at http://www.rogerclarke.com/II/CryptoSecy.html
Clarke R. (1996c) 'Crypto-Confusion: Mutual Non-Comprehension Threatens Exploitation of the GII', Privacy Law & Policy Reporter 3, 2 (May 1996), pp. 30-33, at http://www.rogerclarke.com/II/CryptoConf.html
Clarke R. (1996d) 'Privacy, Dataveillance, Organisational Strategy' (the original version was a Keynote Address for the I.S. Audit & Control Association Conf. (EDPAC'96), Perth, 28 May 1996), at http://www.rogerclarke.com/DV/PStrat.html
Clarke R. (1996e) 'Identification, Anonymity and Pseudonymity in Consumer Transactions: A Vital Systems Design and Public Policy Issue' Proc. Conf. 'Smart Cards: The Issues', Sydney, 18 October 1996, at http://www.rogerclarke.com/DV/AnonPsPol.html
Clarke R. (1996f) 'Public Interests on the Electronic Frontier', Invited Address to IT Security '97, 14 & 15 August 1997, Rydges Canberra (August 1997), at http://www.rogerclarke.com/II/IIRSecy97.html
Clarke R. (1997a) 'Cookies', February 1997, at http://www.rogerclarke.com/II/Cookies.html
Clarke R. (1997b) 'Privacy and E-Lists', May 1997, at http://www.rogerclarke.com/DV/E-Lists.html
Clarke R. (1997c) 'Introduction and Definitions', August 1997, at http://www.rogerclarke.com/DV/Intro.html
Clarke R. (1997d) 'Promises and Threats in Electronic Commerce', August 1997, at http://www.rogerclarke.com/EC/Quantum.html
Clarke R. (1997e) 'Chip-Based ID: Promise and Peril', for the International Conference on Privacy, Montreal (September 1997), at http://www.rogerclarke.com/DV/IDCards97.html
Clarke R. (1998a) 'Direct Marketing and Privacy', Proc. AIC Conf. on the Direct Distribution of Financial Services, Sydney, 24 February 1998, at http://www.rogerclarke.com/DV/DirectMkting.html
Clarke R. (1998b) 'Smart Card Technical Issues Starter Kit', Centrelink, April 1998, at http://www.rogerclarke.com/DV/SCTISK.html
Clarke R. (1998c) 'Public Key Infrastructure: Position Statement', May 1998, at http://www.rogerclarke.com/DV/PKIPosn.html
Clarke R. (1998d) 'Information Privacy On the Internet: Cyberspace Invades Personal Space' Telecommunication Journal of Australia 48, 2 (May/June 1998), at http://www.rogerclarke.com/DV/IPrivacy.html
Clarke R. (1998e) 'Platform for Privacy Preferences: An Overview', Privacy Law & Policy Reporter 5, 2 (July 1998) 35-39, at http://www.rogerclarke.com/DV/P3POview.html
Clarke R. (1998f) 'Platform for Privacy Preferences: A Critique', Privacy Law & Policy Reporter 5, 3 (August 1998) 46-48, at http://www.rogerclarke.com/DV/P3PCrit.html
Clarke R. (1999a) 'Key Issues in Electronic Commerce and Electronic Publishing', Proc. Conf. Information Online and On Disc 99, Sydney, January 1999, at http://www.rogerclarke.com/EC/Issues98.html
Clarke R. (1999b) 'Internet Privacy Concerns Confirm the Case for Intervention', Communications of the ACM 42, 2 (February 1999), at http://www.rogerclarke.com/DV/CACM99.html
Clarke R. (1999c) 'The Legal Context of Privacy-Enhancing and Privacy-Sympathetic Technologies', April 1999, at http://www.rogerclarke.com/DV/Florham.html
Clarke R. (1999d) 'Notes on a Panel Session on Anonymity and Identity in Cyberspace', Computers, Freedom & Privacy Conference, April 1999, at http://www.rogerclarke.com/DV/NotesCFP99.html#AnonId
Clarke R. (1999e) 'Privacy-Enhancing and Privacy-Sympathetic Technologies - Resources', April 1999, at http://www.rogerclarke.com/DV/PEPST.html
Clarke R. (1999f) 'The Real 'Who's Who' of Electronic Commerce: The Identification of Organisations', Working Paper available from the author, April 1999, at http://www.rogerclarke.com/EC/OrgID.html
Clarke R., Dempsey G., Ooi C.N. & O'Connor R.F. (1998a) 'Technological Aspects of Internet Crime Prevention', Proc. Conf. 'Internet Crime', Australian Institute for Criminology, Melbourne University, 16-17 February 1998, at http://www.rogerclarke.com/II/ICrimPrev.html
Clarke R., Dempsey G., Ooi C.N. & O'Connor R.F. (1998b) 'The Technical Feasibility of Regulating Gambling on the Internet', Proc. Conf. 'Gambling, Technology & Society: Regulatory Challenges for the 21st Century', Rex Hotel Sydney, Potts Point, 7-8 May 1998, Australian Institute for Criminology, Melbourne University, at http://www.rogerclarke.com/II/ICrimPrev.html
Clarke R., Dempsey G., Ooi C.N. & O'Connor R.F. (1998c) 'A Primer on Internet Technology', at http://www.rogerclarke.com/II/IPrimer.html
EPIC (1999) 'Online Anonymity Under Attack in the Courts', EPIC Alert, 6.06, 22 April 1999, at http://www.epic.org/alert/EPIC_Alert_6.06.html
Froomkin A.M. (1995) 'Anonymity and Its Enmities' 1995 J. Online L., at http://www.law.cornell.edu/jol/froomkin.htm
Greenleaf G.W. & Clarke R. (1997) 'Privacy Implications of Digital Signatures', Proc. IBC Conference on Digital Signatures, Sydney, March 1997, at http://www.rogerclarke.com/DV/DigSig.html
IPCR (1995) 'Privacy-Enhancing Technologies: The Path to Anonymity' Information and Privacy Commissioner (Ontario, Canada) and Registratiekamer (The Netherlands), 2 vols., August 1995, at http://www.ipc.on.ca/web%5Fsite.eng/matters/sum%5Fpap/papers/anon%2De.htm
Low S., Maxemchuk N.F. & Paul S. (1996) 'Anonymous Credit Cards and Their Collusion Analysis', IEEE/ACM Trans. on Networking 4, 6 (December 1996), pp. 809-816, at http://www.research.att.com/~nfm/ref.1409.ps
McCullagh D. (1998-) 'Nym Resources', at http://www.well.com/user/declan/nym/
Neumann P.G. (1996) 'Risks of Anonymity' Insider Risks Column, Commun. ACM 39, 12 (December 1996)
PI (1997) 'National ID Cards', at http://www.privacy.org/pi/issues/idcard/
Created: 4 February 1999 - Last Amended: 30 April 1999 by Roger Clarke - Site Last Verified: 15 February 2009