Review Version of 12 May 2013
SUPERSEDED BY the published version (April 2014)
Roger Clarke **
© Xamax Consultancy Pty Ltd, 2012-13
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://www.rogerclarke.com/DV/SMTD13.html
Social media services offer users tools for interaction, publishing and sharing, but in return demand exposure of users' selves and of the members of their social networks. The Terms of Service imposed are uniformly privacy-hostile. The practice of social media exhibits a great many distrust influencers, and some are sufficiently strong that they constitute distrust drivers. This paper presents an analytical framework whereby designers of social media services can overcome user distrust and inculcate user trust.
Social media is a collective term for a range of services that support users in exchanging content and pointers to content, but in ways that are advantageous to the service-provider. The services emerged in conjunction with the 'Web 2.0' and 'social networking' notions, during 2004-05 (O'Reilly 2005).
As shown by Clarke (2008b), there was little terminological clarity or coherence about 'Web 2.0'. In addition, understanding of what social media encompasses remains somewhat rubbery, e.g. "Social Media is a group of Internet-based applications that build on the ideological and technological foundations of Web 2.0, and that allow the creation and exchange of User Generated Content" (Kaplan & Haenlein 2010, p.61). Those authors did, however, apply theories in the field of media research (social presence, media richness) and social processes (self-presentation, self-disclosure), in order to propose the classification scheme in Exhibit 1.
The business models of social media service-providers are predicated on re-visits being motivated by voyeurism, and hence are dependent on individuals being stimulated to self-expose, and to expose others. Consumer marketing companies generally perceive the need to convey corporate image and messages, shape consumer opinions, and build purchase momentum; and social media has been harnessed to those purposes. The Kaplan & Haenlein classification scheme is a good fit to the perspectives of both kinds of corporations. On the other hand, through its commitment to the 'consumer as prey' tradition, it fails to reflect the interests of the users whom social media services exploit. A classification scheme would better serve users of social media if it focussed on its features and affordances in the areas of human interaction, content broadcast and content collaboration.
Some social media services have proven to be short-lived fads. Some appear to be instances of longer-lived genres, although, even in these cases, waves of differently-conceived services have been crashing over one another in quick succession. Some aspects may mature into long-term features of future services, because they satisfy a deeper human need rather than just a fashion-driven desire.
Consumers' hedonistic desires may be well-served by contemporary social media services. On the other hand, a proportion of users understands that they are being exploited by social media service-providers. The boldness and even arrogance of many of those providers has given rise to a growing body of utterances by influential commentators, which has made many more users aware of the extent of the exploitation. Consumer and privacy concerns are legion, and are giving rise to doubts about whether sufficient trust exists for the first decade's momentum in social media usage to be sustained.
Privacy has long loomed as a strategic factor for both corporations and government agencies (Peppers & Rogers 1993, Clarke 1996, Levine et al. 1999, Cavoukian & Hamilton 2002, Clarke 2006b). In Clarke (2008a), it was suggested that the long wait for the emergence of the 'prosumer' may be coming to a close. That term was coined by Toffler in 1980, to refer to a phenomenon he had presaged 10 years earlier (Toffler 1970 pp. 240-258; Toffler 1980 pp. 275-299, 355-356, 366, 397-399). The concept was revisited in Tapscott & Williams (2006). A prosumer is a consumer who is proactive (e.g. is demanding, and expects interactivity with the producer) and/or a producer as well as a consumer (e.g. expects to be able to exploit digital content for mashups). To the extent that a sufficient proportion of consumers do indeed mature into prosumers, consumer dissatisfaction with untrustworthy social media service-providers can be expected to rise, and to influence consumers' choices.
The need therefore exists for means whereby the most serious privacy problems can be identified, and ways can be devised to overcome them. The paper commences with a review of the nature of privacy, and application of the core ideas to social media. The notion of trust is then considered, and operational definitions are proposed for a family of concepts. Implications for social media design are drawn from that analytical framework, including constructive proposals for addressing privacy problems to the benefit of consumers and service-providers alike. Research opportunities arising from the analysis are identified.
This section commences by summarising key aspects of privacy. It then applies them to social media.
Privacy is a multi-dimensional construct rather than a concept, and hence definitions are inevitably contentious. Many of the conventional approaches are unrealistic, serve the needs of powerful organisations rather than those of individuals, or are of limited value as a means of guiding operational decisions. The approach taken here is to define privacy as an interest, to retain its positioning as a human right, to reject the attempts by business interests to reduce it to a mere economic right, and to supplement the basic definition with a discussion of the multiple dimensions inherent in the construct.
The following definition is of long standing (Morison 1973, Clarke 1996, 1997, 2006a):
Privacy is the interest that individuals have in sustaining a 'personal space', free from interference by other people and organisations
A weakness in discussions of privacy throughout the world is the limitation of the scope to data protection / Datenschutz. It is vital to recognise that privacy is a far broader notion than that. The four-dimensional scheme below has been in consistent use since at least Clarke (1996, 1997), and a fifth dimension is tentatively added at the end of the discussion. A related taxonomy, oriented towards legal protections rather than the interests of the person, is in Solove (2006):
Privacy of the Person, concerned with the integrity of the individual's body
Privacy of Personal Behaviour, concerned with activities such as religious practices, sexual activities and political expression
Privacy of Personal Communications, concerned with individuals' ability to communicate without routine monitoring by other people or organisations
Privacy of Personal Data, concerned with individuals' ability to exercise control over data about themselves
Underpinning the dramatic escalation of privacy threats since about 1995 has been greatly intensified opposition by organisations to anonymity and even pseudonymity, and greatly increased demands for all acts by all individuals to be associated with the individual's 'real identity' (e.g. Krotoski 2012). As a result of decreased anonymity, content digitisation, cloud services, government intrusions and copyright-protection powers, consideration now needs to be given to adding a fifth dimension of Privacy of Personal Experience, to reflect surveillance of individuals' reading and viewing, inter-personal communications, electronic social networks, and physical meetings with other people.
At an early stage, commentators identified substantial privacy threats inherent in Web 2.0, social networking services and social media generally (e.g. Clarke 2004, Harris 2006, Barnes 2006, Clarke 2008b). Although privacy threats arise in relation to all categories of social media, social networking services (SNS) are particularly rich both in inherent risks and in aggressive behaviour by service-providers. This section accordingly pays particular attention to SNS.
One of the earliest SNS, Plaxo, was subjected to criticism at the time of its launch (Clarke 2004). Google had two failures before Google+ - Orkut and Buzz - and all three have been roundly criticised for their serious hostility to the privacy of users and people exposed by their users (e.g. boyd 2008, Helft 2010, Waugh 2012, Bell 2012). However, it is difficult not to focus on Facebook, not so much because it has dominated many national markets for SNS for several years, but rather because it has done so much to test the boundaries of privacy abuse. Summaries of its behaviour are in Bankston (2009), Opsahl (2010), NYT (2010), McKeon (2010), boyd & Hargittai (2010), BBC (2011). Results of a survey are reported in Lankton & McKnight (2011).
After 5 years of bad behaviour by Facebook, Opsahl (2010) summarised the situation as follows: "When [Facebook] started, it was a private space for communication with a group of your choice. Soon, it transformed into a platform where much of your information is public by default. Today, it has become a platform where you have no choice but to make certain information public, and this public information may be shared by Facebook with its partner websites and used to target ads". The widespread publication of several epithets allegedly uttered by Facebook's CEO Zuckerberg has reinforced the impression of exploitation, particularly "The default is social" and "They 'trust me'. Dumb f..ks".
The conclusion reached in 2012 by a proponent of social media was even more damning: "[Social networking services] profit primarily by using heretofore private information it has collected about you to target advertising. And Zuckerberg has repeatedly made sudden, sometimes ill conceived and often poorly communicated policy changes that resulted in once-private personal information becoming instantly and publicly accessible. As a result, once-latent concerns over privacy, power and profit have bubbled up and led both domestic and international regulatory agencies to scrutinize the company more closely ... The high-handed manner in which members' personal information has been treated, the lack of consultation or even communication with them beforehand, Facebook's growing domination of the entire social networking sphere, Zuckerberg's constant and very public declarations of the death of privacy and his seeming imposition of new social norms all feed growing fears that he and Facebook itself simply can not be trusted" (O'Connor 2012).
Social media features can be divided into three broad categories. In the case of human interaction tools, users generally assume that conversations are private, and in some cases are subject to various forms of non-disclosure conventions. However, social media service-providers sit between the conversation-partners and in many cases store the conversation - thereby converting ephemeral communications into archived statements - and give themselves the right to exploit the contents. Moreover, service-providers strive to keep the messaging flows internal, and hence exploitable by and only by that company, rather than enabling their users to take advantage of external messaging services such as email.
Content broadcast, by its nature, involves publication. Privacy concerns still arise, however, in several ways. SNS seek to capture 'wall-postings' and the efforts of content communities, and encourage and in some cases even enforce self-exposure of profile data. Service-providers may intrude into users' personal space by monitoring content with a view to punishing or discriminating against individuals who receive particular broadcasts, e.g. because the content is in breach of criminal or civil law, is against the interests of the service-provider, or is deemed by the service-provider to be in breach of good taste. The primary concern, however, is that the individual who initiates the broadcast may not be able to protect their identity. This is important where the views may be unpopular with some organisations or individuals, particularly where those organisations or individuals may represent a threat to the person's safety. It is vital to society that 'whistleblowing' be possible. There are contrary public interests, in particular that individuals who commit serious breaches such as unjustified disclosure of sensitive information, intentionally harmful misrepresentation, and incitement to violence, be able to be held accountable for their actions. That justifies the establishment of carefully constructed forms of strong pseudonymity; but it does not justify an infrastructure that imposes on all users the requirement to disclose, and perhaps openly publish, their 'real identity'.
Content collaboration overlaps with broadcasting, but is oriented towards shared or multi-sourced content rather than sole-sourced content. Even indicator-sharing may generate some privacy risk for individuals, such as the casting of a vote in a particular direction on some topic that attracts opprobrium (e.g. paedophilia, racism or the holocaust, but also criticisms of a repressive regime).
The current, somewhat chaotic state of social media services involves significant harm to individuals' privacy, which is likely to progressively undermine suppliers' business models. Constructive approaches are needed to address the problems.
This section proposes a basis for a sufficiently deep understanding of the privacy aspects of social media, structured in a way that guides design decisions. It first reviews the notion of trust. In both the academic and business literatures, the focus has been almost entirely on the positive notion of trust, frequently to the complete exclusion of the negative notion of distrust. It is argued here that both concepts must be appreciated. A framework and definitions are proposed that enable the varying impacts of trust-relevant factors to be recognised and evaluated.
Trust is commonly associated with a state of willingness to expose oneself to risks. Trust originates in family and social settings, and is associated with cultural affinities and inter-dependencies. The following is proposed as an operational definition relevant to the contexts addressed in this paper (Clarke 2002a, 2002b):
Trust is confident reliance by one party on the behaviour of other parties
The importance of trust varies a great deal, depending on the context. Key factors include the extent of the risk exposure, the elapsed time during which the exposure exists, and whether insurance is available, affordable and effective. Trust is most critical where a party has little knowledge of the other party, or far less power than the other party.
The trust concept has been applied outside its original social setting. Of relevance to the present paper, it is much-used in economic contexts, and in particular in relation to transactions conducted electronically between businesses and consumers, popularly referred to as B2C eCommerce. The B2C eCommerce literature contains a great many papers which refer to trust, and which investigate very specific aspects of it. This paper is primarily concerned with factors that relate to the trustworthiness or otherwise of the counter-party to the transaction, rather than for example the quality of the tradeable item, its fit to the consumer's need, the delivery process, the infrastructure and institutions on which the conduct of the transaction depends, or the propensity of consumers to be trusting.
It is feasible for cultural affinities to be achieved in some B2C contexts. For example, consumers' dealings with cooperatives, such as credit unions / Kreditgenossenschaften, may achieve this, because 'they are us'. On the other hand, it is infeasible for for-profit corporations to achieve anything more than an ersatz form of trust with their customers, because corporations law demands that priority be given to the interests of the corporation above all other interests.
The bases on which a proxy for positive trust can be established in B2C eCommerce were analysed in Clarke (2002a). Trust may arise from a direct relationship between the parties (such as a contract, or prior transactions); or from experience (such as a prior transaction, a trial transaction, or vicarious experience). When such relatively strong sources of trust are unavailable, it may be necessary to rely on 'referred trust', such as 'word-of-mouth', reputation, or delegated contractual arrangements. Mere brandnames are a synthetic and ineffective basis for trust. The weakest of all forms is meta-brands, such as 'seals of approval' signifying some form of accreditation by organisations that are no better-known than the company that they purport to be attesting to (Clarke 2001a).
Where the market power of the parties to a transaction is reasonably balanced, the interests of both parties may be reflected in the terms of contracts that they enter into. This is seldom the case with B2C commerce generally, however, nor B2C eCommerce in particular. It is theoretically feasible for consumers to achieve sufficient market power to ensure balanced contract terms, e.g. through amalgamation of their individual buying-power, or through consumer rights legislation. In practice, however, such circumstances seldom arise.
Many recent treatments of trust in B2C eCommerce adopt the postulates of Mayer et al. (1995), to the effect that the main attributes underlying a party's trustworthiness are ability, integrity and benevolence - to which web-site quality has subsequently been added. See for example Lee & Turban (2001) and Salo & Karjaluoto (2007). However, it appears unlikely that a body of theory based on the assumption of altruistic behaviour by for-profit organisations is capable of delivering much of value. In practice, consumer marketing corporations seek to achieve trust by contriving the appearance of cultural affinities. One such means is the offering of economic benefits to frequent buyers, but projecting the benefits using not an economic term but rather one that invokes social factors: 'loyalty programs'. Another approach is to use advertising, advertorials and promotional activities to project socially positive imagery onto a brandname and logo. To be relevant to the practice of social media, research needs to be based on models that reflect realities, rather than embody imaginary constructs such as 'corporate benevolence'.
During the twentieth century's 'mass marketing' and 'direct marketing' eras, consumer marketing corporations were well-served by the conceptualisation of consumer as prey. That stance continued to be applied when B2C eCommerce emerged during the second half of the 1990s. That the old attitude fitted poorly to the new context was evidenced by the succession of failed initiatives documented in Clarke (1999a). These included billboards on the information superhighway (1994-95), closed electronic communities (1995-97), push technologies (1996-98), spam (1996-), infomediaries (1996-99), portals (1998-2000) and surreptitious data capture (1999-). Habits die hard, however, and to a considerable extent consumers are still being treated as quarry by consumer marketers. It is therefore necessary to have terms that refer to the opposite of trust.
The convention has been to assume that trust is either present, or it is not, and hence:
Lack of Trust is the absence, or inadequacy, of confidence by one party in the reliability of the behaviour of other parties
This notion alone is not sufficient, however, because it fails to account for circumstances in which, rather than there being either a positive feeling or an absence of it, there is instead a negative element present. This author accordingly proposes the following additional term and definition.
Distrust is the active belief by one party that the behaviour of other parties is not reliable
There are few treatments of distrust in the B2C eCommerce literature, but see McKnight & Chervany (2001a, 2001b, 2006).
One further concept is needed, in order to account for the exercise of market power in B2C eCommerce. Most such business is subject to Terms of Service imposed by merchants. These embody considerable advantages for the companies and few for consumers. The Terms imposed by social media companies are among the most extreme seen in B2C eCommerce (Clarke 2008a, 2011). In such circumstances, there is no trust grounded in cultural affinity. The consumer may have a choice of merchants, but their Terms are uniformly consumer-hostile. Hence a separate term is proposed, to refer to a degraded form of trust:
Forced Trust is hope held by one party that the behaviour of other parties will be reliable, despite the absence of important trust factors
The concepts presented in the previous sub-section provide a basis for developing insights into privacy problems and their solutions. In order to operationalise the concepts, however, it is necessary to distinguish several different categories of trust factor.
The definitions in Exhibit 2 use the generic terms 'party' and 'conduct of a transaction', in order to provide broad scope. They encompass both social and economic transactions, and both natural persons as parties - variously in social roles, as consumers, as prosumers, and as producers - and legal persons, including social not-for-profit associations, economic not-for-profits such as charities, for-profit corporations, and government agencies.
A Trust Influencer is a factor that has a positive influence on the likelihood of a party conducting a transaction
A Distrust Influencer is a factor that has a negative influence on the likelihood of a party conducting a transaction
A Trust Driver is a factor that has such a strong positive influence on the likelihood of a party conducting a transaction that it determines the outcome
A Distrust Driver is a factor that has such a strong negative influence on the likelihood of a party conducting a transaction that it determines the outcome
A Driver is a factor that, alone, is sufficient to determine an adoption / non-adoption decision. This is distinct from the 'straw that broke the camel's back' phenomenon. In that case, the most recent Influencer causes a threshold to be reached, but only by adding its weight to other, pre-existing Influencers.
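The Driver / Influencer distinction, and the 'straw that broke the camel's back' effect, can be expressed as a simple decision model. The sketch below is purely illustrative: the factor names, weights, threshold, and the rule that a Distrust Driver takes precedence over a Trust Driver are this author's assumptions for exposition, not values or rules taken from the framework itself.

```python
# Illustrative sketch of the Driver / Influencer distinction defined above.
# Factor names, weights and the threshold are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Factor:
    name: str
    weight: float   # positive = trust factor, negative = distrust factor
    driver: bool    # True if strong enough to determine the outcome alone

def adoption_decision(factors, threshold=0.0):
    """Return True (adopt) or False (do not adopt).

    A Distrust Driver vetoes adoption on its own; failing that, a Trust
    Driver secures it on its own. Otherwise Influencers are summed, so the
    most recent Influencer may act as the 'straw that broke the camel's
    back' by tipping the accumulated weight past the threshold.
    """
    if any(f.driver and f.weight < 0 for f in factors):
        return False        # a Distrust Driver is decisive by itself
    if any(f.driver and f.weight > 0 for f in factors):
        return True         # a Trust Driver is decisive by itself
    return sum(f.weight for f in factors) > threshold

factors = [
    Factor("prior good experience", +0.4, driver=False),
    Factor("privacy-hostile Terms of Service", -0.3, driver=False),
    Factor("proven insolvency of the merchant", -1.0, driver=True),
]
print(adoption_decision(factors))   # prints False: the Distrust Driver decides
```

Note the asymmetry assumed here: a single decisive negative factor overrides everything else, whereas Influencers only ever act in aggregate.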
In Exhibit 3, the relationships among the concepts are depicted in the mnemonic form of a see-saw, and are supplemented by commonly used terms associated with each category.
Some trust factors are applicable to consumers generally, and hence the distinctions can be applied to an aggregate analysis of the market for a particular tradeable item or class of items. Attitudes to many factors vary among consumers, however; and hence the framework can be usefully applied at the most granular level, that is, to individual decisions by individual consumers. For many purposes, an effective compromise is likely to be achieved by identifying consumer segments, and conducting analyses from the perspective of each segment. Hence a driver is likely to be described in conditional terms, such as 'for consumers who are risk-averse, ...', 'for consumers who live outside major population centres, ...', and 'for consumers who are seeking a long-term service, ...'.
The commercial aspects of the relationship between merchant and consumer offer many examples of each category of trust factor. Distrust Drivers include proven incapacity of the merchant to deliver, such as insolvency. On the other hand, low cost combined with functional superiority represents a Trust Driver. Non-return policies are usually a Distrust Influencer rather than a Distrust Driver, whereas 'return within 7 days for a full refund' is likely to be a Trust Influencer. Merchants naturally play on human frailties when they create compulsion factors, through such devices as '50% discount, for today only'.
Beyond commercial aspects, other clusters of trust factors include the quality, reliability and safety of the tradeable item, its fit to the consumer's circumstances and needs, and privacy. This paper is concerned with the last of these.
The analytical framework introduced in the previous section provides a basis for improving the design of social media services in order to address the privacy concerns of consumers and the consequential privacy risks of service-providers.
The categorisation of trust factors provides designers with a means of focussing on the aspects that matter most. Hedonism and fashion represent drivers for the adoption and use of services. The first priority therefore is to identify and address the Distrust Drivers which undermine the service-provider's business.
Some privacy factors will have a significant negative impact on all or most users. Many, however, will be relevant only to particular customer-segments, or will only be relevant to particular users in particular circumstances or for a particular period of time. An important example of a special segment is people whose safety is likely to be threatened by exposure, perhaps of their identity, their location, or sensitive information about them. A comprehensive analysis of the categories of 'persons at risk' is in GFW (2011). Another important segment is people who place very high value on their privacy, preferring (for any of a variety of reasons) to stay well out of the public eye.
A detailed analysis of specific privacy features is in a companion paper (Clarke 2013). Key examples of features that may be Distrust Drivers include reliance on what was referred to in the above framework as 'forced trust', requirements that the user declare their commonly-used identity or so-called 'real name', and default provision of the user's geo-location to the service-provider. The distrust analysis needs to extend to factors that are influencers rather than drivers, because moderate numbers of negative factors, particularly if they frequently rise into a user's consciousness, are likely to morph into an aura of untrustworthiness, and thereby cause the relationship with the user to be fragile rather than loyal.
Examples of active measures that can be used to negate distrust include transparency in relation to Terms of Service (Clarke 2005), the conduct of a privacy impact assessment, including focus groups and consultations with advocacy organisations, mitigation measures where privacy risks arise, prepared countermeasures where actual privacy harm arises, and prepared responses to enquiries and accusations about privacy harm.
Although overcoming distrust offers the greater payback to service-providers, they may benefit from attention to Trust Drivers and Trust Influencers as well, and users certainly would gain from it. Factors of likely relevance include privacy-settings that are comprehensive, clear and stable; conservative defaults; means for user management of their profile-data; consent-based rather than unilateral changes to Terms of Service; express support for pseudonymity and multiple identifiers; and inbuilt support and guidance in relation to proxy-servers and other forms of obfuscation. Some users will be attracted by the use of peer-to-peer (P2P) architecture with dispersed storage and transmission paths rather than the centralisation of power that is inherent in client-server architecture.
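Trust-relevant factors of this kind can be treated by designers as an auditable checklist. The sketch below illustrates that idea; the setting names and the 'conservative' values are hypothetical constructions for this example, not settings of any actual service or recommendations from the paper.

```python
# Hypothetical audit of a service's default settings against
# privacy-conservative defaults. All names and values are illustrative.

CONSERVATIVE_DEFAULTS = {
    "profile_visibility": "contacts-only",   # rather than "public"
    "geo_location_sharing": False,           # off unless expressly enabled
    "real_name_required": False,             # pseudonyms permitted
    "terms_change_requires_consent": True,   # consent-based, not unilateral
}

def audit_defaults(service_settings):
    """Return the settings that diverge from the conservative defaults,
    as {setting: (actual_value, conservative_value)} - each divergence
    being a candidate Distrust Influencer for the designer to address."""
    return {
        key: (value, CONSERVATIVE_DEFAULTS[key])
        for key, value in service_settings.items()
        if key in CONSERVATIVE_DEFAULTS and value != CONSERVATIVE_DEFAULTS[key]
    }

service = {
    "profile_visibility": "public",
    "geo_location_sharing": True,
    "real_name_required": False,
    "terms_change_requires_consent": True,
}
print(audit_defaults(service))   # flags profile_visibility and geo_location_sharing
```

A checklist of this kind makes the earlier point operational: each flagged divergence is a candidate Distrust Influencer, and enough of them together can amount to a Distrust Driver.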
This paper and its companion (Clarke 2013) draw on a combination of long experience in the privacy field and analyses of the copious media coverage of privacy issues in the social media services marketplace. A formal research literature is only now emergent, and opportunities exist for significant contributions to be made.
The analysis conducted in this paper readily gives rise to a number of research questions. For example:
An example of the kinds of trade-off research that would pay dividends is provided by Kaplan & Haenlein (2010, p.67): "In December 2008, the fast food giant developed a Facebook application which gave users a free Whopper sandwich for every 10 friends they deleted from their Facebook network. The campaign was adopted by over 20,000 users, resulting in the sacrificing of 233,906 friends in exchange for free burgers. Only one month later, in January 2009, Facebook shut down Whopper Sacrifice, citing privacy concerns. Who would have thought that the price of a friendship is less than $2 a dozen?". The vignette suggests that the presumption that people sell their privacy very cheaply may have a corollary - that people's privacy can be bought very cheaply as well.
A wide range of research techniques can be applied to such studies (Galliers 1992, Clarke 2001b, Chen & Hirschheim 2004). Surveys deliver 'convenience data' of limited relevance, testing what people say they do, or would do - and all too often merely what they say they think. Other techniques hold much more promise as means of addressing the research questions identified above, in particular field studies, laboratory experiments, demonstrators, and open-source code to implement services or features.
Privacy concerns about social media services vary among user segments, and over time. Some categories of people, some of the time, are subject to serious safety risks as a result of self-exposure and exposure by others; many people, a great deal of the time, are subject to privacy harm; and some people simply dislike being forced to be open and much prefer to lead a closed life. From time to time, negative public reactions and media coverage have affected many social media providers, including Facebook, Google and Instagram (Clarke 2012). There are signs that the occurrences are becoming more frequent and more intensive.
The analytical framework presented in this paper offers a means whereby designers can identify aspects of their services that need attention, either to prevent serious harm to their business or to increase the attractiveness of their services to their target markets. Researchers can also apply the framework in order to gain insights into the significance of the various forms of privacy concern.
Bankston K. (2009) 'Facebook's New Privacy Changes: The Good, The Bad, and The Ugly' Electronic Frontier Foundation, 9 December 2009, at https://www.eff.org/deeplinks/2009/12/facebooks-new-privacy-changes-good-bad-and-ugly
Barnes S.B. (2006) 'A privacy paradox: Social networking in the United States' First Monday 11, 9 (September 2006), at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/viewArticle/1394/1312%23
BBC (2011) 'Facebook U-turns on phone and address data sharing' BBC News, 18 January 2011, at http://www.bbc.com/news/technology-12214628
Bell E. (2012) 'The real threat to the open web lies with the opaque elite who run it' The Guardian, 16 April 2012, at http://www.guardian.co.uk/commentisfree/2012/apr/16/threat-open-web-opaque-elite
boyd d. (2008) 'Facebook's Privacy Trainwreck: Exposure, Invasion, and Social Convergence' Convergence: The International Journal of Research into New Media Technologies 14, 1 (2008) 13-20
boyd d. & Hargittai E. (2010) 'Facebook privacy settings: Who cares?' First Monday 15, 8 (July 2010), at http://firstmonday.org/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/3086/2589
Cavoukian A. & Hamilton T. (2002) 'The Privacy Payoff: How Successful Businesses Build Consumer Trust' McGraw-Hill Ryerson Trade, 2002
Chen W. & Hirschheim R. (2004) 'A paradigmatic and methodological examination of information systems research from 1991 to 2001' Information Systems Journal 14, 3 (2004) 197-235
Clarke R. (1996) 'Privacy, Dataveillance, Organisational Strategy' Keynote Address, I.S. Audit & Control Association Conf. (ISACA / EDPAC'96), Perth, 28 May 1996, at http://www.rogerclarke.com/DV/PStrat.html
Clarke R. (1997) 'Introduction to Dataveillance and Information Privacy, and Definitions of Terms' Xamax Consultancy Pty Ltd, August 1997, at http://www.rogerclarke.com/DV/Intro.html
Clarke R. (1999a) 'The Willingness of Net-Consumers to Pay: A Lack-of-Progress Report' Proc. 12th International Bled Electronic Commerce Conference, Bled, Slovenia, June 7 - 9, 1999, at http://www.rogerclarke.com/EC/WillPay.html
Clarke R. (1999b) 'Person-Location and Person-Tracking: Technologies, Risks and Policy Implications' Proc. 21st International Conf. Privacy and Personal Data Protection, Hong Kong, September 1999. Revised version published in Information Technology & People 14, 1 (2001) 206-231, at http://www.rogerclarke.com/DV/PLT.html
Clarke R. (2001a) 'Meta-Brands' Privacy Law & Policy Reporter 7, 11 (May 2001), at http://www.rogerclarke.com/DV/MetaBrands.html
Clarke R. (2001b) 'If e-Business is Different Then So is Research in e-Business' Proc. IFIP TC8 Working Conference on E-Commerce/E-Business, Salzburg, 22-23 June 2001, at http://www.rogerclarke.com/EC/EBR0106.html
Clarke R. (2002a) 'Trust in the Context of e-Business' Internet Law Bulletin 4, 5 (February 2002) 56-59, at http://www.rogerclarke.com/EC/Trust.html
Clarke R. (2002b) 'e-Consent: A Critical Element of Trust in e-Business' Proc. 15th Bled Electronic Commerce Conference, Bled, Slovenia, 17-19 June 2002, at http://www.rogerclarke.com/EC/eConsent.html
Clarke R. (2004) 'Very Black 'Little Black Books'' Xamax Consultancy Pty Ltd, February 2004, at http://www.rogerclarke.com/DV/ContactPITs.html
Clarke R. (2005) 'Privacy Statement Template' Xamax Consultancy Pty Ltd, December 2005, at http://www.rogerclarke.com/DV/PST.html
Clarke R. (2006a) 'What's 'Privacy'?' Submission to the Australian Law Reform Commission, Xamax Consultancy Pty Ltd, July 2006, at http://www.rogerclarke.com/DV/Privacy.html
Clarke R. (2006b) 'Make Privacy a Strategic Factor - The Why and the How' Cutter IT Journal 19, 11 (October 2006), at http://www.rogerclarke.com/DV/APBD-0609.html
Clarke R. (2008a) 'B2C Distrust Factors in the Prosumer Era' Proc. CollECTeR Iberoamerica, Madrid, 25-28 June 2008, pp. 1-12, Invited Keynote Paper, at http://www.rogerclarke.com/EC/Collecter08.html
Clarke R. (2008b) 'Web 2.0 as Syndication' Journal of Theoretical and Applied Electronic Commerce Research 3, 2 (August 2008) 30-43, at http://www.jtaer.com/portada.php?agno=2008&numero=2#, Preprint at http://www.rogerclarke.com/EC/Web2C.html
Clarke R. (2011) 'The Cloudy Future of Consumer Computing' Proc. 24th Bled eConference, June 2011, at http://www.rogerclarke.com/EC/CCC.html
Clarke R. (2012) 'Vignettes of Corporate Privacy Disasters' Xamax Consultancy Pty Ltd, December 2012, at http://www.rogerclarke.com/DV/PrivCorp.html
Clarke R. (2013) 'Consumer-Oriented Social Media: The Identification of Key Characteristics' Xamax Consultancy Pty Ltd, January 2013, at http://www.rogerclarke.com/II/COSM-1301.html
Clarke R. & Wigan M.R. (2011) 'You Are Where You've Been: The Privacy Implications of Location and Tracking Technologies' Journal of Location Based Services 5, 3-4 (December 2011) 138-155, at http://www.rogerclarke.com/DV/YAWYB-CWP.html
Galliers R.D. (1992) 'Choosing Information Systems Research Approaches', in Galliers R.D. (ed., 1992) 'Information Systems Research: Issues, Methods and Practical Guidelines', Blackwell, 1992, pp. 144-162
GFW (2011) 'Who is harmed by a "Real Names" policy?' Geek Feminism Wiki, at http://geekfeminism.wikia.com/wiki/Who_is_harmed_by_a_%22Real_Names%22_policy%3F, accessed 3 August 2011
Harris W. (2006) 'Why Web 2.0 will end your privacy' bit-tech.net, 3 June 2006, at http://www.bit-tech.net/columns/2006/06/03/web_2_privacy/
Helft M. (2010) 'Critics Say Google Invades Privacy With New Service' The New York Times, 12 February 2010, at http://www.nytimes.com/2010/02/13/technology/internet/13google.html?_r=1
Kaplan A.M. & Haenlein M. (2010) 'Users of the world, unite! The challenges and opportunities of social media' Business Horizons 53, 1 (Jan-Feb 2010) 59-68, slide-set at http://www.slideshare.net/studente1000/kaplan-andreas-m-haenlein-michael-2010-users-of-the-world-unite-the-challenges-and-opportunities-of-social-media-business-horizons-vol-53-issue-1-p-5968
Krotoski A. (2012) 'Online identity: is authenticity or anonymity more important?' The Guardian, April 2012, at http://www.guardian.co.uk/technology/2012/apr/19/online-identity-authenticity-anonymity/print
Lankton N. & McKnight D.H. (2011) 'What Does it Mean to Trust Facebook? Examining Technology and Interpersonal Trust Beliefs' The Data Base for Advances in Information Systems 42, 2 (2011) 32-54
Lee M.K.O. & Turban E. (2001) 'A Trust Model for Consumer Internet Shopping' Int'l J. of Electronic Commerce 6, 1 (Fall 2001) 75-91
Levine R., Locke C., Searls D. & Weinberger D. (1999) 'The Cluetrain Manifesto: The End of Business As Usual' Perseus Books, 1999
Mayer R.C., Davis J.H. & Schoorman F.D. (1995) 'An Integrative Model of Organizational Trust' Academy of Management Review 20, 3 (1995) 709-734
McKeon M. (2010) 'The Evolution of Privacy on Facebook' Self-Published, May 2010, at http://mattmckeon.com/facebook-privacy/
McKnight D.H. & Chervany N.L. (2001a) 'While trust is cool and collected, distrust is fiery and frenzied: A model of distrust concepts' Proc. 7th Americas Conference on Information Systems, 2001, pp. 883-888
McKnight D.H. & Chervany N.L. (2001b) 'Trust and Distrust Definitions: One Bite at a Time' in R. Falcone, M. Singh, and Y.-H. Tan (Eds.): Trust in Cyber-societies, LNAI 2246, Springer-Verlag Berlin Heidelberg 2001, pp. 27-54
McKnight D.H. & Chervany N.L. (2006) 'Distrust and Trust in B2C E-Commerce: Do They Differ?' Proc. ICEC'06, August 14-16, 2006, Fredericton, Canada
Morison W.L. (1973) 'Report on the Law of Privacy' Govt. Printer, Sydney 1973
NYT (2010) 'Facebook Privacy: A Bewildering Tangle of Options' The New York Times, 12 May 2010, at http://www.nytimes.com/interactive/2010/05/12/business/facebook-privacy.html
O'Connor R. (2012) 'Facebook is Not Your Friend' Huffington Post, 15 April 2012, at http://www.huffingtonpost.com/rory-oconnor/facebook-privacy_b_1426807.html
O'Reilly T. (2005) 'What Is Web 2.0? Design Patterns and Business Models for the Next Generation of Software' O'Reilly, 30 September 2005, at http://www.oreillynet.com/lpt/a/6228
Peppers D. & Rogers M. (1993) 'The One to One Future : Building Relationships One Customer at a Time', Doubleday, 1993
Salo J. & Karjaluoto H. (2007) 'A conceptual model of trust in the online environment' Online Information Review 31, 5 (September 2007) 604-621
Solove D.J. (2006) 'A Taxonomy of Privacy' University of Pennsylvania Law Review 154, 3 (January 2006) 477-560
Tapscott D. & Williams A.D. (2006) 'Wikinomics: How Mass Collaboration Changes Everything' Portfolio, 2006
Toffler A. (1970) 'Future Shock' Pan, 1970
Toffler A. (1980) 'The Third Wave' Pan, 1980
Wigan M.R. & Clarke R. (2013) 'Big Data's Big Unintended Consequences' IEEE Computer 46, 3 (June 2013), at http://www.rogerclarke.com/DV/BigData-1303.html
This paper has been in gestation for a decade. Its drafting was stimulated by an invitation from Hans Christian Juul to visit Roskilde University in Denmark in June 2012 and present a seminar on privacy, trust and user-involvement, stressing the research perspective. This was followed by an invitation from Kiyoshi Murata and Andrew Adams at Meiji University to present at the Asian Privacy Scholars Network Conference in Tokyo in November 2012 and at an associated research workshop. The feedback from Hans Christian, Andrew and participants in those events materially assisted in clarification of the analysis.
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.
Created: 19 April 2012 - Last Amended: 12 May 2013 by Roger Clarke - Site Last Verified: 15 February 2009