
The Information Infrastructures of 1985 and of 2018:
The Sociotechnical Context of Computer Law & Security

Revised Version of 10 April 2018

Computer Law & Security Review 34, 4 (Jul-Aug 2018) 677-700

Roger Clarke and Marcus Wigan **

© Xamax Consultancy Pty Ltd and Marcus Wigan, 2017-18

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/II/IIC18.html

The previous version is at http://www.rogerclarke.com/II/IIC18-180320.html


Abstract

This article identifies key features of the sociotechnical contexts of computer law and security at the times of this journal's establishment in 1985, and of its 200th Issue in 2018. The infrastructural elements of devices, communications, data and actuator technologies are considered first. Social actors as individuals, and in groups, communities, societies and polities, together with organisations and economies, are then interleaved with those technical elements. This provides a basis for appreciation of the very different challenges that confront us now in comparison with the early years of post-industrialism.


1. Introduction

The field addressed by Computer Law & Security Review (CLSR) during its first 34 volumes, 1985-2018, has developed within an evolving sociotechnical context. A multi-linear trace of that context over a 35-year period would, however, be far too large a topic to address in a journal article. This article instead compares and contrasts the circumstances that applied at the beginning and at the end of the period, without any systematic attempt either to track the evolution from prior to current state or to identify each of the disruptive shifts that have occurred. This sacrifices developmental insights, but it enables key aspects of contemporary challenges to be identified in a concise manner.

The article commences by identifying what the authors mean by 'sociotechnical context'. The two main sections then address the circumstances of 1985 and 2018. In each case, consideration is first given to the information infrastructure whose features and affordances are central to the field of view, and then to the activities of the people and organisations that use and are used by information technologies. Implications of the present context are drawn, for CLSR, its contributors, and its readers, but more critically for society.

The text can of course be read in linear fashion. Alternatively, readers interested specifically in assessment of contemporary IT can skip the review of the state in 1985 and go directly to the section dealing with 2018. It is also feasible to read the Implications section first, and then return to earlier sections in order to identify the elements of the sociotechnical context that have led the authors to those inferences.

The focus of this article is not on specific issues or incremental changes, because these are identified and addressed on a continuing basis by CLSR's authors. The concern here is with common factors and particularly with discontinuities - the sweeping changes that are very easily overlooked, and that only emerge when time is taken to step back and consider the broader picture. The emphasis is primarily on social impacts and public policy issues, because the interests of business and government organisations are already strongly represented in this journal and elsewhere.


2. The Sociotechnical Context

The term 'socio-technical system' was coined by Trist and Emery at the Tavistock Institute in the 1950s, to describe systems that involve complex interactions firstly among people and technology, and secondly between society's complex infrastructures and human behaviour (Emery 1959). The notion's applicability extends from primary work systems and whole-organisation systems to macro-social systems at the levels of communities, sectors and societies (Trist 1981). Studies that reflect the sociotechnical perspective take into account the tension and interplay between the objectives of organisations on the one hand, and humanistic values on the other (Land 2000).

The review presented in this article is framed within a sociotechnical setting. The necessarily linear representation of a complex system may create the appearance of prioritising some factors and accentuating some influences in preference to others, and even of implying causal relationships. Except where expressly stated, however, that is not the intention.

Technological factors are considered first. Information infrastructure is discussed in sub-sections addressing each of the following four segments:

  1. devices, including both hardware and the software running in processors embedded in them
  2. communication technologies, both locally and over distance
  3. data, variously as raw material, work-in-process and manufactured product
  4. means whereby action is taken in the real world. Rather than 'robotics', which carries many overtones, the expression 'actuator technologies' is preferred

The focus then turns to what are referred to here as 'psycho-socio-economic factors'. For the 1985 context, the discussion commences at the level of the individual human, then moves via small groups to communities and societies, and finally shifts to economies and polities. The circumstances of 2018, on the other hand, are more usefully discussed by considering those segments in a different sequence. In order to keep the reference list manageable, citations have been provided primarily in relation to the more obscure and the possibly contentious propositions.


3. 1985

CLSR was established before the end of the Cold War, as the world's population approached 5 billion. (By 2018, it had leapt by 50%.) Ronald Reagan began his second term as US President, Mikhail Gorbachev became the leader of the Soviet Union, Margaret Thatcher was half-way through her Premiership of the UK, Helmut Kohl was early in his 16-year term as Chancellor of Germany, and Nelson Mandela's release was still 5 years away. The European Union did not yet exist, and its predecessor, the European Economic Community, comprised only 10 of the 28 EU members of 2018. Of more direct relevance to the topic of the article, the Nintendo Entertainment System was released, the TCP/IP-based Internet was 2 years old, and adoption of the Domain Name System had only just commenced.

In 1985, academic books existed in the area of 'computers and law', notably Bing & Selmer (1980) and Tapper (1982), and a range of articles had appeared in law journals. Although the first specialist journal commenced in 1969 (Rutgers Computer and Technology Law Journal), very few existed at the time of CLSR's launch, and indeed 'computers and law' special issues in established journals were still uncommon.


3.1 Information Infrastructure in 1985

An indication of the technology of the time is provided by an early-1985 special issue of the Western New England Law Review (7, 3), which included articles on legal protections for software, software contracts, computer performance claims, the law relating to bulletin board activities, computer crime, financial and tax accounting for software, and privacy. The editorial referred also to the regulation of telecommunications and of the electromagnetic spectrum, regulation of banking records and of automated teller machines (ATMs), regulation of computer-related staff activities, the assessment of value of computer-readable data, software licensing cf. software sale, and taxation of services rather than goods.

The broader concepts of informatics, information technology (IT) and information infrastructure had yet to displace computing as the technical focus. Communications of the ACM in 1984-86 (Vols 27-29) identified the topics of computer abuse, computer crime, privacy, the legal protection of computer software, software 'piracy', computer matching, trustworthiness of software, cryptography, data quality and due process, hacking and access control. IEEE Computer (Vols 17-19) contained very little of relevance, but did add insurance against computer disasters, and offered this timeless encapsulation of technological optimism: "The technology for implementing semi-intelligent sub-systems will be in place by the end of this century" (Lundstrom & Larsen 1985). Computers & Security (still very early in its life, at Vols 3-5) added security of personal workstations, of floppy diskettes and of small business computers, data protection, information integrity, industrial espionage, password methods, EDP (electronic data processing) auditing, emanation eavesdropping, and societal vulnerability to computer system failures.

The intended early focus of CLSR can be inferred from a few quotations from Volume 1. "'Hacking' ... has now spread ... on an increasing and worrying scale ... many of the computers on which important information is filed are the large mainframes of the 1970's technological era" (Saxby 1985a, p.5). "Many companies [that transmit data to mainframes via telephone lines] do so on the basis of dedicated leased lines to which there is no external access" (Davies 1985, p.4). "Most people will have heard of the Data Protection Act [1984] by now. However you may not have begun to think about what it will require you to do. In order to avoid a last minute panic now is the time to start planning for its implementation" (Howe 1985, p.11). "Where does the law stand in the face of [information being now set free from the medium in which it is stored]? The answer is in a somewhat confused state" (Saxby 1985b, p.i).

In Vol. 1, Issue 4, it was noted that "Specific areas to be addressed will include communications, the growth in 'value added' services and the competition to British Telecom". During the decade following the journal's establishment, the longstanding State-run Postal, Telegraph and Telephone service agencies (PTTs) were in the process of being variously dismembered, subjected to competition and privatised. The editorial continued "Also to be covered [are] the development of optical disk technology, its implications for magnetic media, and the development of international networking standards" (p.30). In 1, 5: "Licences [for a UK cellular radio service] were granted to two consortia ... The licences stipulated a start date of January 1984 and required 90% of the population to be covered by cellular radio by the end of the decade. ... By November 1985 both licencees were claiming faster than expected market penetration" (p.18).

The first-named author conducted a previous study of electronic interaction research during the period 1988-2012 (Clarke & Pucihar 2013). In support of that work, an assessment was undertaken of the state of play of relevant infrastructure in 1987 (Clarke 2012c). Applying and extending that research, the features of information infrastructure are identified under four headings.

(1) Devices

Corporate devices were a mix of mainframes, in particular the mature era of the IBM 370 series, and mature mini-computers such as DEC VAX. Micro-computers were just beginning to eat into the mini-computer's areas of dominance, with SUN Workstations prominent among them. Corporate computers were accessed by means of 'dumb terminals', which were mostly light-grey-on-dark-grey cathode ray tubes, with some green or orange on black. Connections between micro-computers and larger devices became practicable during the 1980s, and colour screens emerged from the end of the 1980s; but flat screens only replaced bulky cathode-ray tubes (CRTs) from about 2000.

Consumer devices were primarily desktop 'personal computers'. These had seen a succession of developments in 1974, 1977 and 1980-81, followed by the Apple Mac, which during the year prior to CLSR's launch had provided consumer access to graphical user interfaces (GUIs). Data storage comprised primarily floppy disks with a capacity of 800K - 1.44MB. Hard disks, initially also with very limited capacity, gradually became economically feasible during the following 5-10 years. A few 'luggable' PCs existed. During the following decade, progress in component miniaturisation and battery technologies enabled their maturation into 'portables' or 'laptops'.

Handheld devices were only just emergent, in the form of Personal Digital Assistants (PDAs), but their functionality and their level of adoption both remained limited for a further two decades until first the iPhone in 2007, with its small screen but computer-like functions, and then the breakthrough larger-screen tablet, the iPad in 2010.

Systems software was still at the level of the MVS/CICS operating system (OS) on mainframes, VMS on DEC VAX-11 series mini-computers, Unix on increasingly capable micro-computers such as SUN Workstations, and PCDOS and Mac OS v.2 on PCs. Linux was not released until 1991. Wintel machines did not have a workable GUI-based OS until the mid-1990s.

Software development tools had begun with machine-language and then assembler languages, whose structure was dictated by the machine's internal architecture. By 1985, the dominant approach involved algorithmic/procedural languages, which enable the expression of logical solutions to problems. These had recently been complemented by so-called fourth generation languages (4GLs), whose structure reflects the logic of the problems themselves. The 5th generation was emergent in the form of expert systems, in which the logic is most commonly expressed in the form of rules (Hayes-Roth et al. 1983).
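
By way of illustration of the contrast between the generations, the following minimal sketch expresses a fragment of problem logic as declarative rules, evaluated by a generic inference loop of the kind used in rule-based expert systems. It is written in present-day Python rather than in a language of the era, and the rules and facts are hypothetical examples only:

    # A minimal forward-chaining rule evaluator. The logic is carried by
    # declarative rules; the sequencing emerges from inference, not from
    # an explicitly programmed procedure.
    rules = [
        # (name, condition over the known facts, fact asserted if it holds)
        ("R1", lambda f: "payment_overdue" in f and "reminder_sent" in f,
               "refer_to_collections"),
        ("R2", lambda f: "payment_overdue" in f, "reminder_sent"),
    ]

    def infer(facts):
        """Repeatedly apply the rules until no new facts can be derived."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for name, condition, consequent in rules:
                if condition(facts) and consequent not in facts:
                    facts.add(consequent)
                    print(name, "fired, asserting:", consequent)
                    changed = True
        return facts

    infer({"payment_overdue"})    # R2 fires, which then enables R1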

A significant proportion of software development was originally conducted within academic and other research contexts. The approach tended to be collaborative in nature, with source-code often readily accessible and exploitable, as epitomised by Donald Knuth's approach with the TeX typesetting system c. 1982. The free software movement built on this legacy. Meanwhile, particularly from the time when the then-dominant mainframe supplier, IBM, 'unbundled' its software in 1969, corporations perceived the opportunity to make substantially greater profits by claiming and exercising rights over software. The free software and proprietary software approaches have since run in parallel. A variant of free software referred to as 'open source' software became well-known in business circles from c. 1998.

Application software had reached varying levels of sophistication. At enterprise level, airline reservation systems, ATM networks and large-value international funds transfer (SWIFT) were well-established, and EFT/POS systems were emergent. However, financial management systems (FMIS) were still almost entirely intra-organisational. Enterprise resource planning (ERP) business management suites, and most forms of supply chain system, only emerged during the following decade. Free-text search capabilities existed, but only in the ICL STATUS and IBM STAIRS products, which were used for large-scale text-libraries and were available to few people. The majority of application software continued to be constructed in-house, although pre-written software packages had by then achieved significant market-share, and became prevalent for mainstream financial, administrative and business management applications during the following two decades.

For personal computers, independent word processing and spreadsheet packages existed, but integrated office suites emerged only 5-10 years later, c. 1990-95. Many market segments existed. For example, significant impetus to software development techniques had been provided by games from 1977-1985, and this has continued well into the present century. In the civil engineering field, an indication of the volume and sophistication of contemporary micro-computer applications is provided by Wigan (1986). Areas of strong growth in PC applications during the years immediately after CLSR's launch included desktop publishing and the breakthrough implementation of hypertext in the form of Apple's HyperCard (Nelson 1965, Nelson 1980, Nielsen 1990). The arrival of database management systems (DBMS) for PCs stimulated the development of more data-intensive PC-based applications. These began to be over-run by (application) software as a service (SaaS) from c. 2005. Text-searching on PCs, and then on the Internet, did not start becoming available until a decade after CLSR's launch. Although the concept of self-replicating code was emergent from the early 1970s, and the notion of a 'worm' originated in Brunner (1975), the terms 'trojan' and 'computer virus' were only just entering usage at the time that CLSR launched, and the first widespread malware infestation, by the Jerusalem virus, occurred 2 years afterwards.

(2) Communications

At enterprise level, local data communications - i.e. within a building, up to the level of a single corporate campus such as a factory, power-plant, hospital or university - used various proprietary local area network (LAN) technologies, increasingly 10Mbps Ethernet segments, and bridges between them. Ethernet versions supporting 100Mbps became available only in the mid-1990s, and 1Gbps from about 2000. Telecommunications (i.e. over distance) had been available in the form of PTT-run telegraph services from the 1840s and in-house in the form of telex ('teleprinter exchange') from the 1930s. Much higher-capacity connections were available, but they required an expensive 'physical private network' or the services of a Value-Added Network (VAN) provider. The iron grip that PTTs held over telecommunications services was only gradually released, commencing around the time of CLSR's launch and continuing through the next decade. High-capacity fibre-optic cable was deployed in national backbones commencing in the mid-1980s, but substantial improvements in infrastructure availability and costs took a further decade, with the notional 'last mile' to users' premises continuing to depend on existing twisted-pair copper cables.

The Internet dates to the launch of the inter-networking protocol pair TCP/IP in 1983, but in 1985 its availability was largely limited to scientific and academic communities, primarily within the US. The Domain Name System was launched 3 months before CLSR's first issue. However, the .uk domain, even though it was one of the first delegated, was not created until a few weeks after CLSR's first Issue. The Internet only gradually became more widely available, commencing 8 years later in 1993, and the innovation that it unleashed began in the second half of the 1990s (Clarke 1994c, 2004a).

The prospect existed of local communications for small-business and personal use. Direct PC-to-PC connection was possible, and the 230Kbps Appletalk LAN was available, but for Macintoshes only. Ethernet became economically feasible for small business and consumers within a few years of CLSR's launch, using 10BaseT twisted-pair cabling, but adoption was initially slow. Wireless local area networks (wifi) did not become available until halfway through CLSR's first 35 years, from the early 2000s.

Telecommunications services accessible by individuals had begun with the telegraph in the 1840s for text, and the telephone in the 1870s for voice. In the years preceding CLSR's launch, videotex became available, including in the UK as Prestel from 1979, and most successfully in France as Minitel - 1982-2012 (Kramer 1993, Dauncey 1997). In 1985, practical telecommunications support for small-business and personal needs comprised 2400bps modems transmitting digital signals over infrastructure designed for analogue signals, and bulletin board systems (BBS). Given the very low capacity and reliability, it was more common to transfer data by means of floppy disks, c. 1MB at a time. It took a further decade, until the mid-1990s, for modem speeds to reach 56Kbps -- provided that good quality copper telephone-cabling was in place. Early broadband services -- in many countries, primarily ADSL -- became economic for small business and consumers only during the late 1990s, c. 15 years after CLSR's establishment.
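
A rough calculation makes clear why physical media remained the more practical channel. The figures are nominal, and ignore start/stop bits, protocol overhead and retransmission after line errors, all of which made matters considerably worse in practice:

    # Nominal time to transfer the contents of one c. 1MB floppy disk
    # over a modem (overheads and errors ignored)
    bits = 1_000_000 * 8                  # 1MB expressed in bits
    print(bits / 2_400 / 60)              # c. 56 minutes at 2400bps
    print(bits / 56_000 / 60)             # c. 2.4 minutes at the 56Kbps
                                          # achieved a decade later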

Fax transmission was entering the mainstream. As modem speeds slowly grew from 2.4 to 56Kbps over the period 1985-95, fax machine installations and traffic exploded. Then, from the mid-to-late 1990s, fax usage declined almost as rapidly as it had grown, as Internet-based email and email-attachments offered a more convenient and cheaper way to perform the same functions and more besides.

In 1985, analogue/1G cellular networks had only recently commenced operation, and supported voice only. 2G arrived only after a further 7-8 years, in the early 1990s, including SMS/'texting' from the mid-1990s. 3G arrived another decade later, in the early 2000s, and matured to offer data transmission that was expensive and low-bandwidth. 4G followed another decade on, in the early 2010s, and 5G is emergent in 2018. In short, in 1985, handheld network-based computing was a quarter-century away.

(3) Data

Data storage technology was dominated by magnetic disk for operational use, and magnetic tape for archival. Optical data storage was emergent in the form of CDs, later followed by the higher-density DVD format. The recordable media proved to have a short viable life of c. 5-10 years. Portable solid-state storage devices (SSDs) emerged from the early 1990s and became increasingly economic for widespread use from c.2000. Optical technology was quickly overtaken by ongoing improvements in hard disk device (HDD) capacity and economics, and by reductions in the cost of SSDs, and usage was in rapid decline by 2010.

In 1985, textual and numeric data formats used 7-bit ASCII encoding, which was limited to 128 characters and hence to the Roman alphabet without diacritics such as umlauts. Apple's Macintosh enabled the custom-creation of fonts from about the time of CLSR's launch, bringing proportional fonts with it. However, extended 8-bit encodings, supporting 256 characters and hence many of the variants of the Roman alphabet, became available in standardised form only in the late 1980s. Communications using all other character-sets depend on multi-byte formats, notably the variable-length UTF-8 standard, which became available in the early 1990s but did not achieve dominance on the Web until c. 2010.
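
The practical effect of these encoding limits can be demonstrated in a few lines of present-day Python. The sample string is illustrative; the encodings shown are 7-bit ASCII, the 8-bit ISO 8859-1 extension, and variable-length UTF-8:

    text = "Zürich"

    # 7-bit ASCII cannot represent the umlaut at all
    try:
        text.encode("ascii")
    except UnicodeEncodeError as e:
        print("ASCII:", e.reason)              # 'ordinal not in range(128)'

    # An 8-bit extension covers Western European diacritics, 1 byte each
    print(len(text.encode("iso-8859-1")))      # 6 bytes

    # UTF-8 is variable-length: the umlaut occupies 2 bytes (7 in total),
    # but the same scheme extends to every other character-set
    print(len(text.encode("utf-8")))           # 7 bytes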

For structured forms of data, data-management depended on a range of file-structures, with early forms of database management system (DBMS) in widespread use. Relational DBMS were emergent as CLSR launched. Adoption at enterprise level was brisk, but slower in small business and especially in personal contexts. On the other hand, effective data management tools for sound, image and video were decades away.

Standardisation and interoperability of record-formats and documents had begun with Electronic Data Interchange (EDI) standards, which were in limited use for some business documents before 1985, although primarily within the US. Standardisation has since proceeded across many industries, but in a desultory manner and over an extended period. Since the emergence of the Web, both formal standards such as XML Document Type Definitions (DTDs) and 'lite' versions of data-structure protocols such as JavaScript Object Notation (JSON) have been applied. In 1985, AutoCAD was emergent, and was beginning to provide de facto standard data formats for design data.
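
By way of illustration, the following sketch expresses the same business record in the two styles mentioned above: a 'lite' JSON structure, and an XML document of the kind that a DTD would constrain. The record and its field-names are invented for the purpose:

    import json
    import xml.etree.ElementTree as ET

    # A hypothetical business record, for illustration only
    order_line = {"order_no": "PO-1001", "sku": "AB-17", "qty": 12}

    # The 'lite' JSON form
    print(json.dumps(order_line))
    # {"order_no": "PO-1001", "sku": "AB-17", "qty": 12}

    # The same record as an XML document
    root = ET.Element("OrderLine")
    for field, value in order_line.items():
        ET.SubElement(root, field).text = str(value)
    print(ET.tostring(root, encoding="unicode"))
    # <OrderLine><order_no>PO-1001</order_no><sku>AB-17</sku><qty>12</qty></OrderLine>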

Data-compression standards for audio were emergent in 1985, and matured in the early 1990s. Image compression, by means of the still-much-used .gif and .jpg formats, emerged in the years following the journal's launch, with video compression becoming available from the mid-1990s. The impetus for each was growth in available bandwidth sufficient to satisfy the geometrically greater needs of audio, image and video respectively.
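
The scale of those 'geometrically greater needs' can be made concrete with some rough uncompressed data-rates. The figures below are nominal illustrations only:

    # Nominal raw (uncompressed) data-rates, showing why each medium had
    # to await both compression standards and bandwidth growth
    text_rate  = 200 * 8                   # c. 200 characters/second of text
    audio_rate = 44_100 * 16 * 2           # CD-quality stereo audio
    video_rate = 640 * 480 * 24 * 25       # modest 25fps colour video

    print(f"{text_rate:,}")                # 1,600 bits/second
    print(f"{audio_rate:,}")               # 1,411,200 bits/second
    print(f"{video_rate:,}")               # 184,320,000 bits/second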

(4) Actuator Technologies

Remotely-operated garage-door openers date to the early 1930s, and parking-station boom-gate operations to the 1950s. Electro-mechanical, pneumatic and hydraulic means of acting on the real world were commonplace in industry during the mid-to-late 20th century. Computerised control systems that enabled the operation of actuators under program control were emergent when CLSR began publication. They have matured, but most are by their nature embedded inside more complex artefacts, e.g. in cars, initially in fuel-mix systems but subsequently in many sub-systems, and in whitegoods and in heating and cooling systems.

In such areas as utilities (water, gas, electricity) and traffic management, supervisory control and data acquisition (SCADA) systems were well-established in 1985. However, they operated on physical private networks, and hence they too attracted limited attention in law and policy discussions.

In secondary industry, at the time CLSR was launched in the mid-1980s, robotic tools were being integrated into automobile production-lines. The techniques have been adopted somewhat more widely since then, for example in large warehouse complexes and large computer data archives.

Among the kinds of systems that were more obvious to people, and that are more-often considered in discussions of 'IT and law', few included actuators in 1985. Examples are ATMs, which were mainstream before 1985, and EFT/POS, which was emergent at that time and was first addressed in the journal in 1987. These, depending on the particular implementation, may involve carriages to ferry cards into and out of the device, and to dispense cash and receipts. Over time, some other public-facing computer-based systems incorporated actuators, such as toll-road and parking-station boom-gates integrated with payments systems.

This review of the technical aspects of information infrastructure in 1985 shows that multiple technologies were emerging, maturing and being over-run, in what observers of the time felt was something of a tumult (Toffler 1980). The impacts of these technologies on their users - and on other people entirely - came as something of a shock to them. At least at that stage, technology was driven by enthusiastic technologists and then shaped by entrepreneurs. Their designs often paid limited attention to feedback from users, and were formed with even less feedforward from the as-yet insufficiently-imaginative public as to what people would like information technology to be and do, and to not be and not do.


3.2 Psycho-Socio-Economic Factors in 1985

A key insight offered by the sociotechnical notion was the inter-dependence between technical and other factors. However, the driver of developments in the information society and economy of the mid-1980s was to a considerable extent the capabilities of new technologies. It was therefore appropriate to consider the technical aspects in the earlier section, and to move now to the other contextual factors, in order to document how the technologies were applied, and what impacts and implications they had. This sub-section begins by considering the social dimension, from the individual, via small groups to large groups, and then shifts to the economic and political dimensions.

(1) Individuals

In 1985, the main setting within which individuals interacted with IT was as employees within organisations. Given that the technology was expensive to develop and only moderately adaptable, and many workplaces were largely hierarchical, workers were largely technology-takers. This was to some extent ameliorated by the then mainstream 'waterfall' system development process, which featured a requirements analysis phase and, within some cultures, participative elements in design processes.

The prospect was emerging of impacts on individuals who were not themselves users of information infrastructure or participants in the relevant system. The term 'usees' is a useful descriptor for such people (Clarke 1992, Fischer-Hübner & Lindskog 2001, Baumer 2015). Examples include data subjects in data collections for criminal intelligence and investigation agencies, and in private sector records accumulated by credit bureaux and in insurance-claim and tenancy databases.

Considerable numbers of individuals were also beginning to interact with personal computing devices, originally as kits for the technically-inclined (1974), followed by the Apple II, Commodore PET and TRS-80 for the adventurous (1977), after which a further wave of machines (1980-81), the IBM PC (1981) and the Apple Macintosh (1984) brought them into the mainstream. The substantial surge in individual activities was only at its very beginning in 1985, but expectations at the time were high.

The notion of 'cyberspace' had been introduced to sci-fi aficionados by Gibson (1983), and popularised in his best-selling novel 'Neuromancer' (1984). To the extent that a literary device can be reduced to a definition, its primary characteristic could be identified as 'an individual hallucination arising from human-with-machine-and-data experience'. No evidence has been found that there was any awareness of the notion in the then fairly small computers and law community in 1985. In the Web of Science collection, the first mention found is in 1989. The term was first explained to CLSR readers in 8, 2 (Birch & Buck 1992). Few references are found by Google Scholar even through the 1990s.

(2) Groups

The earliest forms of computer-mediated (human) communications (CMC) date to the very early 1970s, in the form of email and e-lists. The field that came to be known as computer-supported cooperative work (CSCW) was initiated by Hiltz & Turoff (1976), and their electronic information exchange system (EIES) was further developed and applied (Hiltz & Turoff 1981). The application of more sophisticated 'groupware' technology within group decision support systems (GDSS) was perceived as "a new frontier" as CLSR was launched (DeSanctis & Gallupe 1984), and was further researched and commercialised (Dennis et al. 1988, Vogel et al. 1990). For a broader view of collaborative work through computers and over distance, see Wigan (1989).

Research and resources were focussed on the workplace environment, and impacts elsewhere were more limited. Bulletin board systems (BBS) emerged in Chicago in 1978, Usenet at the University of North Carolina in 1979, and multi-user dungeons / domains (MUDs) at the University of Essex also in 1979. Internet relay chat (IRC) followed, out of Finland in 1988 (Rheingold 1994). However, only a very small proportion of the population used them, or had even heard of them. These technologies and their uses attracted attention in CLSR, with 'electronic mail' and 'e-mail' first mentioned in 1,1 (1985), BBS in 1987, newsgroups in 1989 (although Usenet not until 1995), 'email' in 1990, IRC in 1993 and MUDs in 2000. The exploitative world of 'social media' emerged in the form of 'social networking services' only in 2002-03 (Clarke 2004b), with the first mention in the journal in 2006.

(3) Communities

In 1985, the idea that information infrastructure could assist larger-scale groups to interact, coalesce or sustain their integrity was yet to emerge. The Whole Earth 'Lectronic Link (the Well) was just being established, and hence virtual or eCommunities and even virtual enhancements to existing communities were at best emergent. Electronic support for communities was limited to a few sciences. It had not even reached the computing professions, let alone the less technically-capable participants in local, regional, ethnic, lingual and occupational associations.

(4) Societies

In 1985, only the very faintest glimmers of electronic support for whole societies were detectable. Among the constraints was the very slow emergence of convenient support for languages other than those that use the unadorned Roman alphabet. It is not easy to identify concrete examples of the information infrastructure of the mid-1980s having material impacts on whole societies. One somewhat speculative harbinger of change dates to shortly after CLSR's launch. The implosion of the Soviet bloc from 1989, and with it the Communist era, can be explained in many ways. One factor was arguably the Soviet economy's need to take advantage of computing, as a result of which PCs were becoming more widely available. PCs enabled 'samizdat' - inexpensive composition, replication and even distribution of dissident newsletters - which was inconsistent with the suppression of information that had enabled the Soviet system to survive. Among many alternative interpretations, however, is the possibility that photocopying, which developed from the 1950s onwards, was at least as significant a de-stabilising factor as PCs were.

(5) Economies

Even in 1985, the economic impact of information infrastructure had long been a primary focus. Given that this was, and has continued to be, much-travelled territory, it is not heavily emphasised here. A couple of aspects are worth noting, however. EDI had begun enabling electronic transmission of post-transaction business-to-business (B2B) documentation, in the USA from the 1970s and in Europe by the mid-1980s. There was, however, no meaningful business-to-consumer (B2C) eCommerce until Internet infrastructure and web-commerce standards were in place. This was only achieved a decade after CLSR's launch, from the 4th quarter of 1994 (Clarke 2002).

At the time of CLSR's launch, a current issue in various jurisdictions was whether software was subject to copyright. It was held in only two countries that this was not the case - Australia and Chile. Early forms of physical and software protections for software were already evident in 1985, and were to become major issues mid-way through CLSR's first 35 years. In 1984-87, Stewart Brand encapsulated the argument as being that 'information wants to be free, it also wants to be expensive; that fundamental tension will not go away' (Clarke 2000). That conflict of interests has been evident ever since.

A further question current at the time was whether, and on what basis, software producers were, or should be, subject to liability for defects in their products and harm arising from their use (Clarke 1989a). In various jurisdictions, it took another two to three decades before any such consumer protections emerged.

At the level of macro-economics and the international order, IT was dominated by the USA and Europe. The USSR lagged due to its closed economy. Consumer electronics and motor vehicle manufacturing industries had been migrating to economies with lower labour costs, and Japan was particularly dynamic in these fields at the time. In 1982, Japan initiated an over-ambitious endeavour to catch up with the USA in IT design and manufacture, referred to as 'Fifth Generation Computer Systems'. Shortly before CLSR's launch, this had "spread fear in the United States that the Japanese were going to leapfrog the American computer industry" (Pollack 1992).

(6) Polities

The Internet's first decade of public availability was marked by an assumption that anonymity was easily achieved, typified by a widely-publicised cartoon published in 1993 that used the line 'on the Internet, nobody knows you're a dog'. Anonymity is perceived as a double-edged sword - undermining accountability, but also enabling the exposure of corruption and hypocrisy. However, computing technologies generally have considerable propensity to harm and even destroy both privacy and human freedoms more generally. Warnings have been continually sounded (e.g. Miller 1972, Rosenberg 1986, Clarke 1994b). This topic is further considered later.


4. 2018

The previous section has noted that the information infrastructure available at the beginning of CLSR's life was, in hindsight, still rather primitive and constraining. Its impacts and implications were accordingly of consequence, but still somewhat limited. Many of the topics that have attracted a great deal of attention in the journal during the last 35 years were barely even on the agenda in 1985.

This section shifts the focus forward to the present. The purpose is to identify the key aspects of the sociotechnical context within which CLSR's authors and readers are working, thinking and researching at the time of the 200th issue. The discussion addresses the technical aspects within the same broad structure and sequence as before. In dealing with the human factors, on the other hand, a different sequence is used.


4.1 Information Infrastructure in 2018

In each of the four segments used to categorise technologies, very substantial changes occurred during the last 35 years. Some of those changes are sufficiently recent, and may prove to be sufficiently disruptive, that the technological context might be materially different again as early as 5-10 years from the time of writing. For the most part, however, this section concentrates on 'knowns', and leaves consideration of 'known unknowns' and speculation about 'unknown unknowns' for the closing sections.

(1) Devices

During the first 75 years of the digital world, 1945-2020, enterprise computing has been dominated successively by mainframes, mini-computers, rack-mounted micro-computers, desktop computers, portables and handhelds. During the period 2000-2015, however, a great many organisations progressively outsourced their larger devices, their processing and their vital data. A substantial proportion of the application software on which organisations depend is now developed, maintained and operated by other parties, and delivered from locations remote to the organisation in the form of (Application) Software as a Service (SaaS). Most outsourcing was undertaken without sufficiently careful evaluation of the benefits, costs and risks, and achieved cost-savings that were in many cases quite modest, in return for the loss of in-house expertise, far less control over their business, and substantially greater dependencies and vulnerabilities (Clarke 2010b, 2012a, 2013b). The situation in relation to the smaller IT devices on which organisations depend is somewhat more complex, and is discussed below.

Commencing in the mid-to-late 1990s, users of consumer devices have come to expect the availability of connectivity with pretty much anyone and anything, anywhere in the world. Many consumer devices take advantage of wireless communications and lightweight batteries to operate untethered, and hence are mobile in the sense of being operable on the move rather than merely in different places at different times.

The conception of consumer devices has changed significantly since the launch of the Apple iPhone in 2007. This was designed as an 'appliance', and promoted to consumers primarily on the basis of its convenience. With that, however, comes supplier power over the customer. The general-purpose nature and extensibility of consumer computing devices during the three decades 1977-2007 have been reduced to a closed model that denies users and third party providers the ability to vary and extend device functions, except in the limited ways permitted by the provider. This quickly became an almost universal feature of phones and then of tablets, with Google's Android operating environment largely replicating the locked-down characteristic of Apple's iOS. At the beginning of 2018, the two environments comprehensively dominate the now-vast handheld arena, with 96% of phones and 98% of tablets.

A further aspect encourages suppliers to convert users from computers to mere appliances. During the period 2005-2015, there was a headlong rush by consumers away from the model that had been in place during the first three decades of consumer computing c. 1975-2005. Instead of running software products on their own devices, and retaining control over their own data, consumers have become heavily, and even entirely, dependent on remote ('cloud') services run by corporations, with their data stored elsewhere, and a thin sliver of software, cut down from an 'application' to an 'app', running on devices that they use, and that they nominally own, but that they do not control (Clarke 2011, 2017).

The functionality of most apps has become so limited that a large proportion of most devices' functions are unavailable when they have no Internet connection. This eventuality may be experienced only occasionally and briefly by city-dwellers, but frequently and for extended periods in remote, rural and even regional districts, and in mountainous and even merely hilly areas, in many countries. Many urban-living people in advanced economies currently lack both local knowledge and situational awareness and are entirely dependent on their handhelds for navigation. Similarly, event planning in both social and business contexts has become unfashionable, and meetings are negotiated in click-space. Hence many people whose devices fail them are suddenly isolated and dysfunctional, resulting in psychological and perhaps economic risks associated with social and professional non-performance, but also physical risks associated with place.

This dependency is not the only security issue inherent in consumer computing devices. Desktops, laptops and handhelds alike are subject to a very wide array of threats, including viruses, worms, trojans, cookies, web-bugs, ransomware and social engineering exploits such as phishing. Unfortunately, the devices also harbour a very wide array of vulnerabilities. Security has not been designed-in to either the hardware or the software, and many errors have been made in implementation, particularly of the software. The result is a great deal of unintended insecurity. Yet worse, suppliers have incorporated within the devices means whereby they and other organisations can gain access to data stored in them, and to some extent also to the functions that the devices perform. Examples of these features include disclosure by devices of stored data, including identity and location data, auto-updating of software, default-execution of software, and backdoor entry. Consumer devices are insecure both unintentionally and by design (Clarke & Maurushat 2007, Clarke 2015). Further issues arise from the imposition of device identifiers, such as each handset's International Mobile Equipment Identity (IMEI) serial number, and the use of proxies such as the identifiers of chips embedded within devices, e.g. in Ethernet ports and telephone SIM-cards. One of the justifications for such measures is security. However, while some aspects of security might be served, other aspects of security, such as personal safety, may be seriously harmed.

The vast majority of employees of and contractors to organisations possess such devices. Organisations have to a considerable extent taken up the opportunity to invite their staff to 'bring your own device' (BYOD). This has been attractive to both parties, and hence organisations have been able to take advantage of cost-savings from the BYOD approach. The devices are not the only thing that staff have brought into the workplace. All of the insecurity, both unintended (such as malware) and designed-in (such as access by third parties), comes into the organisation with them, as do the vulnerabilities to withdrawal of service and loss of data. Although moderately reliable protection mechanisms are available, such as establishing protected enclaves within the device, and installing and using mobile device management (MDM) software, many organisations do not apply them, or apply them ineffectively.

It remains to be seen how quickly suppliers will reduce other form-factors to the level of appliances. The idea is attractive not only to suppliers, but also to governments and government agencies, and to corporations that desire tight control over users (such as content providers and consumer marketers). As with handheld market segments, operating system marketshare in the desktop and laptop arena is dominated by a duopoly, with Microsoft and Apple having 96% of the market at the beginning of 2018. Desktop and laptop operating environments are being migrated in the direction of those in handhelds, and a large proportion of users' data is stored outside their control. Hence all consumer devices and the functions that they are used for - including all of those used in organisational contexts through the now-mainstream BYOD approach - have a very high level of vulnerability to supplier manipulation and control.

Beyond the mainstream form-factors discussed above, there is a proliferation of computing and communications capabilities embedded in other objects, including white-goods such as toasters and refrigerators, but also bus-stops and rubbish-bins. Drones are crossing the boundary from expensive military devices to commercial tools and entertainment toys (Clarke 2014c). In addition, very small devices have emerged - in such roles as environmental monitors, drone swarms and smart dust - and their populations might rapidly become vast. The economic rationale is that, where large numbers of devices are deployed, they are individually expendable and hence can be manufactured very cheaply by avoiding excessive investment in quality, durability and security. Such 'eObjects' evidence a wide range of core and optional attributes (Manwaring & Clarke 2015). They are promoted as having important data collection roles to play in such contexts as smart homes and smart cities. Meanwhile, in 2018, motor vehicles bristle with apparatus that gathers data and sends it elsewhere, and the prospect of driverless cars looms large.

The software development landscape is very different in 2018 compared with 1985. The careful, engineering approach associated with the 'waterfall' development process was over-run from about 1990 onwards by various techniques that glorified the 'quick and dirty' approach. These used fashionable names such as Rapid Application Development and Agile Development, and were associated with slogans such as 'permanent beta' and 'move fast and break things'. With these mind-sets, requirements analysis is abbreviated, user involvement in design is curtailed, and quality is compromised; but the apparent cost of development is lower. In parallel, however, 'development' has increasingly given way to 'integration', with pre-written packages and libraries variously combined and adapted, as a means of further reducing cost, although necessarily at the expense of functionality-fit and reliability.

A final aspect that needs to be considered is a major change in the transparency of application software. Early-generation development tools - machine-language, assembler, procedural and 4th generation (4GL) languages - were reasonably transparent, in the sense that they embodied a model of a problem and/or solution that was accessible and auditable, provided that suitable technical expertise was available. Later-generation approaches, on the other hand, do not have that characteristic. Expert systems model a problem-domain, and an explanation of how the model applies to each new instance may or may not be available. Even greater challenges arise with the most highly abstracted development techniques, such as neural networks (Clarke 1991, Knight 2017). Calls for "transparency, responsible engagement and ... pro-social contributions" are marginalised rather than mainstream (Baxter & Sommerville 2011, quotation from Diaconescu & Pitt 2017 p.63).

(2) Communications

At the level of transmission infrastructure, in many countries the network backbone or core was converted to fibre-optic cables remarkably rapidly c. 1995-2005. This has coped very well with massively-increased traffic volumes, although locations that are remote and thinly-populated are both less well-served and lack redundancy. The 'tail-ends', from Internet Service Providers (ISPs) to human users and other terminal devices have been, and remain, more problematical. For fixed-location devices, even in urban areas, many countries have regarded fibre-optic as too expensive for 'last-mile' installation, leaving lower-capacity twisted-pair copper and coaxial cable connections as the primary tail-end technologies. In less densely populated regional, rural and remote areas, cable-connection is less common, and instead fixed wireless and satellite links are used.

Within premises, Ethernet cable-connections remain, but Wifi has taken over much of the space, particularly in domestic and small business settings. Cellular networks offer what is in effect a wireless wide-area tail-end, and provide 'backhaul' connections to the Internet backbone. Traffic on these sub-networks is already very substantial, and if 5G promises are delivered, it will grow much further. The diversity is very healthy, but the balancing of competition and innovation, on the one hand, against coherent and reliable management and stable services, on the other, becomes progressively more challenging.

The Internet family of protocols utilises the transmission infrastructure to carry messages that deliver services to end-points. The brief description here seeks to simplify, but to avoid the common error of blurring the distinctions between devices and networks, and among applications, protocols and services. Figure 1 offers a much-simplified depiction of the present arrangements. A wide range of services run over the established lower-layer protocols. The Web's http protocol continues to dominate traffic involving people, although from c. 2005 onwards in the much-changed form popularly referred to as Web 2.0 (Clarke 2008a). However, the proportion of circumstances is increasing in which Internet communication involves no human being on either end, and instead the link is machine-to-machine (M2M).

Figure 1: The Internet of 2018
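
The layering depicted in Figure 1 can be illustrated with a minimal sketch, in which an application composes an http request as plain text and hands it to TCP, which in turn rides on IP over whatever transmission infrastructure is available. The sketch is in Python, and the host example.com is a placeholder:

    import socket

    # Application layer: an HTTP/1.1 request, expressed as plain text
    request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

    # Transport layer: TCP carries the bytes; network layer: IP routes them.
    # The application neither knows nor cares what media carry the packets.
    with socket.create_connection(("example.com", 80)) as tcp:
        tcp.sendall(request)
        reply = tcp.recv(1024)

    print(reply.split(b"\r\n")[0])        # e.g. b'HTTP/1.1 200 OK'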

Figure 2 endeavours to accommodate a number of changes that appear to be in train at present, with several additional infrastructural patterns appearing likely to expand beyond their beachheads. Mesh networks at the physical level, and peer-to-peer networking at the logical level, in many cases operate outside the control of corporations and government agencies. As the volume and diversity of M2M communications increase, alternatives to the intermediate-layer protocols may become significant. No serious alternative to the engine-room protocol, IP, is currently evident, however. The 35-year history of the original version of the protocol (IPv4) continues unabated, with IPv6 having made only limited inroads since its launch c. 2000. Developments in cellular services and IoT, however, may soon lift IPv6 markedly beyond its current c. 5% share of traffic.

Figure 2: Internetworking Post-2018
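
The difference in scale between the two versions of IP, which underlies the pressure for eventual transition, can be demonstrated using Python's standard ipaddress module. The addresses shown are reserved documentation examples:

    import ipaddress

    v4 = ipaddress.ip_address("192.0.2.1")      # IPv4: a 32-bit address space
    v6 = ipaddress.ip_address("2001:db8::1")    # IPv6: a 128-bit address space

    print(v4.version, 2 ** 32)     # 4 4294967296 (c. 4.3 billion addresses)
    print(v6.version, 2 ** 128)    # 6 340282366920938463463374607431768211456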

The openness of Internet architecture has created potentials that have been applied to many purposes. Some parties have perceived some of those applications as being harmful to their interests. These include large corporations, such as those that achieve superprofits from copyrighted works, and political regimes that place low value on political and social freedoms and focus instead on the protection of existing sources of political power. Oppressive governments and corporations seek to constrain performance and to impose identification and tracking, in order to facilitate their management of human behaviour. Some have also sought to influence the engineering Standards that define the Internet's architecture, and to exercise direct control over Internet infrastructure, at least within their own jurisdictions.

(3) Data

Many different technologies are needed to create digital data to represent the many relevant forms of phenomena and activities. By the end of the 20th century, most of those technologies already existed. The most challenging and hence slowest to mature were the capture of snapshots in three dimensions (3D), and the rendering of 3D objects, such as sculptures and industrial artefacts. Data formats are now mature, and interoperability among systems is reasonably reliable. DBMS technologies are also mature, and the greatest challenges are now in achieving flexibility and adaptability as needs change.

In 2018, for documents, sound, images and video, 'born-digital' has become the default. Financial and social transactions result in auto-capture of data as a byproduct. In addition, a great deal of data about 'the quantified self' emanates from 'wellness devices' and other forms of 'wearables' (Lupton 2016). In the B2C and government-to-citizen (G2C) segments, much of the effort and cost of text-data capture has been transferred from organisations to individuals, in the form of self-service, particularly through web-forms. In addition, as noted earlier, consumer devices are promiscuous and pass large amounts of data to device suppliers and a wide assortment of other organisations.

These various threads of development have resulted in a massive increase in the intensity of data collection. That has been coupled with a switch from data destruction to data retention, as a result of cheap bulk storage and hence lower costs to retain than to selectively and progressively delete. Data discovery has become well-supported by centralised indexes and widely-available search-capabilities. Data access and data disclosure have been facilitated by the negotiation and widespread adoption of formalised Standards for data formatting and transmission. These factors have combined to make organisations heavily data-dependent, with the resulting complex usefully referred to as the digital surveillance economy (Clarke 2018b). This is discussed further below.

Governments also have a great deal of interest in acquiring data, and may apply many of the techniques used by consumer marketing corporations. During the preparation of this article, for example, debate raged about the scope for election campaigns to be manipulated by carefully-timed and -framed messages targeted by means of psychometric profiles developed by marketing services organisations. Government agencies may have greater ability than corporations to impose identifiers on individuals, and to force their use, e.g. as a condition of delivering services (Clarke 2006a). The scope also exists for data to be collected from surveillance mechanisms such as closed-circuit TV (CCTV), particularly if coupled with effective facial recognition systems, and automated number plate recognition (ANPR).

The imposition of biometric identification is creeping out from its original uses in the context of criminal investigation, and into visa-application, border-crossing, employment and even schools. Uses of data and identifiers in service delivery are being complemented by uses in denial of service. Techniques imposed on refugees in Afghanistan (Farraj 2010) may find their way into schemes in India, where the highly contentious Aadhaar biometric scheme has been imposed on a billion people (Rao & Greenleaf 2013), and the PRC, where a 'social credit' system has recently been deployed as a data-driven and semi-automated basis for social control (Chen & Cheung 2017, Needham 2018).

(4) Actuator Technologies

SCADA traffic not only carries data about such vital services as water supply, energy generation and supply, and traffic control - in air, sea, rail and road contexts - but also instructions that control those operations. Since c.2000, SCADA traffic has migrated from physical private networks onto the open Internet. The Internet's inherent insecurity gives rise to the risks of data meddling and the compromise of control systems. Destructive interference has been documented in relation to, for example, sewage processing and industrial centrifuges used in nuclear enrichment; and governments as diverse as North Korea, the PRC, Russia, Israel and the USA have been accused of having and deploying considerable expertise in these areas.

Robotics was for many decades largely confined to factories and warehouses, plus eternal, entertaining but ineffective and harmless exhibitions of humanoid devices purporting to be household workers and playthings. That may be changing, and it might do so quite rapidly. Drones already, and necessarily, operate with at least a limited degree of autonomy. Within-car IT has expanded well beyond fine-tuning the mix entering the cylinders of combustion engines. Robotics in public places is becoming familiar in the form of driverless car experiments, with early deployments emergent.

Another context in which the application of actuator technologies has expanded is in 'smart homes'. Garage doors operated from handsets nearby have been joined by remote control over heating and cooling systems, ovens and curtains. In some homes, switching lights on and off, even from within the premises, is dependent on connectivity with, and appropriate functioning of, services somewhere in the cloud. Many 'smart city' projects also involve not just data collection but also elements that perform actions in the real world.

The last frontier in the digitisation revolution has been 3D printing. This involves building up successive layers of material, in a manner analogous to inking paper. In principle, this enables the manufacture of artefacts, or at least of components, or at the very least of prototypes. Longstanding computerised design capabilities, in particular AutoCAD, may thereby be integrated with relatively inexpensive 'additive manufacturing' (AM). As with all technological innovations, challenges abound (Turney 2018), but the impacts might in due course be highly disruptive in some sectors.

Brief and necessarily incomplete though this review has been, it depicts a technological landscape that is vastly different from that of 1985. Information infrastructure is now far more pervasive and far more powerful, and involves a far greater degree of integration among its elements than was the case 30 years ago. It is far more tightly integrated with, and far more intrusive into, organisational operations and human behaviour, and its impacts and implications are correspondingly greater.


4.2 Psycho-Socio-Economic Factors in 2018

This section identifies key aspects of the softer elements within the socio-technical arena. It uses the same structure as the description of the 1985 context, but presents the elements in a different sequence. During the early years of the open public Internet, there was a pervasive attitude that individual and social needs would be well-served by the new infrastructure. However, that early optimism quickly proved to be naive (Clarke 1999b, 1999c, 2001). The primary drivers that are evident in 2018 are not individual needs, but rather the political and economic interests of powerful organisations. This section accordingly first deals with the polities, then the economies, and only then returns to the individual, group, community and society aspects.

(1) Polities

During the decade following CLSR's launch, a 'cyberspace ethos' was evident (Clarke 2004a), and considerable optimism existed about how the Internet would lead to more open government. The most extreme claims were of a libertarian 'electronic frontier', typified by John Perry Barlow's 'A Declaration of the Independence of Cyberspace' (Barlow 1996).

On the other hand, the rich potential of information technology for authoritarian use was already apparent (e.g. Miller 1972, Clarke 1994b). The notion of 'surveillance society' emerged in the years following CLSR's foundation (Weingarten 1988, Lyon 2001, Lyon 2004). The multiple technologies discussed above have been moulded into infrastructure that goes beyond 'society' and that enables the longstanding idea of an 'information state' (Higgs 2003) to be upgraded to the notion of the 'surveillance state' (Balkin & Levinson 2006, Harris 2010). This entails intensive and more or less ubiquitous monitoring of individuals by government agencies. During recent decades, governments have been increasingly presumptuous about what can be collected, analysed, retained, disclosed, re-used and re-analysed, and currently the 'open data' mantra is being used to outflank data protection laws. Further, in even the most 'free' nations, national security agencies have utilised the opportunity presented by terrorist activities to exploit IT in order to achieve substantial compromises to civil liberties. It is impossible to overlook the symbolism of John Perry Barlow finally becoming a full member of 'The Grateful Dead' while this article was being researched and written.

Technologies initially enabled improvements in the efficiency of surveillance, by augmenting physical surveillance of the person, and then enabling dataveillance of the digital persona (Clarke 1988). However, the surveillance state and surveillance economy have moved far beyond data alone. The digitisation of human communications has been accompanied by automated surveillance of all forms of messaging. More recently, the observation and video-recording of physical spaces has been developing into comprehensive behavioural surveillance of the individuals who operate in and pass through those spaces. Meanwhile, the migration of information-searching, reading and viewing from physical into electronic environments has enabled the surveillance of individuals' experiences. The term 'überveillance' has been coined to encapsulate the omnipresence of surveillance, the consolidation of data from multiple forms of surveillance, and the increasingly intrusive and even embedded forms of surveillance (Michael & Michael 2006, Michael & Michael 2007, Clarke 2007, Wigan 2014a).

Meanwhile, there are ongoing battles for control over the design of the Internet's architecture, protocols and infrastructural elements, and over operational aspects of the Internet. Control has been attempted through such venues as the Internet Corporation for Assigned Names and Numbers' Governmental Advisory Committee (ICANN/GAC) and the International Telecommunication Union (ITU), with the Internet Governance Forum (IGF) offered as a sop to social interests. To date, the Internet Engineering Task Force (IETF), provided with an organisational home by the Internet Society (ISOC), has remained in control of the engineering of protocols, and ICANN remains the hub for operational coordination.

Within individual countries, however, national governments have interfered with the underlying telecommunications architecture and infrastructure, and with the operations of Internet backbone operators and ISPs. Although repressive regimes are particularly active in these areas, governments in the nominally 'free world' are very active as well. Specific interventions into information architecture and infrastructure can be usefully divided into three categories:

Although the UK led developments in the very early years of computing technology, its industry was very quickly outpaced by the energy of US corporations. During the period c. 1950-2020, the US has led the world in most market segments of IT goods and services, with Europe a distant second. A current development that increases the likelihood of authoritarianism becoming embedded in information architecture and infrastructure is the emergence of the PRC as a major force in IT. It is already strongly represented in the strategically core element of backbone routers (Cherry 2005).

Many such measures are, however, challenging to make effective, and most are subject to countermeasures. That has led to something of an 'arms race'. For example, the use of cryptography to mask traffic-content has given rise to government endeavours to ban strong cryptography or to require backdoor access for investigators. Similarly, obfuscation of the source and destination of traffic through the use of virtual private networks (VPNs), proxy-servers, and chains of proxy-servers (such as Tor) has stimulated bans and counter-countermeasures. All of these activities of course give rise to a wide array of IT law and security issues.
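
A minimal sketch may clarify why proxy-chains of the Tor kind resist interception: the client encrypts a message in layers, one per relay, so that no single relay sees both the source and the content. The relay names and the use of the 'cryptography' package's Fernet recipe are illustrative simplifications; real Tor circuits are built with a considerably more elaborate key-agreement protocol.

```python
# Layered ('onion') encryption: each relay can strip exactly one layer.
from cryptography.fernet import Fernet

relay_keys = {name: Fernet.generate_key()
              for name in ("guard", "middle", "exit")}

def wrap(message: bytes) -> bytes:
    """Encrypt for the exit first, then middle, then guard, so the guard
    sees only ciphertext plus the identity of the next hop."""
    for name in ("exit", "middle", "guard"):
        message = Fernet(relay_keys[name]).encrypt(message)
    return message

def unwrap(onion: bytes) -> bytes:
    """What the relays collectively do, one layer per hop."""
    for name in ("guard", "middle", "exit"):
        onion = Fernet(relay_keys[name]).decrypt(onion)
    return onion

onion = wrap(b"GET / HTTP/1.1")
assert unwrap(onion) == b"GET / HTTP/1.1"
```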

There is scope for political resistance, and possibly even for resilient democracy (Clarke 2016b), e.g. through mesh networks and social media - channels that were much-used during the Arab Spring, c. 2010-12 (Agarwal et al. 2012). In all cases, it is vital that protections exist for activists' identities (Clarke 2008b) and locations (Clarke 2001a, Clarke & Wigan 2011). However, technologies to support resistance, such as privacy-enhancing technologies (PETs), have a dismal record (Clarke 2014a, 2016e). The efforts of powerful and well-resourced government agencies, once they are focussed on a particular quarry, appear unlikely to be withstood by individual activists or even by suitably-advised advocacy organisations.

Another threat to democracy has been the adaptation of demagoguery, from the eras of mass rallies and then of mass media, to the information era. This has occurred in some respects despite, but in other ways perhaps because of, the potentials that information infrastructure embodies (Fuchs 2018).

Perhaps a larger constraint on government repression than resistance by people may be the ongoing growth in the power of corporations, and the gradual maturation of government outsourcing, currently to a level referred to as 'public-private partnerships'. In the military sector, activities that are increasingly being sub-contracted out include logistics, the installation of infrastructure, the defence of infrastructure and personnel, and interrogation - although perhaps not yet the actual conduct of warfare. The pejorative term 'mercenaries' has been replaced by an expression that is presumably intended to give the role a positive gloss: 'private military corporations'. The relationship between the public and private sectors could be moving towards what Schmidt & Cohen (2014) enthusiastically proposed: a high-tech, corporatised-government State. The organisational capacity of many governments to exercise control over a country's inhabitants may wane, even as the technical capacity to do so rises to a crescendo. However, corporations' interests and capabilities may take up where governments leave off.

(2) Economies

Moore's Law on the growth of transistor density, and hence processor power, may drive developments in some computationally-intensive applications for a few years yet, but it has become less relevant to most other uses of IT. Other factors have been of much greater significance in the scaling-up of IT's impacts: considerable reductions in the costs of devices; their diversity and improved fit to different contexts of use; substantial increases in their capacity to collect, analyse, store and apply data; greatly increased connectivity; and the resulting pervasiveness of connected devices and embedment of data-intensive IT applications in both economic and social activities. This sub-section deals with a number of key developments, beginning with micro considerations and moving on towards macro factors.

Disruptive technologies have wrought change in a wide variety of service industries, both informational and physical in nature. Currently, some segments of secondary industry are also facing the possibility of turmoil, due to the emergence of additive manufacturing technologies. Initiatives such as MakerSpaces are exposing people to technologies that appear to have the capacity to significantly lower the cost of entry, enable mass-customisation, and spawn regionalised micro-manufacturing industries (Wigan 2014b).

During the 1990s, it was widely anticipated that the market reach provided by the Internet would tend to result in disintermediation, and that existing and intending new intermediaries would need to be fleet of foot to sustain themselves (McCubbrey 1999, Chircu & Kauffman 1999). In fact, the complexity and diversity that have come with virtualisation have seen an increase in outsourcing and hence more, larger and more powerful intermediaries.

The virtualisation of both servers and organisations has led to 'cloudsourcing', whereby devices within the organisation are largely limited to the capabilities necessary to reach remote services (Clarke 2010b). Organisations have gained convenience and to some extent cost reductions, in return for substantial dependence on complex information infrastructure and a flotilla of service-providers, many unknown to and out of reach of the enterprise itself. This has given rise to considerable risks to the organisation and its data (Clarke 2013b). At consumer level, a rapid switch occurred from the application-on-own-device model, which held sway 1975-2005, to dependence on remote services, which has dominated since c. 2010 (Clarke 2011).

As the term 'the information economy' indicated (Drucker 1968, Porat 1977, Tapscott 1996), there are many industries in which data, in its basic, specialised and derivative forms, is enormously valuable to corporations. This is evident with software and data, with assemblies of data, and even with organic chemistry in such contexts as pharmaceutical products, plant breeder rights and the human genome. There are no 'property rights in data', but there are many kinds of 'rights in relation to data' (Wigan 1991, Kemp et al. 2011), and these provide owners of those rights with opportunities to exploit their monopolies. Corporations have been desperate to sustain those monopolies, and eager to extend them and to create new ones. Strong-arm tactics by primarily US software corporations ensured that the 1980s decisions of Australian and Chilean courts, to the effect that copyright did not subsist in software, were promptly over-ruled by the legislatures. That set the scene for the tension that Stewart Brand had identified to be resolved almost entirely in favour of 'expensive'. The consequences have included many attempts to implement technological protection for copyright objects (Clarke & Nees 2000), expansion in monopoly rights in software and data, the criminalisation of what had historically been the civil law transgression of copyright breach, and the transfer of some of the costs of defending corporations' monopoly rights from those corporations to the public. The US has successfully pursued a prolonged international campaign on behalf of its corporations, manipulating bilateral and multilateral arrangements to such an extent that corporate dominance over all forms of data might already be entrenched in customary international law (Okediji 2001).

The notion of the digital surveillance economy was introduced earlier. Voraciousness for data, and dependence on it, are features of private sector as well as public sector organisations. The use of remote services enables people to use multiple devices to perform the same function and access the same data. This convenience factor, however, has enabled the emergence of a highly exploitative corporate business model, popularly characterised by the aphorism 'If you're not paying, you're the product'. User convenience has become both a science and an art-form, with hedonism minutely studied, and features and interfaces designed so as to be appealing to people, and to stimulate compulsive behaviour. For example, features that create excitement among game-players have been applied in a way referred to as 'gamification'. Consumer-facing corporations contrive to acquire vast volumes of data associated with individuals. They then exploit the data using data analytics techniques (Pasquale 2015). The term 'surveillance economy' usefully encapsulates the notion (Davies 1997, Hirst 2013, Andrejevic 2014, Clarke 2018b). A broader idea of 'surveillance capitalism' has also been conceptualised (Zuboff 2015, 2016).

The World Wide Web that was largely responsible for drawing consumers to the Internet was designed as a means of serving users' needs. From 1990 until c. 2005, marketing organisations strove to subvert the consumer-friendly service into one that served corporations' interests (Clarke 1999b). For the digital surveillance economy to emerge and flourish, it has been necessary for corporations to achieve compromise to, and re-engineering of, the information infrastructure. Web 2.0 was developed in order to invert the service from its original form into a corporation-driven, consumer-hostile scheme (Clarke 2008a). Organisations acquire user and device identities, and proxies for them, such as IP-addresses, email-addresses, content planted on the devices (e.g. by means of cookies and web-bugs), 'signatures' of client-software such as web-browsers, and in some cases location data. A great deal of data is consensually acquired through dealings with individuals, and a great deal more is acquired surreptitiously. The harvest from the information infrastructure is then merged with existing data-holdings, and with additional data acquired, legitimately or otherwise, from other organisations through a tangled network of 'data brokers' (Christl & Spiekermann 2016). Rather than managing relationships with people, corporations have become distanced from them, and instead manage digital personae (Clarke 1994a, 2014b).
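
A hedged sketch of one such mechanism, the 'web-bug' or tracking pixel, may make the surreptitious acquisition concrete. The framework (Flask), route and cookie name below are illustrative assumptions, not a description of any particular corporation's system.

```python
# A 1x1 image served from a marketer's domain that plants and reads back
# a persistent identifier, accumulating page-views across sites.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)

# A minimal 1x1 GIF payload; its visual content is immaterial.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

@app.route("/pixel.gif")
def pixel():
    # Reuse the visitor's existing identifier, or mint a new one.
    uid = request.cookies.get("uid") or str(uuid.uuid4())
    # The Referer header reveals which page embedded the pixel, so every
    # page-view on every co-operating site accrues to the same identifier.
    print(uid, request.headers.get("Referer"), request.remote_addr)
    resp = make_response(PIXEL)
    resp.headers["Content-Type"] = "image/gif"
    resp.set_cookie("uid", uid, max_age=60 * 60 * 24 * 365)
    return resp
```

Any page that embeds the image tag contributes to the profile, without the person concerned taking any visible action at all.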

Consumer marketers are heavily dependent on the existence and vitality of an extended network of other corporations. They therefore judiciously open up devices and data to enable and encourage the emergence of additional capabilities and services that promote their own business interests. Hence there is a strong incentive for contemporary devices' designed-in insecurity to be sustained and enhanced, in order to provide generous data flows within each supplier's substantial network of strategic partners.

A long tradition has existed whereby parliaments legislate to achieve balances among the interests of different categories of entities, through various mechanisms, most relevantly in this context consumer rights and data protection regulatory regimes. The tradition of achieving balance has, however, been severely challenged by the growth in corporate size and power during the last 75 years, such that many corporations are now larger than most nation-states. Trans-national corporations are achieving the status of supra-national corporations and taking advantage of virtualisation of their activities, e.g. by engaging in regulatory arbitrage (Fleischer 2010). The result is that national jurisdictions are increasingly unable to impose the rule of law on them. This is as evident in consumer rights and privacy as it is in intellectual property law and practice.

Moving beyond micro- to macro-economics, another expectation of Internet-based commerce was that the considerable increases in market reach and market depth would shift economic activity to areas with lower costs, particularly of labour. Prices would be driven down to the limit of viability - and frequently beyond, as smaller and less capable suppliers drift into and out of markets. This 'digital disruption' postulate has proven more credible than the disintermediation postulate. Such industry segments as data capture and support ('call-centres') have long since been displaced to lower-cost areas, and some services requiring higher levels of technical expertise and intellectual capacity have followed: programming, but also some kinds of drafting, engineering and design, and small-scale tasking in, for example, research, writing, indexing and book-keeping.

Impacts have been apparent even within localised service industries such as package-logistics, with corporations' transport divisions outsourced to couriers, the couriers virtualised, and employment relationships replaced by casual contracts. It appears as if technology has legitimated 'neo-liberal' desires for deregulation of labour markets. Human-logistics is undergoing the same transition, with Uber and its look-alikes replacing taxi-services and to some extent personal cars. Governments have panicked in the face of the Uber phenomenon, suddenly disestablished the substantial regulatory frameworks that had grown up around taxis, and thrown drivers and their clients alike to the wolves. Whether the hitherto normal process of re-regulation will occur is in some doubt, because of legislatures' unwillingness to impose constraints on large and seemingly powerful corporations. Similar disruption may shortly arise in secondary industries, to the extent that additive manufacturing supports micro-industry. If many suppliers are available, then massive competition can be expected to result in low-cost strategies completely swamping long-term supplier relationships, dual-sourcing, and quality-assurance mechanisms.

A common feature of such industry structures is the displacement of employees into contractual relationships and 'self-employment', much of it in the form of poorly-paid under-employment and multiple casual engagements. The term 'the gig economy' highlights the need for people to work multiple jobs in order to achieve sufficient income, but it camouflages the switch from payment for time spent working back to remuneration based on completed tasks. There are serious doubts about the capacity of a large proportion of the workforce to function effectively within the gig economy, because it demands self-confidence, self-discipline, expertise in selling and administration, and emotional resilience. For most of the last 100 years, piecework and Taylorism were regarded as being excessively manipulative of workers; but the information infrastructure has enabled a strong resurgence in such arrangements.

(3) Individuals

The socio-technical aspects relating to polities and economies were considered first, because of the enormity of contemporary information infrastructure's impacts and implications in those areas. However, the social dimension is confronted by some further significant challenges.

Although the original conceptualisation of cyberspace was based on visualisation of vast stores and torrents of data, an additional and early image was of cyberspace as a locus of human and inter-human behaviour. Many individuals have taken advantage of the freedoms from conventional constraints, and indulge in role-playing, adoption of alternative personae, and reduced social inhibitions. Along with its positive aspects, this has given rise to a range of behaviours up to and including the psycho- and sociopathic.

From a positive perspective, a great many capabilities have been made available to individuals, in convenient forms, and apparently very inexpensively or gratis. However, many products, particularly remote cloud services, are structured in such a manner that individuals are entirely dependent on suppliers. This dependence is masked, because the terms of service and privacy policy statements are complex, and generally left unread, but are contrived so as to create the legal illusion of customer consent. Beyond the abuse of trust in relation to personal data, individuals face vulnerabilities in the case of events such as corporate failure and changes in corporate priorities. They may also be subject to price-gouging, which leads some to cancel their subscription, thereby cutting themselves off from both the services and their own data.

Computing technologies have proven to be very involving for some kinds of psyches, to the point that they generate obsessive behaviour in some people, and disconnection from physical phenomena 'in real life' / 'meatspace'. In addition to increased social distance from other individuals, a recent example is people with eyes glued to the screens of their mobile phones, fingers tapping away, oblivious to the traffic around them as they cross the road.

Such phenomena have been exacerbated since the emergence of the digital surveillance economy post-2005. This involves strong emphasis on interface and functionality design specifically to appeal to consumers' hedonism. These marketing activities utilise expertise in the stimulation and exploitation of obsessional and compulsive behaviour in individual consumers. A proportion of consumers appear to be rendered helpless by these techniques.

Another positive aspect of the current circumstances is the extensibility of many capabilities, and their applicability to functions that the designers did not originally have in mind. These are referred to in some literatures as 'affordances' (meaning 'actionable properties' - J.J. Gibson 1977, Norman 1999), or in some interpretations of that term, 'unintended affordances'. As William Gibson's less intellectualist and more grounded approach has it, 'the street finds its own uses for things'. The effect of this is that considerable scope exists for individuals to utilise products and services for their own purposes, and to sidestep or subvert the endeavours of organisations to exercise control over them. Evidence of this is seen in the ongoing consumer abuse of copyright materials - which is to a considerable extent a reaction against suppliers' abuse of their monopoly power over content. People also use the capabilities to protect their personal interests through the obfuscation and falsification of data, messages, identities, locations and social networks (Clarke 2016b), and the avoidance, distortion, blocking and breaking of surveillance (Schneier 2015a, 2015b).
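
As a sketch of one such obfuscation technique, the fragment below perturbs a reported location by a random displacement before release, so that a service receives a usable but deliberately imprecise position. The scale parameter, the exponential noise distribution and the coordinates are illustrative choices, not a statement of how any particular tool works.

```python
# Location obfuscation: displace a point by an exponentially distributed
# distance in a uniformly random direction before reporting it.
import math
import random

def obfuscate(lat: float, lon: float, scale_m: float = 500.0) -> tuple:
    """Return a point displaced from (lat, lon) by a random offset whose
    typical magnitude is scale_m metres."""
    distance = random.expovariate(1.0 / scale_m)   # mean displacement, metres
    bearing = random.uniform(0.0, 2.0 * math.pi)   # random direction
    # Roughly 111,320 metres per degree of latitude; longitude shrinks
    # with the cosine of the latitude.
    dlat = (distance * math.cos(bearing)) / 111_320.0
    dlon = (distance * math.sin(bearing)) / (111_320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

print(obfuscate(-35.2809, 149.1300))  # e.g. a point somewhere near Canberra
```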

A further concern from the perspective of individuals is that, because so many services are driven entirely by marketing interests, minorities are only likely to be served to the extent that they are both visible to marketers and perceived as a strategically attractive market segment. One line of analysis is by means of the marketing construct of 'generations' (Baby Boomer, GenX, GenY, iGen), i.e. based on current age, and hence birth-period and/or formative period. However, it appears that marketers have tended to focus more on psychographic classifications, such as the 'Big Five' phenotypes of extraversion, neuroticism, agreeableness, conscientiousness, and openness (Goldberg 1990, Gunter & Furnham 1992, Goldberg 1999). There are many other categorisations that are of greater relevance from a public policy perspective, such as lingual groupings, ethnic origins, socio-economic characteristics, educational level, IT-capability, and 'disability'. The impacts of information infrastructure on minorities give rise to many IT law and security issues that are far too seldom the subject of research.

(4) Groups

A further conceptualisation of cyberspace is relevant to groups of individuals. Gibson's explanation of the notion of cyberspace referred to a 'consensual hallucination' arising from commonalities among the IT-mediated experiences of multiple people. Whether seen in such artistic terms or more pragmatically, information infrastructure has been a boon to people separated by space and time-zone. This includes people whose relatives and friends live elsewhere, and people who are travelling. There is also substantial use by people who spend a considerable amount of their time in close proximity. The benefits could have been delivered by means of tools designed to suit people's needs and to implement international standards in order to ensure interoperability. Instead, the communication tools used by most people in 2018 have been devised with the primary purpose of providing substantial competitive advantages to service-providers, thereby compromising the benefits to users.

As was apparent from the CSCW and GDSS movements in corporate contexts, human communications occur in many forms, and tools for human communications need to reflect that variety. Communications may be synchronous or asynchronous. Communications may use text, text plus images, voice or video. Communications may involve one-to-one and/or one-with-many configurations. Communications may be primarily or even wholly one-way, or involve a great deal of interaction, such as collaborative authoring and editing of text, and multi-person write-access to a diagram-drawing facility. There are a great many variants of email, chat, voice-phone, voice-conferencing, video-phone and video-conferencing. But there are also a great many barriers between text-messaging systems, and a lack of interoperability among the many proprietary voice and video facilities.

Electronic communications enable much greater breadth, depth and open-endedness in each person's social network. This can be highly valuable, particularly in economic terms for people with the skills to exploit their contacts. In the social sphere, however, these same features can dilute the intensity of relationships with genuine friends, who may be submerged in an ocean of mere 'friends'.

Among people who are in close physical proximity, a considerable range of social constraints provides moderately reliable regulation of inter-personal behaviour. The detachment from other people that comes with cyberspace activities has the effect of dulling, diluting and even entirely negating those constraints. Excesses and abuses arose in the earliest electronic communication media, and netethiquette was an early topic of discussion (Clarke 1995, 1999a).

The compulsive nature of contemporary social media, its immediacy and pervasiveness, and the importance attached to it by some people, particularly some children and many adolescents, has resulted in the intensification of a range of netethiquette issues. The grooming of young children by sexual predators and 'cyber-bullying' have recently attracted a great deal of attention. Design features of social media appear to reinforce what in meatspace would be anti-social behaviours, rather than replicating existing social control mechanisms or substituting alternatives.

Another problem from the viewpoint of information infrastructure's support for groups is that the desire of social media service-providers to appropriate and propertise content leads to a 'walled garden' approach. For example, postings and messages on Facebook, LinkedIn, Google and DropBox are difficult and may even be impossible to view without becoming a member, logging in, and exposing oneself to the same surveillance and exploitation as devotees choose to be subject to. The tendency to exclude people who have 'reasons to hide', or are merely circumspect, extends to paid-subscription services as well. Some service-providers, such as ancestry.com, compound the felony by attracting people to donate personal data relating to themselves and others, and then adopting the conventional but dubiously legal marketing technique of bait, lock-in, then switch (Cleland 2011), resulting in people ceasing to have access to their own data.

The contemporary social media model is detrimental to the dynamics of groups in several other ways. One issue is that the closedness of services leads to group fragmentation, because social media remains a competitive space, and group-members commonly have different preferences in posting, messaging and interaction services. A second is the marginalisation of those who prefer to avoid entrapment. A third is that the porousness, unreliability and unknowability of the extent to which content is accessible creates dangers for the many categories of persons-at-risk. As a result, some people, such as victims of domestic violence, are actively advised to remove their existing personae from social media, thereby both preventing them from making contributions to the groups they have associated with and increasing their social isolation.

The Facebook / Kogan / Cambridge Analytica scandal broke while this article was being finalised (Cadwalladr & Graham-Harrison 2018). Prominent though it appeared to be, it was merely another in a long series of events that have highlighted the sociopathic design and behaviour of commercial social media services, psychographics researchers, and social media data analytics enterprises.

(5) Communities

Communities are usefully differentiated from groups in that they are of larger scale and hence the relationships among individual members are likely to be more remote, and different kinds of social dynamics are likely to arise. Communities come in many forms, based on such factors as geography, interests and occupations. Some pre-existed the Internet, or exist partly or entirely independently of the Internet, whereas others have formed partly or even wholly as a result of it.

It was widely anticipated that existing communities would be greatly advantaged by information infrastructure, and that many new communities would arise because of it. Many studies of localised community informatics have been undertaken (Williams et al. 2013). With qualifications, it appears that considerable benefits have indeed been extracted. One qualification, for example, is that many communities associated with local government areas have found that the early hope of much-improved engagement between government employees and the local public has not eventuated, and that institutional distance and a substantial degree of authoritarianism remain.

A variety of open-source tools is available, and various applications and services have been developed by not-for-profit organisations, specifically for communities that are significantly virtual in nature. Despite this, commercial social media services dominate, giving rise to the same problems as were discussed above in relation to groups - incompatibilities, lack of interoperability, marginalisation and exclusion. The closing words of Howard Rheingold's 'The Virtual Community' ring true: "The late 1990s may eventually be seen in retrospect as a narrow window of historical opportunity, when people either acted or failed to act effectively to retain control over communications technologies ... What happens next is largely up to us" (1996, p. 300). Although that opportunity was not entirely missed, power has been ceded to commercial social media service-providers, which have proven to be rapacious.

(6) Societies

There was also a period of great optimism in relation to the social impacts of the Internet, and the prospects of a beneficial 'information society' (Masuda 1981). Over time, the less rosy aspects of the notion became better understood (Beniger 2009). Among the more specific examples of potential positive impacts was the prospect of dying languages and cultures surviving, through open access to relevant textual, image, video and sound content, and linkages among diaspora. As with smaller-scale groups and communities, some such societies successfully apply open-source and non-commercial, service-oriented tools to their needs. Others work within the constraints imposed by social media services that are designed with altogether different objectives in mind from those of societies.

Anti-social behaviour naturally afflicts societies' use of electronic communications. All forms of it, but particularly large-scale social media services, act as megaphones for individuals indulging in rumour-mongering, bigotry, vitriol and incitement.

The field of social informatics (Kling 1999) confronts the challenge of assessing whether societies are sustaining themselves more effectively as a result of contemporary information infrastructure, and if so, which kinds of societies under which conditions. Are ethnic, lingual and other diaspora retaining cultural integrity that they would have otherwise lost? Are minority languages flourishing, or at least surviving?

The last of these questions, at least, has been the subject of some study and discussion. Kornai (2013) was highly pessimistic, entitling his article 'Digital Language Death'. Similarly, Temperton (2015) reported that even the two leading websites in terms of multi-lingual support, Google Search and Wikipedia, support only c.300 of the world's 7,000 languages, with all other sites trailing a long way behind. Very few of the 6,500 languages spoken by fewer than 100,000 people are directly supported - and, in any case, the Internet may be of little assistance to the 2,500 endangered languages, whose users are mainly in village societies.


5. Implications

The previous sections have identified what appear to the authors to be the most pertinent factors, and noted their first-round impacts, with some discussion of their second- and later-round implications. The analysis was structured into technical and other segments, with relevant factors considered under four and six sub-headings respectively. This section confronts the challenge of re-combining the elements and drawing some relevant inferences.

The authors have adopted as their primary concern the implications these factors have for people, societies, economies and polities, with organisations and governments treated as vehicles for fulfilling the public's needs rather than as ends in themselves. We do so recognising that the cynical reader might regard us as being in denial of the reality that the brief and ever-stuttering flowering of individualism is on the wane, and that the power of corporate, governmental and 'public-private' institutions is ushering in a new era of authoritarianism.

The implications identified below are framed as a set of challenges. The intention is that the editor, reviewers, authors and readers be stimulated to conceive and conduct research projects, and to submit for publication articles that address the critical issues for the futures not just of business, government, regulatory mechanisms and the law, but also of people, societies, economies and polities.

5.1 Infrastructural Resilience

In 1985, computer security and data security were merely emergent concerns. The explosion in computing capabilities, the merger of computing first with communications and later with actuator technologies, the increased delegation of decision-making to IT artefacts, and the declining transparency of those decisions, have ushered in a vastly greater, and far more impactful, suite of security issues. Security can no longer be viewed in the naively narrow sense of what computer scientists have called the 'CIA' model: confidentiality, integrity and availability. Security is a nested concept, which must be assessed at all levels, and from the perspectives of all stakeholders (Clarke 2013a).

Although much is written about security threats, there is far too little focus on the many categories of infrastructure vulnerability, which are the means whereby the threats give rise to harm. Vulnerabilities such as the following demand a great deal more attention from CLSR authors than they have received to date:

5.2 Consumer Rights

The ongoing collapse of consumer rights under the onslaught of marketing corporation power needs to be recognised and addressed. Terms of service are declared unilaterally by suppliers, are expressed in excruciatingly complex ways and often across multiple documents, are designed to be highly advantageous for the supplier and disadvantageous for its customers, and are subject to unilateral change by the supplier with nominal notice to consumers (Clarke 2006b).

As argued much more fully in Clarke (2018b), consumer marketing corporations have become dependent on the monitoring of people's electronic behaviour. Data is acquired in ways legal, dubiously legal, and probably illegal in at least some jurisdictions - if only a case were able to be brought in order to establish the law. Data is exploited by consolidating it into digital personae, and using those personae firstly as a basis for targeting ads at each individual in such a manner as to manipulate their behaviour, and secondly for micro-pricing offers at the highest level that each individual is judged willing to bear. In the digital surveillance economy, corporations are advantaged and consumers disadvantaged, at every step along the sales-generation way.

Consumers lack the market power to contest this state of affairs - and in most cases even lack an appreciation of how the new economy works, partly because of its complexity and sophistication, and partly because so much of it is conducted surreptitiously. Regulators lack the necessary powers and are insufficiently resourced to pursue consumers' rights. Corporate lawyers lead regulators a merry chase, and attenuate cases across many, long years. Parliaments are meek in the face of large supra-national corporations that are cashed up and speak the language of 'creative destruction', but also of innovation, prosperity and power.

5.3 Human Rights

The passage of the Universal Declaration of Human Rights (UDHR 1948) and the International Covenant on Civil and Political Rights (ICCPR 1966) once appeared likely to consolidate respect for human rights, not only in relatively free, 'western', economically-advanced, 'northern hemisphere' countries, but progressively in other countries as well. During the subsequent half-century, progress in much of the developing world has been slower and patchier than might have been expected. The relatively free nations, meanwhile, have seen dramatic incursions into freedoms imposed since 2001.

The increased threat-level of violent acts against semi-random public targets has been used by national security interests to recover much of the influence that they lost with the end of the Cold War c.1990. The much-changed technological context has enabled them to achieve very substantial embedment of surveillance capabilities within information infrastructure, powers to use those capabilities, and freedom from sanctions for abuse of their powers. Privacy Impact Assessment (PIA) processes are a mainstream control measure against unduly privacy-invasive technologies, excessive law enforcement powers, and unjustifiable applications of them. Yet an evaluation of PIA processes in Australian national security contexts showed that the desires and the activities of agencies were almost entirely unimpeded by such few evaluation processes as were conducted (Clarke 2016c).

5.4 Data Rights

In the context of contemporary information infrastructure, two aspects of 'rights in relation to data' are seriously problematical.

Large corporations claim that innovation is dependent on a shield of intellectual property (IP) laws. In contexts in which substantial investment and long lead-times are involved, this argument is sustainable. In circumstances in which those conditions do not hold, on the other hand, IP laws represent a dead hand that stultifies innovation by nimble new entrants employing disruptive technologies (Dempsey 1999, Clarke & Dempsey 2004). The open source, open access and open content movements have shown the way in a range of service sectors. Additive manufacturing is now doing the same within branches of secondary industry, and may be ushering in an era of 'hardware as a service' (Santoso & Wicker 2014).

The other imbalance also has an economic dimension, but is fundamentally social in nature. Corporations and governments are laying claim to personal data, and acquiring, storing, re-purposing and trading it. Individuals' rights in relation to data about themselves are limited to those arising under data protection laws. All such laws, including the strongest version, the European Directive of 1995, and its successor with effect from May 2018, the General Data Protection Regulation (GDPR), are subject to large-scale exemptions and exceptions whose intention and effect are to authorise privacy-invasive uses of data, and are designed to be malleable. They have been whittled away by a long series of laws authorising what would otherwise have been privacy breaches, and are currently being further compromised under the banner of 'open data', as the myth is projected that rich data-sets can be de-identified (Clarke 2016d).

5.5 Accountability

Software development is now undertaken using techniques that are far more abstract than was the case with algorithmic languages. Despite discussions of the implications of expert systems as early as the late 1980s (Capper et al. 1988, Tyree 1989, Clarke 1989b), law and policy in relation to such schemes have failed to adapt.

The challenges presented by more recent techniques are even greater. Neural networks, for example, depend firstly on some (often simplistic) modelling, and secondly on a heap of empirical data being shovelled into the inevitably inadequate model. The heap may be conveniently available or carefully selected; it may be small or large; and it may or may not be representative of any relevant population. There is no sense of a rationale for the inferences that are drawn by software developed in such ways. The various other forms of technique that have emerged from the artificial intelligence (AI), machine learning (ML) and data analytics / data science communities in many cases also lack any meaningful form of transparency of inferencing (Knight 2017, Clarke 2018a).
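
A minimal sketch may make the transparency problem concrete. In the fragment below, a small neural network is fitted to a heap of synthetic data and then produces a 'decision'; the only artefact available for inspection is a mass of fitted weights, not a rationale that a decision-subject could examine or challenge. All data, features and parameters are illustrative.

```python
# An opaque classifier: it decides, but it cannot explain.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                   # a 'heap' of convenient features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # true rule, unknown to the analyst

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                      random_state=0).fit(X, y)

applicant = rng.normal(size=(1, 8))             # a hypothetical decision-subject
print("decision:", model.predict(applicant)[0])
# The entire 'reasoning' on offer is a count of fitted weights:
print("weights held:", sum(w.size for w in model.coefs_))
```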

These movements embody the abandonment of systemic reasoning, and the championing of empiricism over theory. Empirical techniques are being touted as the new, efficient way to apply computing to the identification of hitherto-missed opportunities. Moreover, the techniques are escaping from research laboratories and being applied within operational systems. This results in a-rational decision-making, i.e. actions by organisations that have escaped the constraint of having to be logically justified. The loss of the decision transparency that existed when applications were developed using earlier generations of software undermines organisations' accountability for their decisions and actions. In the absence of transparency, serious doubts arise about the survival of principles such as evaluation, fairness, proportionality, evidence-based decision-making, and the right to challenge decisions (APF 2013).

Warnings about the risks arising from such naive applications of computing power have been sounded many times (Weizenbaum 1976, Dreyfus 1992, Aggarwal 2018). It is vital that personal ethical responsibility be taken for harm arising from negligently-delegated decisions (Wigan 2015), and that legal obligations and sanctions be imposed on organisations and individuals.

5.6 Income Distribution

The reduction in costs arising from disruptive innovation is beneficial to corporations and some of their stakeholders - such as those employees who continue to have a job. It may benefit organisations' customers, but only to the extent that the costs of the goods and services that the organisation provides are lower than they would otherwise have been. On the other hand, ex-employees have no job, or shift into longer-hour / lower-pay roles, or into part-time under-employment and/or multiple casual and inherently uncertain contracts. With labour-efficiency increased, fewer jobs are available, and hence a smaller percentage of people entering the labour market from school and post-secondary education can gain initial employment and workforce experience. The rapid and substantial changes to the means whereby organisations gain access to human resources therefore have enormous ramifications for income distribution.

No matter how positive a view is taken of the increased efficiency and flexibility of the new form of labour market, the retention of reasonably equitable distribution of income depends on substantial adjustments being made to the welfare and taxation systems, e.g. through such mechanisms as a universal basic income (UBI - van Parijs 2003) or basic living stipend (BLS). The second half of the 20th century had already seen a significant proportion of those not yet, or no longer, of working-age become dependent on transfer payments from the State. These social safety nets have absorbed a significant percentage of government revenue. The exploitation of contemporary information infrastructure appears to be resulting in larger proportions of even the working-age population also becoming dependent on social welfare payments.

National budgets that are already under strain are facing additional and substantial demands. Significant structural changes are urgently needed, and they will probably have to include human resource usage levies on businesses in order to create a sufficiently large pool of funding. In the face of corporate power, such levies will be challenging to introduce, and hence the financial viability of governments comes into question.

Information infrastructure is a major contributor not just to change in the corporate sector, but also to massive disruption of the socio-economic context within which people live. But is information infrastructure solely a (major) part of the problem, or does it also have roles to play in the solution? It appears to be necessary for assessments of IT law to broaden, in order to interlock with the many other branches of law, and with the other forms of regulation, that are relevant to achieving and sustaining reasonable balances among the interests of corporations and individuals. There is also a need for the scope of security to be defined much more broadly than mere 'IT security' and 'data security', and to focus not only on corporate interests but also on those of individuals, groups and communities, and economies, societies and polities.

5.7 Societal and Political Resilience

It was argued above that weak consumer rights frameworks have resulted in corporations dominating consumers and gathering a great deal of data about them. Governments too are increasingly expropriating personal data and exploiting it for further purposes. Many of these are social control purposes, providing agencies with strong weapons against personal behaviour of which they disapprove. Beyond fraud investigations, agencies are imposing much more cross-system enforcement, and intruding into what have hitherto been personal choices, e.g. about food and beverage consumption and exercise, but also the expression of opinions in social media. Human rights frameworks are sufficiently weak that governments suffer limited hindrance in their efforts to achieve dominance over the populace. The private sector is also becoming much more aggressive in relation to the behaviour of staff, contractors, and even customers and activists who draw attention to its activities or otherwise incur corporations' displeasure.

Under these circumstances, as the popular expression has it, 'resistance is futile', because the majority of the population is socially and politically numbed, and hence the few alternative voices are ineffective, easily isolated, and easily countered. Stresses within workplaces are already highly visible. Data protection laws are likely to prove completely inadequate against these assaults on personal space, and it is far from clear that even the strongest forms of data protection, those within the EU, will be adequate to withstand the assault on political freedoms. Rejuvenation and strengthening of laws in these areas are sorely needed, and technical means for the obfuscation and falsification of data, messages, identities, locations and social networks must be matured and deployed rapidly and effectively (Clarke 2016b).

5.8 A Robotic Future?

Claims of an imminent future in which AI actually delivers on its promise have appeared cyclically since c.1950, but few of those promises have been fulfilled. The current hype-cycle, c. 2013-18, has been particularly energetic, and some elements of the new round of ideas appear to be percolating through to business and government practice. The potentially serious side-effects of empiricist data analytics and machine learning have added new, and in some cases surprising, people to the chorus of cautionary voices, such as cosmologist Stephen Hawking (Cellan-Jones 2014) - who also passed away while this article was being prepared - and battery, electric-car, autonomous-car and space entrepreneur Elon Musk (Sulleyman 2017).

Robotic systems combining AI with actuator technologies might rapidly emerge from factories into open society, might quickly become a dominant economic force, and might subsequently achieve dominance of a political nature. The authors are sceptical about the more breathless versions of this proposition. Nonetheless, these are tangible possibilities, and the contingent implications for IT law and security need to be investigated. Given the significance of such developments, it is strongly preferable that the precautionary principle be applied and action taken before the next waves of innovation pound onto the social and economic shore.

Information infrastructure of 2018 and beyond is barely recognisable in comparison with the devices, data, and communications and actuator technologies of 1985. The processes that are applied to data are increasingly inscrutable, and are coming to be treated as though they were intelligent. Enthusiasts make the blithe assumption that the nature of the intelligence is sufficiently sophisticated that humans can wash their hands of old-fashioned notions such as transparency and accountability. Further, the information infrastructure complex has increasing capacity to act directly on the real world, without needing humans to act as intermediaries. Devices are increasingly communicating with one another, and protected enclaves are emergent, in which devices can do so without the likelihood of human monitoring or intervention. An example of this emergent phenomenon is the automated 'data exfiltration' that has been a feature of many recent data breaches (Cheng et al. 2017).

A term is needed to encapsulate the features of the new form that information infrastructure has taken. The notion of 'cyberspace' began as a blizzard of data (Gibson 1984), and further aspects were identified earlier in this article. So perhaps 'cyberspace', given its flexible nature, could be harnessed to that need, by conceptualising it as, say, 'the consensual hallucination of artificially intelligent artefacts'. Those artefacts may or may not choose to permit naturally intelligent humans to participate in the hallucination.

Figure 3 suggests instead a more prosaic model, with 'postmodern information infrastructure' used to encompass the whole, and the emergent control element, shown at the top of the hierarchy, dubbed 'intellectics'.

Figure 3: Postmodern Information Infrastructure

5.9 A Low-Tech Future?

Alternative, less bold futures also need to be investigated. Perhaps technological developments will over-reach themselves. The 'nuclear holocaust' scenario has been contemplated since the late 1940s, and sci-fi has investigated various other 'doomsday scenarios' in which the present era would be followed by a society that had much less, and much less sophisticated, technology than that of 2018. For example, conventional warfare could result in a long interregnum in technological progress, e.g. if it were large-scale and long-term, perhaps as a result of ongoing over-population and competition for scarce resources, and if it led to the destruction of industrial and post-industrial infrastructure and the means of developing it. A currently fashionable variant of this family of scenarios is the prospect of cyberwarfare leading to 'mutually assured destruction' not so much of physical infrastructure as of the means to operate, manage and maintain it.

Another possibility is that 'neo-ludditism' could erupt, with the public, corporations and/or governments turning strongly against pseudo-intelligent technological artefacts. Alternatively, the robotics industry might prove to be capable of generating new devices within a particular, medium-term technological Kondratiev wave, but to lack the adaptive capacity necessary to develop the kinds of new technological paradigms associated with a Schumpeterian shift (Schumpeter 1939). In that case, a variety of possible environmental perturbations could be sufficient for the robotic era to wind down relatively quickly, and there may not be sufficient remaining human capability to generate new momentum. Contenders for such perturbations include the exhaustion of key raw materials, climate change and meteor-strike.


6. Conclusions

A gulf separates the context of 1985 from that of 2018. It would have been a futile endeavour to attempt, at the beginning of CLSR's life, to predict the topics that would be engaging authors and readers 30 years later. The pace of change has increased, and so has the degree of inter-dependency among technological and associated social developments. The challenge to soothsayers is even greater now than it was at the time of CLSR's birth.

This article has contrasted the contexts in which CLSR was launched and in which it currently publishes, and suggested a range of implications of the contemporary context that the authors perceive to be relevant to research and practice in computer law and security during the coming years. Because of the rate and complexity of change, many of these observations are of course more like speculations than predictions. They are, however, supported by a body of evidence and analysis, and many are corollaries of propositions put forward by enthusiastic proponents of the various technologies.

The term 'singularity' was used casually by von Neumann in the 1950s, harnessed by astrophysicists, and then utilised by the sci-fi author Vinge to depict a technologically-induced major discontinuity in human existence. It has subsequently been presented as an embarrassingly mystical notion by a couple of IT industry figures whom the media love to quote (Moravec 2000, Kurzweil 2005). The large number of threads that this article has identified in the technosocial context of 2018 suggests that an alternative notion is far more likely to be useful: 'multilarity'. Each of the many elements is capable of acting as a driver of change. However, the multiple confluences among those many threads of development may well have impacts even more significant than those of the more isolated technological changes of the previous few decades.

Artefacts have not achieved sentience and intentionality, both of which are likely to be needed if humankind is to be swept aside or subjugated by its artificial descendants. As utopian and dystopian sci-fi authors alike have anticipated, however, this is one of our children's possible futures. At this stage, humankind's future remains in its own hands, and other alternatives exist, both apocalyptic and more positive from the viewpoint of human life, and perhaps of life more generally. Technology might yet be managed in order to serve human needs, balances might yet be achieved in human impacts on the biosphere, a spaceshield might yet be put in place, and colonisation of other planets and satellites might yet be enabled.

Every author and reader of CLSR is continually engaged in the identification of currently relevant topics. The purpose of this review, on the other hand, was to stand back a little, adopt a broader view than workaday practice, observation and research can reasonably take, and suggest some threads that may persist further into the future.

Laws stimulating, enabling and regulating information technology, information architecture and information infrastructure will continue to be a busy and contested field. Security is also a hectic area, encompassing protections not merely for equipment, software and data, but also for individuals, organisations and nations. Beyond effectiveness and efficiency considerations, the second 30 years of the journal will need to recognise and address issues concerning the survival of society and even of the species.


References

Agarwal N., Lim M. & Wigand R. (2012) 'Raising and Rising Voices in Social Media: A Novel Methodological Approach in Studying Cyber-Collective Movements' Business & Information Systems Engineering 4, 3 (2012) 113-126

Aggarwal A. (2018) 'The Current Hype Cycle in Artificial Intelligence' Scry Analytics, January 2018, at https://scryanalytics.ai/the-current-hype-cycle-in-artificial-intelligence/

Andrejevic M. (2014) 'The Big Data Divide' International Journal of Communication 8 (2014), 1673-1689, at http://espace.library.uq.edu.au/view/UQ:348586/UQ348586_OA.pdf

Angwin J., Savage C., Larson J., Moltke H., Poitras L. & Risen J. (2015) 'AT&T Helped U.S. Spy on Internet on a Vast Scale' The New York Times, 15 August 2015, at https://www.nytimes.com/2015/08/16/us/politics/att-helped-nsa-spy-on-an-array-of-internet-traffic.html

APF (2013) 'Meta-Principles for Privacy Protection' Australian Privacy Foundation, March 2013, at https://privacy.org.au/policies/meta-principles/

Balkin J.M. & Levinson S. (2006) 'The Processes of Constitutional Change: From Partisan Entrenchment to the National Surveillance State' Yale Law School Faculty Scholarship Series, Paper 231, at http://digitalcommons.law.yale.edu/fss_papers/231

Barlow J.P. (1996) 'A Declaration of the Independence of Cyberspace' Electronic Frontier Foundation, 1 February 1996, at https://www.eff.org/cyberspace-independence

Baumer E.P.S. (2015) 'Usees' Proc. 33rd ACM Conf. on Human Factors in Computing Systems (CHI'15), April 2015

Baxter G. & Sommerville I. (2011) 'Socio-technical systems: From design methods to systems engineering' Interacting with Computers 23 (2011) 4-17

Beniger J. (2009) 'The control revolution: Technological and economic origins of the information society' Harvard University Press, 2009

Bing J. & Selmer K.S. (1980) 'A Decade of Computers and Law' Universitetsforlaget, Oslo, 1980

Birch D. & Buck P. (1992) 'What is Cyberspace?' Computer Law & Security Review 8, 2 (Mar-Apr 1992) 74-76

Brunner J. (1975) 'The Shockwave Rider' Ballantine, 1975

Cadwalladr C. & Graham-Harrison E. (2018) 'Revealed: 50 million Facebook profiles harvested for Cambridge Analytica in major data breach' The Guardian, 18 March 2018, at https://www.theguardian.com/news/2018/mar/17/cambridge-analytica-facebook-influence-us-election

Capper P., Susskind R. & Neill J. (1988) 'Latent damage law: the expert system' Butterworths, 1988

Cellan-Jones R. (2014) 'Stephen Hawking warns artificial intelligence could end mankind' BBC News, 2 December 2014, at http://www.bbc.com/news/technology-30290540

Chen Y. & Cheung A. (2017) 'The Transparent Self Under Big Data Profiling: Privacy and Chinese Legislation on the Social Credit System' University of Hong Kong Faculty of Law Research Paper No. 2017/011, at https://ssrn.com/abstract=2992537

Cheng L., Liu F. & Yao D. (2017) 'Enterprise data breach: causes, challenges, prevention, and future directions' Data Mining and Knowledge Discovery 7, 5 (September/October 2017), at https://onlinelibrary.wiley.com/doi/full/10.1002/widm.1211

Cherry S. (2005) 'The Net Effect' IEEE Spectrum, 1 June 2005, at https://spectrum.ieee.org/computing/networks/the-net-effect

Chircu A.M. & Kauffman R.J. (1999) 'Strategies for Internet Middlemen in the Intermediation / Disintermediation / Reintermediation Cycle' Electronic Markets 9, 1/2 (1999) 109-117

Christl W. & Spiekermann S. (2016) 'Networks of Control: A Report on Corporate Surveillance, Digital Tracking, Big Data & Privacy' Facultas, Wien, 2016

Clarke R. (1988) 'Information Technology and Dataveillance' Commun. ACM 31,5 (May 1988) 498-512, PrePrint at http://www.rogerclarke.com/DV/CACM88.html

Clarke R. (1989a) 'Who Is Liable for Software Errors? Proposed New Product Liability Law in Australia' Computer Law & Security Report 5, 1 (May-June 1989) 28-32, PrePrint at http://www.rogerclarke.com/SOS/PaperLiaby.html

Clarke R. (1989b) 'Property Rights in Knowledge-Based Products and Applications' Expert Systems 6,3 (August 1989), Preprint at http://www.rogerclarke.com/SOS/KBTL.html

Clarke R. (1991) 'A Contingency Approach to the Application Software Generations' Database 22, 3 (Summer 1991) 23-34, PrePrint at http://www.rogerclarke.com/SOS/SwareGenns.html

Clarke R. (1992) 'Extra-Organisational Systems: A Challenge to the Software Engineering Paradigm' Proc. IFIP World Congress, Madrid, September 1992, PrePrint at http://www.rogerclarke.com/SOS/PaperExtraOrgSys.html

Clarke R. (1994a) 'The Digital Persona and its Application to Data Surveillance' The Information Society 10,2 (June 1994) 77-92, PrePrint at http://www.rogerclarke.com/DV/DigPersona.html

Clarke R. (1994b) 'Information Technology: Weapon of Authoritarianism or Tool of Democracy?' Proc. IFIP World Congress, Hamburg, September 1994, PrePrint at http://www.rogerclarke.com/DV/PaperAuthism.html

Clarke R. (1994c) 'Information Infrastructure for The Networked Nation' Xamax Consultancy Pty Ltd, November 1994, at http://www.rogerclarke.com/II/NetNation.html

Clarke R. (1995) 'Net-Ethiquette: Mini Case Studies of Dysfunctional Human Behaviour on the Net' Xamax Consultancy Pty Ltd. April 1995, at http://www.rogerclarke.com/II/Netethiquettecases.html

Clarke R. (1999a) 'Ethics and the Internet: The Cyberspace Behaviour of People, Communities and Organisations' Bus. & Prof'l Ethics J. 18, 3&4 (1999) 153-167, PrePrint at http://www.rogerclarke.com/II/IEthics99.html

Clarke R. (1999b) 'The Willingness of Net-Consumers to Pay: A Lack-of-Progress Report' Proc. 12th International Bled Electronic Commerce Conference, Bled, Slovenia, June 1999, PrePrint at http://www.rogerclarke.com/EC/WillPay.html

Clarke R. (1999c) 'Freedom of Information? The Internet as Harbinger of the New Dark Ages' First Monday 4, 11 (November 1999), PrePrint at http://www.rogerclarke.com/II/DarkAges.html

Clarke R. (2000) "Information Wants to be Free ..." Xamax Consultancy Pty Ltd, February 2000, at http://www.rogerclarke.com/II/IWtbF.html

Clarke R. (2001a) 'Person-Location and Person-Tracking: Technologies, Risks and Policy Implications' Information Technology & People 14, 2 (Summer 2001) 206-231, PrePrint at http://www.rogerclarke.com/DV/PLT.html

Clarke R. (2001b) 'Paradise Gained, Paradise Re-lost: How the Internet is being Changed from a Means of Liberation to a Tool of Authoritarianism' Mots Pluriels 18 (August 2001), PrePrint at http://www.rogerclarke.com/II/PGPR01.html

Clarke R. (2002) 'The Birth of Web Commerce' Xamax Consultancy Pty Ltd, October 2002, at http://www.rogerclarke.com/II/WCBirth.html

Clarke R. (2004a) 'Origins and Nature of the Internet in Australia' Xamax Consultancy Pty Ltd, January 2004, version published as 'The Emergence of the Internet in Australia: From Researcher's Tool to Public Infrastructure' Chapter 3 of 'Virtual Nation: The Internet in Australia' Goggin G. (ed.), UNSW Press 2004, at http://www.rogerclarke.com/II/OzI04.html

Clarke R. (2004b) 'Very Black 'Little Black Books'' Working Paper, Xamax Consultancy Pty Ltd, February 2004, at http://www.rogerclarke.com/DV/ContactPITs.html

Clarke R. (2006a) 'National Identity Schemes - The Elements' Working Paper, Xamax Consultancy Pty Ltd, February 2006, at http://www.rogerclarke.com/DV/NatIDSchemeElms.html

Clarke R. (2006b) 'A Major Impediment to B2C Success is ... the Concept 'B2C'' Invited Keynote, Proc. ICEC'06, Fredericton NB, Canada, August 2006, PrePrint at http://www.rogerclarke.com/EC/ICEC06.html

Clarke R. (2007) 'What 'Überveillance' Is, and What To Do About It' Invited Keynote, Proc. 2nd RNSA Workshop on the Social Implications of National Security (Michael & Michael 2007), revised version published as 'What is Überveillance? (And What Should Be Done About It?)' IEEE Technology and Society 29, 2 (Summer 2010) 17-25, PrePrint at http://www.rogerclarke.com/DV/RNSA07.html

Clarke R. (2008a) 'Web 2.0 as Syndication' Journal of Theoretical and Applied Electronic Commerce Research 3,2 (August 2008) 30-43, at http://www.jtaer.com/portada.php?agno=2008&numero=2#

Clarke R. (2008b) 'Dissidentity: The Political Dimension of Identity and Privacy' Identity in the Information Society 1, 1 (December 2008) 221-228, PrePrint at http://www.rogerclarke.com/DV/Dissidentity.html

Clarke R. (2010a) 'User Requirements for Cloud Computing Architecture' Proc. 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, Melbourne, Australia, May 2010, pp. 625-630, PrePrint at http://www.rogerclarke.com/II/CCSA.html

Clarke R. (2010b) 'Computing Clouds on the Horizon? Benefits and Risks from the User's Perspective' Proc. 23rd Bled eConference, 21-23 June 2010, PrePrint at http://www.rogerclarke.com/II/CCBR.html

Clarke R. (2011) 'The Cloudy Future of Consumer Computing' Proc. 24th Bled eConference, PrePrint at http://www.rogerclarke.com/EC/CCC.html

Clarke R. (2012a) 'How Reliable is Cloudsourcing? A Review of Articles in the Technical Media 2005-11' Computer Law & Security Review 28, 1 (February 2012) 90-95, PrePrint at http://www.rogerclarke.com/EC/CCEF-CO.html

Clarke R. (2012b) 'A Framework for the Evaluation of CloudSourcing Proposals' Proc. 25th Bled eConference, June 2012, PrePrint at http://www.rogerclarke.com/EC/CCEF.html

Clarke R. (2012c) 'Infrastructure for Electronic Interaction: The State of Play in 1987' Working Paper, Xamax Consultancy Pty Ltd, August 2012, at http://www.rogerclarke.com/II/IEI-87.html

Clarke R. (2013a) 'Challenges Facing the OECD's Revised Security Guidelines' Internet Technical Advisory Committee (ITAC) Newsletter, December 2013, at http://www.internetac.org/archives/1865, full PrePrint at http://www.rogerclarke.com/SOS/OECDS-1311.html

Clarke R. (2013b) 'Data Risks in the Cloud' Journal of Theoretical and Applied Electronic Commerce Research (JTAER) 8, 3 (December 2013) 60-74, PrePrint at http://www.rogerclarke.com/II/DRC.html

Clarke R. (2014a) 'Key Factors in the Limited Adoption of End-User PETs' Proc. Politics of Surveillance Workshop, University of Ottawa, May 2014, PrePrint at http://www.rogerclarke.com/DV/UPETs-1405.html

Clarke R. (2014b) 'Promise Unfulfilled: The Digital Persona Concept, Two Decades Later' Information Technology & People 27, 2 (June 2014) 182 - 207, PrePrint at http://www.rogerclarke.com/ID/DP12.html

Clarke R. (2014c) 'Understanding the Drone Epidemic' Computer Law & Security Review 30, 3 (June 2014) 230-246, PrePrint at http://www.rogerclarke.com/SOS/Drones-E.html

Clarke R. (2015) 'The Prospects of Easier Security for SMEs and Consumers' Computer Law & Security Review 31, 4 (August 2015) 538-552, PrePrint at http://www.rogerclarke.com/EC/SSACS.html

Clarke R. (2016a) 'Big Data, Big Risks' Information Systems Journal 26, 1 (January 2016) 77-90, PrePrint at http://www.rogerclarke.com/EC/BDBR.html

Clarke R. (2016b) 'A Framework for Analysing Technology's Negative and Positive Impacts on Freedom and Privacy' Datenschutz und Datensicherheit 40, 1 (January 2016) 79-83, PrePrint at http://www.rogerclarke.com/DV/Biel15-DuD.html

Clarke R. (2016c) 'Privacy Impact Assessments as a Control Mechanism for Australian National Security Initiatives' Computer Law & Security Review 32, 3 (May-Jun 2016) 403-418, at http://www.rogerclarke.com/DV/IANS.html

Clarke R. (2016d) 'Quality Assurance for Security Applications of Big Data' Proc. EISIC'16, Uppsala, August 2016, PrePrint at http://www.rogerclarke.com/EC/BDQAS.html

Clarke R. (2016e) 'Can We Productise Secure eWorking Environments?' Proc. 11th IFIP Summer School on Privacy and Identity Management, August 2016, Karlstad, Sweden, PrePrint at http://www.rogerclarke.com/DV/SeWE16.html

Clarke R. (2017) 'Can Small Users Recover from the Cloud?' Computer Law & Security Review 33, 6 (December 2017) 754-767, PrePrint at http://www.rogerclarke.com/EC/PBAR-SP.html

Clarke R. (2018a) 'Guidelines for the Responsible Application of Data Analytics' Forthcoming, Computer Law & Security Review 34, 3 (Jul-Aug 2018), PrePrint at http://www.rogerclarke.com/EC/GDA.html

Clarke R. (2018b) 'Risks Inherent in the Digital Surveillance Economy: A Research Agenda' Forthcoming, Journal of Information Technology (2018), PrePrint at http://www.rogerclarke.com/EC/DSE.html

Clarke R. & Dempsey G. (2004) 'The Economics of Innovation in the Information Industries' Xamax Consultancy Pty Ltd, April 2004, at http://www.rogerclarke.com/EC/EcInnInfInd.html

Clarke R. & Maurushat A. (2007) 'The Feasibility of Consumer Device Security' J. of Law, Information and Science 18 (2007), PrePrint at http://www.rogerclarke.com/II/ConsDevSecy.html

Clarke R. & Nees S. (2000) 'Technological Protections for Digital Copyright Objects' Proc. 8th Euro. Conf. Infor. Sys. (ECIS'2000), July 2000, pp. 745-752, PrePrint at http://www.rogerclarke.com/II/TPDCO.html

Clarke R. & Pucihar A. (2013) 'Electronic Interaction Research 1988-2012 through the Lens of the Bled eConference' Electronic Markets 23, 4 (December 2013) 271-283, PrePrint at http://www.rogerclarke.com/EC/EIRes-Bled25.html

Clarke R. & Wigan M.R. (2011) 'You Are Where You've Been: The Privacy Implications of Location and Tracking Technologies' Journal of Location Based Services 5, 3-4 (December 2011) 138-155, PrePrint at http://www.rogerclarke.com/DV/YAWYB-CWP.html

Cleland S. (2011) 'Google's 'Bait & Switch' Deception Exposed at Hearing' Forbes Magazine, 22 September 2011, at https://www.forbes.com/sites/scottcleland/2011/09/22/googles-bait-switch-deception-exposed-at-hearing/

Dauncey H. (1997) 'A cultural battle: French Minitel, the Internet and the superhighway' Convergence 3, 3 (Sep 1997) 72-89

Davies D. (1985) 'Insurance and the Hacker' Computer Law & Security Review 1, 2 (August 1985) 4-7

Davies S. (1997) 'Time for a byte of privacy please' Index on Censorship 26, 6 (1997) 44-48

DeSanctis G. & Gallupe B. (1984) 'Group decision support systems: A new frontier' Database 16, 2 (Winter 1984) 3-10

Dempsey G. (1999) 'Revisiting Intellectual Property Policy: Information Economics for the Information Age' Prometheus 17, 1 (March 1999) 33-40, PrePrint at http://www.rogerclarke.com/II/DempseyProm.html

Dennis A., George J., Jessup L., Nunamaker J. & Vogel D. (1988) 'Information Technology to Support Electronic Meetings' MIS Quarterly 12, 4 (December 1988) 591-624

Diaconescu A. & Pitt J. (2017) 'Technological Impacts in Socio-Technical Communities' IEEE Technology & Society 36, 3 (September 2017) 63-71

Dreyfus H.L. (1992) 'What Computers Still Can't Do: A Critique of Artificial Reason' MIT Press, 1992

Drucker P.F. (1968) 'The Age of Discontinuity' Pan Piper, 1968

Emery F.E. (1959) 'Characteristics of Socio-Technical Systems' Tavistock, 1959

Farraj A. (2010) 'Refugees And The Biometric Future: The Impact Of Biometrics On Refugees And Asylum Seekers' Columbia Human Rights Law Review 42 (2010) 891-941, at https://ael.eui.eu/wp-content/uploads/sites/28/2013/04/07-Rijpma-Background4-Refugees-and-Biometrics.pdf

Fischer-Hübner S. & Lindskog H. (2001) 'Teaching Privacy-Enhancing Technologies' Proc. IFIP WG 11.8 2nd World Conference on Information Security Education, Perth, 2001

Fleischer V. (2010) 'Regulatory arbitrage' Tex. L. Rev. 89 (2010) 227, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1567212

Fuchs C. (2018) 'Digital Demagogue: Authoritarian Capitalism in the Age of Trump and Twitter' Pluto Press, 2018

Gibson J.J. (1977) 'The theory of affordances' In R.E. Shaw & J. Bransford (eds.) 'Perceiving, Acting, and Knowing' Lawrence Erlbaum Associates, 1977

Gibson W. (1984) 'Neuromancer' Grafton/Collins, London, 1984

Goldberg L.R. (1990) 'An alternative 'Description of personality': The Big-Five structure' Journal of Personality and Social Psychology 59 (1990) 1216-1229

Goldberg L.R. (1999) 'A broad-bandwidth, public-domain, personality inventory measuring the lower-level facets of several Five-Factor models' Ch. 1 in Mervielde I. et al. (eds.) 'Personality Psychology in Europe', vol. 7, Tilburg Uni. Press, 1999, pp. 7-28, at http://projects.ori.org/lrg/PDFs_papers/A%20broad-bandwidth%20inventory.pdf

Gunter B. & Furnham A. (1992) 'Consumer Profiles: An Introduction to Psychographics' Routledge, 1992

Harris S. (2010) 'The Watchers: The Rise of America's Surveillance State' Penguin, 2010

Hayes-Roth F., Waterman D.A. & Lenat D.B. (1983) 'Building Expert Systems' Longman, 1983

Higgs E. (2003) 'The information state in England: The central collection of information on citizens since 1500' Palgrave Macmillan, 2003

Hiltz S.R. & Turoff M. (1978) 'The network nation: human communication via computer' Addison-Wesley, 1978 

Hiltz S.R. & Turoff M. (1981) 'The evolution of user behavior in a computerized conferencing system' Commun. ACM 24, 12 (November 1981) 739-751

Hirst M. (2013) `Someone's looking at you: welcome to the surveillance economy' The Conversation, 26 July 2013, at http://theconversation.com/someones-looking-at-you-welcome-to-the-surveillance-economy-16357

Howe E. (1985) 'First Public Statement from the Data Protection Registrar' Computer Law & Security Review 1, 1 (June 1985) 11-1

Hsu J. (2014) 'U.S. Suspicions of China's Huawei Based Partly on NSA's Own Spy Tricks' IEEE Spectrum, 26 March 2014, at https://spectrum.ieee.org/tech-talk/computing/hardware/us-suspicions-of-chinas-huawei-based-partly-on-nsas-own-spy-tricks

ICCPR (1966) 'International Covenant on Civil and Political Rights' United Nations, 1966, at http://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx

Kemp T., Hinton P. & Garland P. (2011) 'Legal rights in data' Computer Law & Security Review 27 (2011) 139-151

Kling R. (1999) 'What is Social Informatics and Why Does it Matter?' D-Lib Magazine 5, 1 (January 1999), at http://www.dlib.org/dlib/january99/kling/01kling.html

Kornai A. (2013) 'Digital Language Death' PLOS One, 22 October 2013, at http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0077056

Knight W. (2017) 'The Dark Secret at the Heart of AI' MIT Technology Review, 11 April 2017, at https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/

Kramer R. (1993) 'The politics of information: A study of the French Minitel system' pp. 53-86 in Schement J & Ruben B (eds.) 'Between communication and information' vol. 4, Transaction Press, 1993

Kurzweil R. (2005) 'The Singularity Is Near: When Humans Transcend Biology' Viking, 2005

Land F.F. (2000) 'Evaluation in a Socio-Technical Context' Proc. IFIP Working Group 8.2 Working Conference 2000, Aalborg, Denmark, June 2000, reprinted as Chapter 8 in Baskerville R., Stage J. & DeGross J.I. (eds.) 'Organizational and Social Perspectives on Information Technology' Kluwer, pp. 115-126, at http://link.springer.com/content/pdf/10.1007/978-0-387-35505-4_8.pdf

Lundstrom S.F. & Larsen R.L. (1985) 'Computer and Information Technology in the Year 2000 - Projection' IEEE Computer 18, 9 (September 1985) 68-79

Lupton D. (2016) 'The Quantified Self: A Sociology of Self-Tracking' Polity Press, 2016

Lyon D. (1994) 'The electronic eye: The rise of surveillance society' Polity Press, 1994

Lyon D. (2001) 'Surveillance society' Open University Press, 2001

McCubbrey D.J. (1999) 'Disintermediation and Reintermediation in the U.S. Air Travel Distribution Industry: A Delphi Study' Commun. AIS 1, 18 (June 1999)

Manwaring K. & Clarke R. (2015) 'Surfing the third wave of computing: a framework for research into eObjects' Computer Law & Security Review 31, 5 (October 2015) 586-603, PrePrint at http://www.rogerclarke.com/II/SSRN-id2613198.pdf

Masuda Y. (1981) 'The Information Society as Post-Industrial Society' World Future Society, Bethesda, 1981

Michael K. & Michael M.G. (2007) 'From dataveillance to Uberveillance and the realpolitik of the transparent society' Proc. Second workshop on the Social Implications of National Security, October 2007, Wollongong University Press

Michael M.G. & Michael K. (2006) 'National Security: The Social Implications of the Politics of Transparency' Prometheus 24, 4 (December 2006 ) 359-363

Miller A.R. (1972) 'The Assault on Privacy' Mentor, 1972

Moravec H. (2000) 'Robot: Mere Machine to Transcendent Mind' Oxford University Press, 2000

Needham K. (2018) 'Big brother stops millions boarding planes, trains in China' The Canberra Times, 6 March 2018, at http://www.canberratimes.com.au/world/big-brother-stops-millions-boarding-planes-trains-in-china-20180306-p4z33u.html

Nelson T.H. (1965) 'Complex information processing: a file structure for the complex, the changing and the indeterminate' Proc. 20th ACM National Conf., 1965, pp. 84-100

Nelson T.H. (1980) 'Literary Machines' Xanadu Project, editions 1980-1993

Nielsen J. (1990) 'Hypertext and Hypermedia' Academic Press, 1990

Norman D.A. (1999) 'Affordance, conventions, and design' Interactions 6, 3 (May 1999) 38-43

Okediji R. (2001) 'TRIPs Dispute Settlement and the Sources of (International) Copyright Law' Journal of the Copyright Society of the U.S.A. 49 (2001) 585-648

van Parijs P. (2003) 'Basic Income: A simple and powerful idea for the 21st century' pp. 4-39 of Wright E.O. (ed.) 'Redesigning Distribution' University of Wisconsin Madison, May 2003, at https://www.ssc.wisc.edu/soc/faculty/pages/wright/RUP-vol-V.pdf

Pasquale F. (2015) 'The black box society: The secret algorithms that control money and information' Harvard University Press, 2015

Pollack A. (1992) ''Fifth Generation' Became Japan's Lost Generation' The New York Times, 5 June 1992, at https://www.nytimes.com/1992/06/05/business/fifth-generation-became-japan-s-lost-generation.html

Porat M.U. (1977) 'The Information Economy: Definition and Measurement' Office of Telecommunications, Washington DC, May 1977, at https://files.eric.ed.gov/fulltext/ED142205.pdf

Rao U. & Greenleaf G.W. (2013) 'Subverting ID from above and below: The uncertain shaping of India's new instrument of e-governance' Surveillance & Society 11, 3 (2013), at https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/India_ID/India_ID

Rheingold H. (1994) 'The Virtual Community' Secker & Warburg, 1994

Rosenberg R.S. (1986) 'Computers and the information Society' Wiley, 1986

Santoso S.M. & Wicker S.B. (2014) 'The future of three-dimensional printing: Intellectual property or intellectual confinement?' New Media & Society 18, 1 (2014) 138-155, at http://journals.sagepub.com/doi/pdf/10.1177/1461444814538647

Saxby S. (1985a) 'Security Case Study' Computer Law & Security Review 1, 1 (June 1985) 5-7

Saxby S. (1985b) 'Editorial' Computer Law & Security Review 1, 2 (August 1985) 1

Schmidt E. & Cohen J. (2013) 'The New Digital Age: Reshaping the Future of People, Nations and Business' Knopf, 2013

Schneier B. (2015a) 'Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World' Norton, March 2015

Schneier B. (2015b) 'How to mess with surveillance' Slate, 2 March 2015, at http://www.slate.com/articles/technology/future_tense/2015/03/data_and_goliath_excerpt_the_best_ways_to_undermine_surveillance.html

Schumpeter J. (1939) 'Business Cycles: A Theoretical, Historical and Statistical Analysis of the Capitalist Process' McGraw-Hill, 1939

Sulleyman A. (2017) 'Elon Musk: AI is a 'fundamental existential risk for human civilisation' and creators must slow down' The Independent, 17 July 2017, at https://www.independent.co.uk/life-style/gadgets-and-tech/news/elon-musk-ai-human-civilisation-existential-risk-artificial-intelligence-creator-slow-down-tesla-a7845491.html

Tapper C. (1982) 'Computer Law' Longman, 1982

Tapscott D. (1996) 'The Digital Economy' McGraw-Hill, 1996

Temperton J. (2015) 'Languages are dying, but is the internet to blame?' Wired Magazine, 26 September 2015, at http://www.wired.co.uk/article/linguistic-diversity-online

Toffler A. (1980) 'The Third Wave' Collins, London, 1980

Trist E. (1981) 'The Evolution of Socio-Technical Systems' Occasional Paper No. 2, Ontario Ministry of Labour, June 1981, at http://www.lmmiller.com/blog/wp-content/uploads/2013/06/The-Evolution-of-Socio-Technical-Systems-Trist.pdf

Turney D. (2018) 'Is consumer 3D printing over before it begins?' itNews, 19 February 2018, at https://www.itnews.com.au/feature/is-consumer-3d-printing-over-before-it-begins-485279

Tyree A. (1989) 'Expert Systems in Law' Prentice-Hall, 1989

UDHR (1948) 'Universal Declaration of Human Rights' United Nations, 10 December 1948, at http://www.un.org/en/documents/udhr/index.shtml

Vogel D., Nunamaker J., Martz B., Grohowski R. & McGoff, C. (1990) 'Electronic Meeting System Experience at IBM' Journal of MIS 6, 3 (Winter 1990) 25-43

Weingarten F.W. (1988) 'Communications Technology: New Challenges to Privacy' J. Marshall L. Rev. 21, 4 (Summer 1988) 735

Weizenbaum J. (1976) 'Computer Power and Human Reason' W.H. Freeman & Co., 1976; Penguin, 1984

Wigan M.R. (1986) 'Engineering Tools for Building Knowledge-Based Systems' Computer-Aided Civil and Infrastructure Engineering 1, 1 (1986) 52-68

Wigan M.R. (1989) 'Computer Cooperative Work: Communications, Computers and Their Contribution to Working in Groups' Proc. IFIP TC9 Conference on Shaping Organisations Shaping Technology, Terrigal, May 1989, republ. Clarke R. & Cameron J. (eds.) 'Managing Information Technology's Organisational Impact' North-Holland, 1991, pp. 15-28

Wigan M.R. (1991) 'Data Ownership' Proc. IFIP TC9 Conference on Shaping Organisations Shaping Technology, Adelaide, October 1991, pp. 305-318

Wigan M.R. (2014a) 'Überveillance and faith-based organizations: a renewed moral imperative' in Michael M.G. & Michael K. (eds.) 'Uberveillance and the Social Implications of Microchip Implants: Emerging Technologies' IGI Global, 2014, pp. 408-416

Wigan M.R. (2014b) 'Transport Replacement and Sustainability aspects associated with Additive manufacturing' Oxford Systematics, 2014, at http://works.bepress.com/mwigan/6/

Wigan M.R. (2015) 'Big Data - Can Virtue Ethics Play a Role?' Melbourne University, 2015, at https://works.bepress.com/mwigan/29/

Wigan M.R. & Clarke R. (2013) 'Big Data's Big Unintended Consequences' IEEE Computer 46, 6 (June 2013) 46-53, PrePrint at http://www.rogerclarke.com/DV/BigData-1303.html

Williams K., Lenstra N., Ahmed S. & Liu Q. (2013) 'Research note: Measuring the globalization of knowledge: The case of community informatics' First Monday 18, 8 (August 2013), at http://firstmonday.org/ojs/index.php/fm/article/view/4347/3737

Zuboff S. (2015) 'Big Other: Surveillance Capitalism and the Prospects of an Information Civilization' Journal of Information Technology 30 (2015) 75-89, at https://cryptome.org/2015/07/big-other.pdf

Zuboff S. (2016) 'The Secrets of Surveillance Capitalism' Frankfurter Allgemeine, 3 May 2016, at http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html


Acknowledgements

The authors benefitted from feedback from Profs. Graham Greenleaf (UNSW), Kayleen Manwaring (UNSW), Doug Vogel (Harbin) and Tom Worthington (ANU), from Russell Clarke, and especially from Profs. Frank Land (LSE), Don McCubbrey (Uni of Denver) and Bruce Arnold (Uni. of Canberra).


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.

Marcus Wigan is Principal of Oxford Systematics, based in Melbourne. He has qualifications in a wide range of fields from physics to psychology, and business to intellectual property law and musicology. He has held professorial appointments in multiple disciplines at Napier University Edinburgh (where he is an Emeritus Professor), Imperial College London, University of Sydney, University of Melbourne, Swinburne University and Wollongong University. He has a DPhil from Oxford University in nuclear physics, Masters degrees from Oxford, Monash and Melbourne, and he is a Fellow of the Australian Computer Society. He has worked on the societal aspects of transport, surveillance and privacy both as an engineer and policy analyst.


