
Responsible Application of Artificial Intelligence to Surveillance:
What Prospects?

Review Draft of 9 October 2021

For submission to a Special Issue of Information Polity
on 'Questioning Modern Surveillance Technologies'

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2021

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://rogerclarke.com/DV/AIP-S.html


Abstract

Artificial Intelligence (AI) is one of the most significant of the information and communication technologies (ICT) that are being applied to surveillance. Its proponents argue that its promise is great. AI's detractors, on the other hand, draw attention to a great many threats to public interests embodied in AI. Many of these threats are very different from those that arise with previous tools for drawing inferences from data. There were already many circumstances in which existing safeguards were inadequate. Avoiding undue harm from AI applications to surveillance requires enhanced and new safeguards.

This article gives consideration to the full gamut of regulatory mechanisms that may provide protection. The scope extends from natural and infrastructural regulatory mechanisms, via self-regulation, including the recently-popular field of 'ethical principles', to co-regulatory and formal approaches. An evaluation is provided of the adequacy or otherwise of the world's first proposal for formal regulation of AI practices and systems. In order to lay the groundwork for those analyses, overviews are provided of both the many forms of surveillance, and the nature and relevant categories of AI.

The conclusion reached is that, despite the incursions that AI proponents claim are already being made into many fields, and the threats inherent in AI, the current safeguards are abysmally inadequate, and the prospects for rapid improvement are far from good.




1. Introduction

The scope for harm to arise from Artificial Intelligence (AI) has been recognised by technology providers, user organisations, policy-makers and the public alike. On the other hand, effective management of the risks inherent in its application has been almost entirely absent. Many users of Information & Communications Technologies (ICT) for surveillance purposes have been successful in avoiding meaningful regulation of their activities. What are the prospects of AI's use for surveillance being brought under control?

This article undertakes an assessment of those prospects. In order to reach a conclusion, it undertakes surveys of multiple elements of the context. It commences by reviewing the many categories of surveillance. That is followed by an overview of AI, firstly in the abstract, then moving on to sub-fields of AI with apparent relevance to surveillance. An appreciation of the characteristics of the technologies enables the identification of disbenefits and risks involved in AI's application to surveillance. A review is then undertaken of the ways in which control might be exercised. Particular attention is paid to the wave of publishing activity during the period 2015-21 in the area of 'Principles for Responsible AI'. The analysis draws on a previously-published, consolidated super-set of Principles.

Almost all of the publications to date have been 'Guidelines', lacking enforceability, and in most cases having little or no volitional power that might materially influence AI practice. A critique is provided of the proposal of 21 April 2021 of the European Commission, which appears to be a world-first initiative to establish formal regulation of a bespoke nature for AI applications. The article concludes with an assessment of the prospects of effective control being achieved over AI applications to surveillance even by organisations with limited market and institutional power, let alone by large corporations and government agencies.


2. A Framework for Surveillance Analysis

This section adopts the framework for surveillance analysis in Clarke (2009b). This commences by considering the contexts of surveillance. The original sense of the word, adopted from French, was of 'watching over', possibly watching over things, but primarily watching over people. It was inherently physical, spatial and visual. In the 1970s, Foucault (1975/1977) revived the application of the idea to panoptic efficiency in prisons (Bentham 1791), and reinforced the visual metaphor.

There have been many extensions to the contexts of surveillance beyond the visual (e.g. Petersen 2012, Marklund & Skouvig 2021). Aural monitoring of sound, and particularly the human voice, has a long history. Observation can be conducted at distance with the aid of tools such as telescopic lenses and directional microphones. Retrospective surveillance began with hand-recording of the visual by textual descriptions and depictions, and of voice by textual transcriptions. Photographic means became available from the mid-19th century, and sound recording from the turn of the 20th century.

Message-carriage services by 'mail' or 'post' date back millennia, and some countries have had extensive and reliable services for the general public since about 1800. The early emergence of communications surveillance is evidenced by the use of simple substitution cyphers in the Roman Empire. Beyond geo-space, and as early as the 1840s, the telegraph gave rise to electronic surveillance, with wiretapping practised from the 1860s and later extended to the telephone (Jepson 2014), establishing surveillance in the new context now referred to as 'virtual space'. Recording of digital data passing through communications networks became possible by, at latest, the middle of the 20th century (Hosein & Palow 2013). The US National Security Agency has used the catchcry of "you need the haystack to find the needle" to justify its desire for mass surveillance of all communications channels (Nakashima & Warrick 2013).

With the application of computing to administrative tasks commencing in 1952, it quickly became more efficient to observe not the people themselves but the data generated about them. This has become widely referred to as dataveillance (Clarke 1988). The data sources were initially byproducts of individuals' interactions with organisations, with the data co-opted or expropriated to additional purposes. Multiple data-sets were commonly physically or virtually consolidated with data from other sources, frequently without consideration of the extent to which the data-sets were mutually compatible. In addition, new forms of data-gathering emerged for the specific purpose of monitoring behaviour. Organisations sought to reduce the unreliability of data consolidation activities by creating cross-indexes among different identification schemes, and imposing multi-use identifiers on people (Clarke 1994).

Organisational efficiency was improved by providing individuals with artefacts carrying pre-recorded identifying data. Much of the effort and cost involved in data capture has been progressively transferred from organisations to individuals, initially through ATMs, then web-forms. Since the late 20th century, the public has been enlisted as donors of copious personal data through the carriage and use of promiscuous handheld devices. This is a form of auto-surveillance, that is to say 'surveillance of the self, by the self' (Clarke 2020b). The phenomenon has been extended beyond handhelds to wearable 'wellness' devices adjacent to and on individuals' bodies, and to implanted chips for identification and for monitoring of physiological phenomena such as heart-rate. Automated data capture by artefacts has also become common, with household appliances, electricity meters and motor vehicles streaming personal data to corporations.

A further development, primarily since about 2000, is experience surveillance (Clarke 2014). Almost all forms of searching for, gaining access to, and experiencing text, data, image and video, together with access to live events, migrated from mostly-anonymous analogue to mostly-identified digital forms during a remarkably short time between the emergence of the Web c.1993 and its subversion by Web 2.0 c.2005 (Clarke 2008, Fuchs 2011). By substituting services dependent on remote parties, the public was lulled into a new norm of full disclosure to networks of service-providers of not only their interests and social networks, but also their influences and intellectual associations.

The primarily technical perspectives on surveillance outlined in the preceding paragraphs have been complemented by circumspect constructs such as 'surveillance society' (Gandy 1989, Lyon 2001), 'the panoptic sort' (Gandy, 1993, 2021), 'ubiquitous transparency' (Brin 1998), 'location and tracking' (Clarke 2001, Clarke & Wigan 2011), 'sousveillance' (from below rather than above, by the weak of the powerful - Mann et al. 2003), 'equiveillance' (Mann 2005), 'uberveillance' (comprehensive and from within - Michael 2006, Michael & Michael 2007), 'surveillance capitalism' (Zuboff 2015) and the 'digital surveillance economy' (Clarke 2019a).

Given the diversity of the phenomena, technologies and interpretations, a working definition needs to encompass all forms, e.g.

Surveillance is the systematic investigation or monitoring of the actions, communications and/or experiences of a person (personal surveillance) or many people (mass surveillance)

Surveillance has been enabled and supported by a wide range of sciences and technologies, as diverse as optics and photography, acoustics and sound engineering, electronic engineering, computer science, cryptology, telecommunications, remote sensing, and biometrics. The remainder of this article considers the contributions to surveillance of the loose cluster of technologies that are currently labelled as AI.

Beyond the categorisation scheme presented above, a framework is needed for the application of AI, in particular to surveillance. The approach used here adopts the seven-fold set of interrogatives originating in rhetoric. In answering the question '(1) Surveillance of What?', the dominant policy concern is about the observation of one or more people. In some contexts, however, the aim may be the monitoring of a physical space such as that in front of a door, within a room, or along a fence or wall, or a virtual space such as a bulletin-board, chat-forum, or communications channel. The activity may have a broad field of view, or the focus may be as specific as a single person or piece of luggage. Each location within a space has an address or coordinates. A mobile physical object (such as a motor vehicle or a mobile phone) or virtual object (such as a data-file) changes location within a space and hence surveillance involves identifying or authenticating the identity of the object, locating it, and tracking its succession of locations.

This suggests the need to broaden the working definition, e.g.:

Surveillance is the systematic investigation or monitoring of the actions, communications and/or experiences of a person (personal surveillance) or many people (mass surveillance), or of spaces or objects, particularly where that is conducted to assist in the investigation or monitoring of one or more people

The answer to '(2) For Whom?' may be the individual themselves, or a person who has an interest in the space or object being monitored. Alternatively, it may be an entity with a relationship with the person under surveillance, such as a person in loco parentis or an employer. It may, however, be a third party, and perhaps one unknown to the individual(s) subject to surveillance.

The question '(3) By Whom?' may point to the individual themselves (auto-surveillance), e.g. autobiographical logging and safeguards implemented in or adjacent to a person's home. Alternatively, the active party may be an associate (e.g. fellow householder, or employer), or a third party, possibly unknown. In an era of intense specialisation of services and ubiquitous outsourcing, there is likely to be a network of service-providers. Such organisations act as agents for their customers, but may also act as principals, exploiting the resulting data for their own purposes. The phenomenon of 'tech platforms' has emerged, to enable the financial exploitation of the vast quantities of data arising from high connectivity and largely uncontrolled data collection and reticulation (Taeuscher & Laudien 2018).

The question '(4) Why?' is answered in part by socially-positive motivations, including private security of self, family and assets, and public security against natural disaster, over-crowding, riotous behaviour, preparation for violence, and violent acts. Some motivations are less socially-positive, such as the monitoring of behaviour that does not conform with the values of the observer (whether the activity is conducted by the State, a corporation, an association of individuals seeking to enforce their own moral code on others, or individual vigilantes).

Surveillance is an enabler of action by one party against another. It is increasingly embedded in the architecture and infrastructure of both the public and private sectors, in the form of algorithmic governmentality (Foucault 1991) and machinic judgement (Henman 2021). Even covert surveillance has a deterrent effect on behaviour, or at least a displacement effect where the surveillance is thought to be localised rather than ubiquitous. Where the consequences are thought to be significant, the mere suspicion that covert surveillance might be conducted gives rise to the 'chilling effect', whereby individuals suppress behaviours (Schauer 1978). There is a marked difference in social utility between the suppression of inclinations towards violence and theft, and of non-conformist, inventive and innovative behaviours in scientific, economic, cultural, social and political contexts (Kim 2004).

The '(5) How?' of surveillance is diverse, as discussed earlier, and the '(6) Where?' question is closely related to the considerations discussed in relation to '(1) Of What?'. The question '(7) When?', however, is more challenging, because it requires analysis of multiple elements:


3. AI in Support of Surveillance

This section provides an overview of the origins and the ambiguous and contested nature of AI. The fields that appear to have particular relevance to surveillance are then outlined. That provides a basis for identifying the disbenefits and risks that AI applications to surveillance appear to embody.

3.1 AI in the Abstract

The term Artificial Intelligence was coined in the mid-20th century, based on "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" (McCarthy et al. 1955). The word 'artificial' implies 'artefactual' or 'human-made'. Its conjunction with 'intelligence' leaves open the question as to whether the yardstick is 'equivalent to human', 'different from human' or 'superior to human'. Conventionally (Albus 1991, Russell & Norvig 2003, McCarthy 2007):

Intelligence is exhibited by an artefact if it:

  1. evidences perception and cognition of relevant aspects of its environment;
  2. has goals; and
  3. formulates actions towards the achievement of those goals.

Histories of AI (e.g. Russell & Norvig 2009, pp. 16-28, Boden 2016, pp.1-20) identify multiple strands and successive re-visits to much the same territory. Over-enthusiastic promotion has always been a feature of the AI arena. The revered Herbert Simon averred that "Within the very near future - much less than twenty-five years - we shall have the technical capability of substituting machines for any and all human functions in organisations. ... it would be surprising if it were not accomplished within the next decade" (Simon 1960). Unperturbed by ongoing failures, he repeated such predictions throughout the following decades. His mantle was inherited: "by the end of the 2020s [computers will have] intelligence indistinguishable to biological humans" (Kurzweil 2005, p.25). Such repeated exaggerations have resulted in under-delivery against expectations, a cyclical 'boom and bust' pattern of 'AI winters', and existential doubts.

The last decade has seen a(nother) outbreak. Spurred on by the hype, and by the research funding that proponents' promises have extracted, AI has excited activity in a variety of fields. Some of potential significance are natural language understanding, image processing and manipulation, artificial life, evolutionary computation aka genetic algorithms, and artificial emotional intelligence. AI intersects with robotics, to the extent that the software installed in a robot is justifiably regarded as artificially intelligent. Robotics involves two key elements:

Two further, frequently-mentioned elements of robotics are sensors, to enable the gathering of data about the device's environment; and flexibility, in that the device can both operate using a range of programs, and manipulate and transport materials in a variety of ways. Where robotics incorporates AI elements, the disbenefits and risks sharpen enormously, because of the inherent capacity of a robot to act autonomously in the real world (Clarke 2014b), and the temptation and tendency for the power of decision and action to be delegated to the artefact, whether intentionally or merely by accident.

3.2 Apparently Relevant Fields of AI

Several fields of AI have apparent potential for application to surveillance. Many involve 'pattern recognition', for which four major components are needed: "data acquisition and collection, feature extraction and representation, similarity detection and pattern classifier design, and performance evaluation" (Rosenfeld & Wechsler 2000, p.101).

Pattern recognition can be applied in a variety of contexts, relevantly to surveillance:

The last of these requires closer attention. Common features of the classical approaches to pattern-recognition in data have been that:

  1. data is posited to be a sufficiently close representation of some real-world phenomenon;
  2. the data is processed by an algorithm;
  3. inferences are drawn from the data; and
  4. the inferences are claimed to have relevance to the understanding or management of the phenomenon

An algorithm is a procedure, or set of steps. The steps may be serial and invariant. Alternatively, and more commonly, the steps may also include repetition constructs (e.g. 'perform the following steps until a condition is fulfilled') and selection constructs (e.g. 'perform one of the following sets of steps depending on some test'). Since about 1965, the preparation of computing software has been dominated by languages designed to enable the convenient expression of algorithms, sometimes referred to as procedural or imperative languages. Software developed in this manner represents a humanly-understandable solution to a problem, and hence the rationale underlying an inference drawn using it can be readily expressed.
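
To make the distinction concrete, the following minimal sketch (in Python, with invented data and names, and not drawn from any of the sources cited) expresses a trivial screening procedure in this conventional, algorithmic style; it combines a repetition construct with a selection construct, and, because the steps are explicit, the rationale for any inference it draws can be stated directly:

  # Illustrative only: a procedural (algorithmic) screening routine.
  # Because the steps are explicit, the reason for each inference can be stated.

  def screen_transactions(transactions, threshold=10000.0):
      """Flag transactions at or above a threshold, with a stated reason."""
      flagged = []
      for txn in transactions:                    # repetition construct
          if txn["amount"] >= threshold:          # selection construct
              flagged.append({
                  "id": txn["id"],
                  "reason": f"amount {txn['amount']} >= threshold {threshold}",
              })
      return flagged

  sample = [{"id": 1, "amount": 250.0}, {"id": 2, "amount": 12500.0}]
  print(screen_transactions(sample))
  # -> [{'id': 2, 'reason': 'amount 12500.0 >= threshold 10000.0'}]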

Other approaches to developing software exist (Clarke 1991). Two that are represented as being AI techniques are highly relevant to the issues addressed in the present analysis. The approach adopted in rule-based 'expert systems' is to express a set of rules that apply within a problem-domain. A classic rule-example is:

If <Person> was born within the UK or any of <list of UK Colonies> between <date = 1 Jan 1949> and <date = 31 Dec 1982>, they qualify as <a Citizen of the United Kingdom and Colonies (CUKC)> and hence qualify for a UK passport

When software is developed at this level of abstraction, a model of the problem-domain exists; but there is no explicit statement of a particular problem or a solution to it. In a simple case, the reasoning underlying an inference that is drawn in a particular circumstance may be easy to provide, whether to an executive, an aggrieved person upset about a decision made based on that inference, or a judge. However, this may not be feasible where data is missing, the rulebase is large or complex, the rulebase involves intertwining among rules, the rulebase embodies some indeterminacies and/or decision-maker discretions exist.
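
By way of a minimal sketch only (the rule evaluator, the placeholder list of Colonies and all names below are invented for exposition, not drawn from any cited source), the rule above can be expressed declaratively, together with a trivial evaluator that reports which conditions were and were not satisfied; in a simple, single-rule case such as this, the explanation is straightforward to generate:

  # Illustrative only: the CUKC rule expressed as declarative conditions,
  # with a trivial evaluator that reports which conditions were satisfied.
  from datetime import date

  UK_AND_COLONIES = {"United Kingdom", "Jamaica", "Kenya"}   # placeholder list

  CUKC_RULE = [
      ("born in the UK or a UK Colony",
       lambda p: p["birthplace"] in UK_AND_COLONIES),
      ("born between 1 Jan 1949 and 31 Dec 1982",
       lambda p: date(1949, 1, 1) <= p["birthdate"] <= date(1982, 12, 31)),
  ]

  def evaluate(rule, person):
      results = [(label, test(person)) for label, test in rule]
      qualifies = all(ok for _, ok in results)
      explanation = [f"{label}: {'yes' if ok else 'no'}" for label, ok in results]
      return qualifies, explanation

  ok, why = evaluate(CUKC_RULE, {"birthplace": "Jamaica",
                                 "birthdate": date(1960, 5, 4)})
  print(ok)    # True: qualifies as a CUKC, and hence for a UK passport
  print(why)   # ['born in the UK or a UK Colony: yes', 'born between ...: yes']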

A further important software development approach is (generically) machine learning (sometimes referred to as AI/ML), and (specifically) connectionist networks or artificial neural networks (ANNs). ANNs originated in the 1940s in the cognitive sciences, prior to the conception of AI (Medler 1998). They have subsequently been co-opted by AI researchers and are treated as an AI technique. The essence of neural network approaches is that tools, which were probably developed using a procedural or imperative language, are used to process examples taken from some problem-domain. Such examples might comprise the data relevant to 5% of all applicants for UK passports during some time-period who were born in, say, Jamaica, including the results of the applications.

The (probably algorithmic) processing results in a set of weights on the factors that the tool was told were involved in drawing the inference. Although the tool may have been developed using a procedural or imperative language implementing an algorithm, the resulting software that is used to process future cases is appropriately referred to as being empirical. The industry misleadingly refers to it as being algorithmic, and critics have adopted that in terms such as 'algorithmic bias'; but the processing involved is empirically-based, not algorithmic, and hence a more appropriate term is empirical bias, or sample bias.
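
The point can be illustrated with a minimal sketch (invented data, and ordinary least squares standing in for a far more elaborate AI/ML pipeline): the same fitting procedure, applied to a representative sample and to a narrow sample of the same phenomenon, delivers markedly different weights, because each set of weights merely summarises the sample from which it was derived:

  # Illustrative only: empirically-derived 'weights' depend on the sample used.
  import numpy as np

  rng = np.random.default_rng(0)

  def fit_weights(x, y):
      """Ordinary least squares: returns [intercept, slope]."""
      X = np.column_stack([np.ones_like(x), x])
      w, *_ = np.linalg.lstsq(X, y, rcond=None)
      return w

  x_all = rng.uniform(0, 10, 1000)
  y_all = x_all ** 2 + rng.normal(0, 1, 1000)      # a non-linear phenomenon

  w_full = fit_weights(x_all, y_all)               # trained on the full range
  narrow = x_all < 2                               # trained only on x < 2
  w_narrow = fit_weights(x_all[narrow], y_all[narrow])

  print(np.round(w_full, 2), np.round(w_narrow, 2))
  # The two sets of weights differ markedly (slopes of roughly 10 and 2):
  # each reflects only its own sample, not the phenomenon itself.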

A critical feature of ANNs is that the process is a-rational, that is to say that there is no reasoning underlying the inference that is drawn, and no means of generating an explanation any more meaningful than 'based on the data, and the current weightings in the software, you don't qualify'. The approach is referred to as 'machine learning' partly because the means whereby the weightings are generated depends on the collection of prior cases that are fed to the tool as its 'learning set'. Hence, in the (to many people, somewhat strange) sense of the term used in this field, the software 'learns' the set of weightings. In addition, the system may be arranged so as to further adapt its weightings (i.e. 'keep learning'), based on subsequent cases.

There are two different patterns whereby the factors and weightings can come about (DeLua 2021). The description above was of supervised learning, in that the factors were fed to the tool by a supervisor ('labelled' or 'tagged'), and in each case the 'right answer' was provided within the data. In the case of unsupervised learning, on the other hand, there are no labels, and 'right answers' are not provided with the rest of the data. The tool uses clusterings and associations to create the equivalent of what a human thinker would call constructs, but without any contextual information, 'experience' or 'common sense' about the real world that the data purports to relate to. On the one hand, 'unsupervised learning' is touted as being capable of discovering patterns and relationships that were not previously known; but, on the other, this greatly exacerbates the enormous scope that already exists with 'supervised learning' for inferences to be drawn that bear little or no relationship to the real world.
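
The contrast can be sketched as follows (illustrative only; the data is synthetic, and the scikit-learn library is assumed to be available):

  # Illustrative only: the same data treated as a supervised problem (labels,
  # i.e. 'right answers', provided) and then as an unsupervised one (no labels;
  # the tool invents its own groupings, which carry no real-world meaning).
  from sklearn.datasets import make_blobs
  from sklearn.linear_model import LogisticRegression
  from sklearn.cluster import KMeans

  X, y = make_blobs(n_samples=200, centers=2, random_state=0)

  clf = LogisticRegression().fit(X, y)             # supervised learning
  print("supervised prediction:", clf.predict(X[:1]))

  km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)   # unsupervised
  print("unsupervised cluster:", km.labels_[:1])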

The vagaries of 'tagging', and even more so of automated construct creation, coupled with the a-rationality of all AI/ML and its inherently mysterious and inexplicable inferencing, lead people who are not AI enthusiasts to be perturbed, and even repelled, by the use of ANNs to make decisions that materially affect people.

3.3 Disbenefits and Risks in AI Applications to Surveillance

A great many claims have been made about the potential benefits AI might offer. Many of these feature vague explanations of the process whereby the benefits would arise. A proportion of the claims have some empirical evidence to support them, but many are untested assertions. The analysis reported here is concerned with the downsides: disbenefits, by which is meant impacts that are predictable and harmful to some party, and risks, that is to say harmful impacts that are contingent on particular conditions arising in particular circumstances.

Pattern-matching of all kinds is inherently probabilistic rather than precise. It results in inferences that include false-positives (wrongly asserting that a match exists) and false-negatives (wrongly asserting the absence of a match). When used carefully, with inbuilt and effective safeguards against misinterpretation, benefits may arise and disbenefits and risks may be manageable. Where safeguards are missing or inadequate, the likelihood of disbenefits and risks arising, and even dominating, increases rapidly. For example, where facial recognition is used for identity authentication, low-quality pattern-matching may cause only limited harm when a device refuses its owner permission to use it, but some alternative authentication mechanism such as a password is readily available. There are many other circumstances in which no alternative is available, the scope for error is high, and serious harm can arise. This is common with uses for identification of individuals presumed to be within populations for which biometric measures have already been recorded, such as at border-crossings.
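
A minimal sketch (with invented score distributions and thresholds) illustrates the inherently probabilistic nature of such matching, and the trade-off between the two error types as the decision threshold is varied:

  # Illustrative only: a biometric matcher compares a similarity score against
  # a threshold; moving the threshold trades false-negatives against
  # false-positives, but neither can be eliminated.
  import numpy as np

  rng = np.random.default_rng(1)
  genuine  = rng.normal(0.80, 0.10, 10000)   # scores for true matches
  impostor = rng.normal(0.50, 0.10, 10000)   # scores for non-matches

  for threshold in (0.60, 0.70, 0.80):
      fnr = np.mean(genuine  <  threshold)   # true match wrongly rejected
      fpr = np.mean(impostor >= threshold)   # non-match wrongly accepted
      print(f"threshold={threshold:.2f}  FNR={fnr:.3f}  FPR={fpr:.3f}")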

As regards AI generally, the disbenefits and risks have been presented in many different ways (e.g. Scherer 2016, esp. pp. 362-373, Yampolskiy & Spellchecker 2016, Duursma 2018, Crawford 2021). Clarke (2019b) identifies five factors underlying concerns about AI:

  1. Artefact Autonomy, arising from software making decisions on the basis of automated inferences, and even taking action by means of actuators under the artefact's direct control
  2. Unjustified Assumptions about Data, including its quality and its correspondence with the real-world phenomena it is assumed to represent
  3. Unjustified Assumptions about the Inferencing Process, due to the unsuitability of data as input to the particular inferencing process, failure to demonstrate both theoretically and empirically the applicability of the process to the particular problem-category or problem-domain, and/or assertions that empirical correlation unguided by theory is enough, and that rational explanation is an unnecessary luxury (e.g. Anderson 2008, LaValle et al. 2011, Mayer-Schoenberger & Cukier 2013)
  4. Opaqueness of the Inferencing Process, in many circumstances at the level of empirically-based a-rationality, which denies the possibility of effective scrutiny
  5. Irresponsibility, in that none of the organisations in the AI supply-chain are subject to effective legal constraints and obligations commensurate with the roles that they play

Item 4, the lack of transparency, has particularly serious implications (Clarke 2019b, pp.428-429). Where no rationale for the outcome exists and none can be convincingly constructed, no humanly-understandable explanation can be provided. The process may also be impossible to replicate, because parameters affecting it may have since changed and the prior state may not be available. This means that the process may not be checkable by an independent party such as an auditor, judge or coroner, because records of the initial state, intermediate states and triggers for transitions between states may not exist and may not be reconstructable, and hence the auditability criterion is not satisfied. Where an outcome appears to be in error, the factors that gave rise to it may not be discoverable, and undesired actions may not be correctable. These factors combine to provide entities that have nominal responsibility for a decision or action with an escape clause, in a manner similar to force majeure: AI's opaqueness may be claimed to be a force that is beyond the capacity of a human entity or organisation to cope with, thereby absolving it of responsibility. In short, every test of due process and procedural fairness may be incapable of being satisfied, and accountability is destroyed.

In summary, "AI gives rise to errors of inference, of decision and of action, which arise from the more or less independent operation of artefacts, for which no rational explanations are available, and which may be incapable of investigation, correction and reparation" (Clarke 2019b, p.426).

In respect of AI-based data analytics, the quality of outcomes is dependent on many features of data that need to reach a threshold of quality before they can be reliably used to draw inferences (Wang & Strong 1996, Shanks & Darke 1998, Piprani & Ernst 2008, summarised in Clarke 2016 into 13 factors).

As regards process quality, all data analytics techniques embody assumptions about the form that the data takes (such as the scale against which it is measured), and its quality, and the reliability of the assumptions made about the associations between the data and some part of the real world. Again, text-books on data analytics teach almost nothing about the need for, and the techniques that need to be applied to deliver, assurance of inferencing quality.

A third suite of challenges arises in relation to the use of the inferences drawn by data-analytical processes from data-sets. There is a need for:

Yet, despite the substantial catalogue of problems with data meaning, data quality, and inconsistencies among data-sets, data analytics teaching and practice invest a remarkably small amount of effort into quality assurance. That is the case even with long-established forms of data analytics. AI/ML-based data analytics, on the other hand, is commonly entirely incapable of addressing them. Further, the opacity issue overlays all of the other problems. Pre-AI, genuinely 'algorithmic' inferencing is capable of delivering explanations, enabling the various elements of accountability to function. Rule-based 'expert systems' dilute explainability. AI/ML inferencing comprehensively fails the explainability test, and undermines accountability.

The laxness and the breaches of procedural fairness are being carried over into a new world in which decisions are being imposed and actions taken that are incapable of being justified before a court of law. The need for effective regulatory mechanisms is clear. What is far less clear is how protective mechanisms can be structured, and whether they are in place, or at least emergent.


4. Regulatory Alternatives

AI may have very substantial impacts, both good and ill, both intended and accidental, and both anticipated and unforeseen. The purpose of this article is to assess whether the threats inherent in AI applied to surveillance are capable of being appropriately dealt with. This section considers the full array of regulatory possibilities that might contribute to the protection of the public. It does this by applying a regulatory framework proposed in Clarke (2021a), with particular reference to the Regulatory Layers in s.2.2 of that article, presented in graphical form in Figure 2. The basic propositions are that natural forms of regulation exist, and that interventions in the form of regulatory measures require justification on the basis that natural regulation is, or is likely to be, inadequate to protect important interests.

Figure 2: A Hierarchy of Regulatory Mechanisms

The foundational layer, (1) Natural Regulation, is a correlate of the natural control processes that occur in biological systems. It comprises natural influences intrinsic to the relevant socio-economic system, such as countervailing power by those affected by an initiative, activities by competitors, reputational effects, and cost/benefit trade-offs. AI is driven by marketing energy and unbounded adopter enthusiasm, and surveillance is an idea in good standing. These appear to go close to negating the effects of natural regulatory processes.

The second-lowest layer, (2) Infrastructural Regulation, is exemplified by artefacts like the mechanical steam governor. It comprises particular features of the infrastructure that reinforce positive aspects and inhibit negative aspects of the relevant socio-economic system. A popular expression for infrastructural regulation in the context of IT is '[US] West Coast Code' (Lessig 1999, Hosein et al. 2003). Another is security-by-design (Anderson 2020). If privacy-by-design (Cavoukian 2009) were ever fully articulated, and graduated beyond aspirational status, it too would represent a Layer (2) intervention. However, it appears very challenging to embed safeguards within AI-based software and artefacts (Clarke 1993).

At the uppermost layer of the regulatory hierarchy, (7) Formal Regulation exercises the power of a parliament through statutes and delegated legislation such as Regulations. Formal regulation demands compliance with requirements that are expressed in more or less specific terms, and is complemented by sanctions, enforcement powers and resources, and actual enforcement. Lessig (1999) refers to formal regulation as '[US] East Coast Code'.

Formal regulation may appear the most logical approach to take when confronted by a threat of the magnitude that AI may prove to be. However, IT providers and insufficiently critical user organisations clamour for the avoidance of constraints on innovation. Corporate power has been instrumental in recent decades in greatly reducing regulatory commitment in many jurisdictions and in many contexts. In the private sector, de-regulation and 'better regulation' movements have achieved ratcheting back of existing controls, and safeguards have also been avoided through the outsourcing of both activities and responsibilities, including the use of low-regulation havens, and jurisdictional arbitrage. In the public sector, key factors include the drift from subcontracting, via comprehensive outsourcing, to public-private partnerships, and on towards the corporatised state (Schmidt & Cohen 2014). A particular factor that appears to have largely 'flown under the radar' to date is the conversion of locally-installed software products to remote-provided services (AI as a Service - AIaaS), of which IBM's Watson was an early exemplar.

Several intermediate forms lie between the informal and formal ends of the regulatory hierarchy. Examples of (3) Organisational Self-Regulation include internal codes of conduct and 'customer charters', and self-restraint associated with expressions such as 'business ethics' and 'corporate social responsibility' (Parker 2002). Layer (4) Industry Sector Self-Regulation involves schemes that express technical or process standards, codes of conduct, or of practice, or of ethics, and industry Memoranda of Understanding (MoUs). These commonly lack any meaningful impact, and are primarily a means to create an appearance of safeguards and thereby avoid formal regulatory activity (Braithwaite 2017).

Other, intermediate forms have emerged that have been claimed to offer greater prospects of achieving regulatory objectives. These are clustered into layer (6) Meta- and Co-Regulation. In many areas, convincing arguments can reasonably be made by regulatees to the effect that government is poorly placed to cope with the detailed workings of complex industry sectors and/or the rate of change in industries' structures, technologies and practices. Hence, the argument proceeds, parliaments should legislate the framework, objectives and enforcement mechanisms, but delegate the articulation of the detailed requirements. In practice, few examples of effective layer (6) designs exist, because the interests of regulatees dominate, advocates for the nominal beneficiaries lack influence and in many cases are not even at the table, and the powers of regulators are so weak that the resulting 'enforceable Codes' are almost entirely ineffective. For this reason, the framework in Figure 2 also identifies the commonly-experienced layer (5) Pseudo Meta- and Co-Regulation.

Despite the sceptical tone of the above analysis, several techniques in the mid-layers of the hierarchy might make contributions within a complex of safeguards. Organisational risk assessment and management is one such technique. However, it is almost entirely inward-looking, and considers risks only from the viewpoint of the organisation itself. For example, the focus of the relevant ISO Standards series (31000) is on 'managing risks faced by organizations'. Harm to stakeholders is only within-scope where the stakeholder has sufficient power to undermine fulfilment of the organisation's objectives. It is highly desirable that risk assessment and management processes also be conducted for those stakeholders that have legitimacy but lack power (Achterkamp & Vos 2008). Although multi-stakeholder risk assessment is feasible (Clarke 2019b), it remains highly unusual.

Another approach to identifying or anticipating potential harm, and devising appropriate safeguards, is impact assessment. This is a family of techniques that is maturing in the environmental context, understood in the privacy arena in theory but to date very poorly applied (Clarke 2009a), and emergent in broader areas of social concern (Becker & Vanclay 2003). Impact assessment has also been described in the specific context of surveillance (Wright & Raab 2012). However, there is no impetus for any such process to be undertaken, and little likelihood of such approaches assisting powerless stakeholders harmed by AI.

A further possible source of protection might be application of 'the precautionary principle' (Wingspread, 1998). Its strong form exists in some jurisdictions' environmental laws, along the lines of: "When human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish that potential harm" (TvH, 2006). In the context of AI, on the other hand, the 'principle' has achieved no higher status than an ethical norm to the effect that: 'If an action or policy is suspected of causing harm, and scientific consensus that it is not harmful is lacking, then the burden of proof falls on those taking the action'.

Finally, the notion of 'ethically-based principles' has been popular during the latter part of the decade to 2020, with a wave of 'Principles for Responsible AI' published. The documents range from trivial public relations documents from ICT corporations to serious-minded proposals from policy agencies and public interest advocates. Several catalogues have been developed, and analyses undertaken, e.g. Zeng et al. (2019), Clarke (2019b) and Jobin et al. (2019).

The second of those analyses developed a consolidated super-set of 50 Principles. This was then used as a basis for assessing the coverage of the 30 individual sets from which it was derived. The assessment awarded 1 point for each Principle if the idea was in some way evident, even if only some of the Principle was addressed, and irrespective of the strength of the prescription. Even using that liberal approach, however, "the main impression is of sparseness, with remarkably limited consensus [even on quite fundamental requirements], particularly given that more than 60 years have passed since AI was first heralded" (Clarke 2019b, p.415). "[Each set] reflected on average only 10 of the 50 Principles", "each of the 50 Principles was reflected [on average] in only 6 of the 30 documents", and "only 3 source-documents achieved moderately high scores [46%, 58% and 74%, with the remainder in the range 8%-34%]" (all on p.416).
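
The scoring method itself is simple, and can be sketched as follows (the matrix below is randomly generated and purely notional; it does not reproduce the published scores):

  # Illustrative only: the liberal, 1-point-per-Principle scoring method,
  # applied to a notional 30-document x 50-Principle coverage matrix.
  import numpy as np

  rng = np.random.default_rng(2)
  coverage = rng.random((30, 50)) < 0.2      # True where a Principle is evident

  per_document = coverage.sum(axis=1)        # Principles reflected per document
  per_principle = coverage.sum(axis=0)       # documents reflecting each Principle

  print("average Principles per document:", per_document.mean(), "/ 50")
  print("average documents per Principle:", per_principle.mean(), "/ 30")
  print("highest-scoring document:", per_document.max(), "/ 50")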

Weak government guidelines are exploited by business enterprises, and may be designed to do just that. For example, a set of 'AI Ethics Principles' published by the Australian Department of Industry (DI 2019) was within 18 months worked into internal policies of the two largest banks and the privatised PTT (Hendry 2021). This was despite the extremely weak rating that the Department of Industry's document achieved against the 50 Principles - in the range 26-40% (Clarke 2019d) - and the complete absence of any formal regulatory framework, i.e. it is merely a Layer (3)-(4) publication gilded with a government agency's name. The document is difficult to distinguish from a public relations tool intended to mislead the public into trusting AI and thereby avoiding regulatory intervention.

The likelihood of any combination of Layer (1)-(5) elements providing protection for public interests against the ravages of AI appears very low. What, then, are the prospects of effective interventions at Layers (6) and (7), Formal, Meta- and Co-Regulation?


5. The Possibility of Formal Regulation of AI in Surveillance

The 'ethical guidelines' phase had its focus solely on 'principles', and not at all on means whereby any compulsion to apply them might arise, least of all enforceability. A new phase was ushered in by a proposal for statutory intervention published by the European Commission (EC) in April 2021. This is sufficiently significant that the Proposal is evaluated here as a proxy for formal regulation generally.

5.1 The European Commission's Proposal

The EC's announcement was of "new rules and actions for excellence and trust in Artificial Intelligence", with the intention to "make sure that Europeans can trust what AI has to offer". The document's title was a 'Proposal for a Regulation on a European approach for Artificial Intelligence' (EC 2021), and the draft statute is termed the Artificial Intelligence Act (AIA).

The EC's contribution to the 'ethical guidelines' phase had been the document that scored most highly against the consolidated set of 50 Principles. This was the "Ethics Guidelines for Trustworthy AI" prepared by a "High-Level Expert Group on Artificial Intelligence" (EC 2019). Using the liberal scoring method applied in that evaluation, this achieved a score of 74% against the 50 Principles.

The document of 2021 is formidable, and its style is variously eurocratic and legalistic. It comprises an Explanatory Memorandum (pp. 1-16), a Preamble in 89 numbered paragraphs (pp. 17-38), and the proposed Regulation in 85 numbered Articles (pp. 38-88), supported by 15 pages of Annexes.

A first difficulty the document poses is that the term "AI System" is defined in a manner inconsistent with mainstream usage. It omits various forms of AI, and encompasses various forms of data analytics that are not AI. A more descriptive term for the proposed statute would be 'Data Analytics Act'.

The EC proposes different approaches for each of four categories of AI (qua data analytics), which it terms 'Levels of Risk'. A few "AI Practices" would be prohibited (Art. 5). A number of categories of "High-Risk AI Systems" would be subject to a range of provisions (Arts. 6-7, 8-51, Annexes II-VII). A very limited transparency requirement would apply to a small number of categories of "AI Systems" (Art. 52). All other "AI Systems" would escape regulation by the AIA.

The consolidated set of 50 Principles was used to assess the sole category to which safeguards would apply, "High-Risk AI Systems". A comprehensive report is provided in an unpublished working paper (Clarke 2021b). The EC Proposal was found to make a contribution of some kind to only 50% of the 50 Principles. Its worst failings are in relation to foundational issues, with Theme 1 (Assess Positive and Negative Impacts and Implications) at 22%, 2 (Complement Humans) at 0%, 3 (Ensure Human Control) at 57%, and 4 (Ensure Human Safety and Wellbeing) at 33%. Also scoring badly are 7 (Embed Quality Assurance) at 33%, and 10 (Enforce, and Accept Enforcement of, Liabilities and Sanctions) at 50%.

Given that it is a proposal for law, the liberal scoring scheme is much less appropriate than it is for mere 'ethical guidelines'. A separate score was accordingly assigned that reflects the extent of coverage, the scope of exemptions, exceptions and qualifications, and the apparent practicability of enforcement. As a proposed statutory measure, the EC Proposal was found to be highly deficient. It scores only 14.7 / 50 (29%). 3/50 Principles score the available 1.0, 14/50 score between 0.5 and 0.9, 8/50 score below 0.5, and 25 score 0.0. The foundational Themes 1-4 score an even more Serious Fail, with 4%, 0%, 33% and 28%, for a total of 20%. The only 3 of the 10 Themes with anything resembling a Pass-level score are 8 (Exhibit Robustness and Resilience - 68%), 5 (Ensure Consistency with Human Values and Human Rights - 60%), and 9 (Ensure Accountability for Obligations - 50%).

The disjunction between the EC Proposal (EC 2021) and the earlier 'Ethics Guidelines for Trustworthy AI' (EC 2019) is striking. Key expressions in the earlier document, such as 'Fairness', 'Prevention of Harm', 'Human Autonomy', 'Human agency', 'Explicability', 'Explanation' and 'Well-Being', are nowhere to be seen in the body of the 2021 Proposal. The term 'stakeholder participation' occurs a single time (and even then as a merely optional feature in organisations' creation process for voluntary codes). The term 'auditability' occurs, but not in a manner relating to providers or users, i.e. not as a regulatory concept.

The conclusion reached by the assessment was that "the EC's Proposal is not a serious attempt to protect the public. It is very strongly driven by economic considerations and administrative convenience for business and government, with the primary purposes being the stimulation of the use of AI systems. The public is to be lulled into accepting AI systems under the pretext that protections exist. The social needs of the affected individuals have been regarded as a constraint not an objective. The many particularities in the wording attest to close attention being paid to the pleadings of advocates for the interests of providers of AI-based goods and services, and of government agencies and corporations that want to apply data analytics techniques with a minimum of interference. The draft statute seeks the public's trust, but fails to deliver trustworthiness" (Clarke 2021b).

5.2 The Proposal's Implications for AI and Surveillance

Assessment was undertaken of the extent to which the EC's Proposal affects AI applications to surveillance. Relevant extracts from EC (2021) are provided in an Annex to this article.

Of the four categories of Prohibited AI Practices (Art. 5), two are related to surveillance: (c) Social scoring ("the evaluation or classification of the trustworthiness of natural persons ..."), and (d) 'Real-Time' remote biometric identification in public places for law enforcement. The scope of these prohibitions is, however, subject to substantial qualifications. For example, biometric identification is not prohibited if it is concerned with any of (i) retrospective or even prospective rather than 'real-time' (contemporaneous) identification, (ii) proximate rather than remote identification, (iii) authentication (1:1 matching) rather than identification (1:many matching), (iv) non-public rather than public places, (v) any use other than law enforcement, as narrowly-defined in Art.3(41) to refer only to criminal and not administrative matters, or "strictly necessary" for any of (vi) "targeted search for specific potential victims of crime", (vii) "the prevention of a specific, substantial and imminent threat to the life or safety of natural persons ...", or (viii) "the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence [with a custodial sentence of at least three years]".

In addition, scope exists for 'gaming' the regulatory scheme, because, despite the need for "prior authorisation granted by a judicial authority or by an independent administrative authority ... issued upon a reasoned request", "use of the system" can "in a duly justified situation of urgency ... be commenced without an authorisation and the authorisation may be requested only during or after the use" (Art.5(3)). Hence a nominally "prohibited AI practice" can be developed, deployed in particular contexts, then withdrawn, without a prior or even contemporaneous application for authorisation, let alone approval. Moreover, any Member State can override the prohibition (Art.5(4)). Many systems in these categories that fit, or can be fitted into, these exemptions will be entirely free of any regulation under the EC Proposal.

Multiple 'High-Risk AI Systems' (Arts. 6-7, 8-51, Annexes II-VII, particularly III) are also relevant to surveillance. Some instances of "1. Biometric identification and categorisation of natural persons" (Annex III-1) are defined in, but only in respect of those that satisfy all of the following criteria: "[i] intended to be used for [ii] the `real-time' and [iii] `post' [iv] remote [v] biometric [vi] identification [vii] of natural persons" (III-1(a)). The exemptions include, for example, all systems used without 'intent', either 'real-time' or 'post' but not both, proximate rather than 'remote', and for authentication (1-to-1) rather than identification (1-among-many).

Some spatial surveillance is defined to be within this category, as "2. Management and operation of critical infrastructure", but only where the system is a "safety component" in "the management and operation of road traffic and the supply of water, gas, heating and electricity". If the system, for example, draws inferences about people's usage of roads, or households' usage of energy, it is not high-risk, and hence subject to no regulatory protections under the EC Proposal.

More positively, a system is subject to some conditions if it is "5. ... intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services" or "intended to be used to evaluate the creditworthiness of natural persons". However, there are exemptions for (i) systems used without intent, (ii) 'private services and benefits' in their entirety (e.g. privately-run school-bus services, even if 'essential') - despite the nominal inclusion of "private services" in the "area" - and (iii) use for evaluation but without at least one of grant, reduce, revoke, or reclaim (due to the use of the conjunction 'as well as'). A number of very specific categories of "6. Law enforcement" and "7. Migration, asylum and border control management" uses are also defined in.

Even for those systems that do not fit into the array of escape-clauses, the statutory obligations (Art. 8) are very limited in comparison with those in the consolidated set of 50 Principles, comprising only:

Further, all such systems are either absolved from undergoing conformity assessment (Art.41) or permitted to self-assess unless subject to comparable requirements under other legislation (Arts. 42, 43, 47). It would appear that considerably more effort will be expended in finding ways to avoid the requirements than in complying with them.

Finally, of the four categories of AI systems to which a limited transparency obligation applies (Art.52), two are related to surveillance, but are heavily qualified: 2. "an emotion recognition and/or detection system" except where "permitted by law to detect, prevent and investigate criminal offences, unless those systems are available for the public to report a criminal offence", and 3. "a biometric categorisation system", separately described as "an AI system to determine association with (social) categories based on biometric data", except where "permitted by law to detect, prevent and investigate criminal offences".

The large majority of applications of AI to surveillance would be entirely unaffected by the EC Proposal should it be enacted in anything resembling its current form. This includes many applications of AI that lie very close to the boundary of what the EC considers should be prohibited, and many applications that the EC considers to be 'high-risk'. Even those 'high-risk' applications that are subject to the new law would be subject to very weak requirements. Advocates for the public interest are justified in treating the EC Proposal with derision, both generally in respect of AI, and specifically in relation to the application of AI to surveillance.


6. Conclusions

There is evidence of data analytics practices in general not being subject to adequate safeguards for public interests, even before AI's incursions into the field. Prominent examples include RoboDebt in Australia (Clarke 2020a) and the Allowance Affair in the Netherlands (Erdbrink 2021). This article has summarised many signs of alarm about the damage AI can do. To the formal evidence can be added the implicit recognition by AI proponents that the public has much to fear, in the form of a 'charm offensive' involving good news stories about AI applications, and utterances of 'principles' further glossed by the word 'ethical'.

A review of the many forms that regulation can take found nothing outside the uppermost layers of formal regulation that would appear at all likely to deliver meaningful safeguards for the public against AI. As regards formal regulation, until the second quarter of 2021, there was no sign of any activity. The first proposal, from the EC, when reviewed against a consolidated set of 'principles for responsible AI', has been found to be extremely poor.

Given these inadequacies, and the power of the government agencies and corporations that apply surveillance, the current prospects of effective control being achieved over AI applications to surveillance are extremely low. Either very prompt action is needed to elevate both the urgency and the quality of proposals, or regulatory protections will come long after the damage has commenced, and in the form of ill-considered, kneejerk reactions to that damage.


Reference List

Achterkamp M.C. & Vos J.F.J. (2008) 'Investigating the Use of the Stakeholder Notion in Project Management Literature: A Meta-Analysis' International Journal of Project Management 26 (2008) 749-757

Albus J. S. (1991) 'Outline for a theory of intelligence' IEEE Trans. Systems, Man and Cybernetics 21, 3 (1991) 473-509, at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.410.9719&rep=rep1&type=pdf

Anderson C. (2008) 'The End of Theory: The Data Deluge Makes the Scientific Method Obsolete' Wired Magazine 16:07, 23 June 2008, at http://archive.wired.com/science/discoveries/magazine/16-07/pb_theory

Anderson R. (2020) 'Security Engineering: A Guide to Building Dependable Distributed Systems', 3rd Edition, Wiley, 2020

Becker H. & Vanclay F. (2003) 'The International Handbook of Social Impact Assessment' Cheltenham: Edward Elgar, 2003

Bentham J. (1791) 'Panopticon; or, the Inspection House', London, 1791

Boden M. (2016) 'AI: Its Nature and Future' Oxford University Press, 2016

Braithwaite J. (2017) 'Types of responsiveness' Chapter 7 in Drahos (2017), pp. 117-132, at http://press-files.anu.edu.au/downloads/press/n2304/pdf/ch07.pdf

Brin D. (1998) 'The Transparent Society' Addison-Wesley, 1998

Cavoukian A. (2009) 'Privacy by Design: The 7 Foundational Principles' Privacy By Design, 2010, at http://www.privacybydesign.ca

Clarke R. (1988) 'Information Technology and Dataveillance' Commun. ACM 31,5 (May 1988) 498-512, PrePrint at http://www.rogerclarke.com/DV/CACM88.html

Clarke R. (1991) 'A Contingency Approach to the Application Software Generations' Database 22, 3 (Summer 1991) 23-34, PrePrint at http://www.rogerclarke.com/SOS/SwareGenns.html

Clarke R. (1993) 'Asimov's Laws of Robotics: Implications for Information Technology' IEEE Computer 26,12 (December 1993) pp.53-61 and 27,1 (January 1994), pp.57-66, PrePrint at http://www.rogerclarke.com/SOS/Asimov.html

Clarke R. (1994) 'Human Identification in Information Systems: Management Challenges and Public Policy Issues' Information Technology & People 7,4 (December 1994) 6-37, at http://www.rogerclarke.com/DV/HumanID.html

Clarke R. (2001) 'Person-Location and Person-Tracking: Technologies, Risks and Policy Implications' Information Technology & People 14, 2 (Summer 2001) 206-231, PrePrint at http://www.rogerclarke.com/DV/PLT.html

Clarke R. (2008) 'Web 2.0 as Syndication' Journal of Theoretical and Applied Electronic Commerce Research 3,2 (August 2008) 30-43, PrePrint at http://www.rogerclarke.com/EC/Web2C.html

Clarke R. (2009a) 'Privacy Impact Assessment: Its Origins and Development' Computer Law & Security Review 25, 2 (April 2009) 123-135, PrePrint at http://www.rogerclarke.com/DV/PIAHist-08.html

Clarke R. (2009b) 'Framework for Surveillance Analysis' Xamax Consultancy Pty Ltd, August 2009, at http://www.rogerclarke.com/DV/FSA.html

Clarke R. (2014) 'Privacy and Social Media: An Analytical Framework' Journal of Law, Information and Science 23,1 (April 2014) 1-23, PrePrint at http://www.rogerclarke.com/DV/SMTD.html

Clarke R. (2014b) 'What Drones Inherit from Their Ancestors' Computer Law & Security Review 30, 3 (June 2014) 247-262, PrePrint at http://www.rogerclarke.com/SOS/Drones-I.html

Clarke R. (2016) 'Big Data, Big Risks' Information Systems Journal 26, 1 (January 2016) 77-90, PrePrint at http://www.rogerclarke.com/EC/BDBR.html

Clarke R. (2019a) 'Risks Inherent in the Digital Surveillance Economy: A Research Agenda' Journal of Information Technology 34,1 (Mar 2019) 59-80, PrePrint at http://www.rogerclarke.com/EC/DSE.html

Clarke R. (2019b) 'Why the World Wants Controls over Artificial Intelligence' Computer Law & Security Review 35, 4 (2019) 423-433, PrePrint at http://www.rogerclarke.com/EC/AII.html

Clarke R. (2019c) 'Principles and Business Processes for Responsible AI' Computer Law & Security Review 35, 4 (August 2019) 410-422, PrePrint at http://www.rogerclarke.com/EC/AIP.html

Clarke R. (2019) 'The Australian Department of Industry's 'AI Ethics Principles' of September / November 2019: Evaluation against a Consolidated Set of 50 Principles' Xamax Consultancy Pty Ltd, November 2019, at http://www.rogerclarke.com/EC/AI-Aust19.html

Clarke R. (2020a) 'Centrelink's Big Data 'Robo-Debt' Fiasco of 2016-20' Xamax Consultancy Pty Ltd, 2018-20, at http://www.rogerclarke.com/DV/CRD17.html

Clarke R. (2020b) 'Auto-Surveillance' Xamax Consultancy Pty Ltd, December 2020, at http://rogerclarke.com/DV/AutoSurv.html

Clarke R. (2021a) 'A Comprehensive Framework for Regulatory Regimes as a Basis for Effective Privacy Protection' Proc. 14th Computers, Privacy and Data Protection Conference (CPDP'21), Brussels, 27-29 January 2021, PrePrint at http://rogerclarke.com/DV/RMPP.html

Clarke R. (2021b) 'The EC's Proposal for Regulation of AI: Evaluation against a Consolidated Set of 50 Principles' Xamax Consultancy Pty Ltd, August 2021, at http://www.rogerclarke.com/DV/AIP-EC21.html

Clarke R. & Wigan M. (2011) 'You Are Where You've Been: The Privacy Implications of Location and Tracking Technologies' Journal of Location Based Services 5, 3-4 (December 2011) 138-155, PrePrint at http://www.rogerclarke.com/DV/YAWYB-CWP.html

Cole S.A. (2004) 'History of Fingerprint Pattern Recognition' Ch.1, pp 1-25, in Ratha N. & Bolle R. (eds.) 'Automatic Fingerprint Recognition Systems', SpringerLink, 2004

Daugman J. (1998) 'History and Development of Iris Recognition', at http://www.cl.cam.ac.uk/users/jgd1000/history.html

DeLua J. (2021) 'Supervised vs. Unsupervised Learning: What's the Difference?' IBM, 12 March 2021, at https://www.ibm.com/cloud/blog/supervised-vs-unsupervised-learning

DI (2019) 'AI Ethics Principles' Department of Industry, Innovation & Science, 2 September 2019, at https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles

Drahos P. (ed.) (2017) 'Regulatory Theory: Foundations and applications' ANU Press, 2017, at https://press.anu.edu.au/publications/regulatory-theory#pdf

Duursma J. (2018) 'The Risks of Artificial Intelligence' Studio OverMorgen, May 2018, at https://www.jarnoduursma.nl/the-risks-of-artificial-intelligence/

EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58477

EC (2021) 'Proposal for a Regulation on a European approach for Artificial Intelligence' European Commission, 21 April 2021, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788

EC (2021b) 'Document 52021PC0206' European Commission, viewed 14 July 2021, at https://eur-lex.europa.eu/legal-content/EN/HIS/?uri=COM:2021:206:FIN

Erdbrink T. (2021) 'Government in Netherlands Resigns After Benefit Scandal' The New York Times, 15 Jan 2021, at https://www.nytimes.com/2021/01/15/world/europe/dutch-government-resignation-rutte-netherlands.html

Foucault M. (1977) 'Discipline and Punish: The Birth of the Prison' Peregrine, London, 1975, trans. 1977

Foucault M. (1991) 'Governmentality' in 'The Foucault Effect: Studies in Governmentality' G. Burchell, C. Gordon, & P. Miller (eds.), The University of Chicago Press, 1991, pp. 87-104

Fuchs C. (2011) 'Web 2.0, Prosumption, and Surveillance' Surveillance & Society 8,3 (2011) 288-309, at https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/download/4165/4167

Gandy O.H. (1989) 'The Surveillance Society: Information Technology and Bureaucratic Social Control' Journal of Communication 39, 3 (Summer 1989), at https://www.dhi.ac.uk/san/waysofbeing/data/data-crone-gandy-1989.pdf

Gandy O.H. (1993) 'The Panoptic Sort: Critical Studies in Communication and in the Cultural Industries' Westview, Boulder CO, 1993

Gandy O.H. (2021) 'The Panoptic Sort: A Political Economy of Personal Information' Oxford University Press, 2021

Gose E., Johnsonbaugh R. & Jost S. (1996) 'Pattern recognition and image analysis' Prentice Hall, 1996

Hendry J. (2021) 'Telstra creates standards to govern AI buying, use' itNews, 15 July 2021, at https://www.itnews.com.au/news/telstra-creates-standards-to-govern-ai-buying-use-567005

Henman P. (2021) 'Governing by Algorithms and Algorithmic Governmentality: Towards machinic judgement' in 'The Algorithmic Society: Technology, Power, and Knowledge' Schuilenburg M. & Peeters R. (eds.) Routledge, 2021, pp. 19-34

Hosein G. & Palow C.W. (2013) 'Modern Safeguards for Modern Surveillance: An Analysis of Innovations in Communications Surveillance Techniques' Ohio State L.J. 74, 6 (2013) 1071-1104, at https://kb.osu.edu/bitstream/handle/1811/71608/OSLJ_V74N6_1071.pdf?sequence=1

Hosein G., Tsavios P. & Whitley E. (2003) 'Regulating Architecture and Architectures of Regulation: Contributions from Information Systems' International Review of Law, Computers and Technology 17, 1 (2003) 85-98

Indurkhya N. & Damerau F.J. (eds.) (2010) 'Handbook of natural language processing' CRC Press, 2010

Jepson T. (2014) 'Reversing the Whispering Gallery of Dionysius: A Short History of Electronic Surveillance in the U.S.' Technology's Stories 2,1 (April 2014), at https://www.technologystories.org/2014/04/

Jobin A., Ienca M. & Vayena E. (2019) 'The global landscape of AI ethics guidelines' Nature Machine Intelligence 1 (September 2019) 389-399, at https://doi.org/10.1038/s42256-019-0088-2

Kim M.C. (2004) 'Surveillance technology, Privacy and Social Control' International Sociology 19, 2 (2004) 193-213

Kurzweil R. (2005) 'The Singularity is Near' Viking Books, 2005

LaValle S., Lesser E., Shockley R., Hopkins M.S. & Kruschwitz N. (2011) 'Big Data, Analytics and the Path From Insights to Value' Sloan Management Review (Winter 2011 Research Feature), 21 December 2010, at http://sloanreview.mit.edu/article/big-data-analytics-and-the-path-from-insights-to-value/

Lessig L. (1999) 'Code and Other Laws of Cyberspace' Basic Books, 1999

Lyon D. (2001) 'Surveillance Society: Monitoring in Everyday Life' Open University Press, 2001

McCarthy J. (2007) 'What is artificial intelligence?' Department of Computer Science, Stanford University, November 2007, at http://www-formal.stanford.edu/jmc/whatisai/node1.html

McCarthy J., Minsky M.L., Rochester N. & Shannon C.E. (1955) 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence' Reprinted in AI Magazine 27, 4 (2006), at https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1904/1802

Mann S. (2005) 'Equiveillance: The equilibrium between Sur-veillance and Sous-veillance' Opening Address, Computers, Freedom and Privacy, 2005, at http://wearcam.org/anonequity.htm

Mann S., Nolan J. & Wellman B. (2003) 'Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments' Surveillance & Society 1, 3 (2003) 331-355, at https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/3344/3306

Marklund A. & Skouvig L. (eds.) (2021) 'Histories of Surveillance from Antiquity to the Digital Era: The Eyes and Ears of Power' Routledge, 2021

Mayer-Schonberger V. & Cukier K. (2013) 'Big Data: A Revolution That Will Transform How We Live, Work and Think' John Murray, 2013

Medler D.A. (1998) 'A Brief History of Connectionism' Neural Computing Surveys 1, 2 (1998) 18-72, at https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.7504&rep=rep1&type=pdf

Michael M.G. (2006) 'Consequences of Innovation' Unpublished Lecture Notes No. 13 for IACT405/905 - Information Technology and Innovation, School of Information Technology and Computer Science, University of Wollongong, Australia, 2006

Michael M.G. & Michael K. (2007) 'A Note on Uberveillance' Chapter in Michael K. & Michael M.G. (2007), at https://works.bepress.com/kmichael/48/

Nakashima E. & Warrick J. (2013) 'For NSA chief, terrorist threat drives passion to 'collect it all'' The Washington Post, 14 July 2013, at https://www.washingtonpost.com/world/national-security/for-nsa-chief-terrorist-threat-drives-passion-to-collect-it-all/2013/07/14/3d26ef80-ea49-11e2-a301-ea5a8116d211_story.html

O'Shaughnessy D. (2008) 'Automatic speech recognition: History, methods and challenges' Pattern Recognition 41,10 (2008) 2965-2979

Pal S.K. & Mitra P. (2004) 'Pattern Recognition Algorithms for Data Mining' Chapman & Hall, 2004

Petersen J.K. (2012) 'Handbook of Surveillance Technologies' Taylor & Francis, 3rd Edition, 2012

Piprani B. & Ernst D. (2008) 'A Model for Data Quality Assessment' Proc. OTM Workshops (5333) 2008, pp 750-759

Rosenfeld A. & Wechsler H. (2000) 'Pattern Recognition: Historical Perspective and Future Directions' Int. J. Imaging Syst. Technol. 11 (2000) 101-116, at http://xrm.phys.northwestern.edu/research/pdf_papers/2000/rosenfeld_ijist_2000.pdf

Russell S.J. & Norvig P. (2009) 'Artificial Intelligence: A Modern Approach' Prentice Hall, 3rd edition, 2009

Ryan A., Cohn J., Lucey S., Saragih J., Lucey P., la Torre F.D. & Rossi A. (2009) 'Automated Facial Expression Recognition System' Proc. Int'l Carnahan Conf. on Security Technology, 2009, pp.172-177, at https://www.researchgate.net/profile/Jeffrey-Cohn/publication/224082157_Automated_Facial_Expression_Recognition_System/links/02e7e525c3cf489da1000000/Automated-Facial-Expression-Recognition-System.pdf

Schauer F. (1978) 'Fear, Risk and the First Amendment: Unraveling the Chilling Effect' Boston University Law Review 58 (1978) 685-732, at http://scholarship.law.wm.edu/facpubs/879

Scherer M.U. (2016) 'Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies' Harvard Journal of Law & Technology 29, 2 (Spring 2016) 353-400, at http://euro.ecom.cmu.edu/program/law/08-732/AI/Scherer.pdf

Shanks G. & Darke P. (1998) 'Understanding Data Quality in a Data Warehouse' The Australian Computer Journal 30 (1998) 122-128

Simon H.A. (1960) 'The Shape of Automation' reprinted in various forms, 1960, 1965, quoted in Weizenbaum J. (1976), pp. 244-245

Taeuscher K. & Laudien S.M. (2018) 'Understanding platform business models: A mixed methods study of marketplaces' European Management Journal 36, 3 (June 2018) 319-329, at https://www.researchgate.net/profile/Karl_Taeuscher/publication/316667830_Understanding_Platform_Business_Models_A_Mixed_Methods_Study_of_Digital_Marketplaces/links/59833097a6fdcc6d8be0c6b3/Understanding-Platform-Business-Models-A-Mixed-Methods-Study-of-Digital-Marketplaces.pdf

TvH: Telstra Corporation Limited v Hornsby Shire Council. NSWLEC 133, 101-107, 113, 125-183 (2006), at http://www.austlii.edu.au/au/cases/nsw/NSWLEC/2006/133.htm

Wang R.Y. & Strong D.M. (1996) 'Beyond Accuracy: What Data Quality Means to Data Consumers' Journal of Management Information Systems 12, 4 (Spring, 1996) 5-33

Wingspread (1998) 'Wingspread statement on the precautionary principle' Science & Environmental Health Network, 1998, at https://www.sehn.org/precautionary-principle-understanding-science-in-regulation

Wright D. & Raab C.D. (2012) 'Constructing a surveillance impact assessment' Computer Law & Security Review 28, 6 (December 2012) 613-626, at https://www.dhi.ac.uk/san/waysofbeing/data/data-crone-wright-2012a.pdf

Yampolskiy R.V. & Spellchecker M.S. (2016) 'Artificial Intelligence Safety and Cybersecurity: a Timeline of AI Failures' arXiv, 2016, at https://arxiv.org/pdf/1610.07997

Schmidt E. & Cohen J. (2013) 'The New Digital Age: Reshaping the Future of People, Nations and Business' Knopf, 2013

Zeng Y., Lu E. & Huangfu C. (2019) 'Linking Artificial Intelligence Principles' Proc. AAAI Workshop on Artificial Intelligence Safety (AAAI-Safe AI 2019), 27 January 2019, at https://arxiv.org/abs/1812.04814

Zuboff S. (2015) 'Big other: Surveillance capitalism and the prospects of an information civilization' Journal of Information Technology 30, 1 (2015) 75-89, at https://cryptome.org/2015/07/big-other.pdf


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor associated with the Allens Hub for Technology, Law and Innovation in UNSW Law, and a Visiting Professor in the Research School of Computer Science at the Australian National University.


