
The Prospects of Achieving Responsible Application of AI in Surveillance

Emergent Draft of 7 September 2021

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2021

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://rogerclarke.com/DV/AIP-S.html


Abstract

The scope for harm arising from AI has been recognised by technology providers, user organisations, policy-makers and the public alike. On the other hand, action to risk-manage AI's application has been almost entirely absent. Many users of ICT for surveillance purposes have been successful in avoiding meaningful regulation of their activities. This article examines the prospects for AI's use for surveillance being brought under control.

The article commences by mapping the applications of AI to the many categories of surveillance, utilising the framework for surveillance analysis in Clarke (2009). A review is then undertaken of the ways in which control might be exercised over AI applications to surveillance, by applying the regulatory framework in Clarke (2021).

It is noted, however, that recent decades have seen ongoing reductions in regulatory commitment in many jurisdictions and in many contexts. In the private sector, factors include the de-regulation and 'better regulation' movements to ratchet back existing controls, the outsourcing of both activities and responsibilities, including the use of low-regulation havens, and jurisdictional arbitrage. In the public sector, key aspects include the drift from subcontracting, to comprehensive outsourcing, to public-private partnerships, and on towards the corporatised state. A particular feature that appears to have largely 'flown under the radar' to date is the conversion of software products to services, resulting in AI as a Service (AIaaS).

A summary is provided of the wave of publishing activity during the period 2015-21 in the area of 'Principles for Responsible AI', including both trivial public relations documents from ICT corporations and serious-minded proposals from policy agencies and public interest advocates. The analysis draws on a consolidated super-set of Principles published in Clarke (2019), and several evaluations that have been undertaken of particular proposals against that standard. The applicability of the super-set to AI applications to surveillance is considered, and the scope for articulating a more specific suite of requirements is investigated.

All of the publications to date have been 'Guidelines', lacking enforceability, and in most cases having little or no volitional power that might materially influence AI practice. A critique is provided of the proposals of 21 April 2021 of the European Commission (EC 2021), which appear to be a world-first initiative to establish formal regulation of a bespoke nature for AI applications.

The article concludes with a sober assessment of the prospects of effective control being achieved over AI applications to surveillance even by organisations with limited market and institutional power, let alone by large corporations and government agencies, and in the (flexibly-defined) area of 'national security'.


Contents

1. Introduction
2. A Framework for Surveillance Analysis
3. AI in Support of Surveillance
    3.1 AI in the Abstract
    3.2 Apparently Relevant Fields of AI
    3.3 Current and Potential Applications to Surveillance
4. Regulatory Alternatives
5. The 'Principles for Responsible AI' Movement
6. Developments in the EU 2020-21
7. Conclusions
Reference List

1. Introduction

TEXT


2. A Framework for Surveillance Analysis

This section adopts a framework for surveillance analysis previously proposed in Clarke (2009). This commences by considering the contexts of surveillance. The original sense of the word, adopted from the French, was of 'watching over', possibly watching over things, but primarily watching over people. It was inherently physical, spatial and visual. In the 1970s, Foucault (1975/1977) revived the application of the idea to panoptic efficiency in prisons (Bentham 1791), and reinforced the visual metaphor.

There have been many extensions to the contexts of surveillance beyond the visual (e.g. Petersen 2012, Marklund & Skouvig 2021). Aural monitoring of sound, and particularly of the human voice, has a long history. Both visual and aural observation can be conducted at a distance with the aid of tools such as telescopic lenses and directional microphones. Retrospective surveillance began with hand-recording of the visual by textual descriptions and depictions, and of voice by textual transcriptions. Photographic means became available from the mid-19th century, and sound recording from the turn of the 20th century.

Beyond observation and recording in geo-space, services to transport written messages by 'mail' or 'post' date back over two millennia, with some countries having extensive and reliable public services from about 1800. The early emergence of 'communications surveillance' is evidenced by the use of simple substitution cyphers in the Roman Empire at the time of Christ. The telegraph gave rise to 'electronic surveillance' from as early as the 1840s, extending to telephone wiretapping from the 1860s (Jepson 2014). That established a new form that we now commonly refer to as 'virtual space'. Recording of digital data passing through communications networks became possible by, at the latest, the middle of the 20th century (Hosein & Palow 2013). The US National Security Agency has used the catchcry of "you need the haystack to find the needle" as a justification for its desire for mass surveillance of all electronic communication channels (Nakashima & Warrick 2013).

With the application of computing to administrative tasks commencing in 1952, it quickly became more efficient to observe not the people themselves but the data generated about them. This has become widely referred to as dataveillance (Clarke 1988). The data sources were initially byproducts of individuals' interactions with organisations, and were co-opted or expropriated to additional purposes. They were commonly also physically or virtually consolidated with data from other sources, frequently without consideration of the extent to which the sets of data were mutually compatible. In addition, new forms of data-gathering emerged for the specific purpose of monitoring behaviour. Organisations sought to reduce the unreliability of data consolidation activities by creating cross-indexes among different identification schemes, and imposing multi-use identifiers on people (Clarke 1994). They saved money by providing individuals with artefacts carrying pre-recorded identifying data, most commonly on plastic cards. They then enlisted individuals as data capture operators, through ATMs, then web-forms, then as willing providers of copious personal data donated by carrying and using hand-held devices. Automated data capture has also become mainstream, with motor vehicles, electricity meters, and various household appliances streaming personal data to corporations.
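To illustrate the cross-indexing technique in contemporary terms, the following minimal sketch (in Python, with invented identifiers and records) shows how a cross-index between two unrelated identification schemes enables consolidation of data gathered for different purposes:

    # A hedged, minimal sketch of cross-indexing between identification
    # schemes. All identifiers and records are invented for illustration.
    tax_records = {"TFN-1234": {"name": "J Citizen", "income": 52000}}
    health_records = {"MED-9876": {"name": "J Citizen", "conditions": ["asthma"]}}

    # The cross-index is the critical artefact: it maps an identifier in
    # one scheme to the 'same' person's identifier in another scheme.
    cross_index = {"TFN-1234": "MED-9876"}

    def consolidated_profile(tfn):
        """Merge data gathered for unrelated purposes into one profile."""
        profile = dict(tax_records[tfn])
        med_id = cross_index.get(tfn)
        if med_id:
            profile.update(health_records[med_id])
        return profile

    print(consolidated_profile("TFN-1234"))
    # {'name': 'J Citizen', 'income': 52000, 'conditions': ['asthma']}

The sketch also shows why data consolidation is unreliable in practice: nothing in the cross-index itself guarantees that the two identifiers really do refer to the same person, or that the two record-systems are mutually compatible.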

Auto-surveillance, that is to say surveillance of oneself by oneself, has been extended to 'wellness' devices and implanted chips, for identification and for monitoring of, and even intervention relating to, physiological phenomena such as heart-rate. A related development, primarily since about 2000, is 'experience surveillance' (Clarke 2014). Almost all forms of searching for, gaining access to, and experiencing text, data, image and video, together with access to live events, migrated from mostly-anonymous analogue to mostly-identified digital forms during a remarkably short time, from the emergence of the Web c.1993 and its subversion by Web 2.0 c.2005 (Clarke 2008, Fuchs 2011). By substituting services dependent on remote parties, the public was lulled into auto-surveillance of its interests, influences and intellectual associations.

These primarily technical perspectives on surveillance have been complemented by more circumspect notions such as 'surveillance society' (Gandy 1989, Lyon 2001), 'the panoptic sort' (Gandy 1993, 2021), 'ubiquitous transparency' (Brin 1998), 'location and tracking' (Clarke 2001, Clarke & Wigan 2011), 'sousveillance' (from below rather than above, by the weak of the powerful - Mann et al. 2003), 'equiveillance' (Mann 2005), 'uberveillance' (comprehensive and from within - Michael 2006, Michael & Michael 2007), 'surveillance capitalism' (Zuboff 2015) and the 'digital surveillance economy' (Clarke 2019a).

Given the diversity of the phenomena, technologies and interpretations, definitions need to be generic, broader than just the visual, across many kinds of space, e.g.

Surveillance is the systematic investigation or monitoring of the actions or communications of one person (personal surveillance) or multiple people (mass surveillance)

Similarly, a survey of the application to surveillance of a generic technology such as AI needs to commence with a comprehensive appreciation of the surveillance arena. One of the important considerations is the seven-fold set of basic questions: (1) Of What? (2) For Whom? (3) By Whom? (4) Why? (5) How? (6) Where? and (7) When?

Although the dominant policy concern is about the observation of one or more people, the question '(1) Of What?' may sometimes be answered in some other manner. The target may be a space, whether a physical space such as that in front of a door, in a room, or along a fence or wall, or a virtual space such as a bulletin-board or chat-forum. The focus may be as specific as a single piece of luggage, or the key-hole of a safe deposit box. A space has an address or coordinates of one kind or another, whereas a physical object (such as a motor vehicle or a mobile phone) or a virtual object (such as a data-file) is mobile within a space, and its surveillance involves identifying or authenticating the identity of the object, locating it, and tracking it.

This suggests the need for an even broader working definition, e.g.:

Surveillance is the systematic investigation or monitoring of the actions or communications of one person (personal surveillance) or multiple people (mass surveillance), or of spaces or objects particularly where the ultimate purpose is the investigation or monitoring of one or more people

The answer to '(2) For Whom?' may be the individual themselves, or a person who has an interest in the space or object being monitored. Alternatively, it may be an entity with a relationship with the person under surveillance, such as an employer, or a person in loco parentis. It may, however, be a third party, and perhaps one unknown to the individual(s) being monitored or whose object or space is under surveillance.

The question '(3) By Whom?' may point to the individual themselves ('auto-surveillance'), as occurs with various forms of autobiographical logging, with a person's home, and with safeguards for valuable possessions such as artworks. Alternatively, the active party may be an associate (e.g. employer, fellow householder), or a third party, possibly unknown. In an era of intense specialisation of services, and ubiquitous outsourcing, a service-provider may be involved, or indeed a supply-chain of them. Such organisations may act not only as agents for the person, but also as principals, exploiting the resulting data for their own purposes.

The '(4) Why?' leads to a vast array of socially-positive motivations, including private security of self, family and assets, and public security against threats as varied as natural disaster, over-crowding, riotous behaviour, preparation for assault, and assault itself. More ambiguous, morally dubious, or actively anti-social motivations include monitoring for behaviour that does not conform with the values of the observer (whether the observer is the State, a corporation, an association of individuals seeking to enforce their own moral code on others, or an individual vigilante). Surveillance is commonly an enabler of action by some party against another party; however, surveillance that is overt, or suspected of being conducted, also has a deterrent effect on behaviour, or at least a displacement effect where the surveillance is thought to be localised rather than ubiquitous.

The '(5) How?' of surveillance is diverse, as discussed above, and the '(6) Where?' question is closely related to the considerations discussed in relation to '(1) Of What?'.

The question '(7) When?' is more challenging, because it requires analysis of multiple sub-questions:

Surveillance has been enabled and supported by a wide range of sciences and technologies, as diverse as optics and photography, acoustics and sound engineering, electronic engineering, computer science, cryptology, telecommunications, remote sensing, and biometrics. The remainder of this article considers the contributions to surveillance of the notion of 'artificial intelligence', and the loose cluster of technologies that variously have been, and currently are depicted as, falling within that field.


3. AI in Support of Surveillance

The previous section has identified key aspects of the complex, multi-facetted notion of surveillance. This section considers the notion, the technologies and the applications of artificial intelligence, and identifies current and potential applications to surveillance.

The term 'AI' is only one among a variety of vague notions that have been common in the ICT field in recent years, such as ambient computing, 'blockchain', the Internet of Things (IoT) and 'smart cities'. These tend to be rallying points for multiple technologies, vaguely linked to business functions and/or outcomes thought by targeted organisations to be desirable. This does not provide a suitable basis for analysis. The approach adopted here is instead to go back to the underlying technologies and techniques, and consider their known and potential applications to surveillance functions.

First, a brief overview is provided of the origins and (ambiguous and contested) notion of AI, and of some of the main fields within it. Those fields that appear to have particular relevance to surveillance are then outlined. That provides a basis for identifying the (predictable) disbenefits and (possible) risks that AI applications to surveillance appear to embody.

3.1 AI in the Abstract

The term Artificial Intelligence (AI) was coined in the mid-20th century, based on "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it" (McCarthy et al. 1955). Histories of AI (e.g. Russell & Norvig 2009, pp. 16-28, Boden 2016, pp.1-20) identify multiple strands and multiple re-visits to much the same territory.

Conventionally (Albus 1991, Russell & Norvig 2009, McCarthy 2007):

Intelligence is exhibited by an artefact if it (1) evidences perception and cognition of relevant aspects of its environment, (2) has goals, and (3) formulates actions towards the achievement of those goals
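To make the three criteria concrete, consider the following minimal sketch (a thermostat-like artefact; all names and values are invented for illustration). It perceives readings from its environment, holds a goal, and formulates actions towards that goal:

    # A hedged, minimal sketch of the three criteria above: (1) perception,
    # (2) a goal, and (3) actions formulated towards the goal's achievement.
    # The 'environment' here is just a list of temperature readings.
    environment = [18.0, 18.5, 22.5, 23.0, 21.0]

    GOAL = 21.0  # (2) the artefact's goal: hold temperature near 21 degrees

    def act(reading):
        # (3) formulate an action directed at achieving the goal
        if reading < GOAL - 1.0:
            return "heat"
        if reading > GOAL + 1.0:
            return "cool"
        return "idle"

    for reading in environment:   # (1) perception of the environment
        print(reading, "->", act(reading))

A device this trivial satisfies the letter of the definition, which underlines how much work the contested yardsticks discussed below ('equivalent to human', 'different from human', 'superior to human') are left to do.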

The word 'artificial' implies 'artefactual' or 'human-made'. Its conjunction with 'intelligence' leaves open the question as to whether the yardstick is 'equivalent to human', 'different from human' or 'superior to human'. Over-enthusiastic promotion has always been a feature of the AI arena. The revered Herbert Simon averred that "Within the very near future - much less than twenty-five years - we shall have the technical capability of substituting machines for any and all human functions in organisations. ... it would be surprising if it were not accomplished within the next decade" (Simon 1960). Unperturbed by ongoing failures, he kept repeating such predictions throughout the following decades. His mantle was inherited: "by the end of the 2020s" computers will have "intelligence indistinguishable to biological humans" (Kurzweil 2005, p.25). The exaggerations of AI's proponents, repeated many times over, have resulted in under-delivery, recurrent difficulties and existential issues, and a cyclical 'boom and bust' pattern.

The notion of AI, spurred on by the hype associated with it and the research funding that proponents' promises have extracted, has excited activity in a great many fields. Some of significance, but of less direct relevance to surveillance, include natural language understanding, image processing and manipulation, artificial life, evolutionary computation / genetic algorithms, and artificial emotional intelligence.

Robotics is defined in various ways, involving two key elements:

Two further frequently-mentioned elements, which are implied attributes rather than qualifying criteria, are sensors, to enable the gathering of data about the device's environment, and flexibility, in that the device can both operate using a range of programs, and manipulate and transport materials in a variety of ways.

AI intersects with robotics to the extent that the software installed in a robot is justifiably regarded as artificially intelligent, as defined above. This will be further discussed in the following sections. Where robotics does incorporate AI elements, the disbenefits and risks sharpen enormously, because of the inherent capacity of a robot to act autonomously in the real world (Clarke 2014b), and the temptation and tendency for the power of decision and action to be delegated to the artefact, whether intentionally or merely by accident.

3.2 Apparently Relevant Fields of AI

Several fields of AI have fairly obvious potential for application to surveillance. In broad terms, these involve 'pattern recognition', for which four major components are needed: "data acquisition and collection, feature extraction and representation, similarity detection and pattern classifier design, and performance evaluation" (Rosenfeld & Wechsler 2000, p.101).
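As a deliberately simplistic illustration of those four components, the following sketch wires them together. All of the data, the features and the classifier are invented placeholders, not a claim about how any real surveillance product works:

    # A hedged sketch of the four pattern-recognition components named by
    # Rosenfeld & Wechsler (2000). All data and design choices are invented.

    # 1. Data acquisition and collection (here: canned 'signals' per class)
    training = {"walking": [[1, 2, 1, 2], [2, 1, 2, 1]],
                "running": [[5, 7, 6, 8], [7, 5, 8, 6]]}

    # 2. Feature extraction and representation (here: mean and range)
    def features(signal):
        return (sum(signal) / len(signal), max(signal) - min(signal))

    # 3. Similarity detection and pattern classifier design
    #    (here: a nearest-centroid classifier in feature-space)
    centroids = {}
    for label, signals in training.items():
        feats = [features(s) for s in signals]
        centroids[label] = tuple(sum(v) / len(v) for v in zip(*feats))

    def classify(signal):
        f = features(signal)
        return min(centroids,
                   key=lambda lbl: sum((a - b) ** 2
                                       for a, b in zip(f, centroids[lbl])))

    # 4. Performance evaluation (here: accuracy on a tiny test set)
    tests = [([1, 2, 2, 1], "walking"), ([6, 8, 5, 7], "running")]
    accuracy = sum(classify(s) == want for s, want in tests) / len(tests)
    print("accuracy:", accuracy)

The same skeleton underlies facial, voice and data pattern-matching; what varies is the scale, the feature-space, and the consequences of mis-classification.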

There are several distinct contexts to which pattern recognition can be applied. These five are directly relevant:

The last of these requires closer attention. The common feature of the classical approaches to pattern-recognition in data has been that an algorithm is used to draw inferences from data that is posited to be a sufficiently close representation of some real-world phenomenon. An algorithm is a procedure, or set of steps. The steps may be serial and invariant. More commonly, however, the steps also include repetition constructs (e.g. 'perform the following steps until a condition is fulfilled') and selection constructs (e.g. 'perform one of the following sets of steps depending on some test').
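These constructs can be shown in a few lines. A minimal, invented sketch (the sensor readings and threshold are made up, and drawn from no real system), combining serial steps, repetition and selection:

    # A hedged sketch of an algorithm combining the three constructs:
    # serial steps, repetition ('until a condition is fulfilled') and
    # selection ('depending on some test'). The readings are invented.
    readings = [0.2, 0.3, 0.1, 0.9, 0.4]
    THRESHOLD = 0.8

    alerts = []
    for i, level in enumerate(readings):      # repetition construct
        if level >= THRESHOLD:                # selection construct
            alerts.append(i)                  # serial step within a branch

    print("alert at readings:", alerts)       # -> alert at readings: [3]

The whole procedure remains humanly readable, which is the property that the following paragraph turns on.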

During the period from about 1965 until early this century, the preparation of computing software was dominated by the use of languages designed to enable the convenient expression of algorithms, sometimes referred to as procedural or imperative languages. Software developed in this manner represents a humanly-understandable solution to a problem, and hence the rationale underlying an inference drawn using it can be readily expressed.

Other approaches to developing software exist (Clarke 1991). Two that are represented as being AI techniques are highly relevant to the issues addressed in the present analysis. The approach adopted in rule-based 'expert systems' is to express a set of rules that apply within a problem-domain. A classic rule-example is:

If <Person> was born within the UK or any of <list of UK Colonies> between <date = 01.01.1949> and <date = 31.12.1982>, they qualify as <a Citizen of the United Kingdom and Colonies (CUKC)> and hence qualify for a UK passport

When software is developed at this level of abstraction, a model of the problem-domain exists; but there is no explicit statement of a particular problem or of a solution to it. In a simple case, the reasoning underlying an inference that is drawn in a particular circumstance may be easy to provide, e.g. to an executive, a judge, or an aggrieved person upset about a decision made based on that inference. In cases where data is missing, where the rulebase is large or complex, or where the rulebase features intertwining among rules, discretions, and even indeterminacy, an explanation may not be feasible.
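To make the discussion concrete, the classic rule above might be encoded roughly as follows. This is a hedged sketch only: the colony list and function names are invented placeholders, and real British nationality law is considerably more complex:

    from datetime import date

    # A hedged sketch of a single rule from a rule-based 'expert system'.
    # The colony list is an invented placeholder, not a legal source.
    UK_AND_COLONIES = {"United Kingdom", "Jamaica", "Kenya"}
    PERIOD_START, PERIOD_END = date(1949, 1, 1), date(1982, 12, 31)

    def qualifies_as_cukc(birthplace, date_of_birth):
        """Apply the rule: born in the UK or a listed colony, within the period."""
        return (birthplace in UK_AND_COLONIES
                and PERIOD_START <= date_of_birth <= PERIOD_END)

    # In a simple case like this, the reasoning behind an inference can be
    # recited directly from the rule; with large, intertwined rulebases,
    # missing data and discretions, that ceases to be feasible.
    print(qualifies_as_cukc("Jamaica", date(1960, 5, 17)))   # True
    print(qualifies_as_cukc("Jamaica", date(1990, 5, 17)))   # False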

The other important software development approach is (generically) machine learning (often referred to as AI/ML), and (more specifically) connectionist networks or artificial neural networks (ANNs). ANNs originated in the 1940s in the cognitive sciences, prior to the conception of AI (Medler 1998). They have subsequently been co-opted by AI researchers and are treated as an AI technique. The essence of neural network approaches is that tools, which were probably developed using a procedural language, are used to process examples taken from some problem-domain. Such examples might comprise the data relevant to 5% of all applicants for UK passports during some time-period who were born in Jamaica, including the result of the application.

The processing results in a set of weights on the factors that the tool was told, or that the tool inferred, were involved in drawing the inference. Although the tool may have been developed using a procedural language, the resulting software that is used to process future cases is appropriately referred to as being empirical. (The industry misleadingly refers to it as being algorithmic, and critics have adopted that usage in terms such as 'algorithmic bias'; but the actual processing is empirically-based, not algorithmic.)

A critical feature of ANNs is that the process is a-rational, that is to say that there is no reasoning underlying the inference that is drawn, and no means of generating an explanation any more meaningful than 'based on the data, and the current weightings in the software, you don't qualify'.
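The point can be made concrete with a deliberately tiny sketch. The single 'neuron' below (with invented weights and inputs) produces a score and a decision, but the only 'reasoning' recoverable from it is the arithmetic itself:

    import math

    # A hedged sketch of a tiny artificial neural network. The weights
    # would normally be 'learned' from a collection of prior cases; here
    # they are invented, to show that they carry no human-readable rationale.
    WEIGHTS = [0.7, -1.3, 2.1]
    BIAS = -0.5

    def infer(inputs):
        """One artificial neuron: weighted sum, then a sigmoid squashing."""
        s = BIAS + sum(w * x for w, x in zip(WEIGHTS, inputs))
        return 1 / (1 + math.exp(-s))

    applicant = [0.2, 0.9, 0.4]     # three feature values for one case
    score = infer(applicant)
    print(round(score, 3), "->", "qualify" if score >= 0.5 else "not qualify")
    # The best available 'explanation' is exactly the one in the text:
    # 'based on the data, and the current weightings, you (don't) qualify'.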

The approach is referred to as 'machine learning' partly because the means whereby the weightings are generated depends on the collection of prior cases that are fed to the tool as its 'learning set'. Hence, in the (to many people, somewhat strange) sense of the term used in this field, the software 'learns' the set of weightings. In addition, the system may be arranged so as to further adapt its weightings (i.e. 'learn'), based on subsequent cases.

There are two different patterns whereby the factors and weightings can come about (DeLua 2021). The description above was of 'supervised learning', in that the factors were fed to the tool by a supervisor ('labelled' or 'tagged'), and in each case the 'right answer' was provided within the data. In the case of 'unsupervised learning', on the other hand, there are no labels, and 'right answers' are not provided with the rest of the data. The tool uses clusterings and associations to create the equivalent of what a human thinker would call constructs, but without any contextual information about the real world that the data purports to relate to.
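A hedged illustration of the contrast, using invented two-feature data: the supervised case arrives with labels ('right answers'), while the unsupervised case is left to form clusters that carry no real-world meaning:

    # A hedged sketch of the two patterns (DeLua 2021). Data is invented.
    cases = [[1.0, 1.2], [0.9, 1.1], [4.0, 4.2], [4.1, 3.9]]

    # Supervised learning: each case arrives with a label ('right answer').
    labels = ["low-risk", "low-risk", "high-risk", "high-risk"]

    # Unsupervised learning: no labels; here the cases are simply split
    # around the data's own midpoint, and given neutral cluster names.
    midpoint = sum(sum(c) for c in cases) / (2 * len(cases))
    clusters = ["cluster-A" if sum(c) / 2 < midpoint else "cluster-B"
                for c in cases]

    for case, label, cluster in zip(cases, labels, clusters):
        print(case, "| supervised:", label, "| unsupervised:", cluster)
    # The clusters may happen to align with the labels, but nothing in the
    # process connects 'cluster-A' to any real-world construct such as risk.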

On the one hand, 'unsupervised learning' is touted as being capable of discovering patterns and relationships that were not previously known; but, on the other, it greatly exacerbates the already-enormous scope within 'supervised learning' for inferences to be drawn that bear little relationship to the real world. When this is coupled with the a-rationality of all AI/ML, and its inherent feature of drawing inferences that are mysterious and inexplicable, it is unsurprising that people who are not AI enthusiasts are perturbed, and even repelled, by the use of ANNs to make decisions that materially affect people.

3.3 Current and Potential Applications to Surveillance

TEXT

- pattern-matching through facial recognition, for authentication of identity assertions, and for identification of individuals presumed to be within populations for which biometric measures have already been recorded

- pattern-matching of ('big') data, through a variety of techniques, many of which pre-date the emergence of AI, but some of which have emerged within the AI field

- pre-AI, genuinely 'algorithmic' inferencing, with its ...

- rule-based 'expert systems', with generalised assertions of truth in particular contexts applied to contexts that may or may not be equivalent in the ways that transpire to be relevant

- many sources of error in all forms of inferencing, including low-quality data, low-quality scrubbing of data, inappropriate application of inferencing techniques, lack of real-world testing of the appropriateness of inferences, ...

End up with AI/ML inferencing, but get rid of 'algorithmic' in favour of 'empirical'

Stress fuzziness and consequential error-rates

combined with a-rationality and inexplicability, and hence the sequence of unaccountability etc.


4. Regulatory Alternatives

This section reviews the ways in which control might be exercised over AI applications to surveillance, by applying the regulatory framework in Clarke (2021).

THIS NEEDS TO IDENTIFY PRINCIPLES PLUS THE OTHER ELEMENTS

It is noted, however, that recent decades have seen ongoing reductions in regulatory commitment in many jurisdictions and in many contexts. In the private sector, factors include the de-regulation and 'better regulation' movements to ratchet back existing controls, the outsourcing of both activities and responsibilities, including the use of low-regulation havens, and jurisdictional arbitrage. In the public sector, key aspects include the drift from subcontracting, to comprehensive outsourcing, to public-private partnerships, and on towards the corporatised state. A particular feature that appears to have largely 'flown under the radar' to date is the conversion of software products to services, resulting in AI as a Service (AIaaS).

(That greatly affects how much understanding of the AI tools, and how much control over them, the user organisation has. It's complicated, and there's a huge amount of ignorance out there.)


5. The 'Principles for Responsible AI' Movement

A summary is provided of the wave of publishing activity during the period 2015-21 in the area of 'Principles for Responsible AI', including both trivial public relations documents from ICT corporations and serious-minded proposals from policy agencies and public interest advocates. The analysis draws on a consolidated super-set of Principles published in Clarke (2019), and several evaluations that have been undertaken of particular proposals against that standard. The applicability of the super-set to AI applications to surveillance is considered, and the scope for articulating a more specific suite of requirements is investigated.

SOMEWHERE:

An example of how weak government guidelines are exploited by business enterprises is provided by the Australian experience. The Department of Industry produced a set of 'AI Ethics Principles' in late 2019 (DI 2019). Within 18 months, the two largest banks and the privatised PTT, Telstra, had adapted the Department's work into internal policies (Hendry 2021). This was despite the extremely weak score achieved by the Department of Industry's document against the consolidated 50 Principles - in the range 26-40% (Clarke 2019b) - and the complete absence of any regulatory framework.


The European Commission's document (EC 2019) stood alone as official guidance with moderately good coverage. It is noteworthy, however, that it still only achieved a score of 74% when assessed, and liberally scored, against the consolidated set of 50 Principles.

SOMEWHERE, THE OTHER ELEMENTS BEYOND THE PRINCIPLES NEED TO BE ADDRESSED


6. Developments in the EU 2020-21

All of the publications to date have been 'Guidelines', lacking enforceability, and in most cases having little or no volitional power that might materially influence AI practice. As noted above, the only contribution of substantive consequence up to DATE was EC (2019). However, it still failed to embody any enforcement mechanisms.

That changed with the European Commission's proposals, published two years later, on 21 April 2021 (EC 2021). These appear to be a world-first initiative to establish formal regulation of a bespoke nature for AI applications.

This section provides a critique of that document against the consolidated 50 Principles. It is of course a 'living document', subject to the impact of pleadings from a diverse array of interest groups, some with substantial power, and to the impact of bureaucratic and political forces. The analysis reflects information available as at 30 June 2021, at which time the proposal was still before the Council of the EU (EC 2021b). By that stage it was designated the Artificial Intelligence Act, with the responsible committee of the European Parliament being the Committee on the Internal Market and Consumer Protection.

OVERALL DESCRIPTION

DESCRIPTION OF THE ANALYSIS PROCESS

REFERENCE TO THE WORKING PAPER

THE RESULTS


7. Conclusions

- Data analytics generally aren't under control, even before AI (e.g. RoboDebt, the Dutch government disaster)

- There are signs of alarm about the damage AI can do (Clarke 2019, Crawford 2021)

- There are signs of fear among providers and user organisations that public distrust will limit their ability to use AI tools (voluminous, platitudinous 'ethical principles')

- There is little sign of any actual regulatory activity

- The first formal proposal, from the EC, is appallingly poor

The article concludes with a sober assessment of the prospects of effective control being achieved over AI applications to surveillance even by organisations with limited market and institutional power, let alone by large corporations and government agencies, and in the (flexibly-defined) area of 'national security'.


Reference List

Albus J. S. (1991) 'Outline for a theory of intelligence' IEEE Trans. Systems, Man and Cybernetics 21, 3 (1991) 473-509, at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.410.9719&rep=rep1&type=pdf

Bentham J. (1791) 'Panopticon; or, the Inspection House', London, 1791

Boden M. (2016) 'AI: Its Nature and Future' Oxford University Press, 2016

Brin D. (1998) 'The Transparent Society' Addison-Wesley, 1998

Clarke R. (1988) 'Information Technology and Dataveillance' Commun. ACM 31,5 (May 1988) 498-512, PrePrint at http://www.rogerclarke.com/DV/CACM88.html

Clarke R. (1991) 'A Contingency Approach to the Application Software Generations' Database 22, 3 (Summer 1991) 23-34, PrePrint at http://www.rogerclarke.com/SOS/SwareGenns.html

Clarke R. (1994) 'Human Identification in Information Systems: Management Challenges and Public Policy Issues' Information Technology & People 7,4 (December 1994) 6-37, at http://www.rogerclarke.com/DV/HumanID.html

Clarke R. (2001) 'Person-Location and Person-Tracking: Technologies, Risks and Policy Implications' Information Technology & People 14, 2 (Summer 2001) 206-231, PrePrint at http://www.rogerclarke.com/DV/PLT.html

Clarke R. (2008) 'Web 2.0 as Syndication' Journal of Theoretical and Applied Electronic Commerce Research 3,2 (August 2008) 30-43, PrePrint at http://www.rogerclarke.com/EC/Web2C.html

Clarke R. (2009) 'Framework for Surveillance Analysis' Xamax Consultancy Pty Ltd, August 2009, at http://www.rogerclarke.com/DV/FSA.html

Clarke R. (2014) 'Privacy and Social Media: An Analytical Framework' Journal of Law, Information and Science 23,1 (April 2014) 1-23, PrePrint at http://www.rogerclarke.com/DV/SMTD.html

Clarke R. (2014b) 'What Drones Inherit from Their Ancestors' Computer Law & Security Review 30, 3 (June 2014) 247-262, PrePrint at http://www.rogerclarke.com/SOS/Drones-I.html

Clarke R. (2019) 'Principles and Business Processes for Responsible AI' Computer Law & Security Review 35, 4 (2019) 410-422, PrePrint at http://www.rogerclarke.com/EC/AIP.html

Clarke R. (2019a) 'Risks Inherent in the Digital Surveillance Economy: A Research Agenda' Journal of Information Technology 34,1 (Mar 2019) 59-80, PrePrint at http://www.rogerclarke.com/EC/DSE.html

Clarke R. (2019b) 'The Australian Department of Industry's 'AI Ethics Principles' of September / November 2019: Evaluation against a Consolidated Set of 50 Principles' Xamax Consultancy Pty Ltd, November 2019, at http://www.rogerclarke.com/EC/AI-Aust19.html

Clarke R. (2021) 'A Comprehensive Framework for Regulatory Regimes as a Basis for Effective Privacy Protection' Proc. 14th Computers, Privacy and Data Protection Conference (CPDP'21), Brussels, 27-29 January 2021, PrePrint at http://rogerclarke.com/DV/RMPP.html

Clarke R. & Wigan M. (2011) 'You Are Where You've Been: The Privacy Implications of Location and Tracking Technologies' Journal of Location Based Services 5, 3-4 (December 2011) 138-155, PrePrint at http://www.rogerclarke.com/DV/YAWYB-CWP.html

Cole S.A. (2004) 'History of Fingerprint Pattern Recognition' Ch.1, pp 1-25, in Ratha N. & Bolle R. (eds.) 'Automatic Fingerprint Recognition Systems', SpringerLink, 2004

Daugman J. (1998) 'History and Development of Iris Recognition', at http://www.cl.cam.ac.uk/users/jgd1000/history.html

DeLua J. (2021) 'Supervised vs. Unsupervised Learning: What's the Difference?' IBM, 12 March 2021, at https://www.ibm.com/cloud/blog/supervised-vs-unsupervised-learning

DI (2019) 'AI Ethics Principles' Department of Industry, Innovation & Science, 2 September 2019, at https://www.industry.gov.au/data-and-publications/building-australias-artificial-intelligence-capability/ai-ethics-framework/ai-ethics-principles

EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=58477

EC (2021) 'Proposal for a Regulation on a European approach for Artificial Intelligence' European Commission, 21 April 2021, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788

EC (2021b) 'Document 52021PC0206' European Commission, viewed 14 July 2021, at https://eur-lex.europa.eu/legal-content/EN/HIS/?uri=COM:2021:206:FIN

Foucault M. (1977) 'Discipline and Punish: The Birth of the Prison' Peregrine, London, 1975, trans. 1977

Fuchs C. (2011) 'Web 2.0, Prosumption, and Surveillance' Surveillance & Society 8,3 (2011) 288-309, at https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/download/4165/4167

Gandy O.H. (1989) 'The Surveillance Society: Information Technology and Bureaucratic Social Control' Journal of Communication 39, 3 (Summer 1989), at https://www.dhi.ac.uk/san/waysofbeing/data/data-crone-gandy-1989.pdf

Gandy O.H. (1993) 'The Panoptic Sort: Critical Studies in Communication and in the Cultural Industries' Westview, Boulder CO, 1993

Gandy O.H. (2021) 'The Panoptic Sort: A Political Economy of Personal Information' Oxford University Press, 2021

Gose E., Johnsonbaugh R. & Jost S. (1996) 'Pattern recognition and image analysis' Prentice Hall, 1996

Hendry J. (2021) 'Telstra creates standards to govern AI buying, use' itNews, 15 July 2021, at https://www.itnews.com.au/news/telstra-creates-standards-to-govern-ai-buying-use-567005

Hosein G. & Palow C.W. (2013) 'Modern Safeguards for Modern Surveillance: An Analysis of Innovations in Communications Surveillance Techniques' Ohio State L.J. 74, 6 (2013) 1071-1104, at https://kb.osu.edu/bitstream/handle/1811/71608/OSLJ_V74N6_1071.pdf?sequence=1

Indurkhya N. & Damerau F.J. (eds.) (2010) 'Handbook of natural language processing' CRC Press, 2010

Jepson T. (2014) 'Reversing the Whispering Gallery of Dionysius: A Short History of Electronic Surveillance in the U.S.' Technology's Stories 2,1 (April 2014), at https://www.technologystories.org/2014/04/

Kurzweil R. (2005) 'The Singularity is Near' Viking Books, 2005

Lyon D. (2001) 'Surveillance Society: Monitoring in Everyday Life' Open University Press, 2001

McCarthy J. (2007) 'What is artificial intelligence?' Department of Computer Science, Stanford University, November 2007, at http://www-formal.stanford.edu/jmc/whatisai/node1.html

McCarthy J., Minsky M.L., Rochester N. & Shannon C.E. (1955) 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence' Reprinted in AI Magazine 27, 4 (2006), at https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1904/1802

Mann S. (2005) 'Equiveillance: The equilibrium between Sur-veillance and Sous-veillance' Opening Address, Computers, Freedom and Privacy, 2005, at http://wearcam.org/anonequity.htm

Mann S., Nolan J. & Wellman B. (2003) 'Sousveillance: Inventing and Using Wearable Computing Devices for Data Collection in Surveillance Environments' Surveillance & Society 1, 3 (2003) 331-355, at https://ojs.library.queensu.ca/index.php/surveillance-and-society/article/view/3344/3306

Marklund A. & Skouvig L. (eds.) (2021) 'Histories of Surveillance from Antiquity to the Digital Era: The Eyes and Ears of Power' Routledge, 2021

Medler D.A. (1998) 'A Brief History of Connectionism' Neural Computing Surveys 1, 2 (1998) 18-72, at https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.86.7504&rep=rep1&type=pdf

Michael M.G. (2006) 'Consequences of Innovation' Unpublished Lecture Notes No. 13 for IACT405/905 - Information Technology and Innovation, School of Information Technology and Computer Science, University of Wollongong, Australia, 2006

Michael M.G. & Michael K. (2007) 'A Note on Uberveillance' Chapter in Michael K. & Michael M.G. (2007), at https://works.bepress.com/kmichael/48/

Nakashima E. & Warrick J. (2013) 'For NSA chief, terrorist threat drives passion to 'collect it all'' The Washington Post, 14 July 2013, at https://www.washingtonpost.com/world/national-security/for-nsa-chief-terrorist-threat-drives-passion-to-collect-it-all/2013/07/14/3d26ef80-ea49-11e2-a301-ea5a8116d211_story.html

O'Shaughnessy D. (2008) 'Automatic speech recognition: History, methods and challenges' Pattern Recognition 41,10 (2008) 2965-2979

Pal S.K. & Mitra P. (2004) 'Pattern Recognition Algorithms for Data Mining' Chapman & Hall, 2004

Petersen J.K. (2012) 'Handbook of Surveillance Technologies' Taylor & Francis, 3rd Edition, 2012

Rosenfeld A. & Wechsler H. (2000) 'Pattern Recognition: Historical Perspective and Future Directions' Int. J. Imaging Syst. Technol. 11 (2000) 101-116, at http://xrm.phys.northwestern.edu/research/pdf_papers/2000/rosenfeld_ijist_2000.pdf

Russell S.J. & Norvig P. (2009) 'Artificial Intelligence: A Modern Approach' Prentice Hall, 3rd edition, 2009

Ryan A., Cohn J., Lucey S., Saragih J., Lucey P., la Torre F.D. & Rossi A. (2009) 'Automated Facial Expression Recognition System' Proc. Int'l Carnahan Conf. on Security Technology, 2009, pp.172-177, at https://www.researchgate.net/profile/Jeffrey-Cohn/publication/224082157_Automated_Facial_Expression_Recognition_System/links/02e7e525c3cf489da1000000/Automated-Facial-Expression-Recognition-System.pdf

Simon H.A. (1960) 'The Shape of Automation' reprinted in various forms, 1960, 1965, quoted in Weizenbaum J. (1976), pp. 244-245

Zuboff S. (2015) 'Big other: Surveillance capitalism and the prospects of an information civilization' Journal of Information Technology 30, 1 (2015) 75-89, at https://cryptome.org/2015/07/big-other.pdf


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor associated with the Allens Hub for Technology, Law and Innovation in UNSW Law, and a Visiting Professor in the Research School of Computer Science at the Australian National University.


