Roger Clarke's 'Drones' Inheritance'

What Drones Inherit from Their Ancestors

Final Version of 26 February 2014

Published as Computer Law & Security Review 30, 3 (June 2014) 247-262

Roger Clarke

(c) Xamax Consultancy Pty Ltd, 2013-14

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/SOS/Drones-I.html

This is the second in a series of papers on drones: 1, 2, 3, 4

The previous version is at http://www.rogerclarke.com/SOS/Drones-I-131214.html


Abstract

A drone is a flying computer. It is dependent on local data communications from its onboard sensors and to its onboard effectors, and on telecommunications links over which it receives data-feeds and command-feeds from terrestrial and perhaps airborne sources and from satellites. A drone acts on the world, and is therefore a robot. The remote pilot, and the operators of drone facilities such as cameras, depend on high-tech tools that interpret data, that display transmitted, enhanced and generated image and video, and that enable the composition of commands. So drone operators are already cyborgs. Computing, data communications, robotics and cyborgisation offer power and possibilities, but with them come disbenefits and risks. A review of the critical literature on these topics, and on surveillance, identifies a considerable array of issues of relevance to the design and deployment of drones.


1. Introduction

Any specific technology derives attributes from the generic technologies of which it is an instance. A drone is a flying computer. It is dependent on local data communications from its onboard sensors and to its onboard effectors, and on telecommunications links over which it receives data-feeds and command-feeds from terrestrial and perhaps airborne sources and from satellites. A drone acts on the world, and is therefore a robot. The remote pilot, and the operators of drone facilities such as cameras, depend on high-tech tools that interpret data, that display transmitted, enhanced and generated image and video, and that enable the composition of commands. So drone operators are already cyborgs. Many drones carry cameras and are used for surveillance. Computing, data communications, robotics, cyborgisation and surveillance offer power and possibilities, but with them come disbenefits and risks. Critical literatures exist in relation to all of those areas. An inspection of those literatures should provide insights into the limitations of drones, and the impacts and implications arising from their use.

This is the second in a series of four papers that together identify the nature of drones, the disbenefits and risks arising from their use, and the extent to which existing regulatory arrangements are adequate. The first paper focussed on the attributes of drones, distinguishing those that are definitional, and using descriptions of particular applications to reveal the issues that arise in particular contexts. The third and fourth papers in the series summarise the challenges to public safety and to behavioural privacy respectively, examine the extent to which current regulatory frameworks appear likely to cope, and assess the prospects of adapted and new measures to address the problems that drones present.

The present paper completes the foundations for the regulatory analysis, by reviewing existing critical literatures, in order to ensure that the accumulated understanding of relevant technologies is brought to bear on the assessment of drone technologies. One context of relevance is where drones autonomously perform actions, or take decisions. Much more commonly, human or organisational actions or decisions may place strong reliance on drones performing as they are intended. Of particular concern are circumstances in which a human or organisation performs the action indicated by the device, or adopts its decision, largely automatically and with little reflection. Issues also arise where drones are the dominant source of data used by human decision-makers.

The paper commences by reviewing the critical literature on computing. This places particular emphasis on the structuredness of decisions that are made by computers, or that are made in ways that place considerable reliance on computers. Aspects of data communications are then considered, and insecurities arising from the use of information technology identified. Issues arising from robotics are canvassed, including both basic and extended conceptions of a robot. This throws the problems of drone autonomy into sharp relief. Attention is then switched to remote pilots and facilities operators and the capabilities on which they depend, which is the subject of a critical literature on cyborgism. Finally, the surveillance literature is considered, in order to provide a basis for analysis of the impact of surveillance applications on behavioural privacy.


2. Computing

Computing is inherent within drones. It is necessary for signal processing, in order to convert incoming messages into a usable form; for data processing, in order to analyse both that data and data coming from the drone's onboard sensors; and for transmitting commands, whether computed onboard or received from the remote pilot and facilities operators, to the flight-control apparatus and to other onboard devices such as cameras and load-handling capabilities.

A considerable literature exists that identifies features of computers and computing that result in limitations to their applicability. Particularly significant references include Dreyfus (1972/1992) on 'what computers can't do', Weizenbaum (1976) on 'computer power and human reason', and Dreyfus & Dreyfus (1986) on 'human intuition and expertise in the era of the computer'.

There are clearly very large numbers of tasks for which computers have proven not merely enormously faster than humans, but also highly accurate and reliable. How can the areas of weakness of computers be delineated, in order to reconcile scepticism with such positive experiences? This section uses the concept of the structuredness of decisions to address that question.

2.1 The Structuredness of Decisions

Computers can be very successfully applied to tasks that are 'structured' in the sense that an algorithm can be, and has been, expressed. In contexts in which decisions are 'unstructured' or at best 'semi-structured', computers can be used as aids, but their use as though they were reliable decision-makers is undermined by a number of fundamental problems.

During the second half of the twentieth century, a school of thought arose, which asserted that all decisions were structured, or that unstructured decisions were merely those for which a relevant structured solution had not yet been produced. The foundations of these ideas are commonly associated with Herbert Simon, who declared that "there are now in the world machines that think, that learn and that create" (a statement that dates to 1958, but see Newell & Simon 1972). This delusion of self, and of US research funding organisations, went further: "Within the very near future - much less than twenty-five years - we shall have the technical capability of substituting machines for any and all human functions in organisations. ... Duplicating the problem-solving and information-handling capabilities of the brain is not far off; it would be surprising if it were not accomplished within the next decade" (Simon 1960).

Over 35 years later, with his predictions abundantly demonstrated as being fanciful, Simon nonetheless maintained his position, e.g. "the hypothesis is that a physical symbol system [of a particular kind] has the necessary and sufficient means for general intelligent action" (Simon 1996, p. 23 - but expressed in similar terms from the late 1950s, in 1969, and through the 1970s), and "Human beings, viewed as behaving systems, are quite simple" (p. 53). Simon acknowledged "the ambiguity and conflict of goals in societal planning" (p. 140), but his subsequent analysis of complexity (pp. 169-216) considered only a very limited sub-set of the relevant dimensions.

Further, Simon wrote that "The success of planning [on a societal scale] may call for modesty and constraint in setting the design objectives ..." (p. 140). This was a declaration that the problem lies not in the modelling capability, but rather in the intransigence of reality, which therefore needs to be simplified. What is usefully dubbed the 'Simple Simon' proposition is that any highly complex system is capable of being reduced to a computable model, such that all decisions about the system can be satisfactorily resolved by computations based on that model.

Akin to this mechanistic view of the world were attempts by the more boisterous proponents of cybernetic theory to apply it to the management of economies and societies (Beer 1973). The only large-scale experiment that appears to have ever been conducted, in Chile in 1971-73, was curtailed by the undermining of the economy, and violent overthrow of the Allende government by General Pinochet (e.g. Medina 2006). As a result, the world still lacks empirical evidence to inform judgements about whether a well-structured cybernetic model of an economy and society can be devised, and whether and how technical decisions can be delegated to machines while value-judgements are left to humans.

Rejecting Simon's propositions, Dreyfus & Dreyfus (1986) argued that what this section calls 'unstructured decision contexts' involve "a potentially unlimited number of possibly relevant facts and features, and the ways those elements interrelate and determine other events is unclear" (p. 20), and "the goal, what information is relevant, and the effects of ... decisions are unclear. Interpretation ... determines what is seen as important in a situation. That interpretive ability constitutes 'judgement'" (pp. 35-36).

A valuable clarification of the boundaries between structured and unstructured was provided by Miller & Starr (1967), whose 'decision analysis' techniques showed that rational, management science approaches can be reasonably applied in contexts characterised by risk, and even by uncertainty, and perhaps even by conflict or competition. However, this is only the case where all of the following conditions are fulfilled:

Where those conditions cannot be satisfied, the decision needs to be characterised as unstructured. A vast array of decisions fall outside the scope of Miller & Starr's decision analysis theory.

Despite the warning signals, the 'Simple Simon' tradition was carried forward with enthusiasm into what has been dubbed in retrospect the 'hard AI' movement, associated with MIT, and in particular Marvin Minsky and Seymour Papert. It is also evident in the less grounded among Ray Kurzweil's works, which repeated a familiar prediction - that "by the end of the 2020s" computers will have "intelligence indistinguishable to biological humans" (Kurzweil 2005, p. 25). The original use of 'singularity' was as a figure of speech by von Neumann in 1950: "The ever-accelerating progress of technology ... gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue" (attributed to von Neumann, in Ulam 1958, p. 5). Vinge (1993) and Moravec (2000) stretched the notion dramatically, and elevated it to an inevitability, complete with an arrival date.

A populist rendition of the 'hard AI' thesis was McCorduck's 'Machines Who Think' (1979). At its most extreme, that school's focus on a computational model of human intelligence involved a complete rejection not only of holism, but even of systems thinking. Such classics as Feigenbaum & McCorduck (1983) comprehensively demonstrated that the movement was built on faulty assumptions. Hard AI's grand ambitions collapsed, although many of its simplistic notions continually re-surface in computer science grant applications.

Yet the contrary position had been voiced in the same year as Simon's first expression of his assertion: "The outward forms of our mathematics are not absolutely relevant from the point of view of evaluating what the mathematical or logical language truly used by the central nervous system is" (von Neumann, 1958, p. 82). The originator of the 'stored-program' architecture that has completely dominated computing during its first 70 years was under no illusions about the synthetic nature of computers, and the limited scope of that which is expressible in computer-based processes. Similarly, the Hard AI school was criticised from within MIT: "The symbolic terms of a theory can never be finally grounded in reality. ... Rather, the meanings of the terms depend on the whole body of theory. A theory is not a map, but a guide for intelligent search" (Weizenbaum 1976, pp. 140, 142).

Another perspective that provides insight is the 'universal grammar' notion, traceable from Roger Bacon in the 13th century to Noam Chomsky and beyond. Linguistic studies have underlined the enormous diversity of natural grammars, linked with the enormously diverse expressiveness of natural languages. Computers have a specific and fixed underlying grammar, and a vocabulary that comprises a non-extensible instruction-set designed under particular circumstances and for particular purposes. For example, computer architecture is digital and in almost all cases specifically binary, and hence continuous distributions and even intermediate points have to be simulated rather than directly represented.

"Are all the decisionmaking processes that humans employ reducible to 'effective procedures' [in the sense of being Turing-complete] and hence amenable to machine computation?" asked Weizenbaum (1976, p. 67). Another way of expressing it is that many of the critical problems that humans address are afflicted with indeterminacy, and "indeterminate needs and goals and the experience of gratification which guides their determination cannot be simulated on a digital machine whose only mode of existence is a series of determinate states" (Dreyfus 1992 p.282, but originally in Dreyfus, 1972, p. 194)

Yet another depiction is that intelligent activities are of four kinds, and only 'associationistic' and 'simple-formal' activities are capable of being fully performed by computers, whereas 'complex-formal' activities depend on heuristics, and 'nonformal' activities are not "amenable to digital computer simulation". The last category includes ill-defined games (e.g. riddles), open-structured problems (dependent on insight), natural language translation (which requires understanding within the context of use), and the recognition of varied and distorted patterns (which requires the inference of a generic pattern, or fuzzy comparison against a paradigmatic instance) (Dreyfus 1992, pp. 291-6, but originally in Dreyfus 1972, pp. 203-9).

The essential incapacity of computer models to reflect the many indeterminacies of human behaviour is reflected in the still-running debates about whether 'emotional intelligence' can be designed into computer-based systems. The notion of 'emotional intelligence' refers to the capacity to recognise and manage people's emotions. It emerged from the 1960s to the 1980s, and was popularised by Goleman (1996). It has been complemented by notions at the levels of society and politics, such as 'cultural intelligence'. Inevitably, remnant 'hard AI' schools have sought to develop 'artificial emotional intelligence'. The interim conclusion is that " ... There is no general consensus on the computational understanding of even basic emotions like joy or fear, and in this situation, higher human emotions ... inevitably escape attention in the fields of artificial intelligence and cognitive modeling" (Samsonovich 2012).

As the focal point for the criticism of reductionism, this work selected Herbert Simon - because his works in this area have garnered tens of thousands of citations, and continue to accumulate thousands more each year - and the mid-20th-century AI movement - because its abject failure demonstrated how grossly misleading Simon's propositions were. However, Dreyfus & Dreyfus (1986) suggest that the origins of the malaise may go back as far as a mis-translation into Latin of Aristotle's use of the Greek 'logos', which incorporated both judgement and logical thought, into the Latin 'ratio', meaning reckoning, leading to "the degeneration of reason into calculation" (p. 203).

In reaction against the reductionism of decision systems, decision support systems emerged. These effectively adopt the position that what human decision-makers need is not artificial, humanlike intelligence (which is already available in great quantity), but rather an alternative form of intelligence that humans exhibit far less, and that can be usefully referred to as 'complementary intelligence' (Clarke 1989): "Surely man and machine are natural complements: They assist one another" (Wyndham 1932). Together, the collaborative whole would be, in the words of Bolter (1986, p. 238) 'synthetic intelligence'.

To function as a decision support system, however, software must produce information useful to human decision-makers (such as analyses of the apparent sensitivity of output variables to input variables). Alternatively, a decision support system might offer recommended actions, together with explanations of the rationale underlying the recommendations. But is this feasible?
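
By way of illustration, the following minimal sketch (in Python, with an invented profit model, not drawn from the literature discussed here) shows one such form of support: a crude one-at-a-time sensitivity analysis, which perturbs each input by 1% and reports the effect on the output, leaving the interpretation to the human decision-maker.

    def model(price, volume, unit_cost):
        # A deliberately trivial profit model, standing in for whatever
        # computation the decision support system embodies.
        return (price - unit_cost) * volume

    base = {"price": 10.0, "volume": 1000.0, "unit_cost": 6.0}

    for name in base:
        # Perturb one input by 1% and report the effect on the output.
        bumped = dict(base, **{name: base[name] * 1.01})
        delta = model(**bumped) - model(**base)
        print(f"{name}: +1% -> profit changes by {delta:+.0f}")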

2.2 The Rationale Underlying a Decision

Discussion of 'structured' versus 'unstructured' decisions is complicated by the need to encompass multiple alternative approaches to computer programming, distinguished in Clarke (1991) as 'generations'. What were characterised as 3rd generation languages enabled the expression of algorithms or procedures, which explicitly defined a problem-solution - instructing the machine how to do something. The 4th generation of declarative languages instead enabled software developers to express the problem rather than the problem-solution, and to delegate the solution to pre-defined generic code.
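
The distinction can be made concrete with a minimal sketch (in Python; the task is invented purely for illustration): the procedural version instructs the machine how to find the largest of a set of readings, step by step, while the declarative version merely states the problem and delegates the solution to pre-defined generic code.

    readings = [3.2, 5.1, 4.8, 6.0, 2.9]

    # 3rd generation (procedural): the programmer states HOW, step by step.
    largest = readings[0]
    for r in readings[1:]:
        if r > largest:
            largest = r

    # 4th generation (declarative): the programmer states WHAT, and the
    # solution procedure is delegated to pre-defined generic code.
    assert largest == max(readings)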

The 5th generation, which can be characterised as 'descriptive', and is associated with the expert systems movement, was idealised as the expression of domain-specialists' knowledge about a particular problem-domain in some formalism (commonly as a set of rules and domain-specific data), suitable for processing by a pre-defined 'inference engine' (Dreyfus & Dreyfus 1986, Clarke 1989).
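
A minimal sketch of the idea, with rule content invented purely for illustration, might look as follows: domain knowledge is expressed as condition/conclusion pairs, and a generic 'inference engine' applies whichever rules match the asserted facts.

    # Rules supplied by a (hypothetical) domain specialist: each pairs a
    # set of conditions with a conclusion.
    rules = [
        ({"link_lost", "gps_ok"}, "return_to_home"),
        ({"link_lost", "gps_lost"}, "controlled_descent"),
        ({"battery_low"}, "return_to_home"),
    ]

    def infer(facts):
        # A generic 'inference engine': fire every rule whose conditions
        # are all present among the asserted facts.
        return [action for conditions, action in rules if conditions <= facts]

    print(infer({"link_lost", "gps_ok"}))    # ['return_to_home']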

The 6th generation, which might be thought of as 'facilitative', is associated with artificial neural networks. This involves a model based on an aspect of human neuronal inter-connections, with weightings computed for each connection (SEP 2010). The weightings are developed and adapted by software, based on data provided to the software by the 'programmer'. This results in implicit models of problem-domains, which are based on data rather than on human-performed logical design.

Whereas all generations up to the 4th involved an explicit problem and problem-solution, the later generations do not. In the 5th, a problem-domain is explicitly defined, but the problem is only implicit, and explanation of the 'solution' or the rationale underlying the decision, may not be possible. In the 6th generation, even the domain model is implicit and example-based, and the closest to an explanation or rationale that is possible is a statement about the weightings that the software applied to each factor. The weightings, in turn, have been inferred to be relevant by the application of obscure rules to whatever empirical base had been provided to the machine.
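
A toy artificial neural network makes the point. In the following sketch (a single perceptron, with invented training data), the behaviour is induced from examples rather than expressed as a problem-solution, and the only 'explanation' the system can offer is a statement of its weightings.

    # Examples play the role of the 'programmer': the logical AND of two
    # inputs, expressed only as data.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [0.0, 0.0]
    b = 0.0

    for _ in range(20):                      # repeated passes over the data
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out
            w[0] += 0.1 * error * x1         # adjust connection weightings
            w[1] += 0.1 * error * x2
            b += 0.1 * error

    # The nearest available 'rationale' is the set of learned weightings:
    print(w, b)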

Turing (1950) includes a quotation from Hartree (1949), which attributes an expression of the problem to the very first programmer - who was also arguably the first commentator on the limitations of computing - Ada Lovelace, in 1842: "The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform" (italics in the original). This clearly applies to the 1st to 4th generations, but is more challenging to apply to the 5th generation. It no longer applies to the 6th generation, because the level of abstraction of software development has ascended all the way to inanimate data, and there simply are no 'orders to perform' some logical action. At that stage, the decision, and perhaps action, has been delegated to a machine, and the machine's rationale is inscrutable.

Even with 3rd generation software, the complexity of the explicit problem-definition and solution-statement can be such that the provision of an explanation to, for example, corporate executives, can be very challenging. The 4th, 5th and 6th generations involve successively more substantial abandonment of human intelligence, and dependence on the machine as decision-maker. Even some of the leaders in the AI field have expressed serious concern about the application of the more abstract forms of software. For example, Donald Michie argued that "a 'human window' should be built into all computer systems. This should let people question a computer on why it reached a conclusion" (NS 1980).

2.3 Data and Information

The primary focus of the discussion in this section has been on data processing to enable decision-making. Some critics, however, have focussed instead on the data. The reductionist school of thought has committed the error of treating mere data as though it were information, and conflating information with knowledge: "Information has taken on the quality of that impalpable, invisible, but plaudit-winning silk from which the emperor's ethereal gown was supposedly spun" (Roszak 1986, p. ix). By virtue of Shannon's use of the term 'information' in a restrictive manner, "information has come to denote whatever can be coded for transmission through a channel that connects a source with a receiver, regardless of semantic content" (p. 13).

On the contrary, "information, even when it moves at the speed of light, is no more than it has ever been: discrete little bundles of [putative] fact, sometimes useful, sometimes trivial, and never the substance of thought" (p. 87). Rather than being expressible in precise terms, as data, "[human] experience is more like a stew than a filing system" (p. 96). "The vice of the spreadsheet is that its neat, mathematical facade, its rigorous logic, its profusion of numbers, may blind its user to the unexamined ideas and omissions that govern the calculations" (p. 118).

Moreover, "ideas create information, not the other way around. Every fact grows from an idea; it is the answer to a question we could not ask in the first place if an idea had not been invented which isolated some portion of the world, made it important, focused our attention, and stimulated enquiry" (p. 105). Roszak sums up the harm done by the 'Simple Simon' postulates by identifying "the essential elements of the cult of information ... - the facade of ethical neutrality, the air of scientific rigour, the passion for technocratic control" (p. 156).

A further concern arises where a single device is the sole or dominant source of the data on which a human or organisational decision-maker depends. In matters of social policy and political negotiation, it is almost always the case that multiple stakeholders exist, and that they have very different perspectives. A workable solution has to involve accommodation and compromise, and that depends on triangulation among multiple sources of data. Optimised or at least satisficed decisions may be feasible where a single stakeholder exists, as in the case of a corporation evaluating its strategic options. Multi-stakeholder contexts, by contrast, demand not only 'give and take', but also multiple sets of data.

2.4 Quality Factors

The quality of decisions is dependent on the extent to which the basis for making the decision (algorithm, rules or example-set) reflects the requirements, and copes with the diversity of contexts. It also depends on the extent to which the software implementation of the algorithm, the processing of the rules, or the application of the example-set, complies with the specification.

Software quality can only be achieved through careful requirements analysis, design, construction and deployment, and the embedment of ongoing quality assurance measures. Software quality has plummeted in the last three decades, however, due to economic pressures precluding such activities being undertaken well, or even at all. Moreover, organisations avoid paying the premium involved in expressing code in readily-understood and readily-maintained form. Generally, modifications substantially decrease the quality level of software. The inadequate modularisation inherent in most implementations results in changes not only introducing new errors, but also creating instabilities in functions that were previously reliable. The prevailing low quality standards in software undermine the quality of decision-making by computers of all kinds, including those embedded in drones and their supporting infrastructure.

2.5 Conclusions

This section has identified a number of themes in the critical literature concerning the limits of computing. They are summarised here in a form that draws out their implications for the design and deployment of drones.

(1) Reliable and predictable behaviour of drones is only feasible where an unambiguously specific procedure has been defined. Because all models on which computing is based are simplifications of a complex reality, and because meaning is absent within computerised systems, attempts to delegate less than fully-structured decisions to drones will result in unreliable and unpredictable behaviour

(2) Even where the decisions delegated to drones are structured, the reliability of drone behaviour may not be high, because of inadequate quality assurance and inadequate audit

(3) Social and political decision-making contexts and processes inherently involve human qualities that are not expressible in formal terms, including experience, understanding, imagination, insight and expertise, but also compassion, emotion, metaphor, tolerance, justice, redemption, judgement, morality and wisdom. No means exists to encode human values, differing values among various stakeholders, and conflict among values and objectives. The absence of a formal model means that such circumstances cannot be reduced to structured decisions

(4) If drones are designed to operate autonomously - at any level, but particularly above that of maintaining the aircraft's attitude and direction of movement - the decisions that the device takes are very likely to be not well-structured, and hence of a kind that needs to remain under human control

(5) If drones have unstructured decision-making delegated to them, whether by design or by default, there is no reasonable basis for expecting that the decision will correspond to that which would have arisen from a negotiation process among stakeholders. Hence the values embedded in decisions may be accidental, or they may reflect the values of the more powerful among the stakeholders, but are unlikely to reflect the interests of all stakeholders

(6) If drones are programmed using later-generation development tools, such that the rationale for decisions cannot be provided, or is even non-existent, then human operators, even if they retain the ability to exercise control over the drone's behaviour, are 'flying blind': human control over drone behaviour is nominal, not real, and the drone's behaviour is unauditable. Humans would have in effect abdicated reason and succumbed to incomprehensible machine-made decisions

Computers compute, and computation depends on data. The next section considers what is known about the acquisition of data by computers, and the communication of data to them and from them.


3. Data Communications

Computers may acquire data directly, or it may be delivered from an external source. Where data is acquired directly, the process depends on a sensor that is local to the computer, and more or less closely integrated with it. Sensors detect some external state and adopt some corresponding internal state. The external state may be relatively stable (e.g. temperature), or volatile (e.g. the amplitude of sound waves reaching a receiver). The sensor may ignore the data and overwrite it later, or pass it to a device that can render it as, say, a visible display (e.g. on a screen) or audible sound (e.g. through a speaker), or that can record it, or that can use it as a trigger for some action.

Many sensors are capable of gathering data very frequently, and hence of generating vast quantities of data. It is usual for sensors to pass data onwards less often than they gather it. They generally conduct various forms of pre-processing, such as the removal of outlier measures and the computation of an average over a period of time, and pass that pre-processed data onwards. Sensors require calibration, in order to ensure that they generate data that corresponds in the intended manner with the external state that they are intended to detect and measure. Sensor data may or may not consistently reflect the external state. Data quality arising from sensors can vary across many dimensions. For example, Hossain et al. (2007) focus on accuracy (correspondence of the measure with the physical reality), certainty (better described as consistency, i.e. the degree of variability among multiple detectors that are meant to be measuring the same physical reality), timeliness (the absence of a material delay between collection and forwarding), and integrity (the absence of unplanned manipulation or interference with the data).

Sensors therefore firstly have error-factors associated with the measures that they provide, and secondly need to be subjected to more or less frequent testing and re-calibration. Each item of data arising from a sensor accordingly needs to be handled with caution by drone software, rather than being assumed to be necessarily 'factual'.
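
A minimal sketch of such pre-processing (in Python, with invented plausibility thresholds, and no claim to represent any particular sensor) illustrates both the outlier-removal and averaging described above, and the need to signal 'no usable data' explicitly rather than fabricating a measure.

    def preprocess(samples, low=-40.0, high=85.0):
        # Discard physically implausible outliers, then average the rest.
        plausible = [s for s in samples if low <= s <= high]
        if not plausible:
            return None   # 'no usable data' must stay distinct from a reading
        return sum(plausible) / len(plausible)

    window = [21.4, 21.6, 999.0, 21.5]       # one spurious spike
    print(preprocess(window))                # 21.5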

Where data already exists in some device other than the computer that needs it, or is gathered by a sensor remote from the computer that is intended to receive it, the data has to be sent through some form of communication channel. This may involve physical media (e.g. a 'thumbdrive' or other portable storage device), a direct connection or local area network (a physical private network), a wireless connection, or a somewhat indirect physical or wireless connection that depends on intermediary devices (a virtual private network).

Data communications, and particularly tele-communications (i.e. over distance), was effectively married with computing during the 1980s. This gave rise to the notion of information technology (IT) or information and communications technology (ICT). Many different configurations of distributed, inter-organisational, multi-organisational and supra-organisational systems have been devised in order to exploit IT/ICT (Clarke 1992).

Data communications are vulnerable in a range of ways. Channels are generally 'noisy', and many mis-transmissions occur. The content of the data may be changed in transit, or intercepted and observed or copied. Data may not arrive, and data may be sent in such a manner that the recipient assumes that it came from one party when it actually came from elsewhere. Computer scientists commonly refer to the desirable characteristics of data communications as Confidentiality, Integrity and Availability (the CIA model). A wide range of safeguards needs to be employed to address the risks that arise when the many vulnerabilities are impinged upon by Acts of God, by accidents, and by threatening acts on the part of parties with an interest in data communication processes malfunctioning in some way.
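
One conventional integrity and authenticity safeguard can be sketched briefly. The following example uses Python's standard hmac library to compute a keyed hash over each message, so that a recipient can detect content that was changed in transit or that came from an impostor; the key and the command string are placeholders, and key management is assumed away.

    import hmac
    import hashlib

    KEY = b"shared-secret-placeholder"       # key distribution assumed away

    def tag(message: bytes) -> bytes:
        return hmac.new(KEY, message, hashlib.sha256).digest()

    def verify(message: bytes, received_tag: bytes) -> bool:
        return hmac.compare_digest(tag(message), received_tag)

    msg = b"cmd:descend;rate:2"
    t = tag(msg)
    assert verify(msg, t)                          # untouched message accepted
    assert not verify(b"cmd:descend;rate:9", t)    # altered content detected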

The notion of 'ubiquitous computing' emerged in the early 1990s, postulating that computers would quickly come to be everywhere, and would communicate with one another, and with devices carried by passers-by (Weiser 1991). Since the mid-2000s, the term 'pervasive computing' has been much-used (Gershenfeld et al. 2004). The term 'ambient computing' had emerged even earlier, with a vision that underlying IT infrastructure would be "seamless, customizable and eventually invisible ... to the user" (Davis 2001). These terms present as 'fashion items' rather than as articulated constructs, with much vagueness and ambiguity surrounding their meaning, scope and relationship with other terms. Meanwhile, however, large numbers of artefacts have come to contain computers, including both static artefacts (EFTPOS-terminals, ticket-terminals, advertising hoardings, rubbish bins) and artefacts that travel (handsets, packaging, clothing, cars). Moreover, a great many of these computers are promiscuous by design, in the technical sense of automatically communicating with any computer that is within reach.

The risks involved in such openness have been exacerbated by the gradual emergence of the 'Internet of Things'. This takes advantage of the marriage of computing and communications to ensure that devices generally have data communications capabilities, and at least one IP-address, and are consequently discoverable, are reachable by any other device that is Internet-connected, and are subject to various forms of interference by them, including the transmission of malware, 'hacking' and denial of service attacks. Promiscuity with devices in the vicinity is translating into promiscuity with devices throughout the world. Safeguards are essential, but are very challenging even to devise, let alone to deploy and to maintain. Safeguards also work against the interests not only of pranksters and of organised and other criminals, but also of corporations and governments, which prefer devices to be open to them. A review of issues arising is in Weber (2009, 2010).

Drones are utterly dependent on local sensors, remote data-feeds from various terrestrial sources and satellites (particularly GPS), and remote control-feeds from the pilot and facilities operators. Because of the nature of data communications, a drone has to detect and cope with erroneous feeds, and has to cope with the absence of data on which it depends. The data communications insecurities noted in this section give rise to risks to the aircraft's capacity to perform even its most basic functions, including to stay aloft, let alone to apply its more advanced capabilities as intended by its designers, pilot and facilities operators. In order to achieve fail-safe or even fail-soft operation, a drone needs designed-in fallback functions.

An airborne vehicle is inherently dangerous in that it lacks a rest state like a terrestrial vehicle. Many drones are doubly dangerous. The speed of a rotorcraft can be varied to a considerable extent, and such drones can be made to hover, and to reverse. Fixed-wing aircraft, on the other hand, must maintain appreciable speed in order to sustain flight, and can vary that speed less, and less quickly. To cope with circumstances in which power or stabilisation apparatus malfunction, or data and/or control-feeds are deficient, drones need to have fallback autonomous functions intended to ensure safety for people and property. However, concepts such as 'fail-safe', 'fail-secure', 'fail-soft', 'fault tolerance' and 'graceful degradation' are difficult to define in operational terms, and challenging to implement.
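
A minimal sketch of such a fallback policy follows. The state names, thresholds and actions are invented for illustration; real fail-safe behaviour has to be derived from the particular airframe and its regulatory context.

    def fallback_action(link_ok, gps_ok, battery_pct, fixed_wing):
        if link_ok and battery_pct > 20:
            return "continue_mission"
        if gps_ok:
            return "return_to_home"           # navigate autonomously to base
        if fixed_wing:
            return "loiter_then_glide_down"   # cannot hover; shed energy gradually
        return "controlled_vertical_descent"  # a rotorcraft can descend in place

    print(fallback_action(link_ok=False, gps_ok=False,
                          battery_pct=60, fixed_wing=False))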

IT/ICT artefacts gather data, compute, and disseminate data. But drones do more than that, in that they also act in and on the world. This means that they fall within the general category of robots. The following section accordingly considers lessons arising from the robotics literature.


4. Robotics

The term 'robot' was coined by Czech playwright Karel Čapek in a 1918 short story, and it spread widely with the success of his 1921 play 'R.U.R.', which stood for 'Rossum's Universal Robots'. Predecessor ideas go back millennia, to clay models into which life was postulated to have been breathed by man - as early as Sumeria and as recently as the golem notion of European Jewry. They have included automata (self-moving things) such as water clocks, homunculi (humans created by man other than by natural means), and androids and humanoids (humanlike non-humans) (Geduld & Gottesman 1978, Frude 1984). The tradition that Mary Shelley had created with Frankenstein's Monster was reinforced by the strongly negative connotations in Čapek's play. Hence, the term 'robot' has been commonly used in fictional literature in which the creation destroys the creator.

Asimov's fictional work from 1940-1990 took a different approach: "My robots were machines designed by engineers, not pseudo-men created by blasphemers" (Asimov, quoted in Frude 1984). As a fledgling industry emerged, it was buoyed by his positive message, and he is generally acknowledged as having had a significant impact on key players in the development of robotic technologies. This section considers the key features of both narrow and broader interpretations of the robotics notion, and identifies implications for drone design and operation.

4.1 The Elements of Robotics

Inspection of the many and varied definitions of robots identifies two key elements:

Two further frequently-mentioned elements, which are implied attributes rather than qualifying criteria, are:

For the present discussion, an important attribute is the robot's degree of mobility. The following categories are usefully distinguished:

The discussion of drones in the first paper in this series culminated in the following elements being proposed as being definitional of a drone:

These are consistent with the proposition that a drone is a robot, and hence the findings in the literature on robotics are applicable to drones as well.

There is a tendency for the public, and for some designers, to conceive of robots in human form. This is continually reinforced by the frequent re-announcement by 'inventors', usually enthusiastic Japanese, of 'domestic servant' robots and 'cuddly toy' robots. On the other hand, applying the complementariness principle espoused in the previous section, it is likely to be more advantageous to avoid such pre-conceptions, and instead design robots with one or both of the following purposes in mind:

4.2 Distributed Robotics

In its initial conception, a robot was a single entity, with all elements integrated into the same housing. As communication technologies developed, distance between the elements became less of a constraint on design. Moreover, there are various circumstances in which it is advantageous to separate the elements, e.g. to expose as little as possible of the robot-complex to the risk of damage or even destruction, to reduce the size or weight of the moving parts, or to perform actions across a wider area than can be reached with a single active element. This suggests that a more flexible template comprises "powerful processing capabilities and associated memories in [one or more] safe and stable location[s], communicating with one or more sensory and motor devices (supported by limited computing capabilities and memory) at or near the location(s) where the robot performs its functions" (Clarke 1993).

Drones' onboard computational capabilities and operational facilities, combined with the remote pilot's equipment and other data sources such as GPS satellites, represent a close match to that two-decades-old specification. In the industry's preferred terms, Unmanned Aerial/Aircraft Systems (UAS) and Remotely Piloted Aircraft Systems (RPAS) are forms of distributed robotics.

This broader notion of robotics significantly increases the range of artefacts that are encompassed by the notion. It enables a great many organisational and supra-organisational systems to be classified as robots and studied accordingly. The qualifying condition is that the system comprises interdependent elements that both process data and act in the real world. Examples include computer-integrated manufacturing, just-in-time logistics, automated warehousing systems, industrial control systems, electricity and water supply systems, road traffic management and various other forms of intelligent transportation systems, and air traffic control systems.

Perhaps a drone swarm, but certainly a drone squadron, or a drone network comparable to meteorological, seismic and tsunami data-collection systems, is easily recognisable as fitting within this broader conception of robotics. For example, Segor et al. (2011) describe experiments relating to collaboration among drones, including the autonomous performance of flight-path management and collision avoidance, while overflying an area for mapping purposes.

4.3 Design Challenges

As noted earlier, concern arises about the extent to which software's performance satisfies any requirements statement, and accurately implements any solution specification, that may have been expressed. Clarke (1993) identified a further set of challenges arising from robotic systems, which strike rather more deeply than mere poor-quality programming. Several of these are of direct relevance to drones.

Particularly where later-generation software construction methods are used, there is a need to avoid the computational equivalents of dilemma and 'paralysis by analysis', which potentially arise from equal outcomes, contradiction, deadlock, and deadly embrace, and are capable of leading to what Asimov described as 'roblock' (Asimov 1983).
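
The 'equal outcomes' case can be made concrete with a small sketch (scores and priorities invented for illustration): two candidate actions score identically, and an explicit, auditable tie-break rule is needed if the outcome is to be neither arbitrary nor deadlocked.

    def choose(scored_actions, priority):
        # Naive argmax logic stalls (or chooses arbitrarily) on a tie;
        # an explicit, auditable tie-break rule avoids 'roblock'.
        best = max(score for _, score in scored_actions)
        tied = [a for a, score in scored_actions if score == best]
        tied.sort(key=priority.index)         # prefer the safest action
        return tied[0]

    priority = ["land", "loiter", "proceed"]  # safest first
    print(choose([("proceed", 0.7), ("loiter", 0.7)], priority))   # 'loiter'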

Requirements for the operation of a device are commonly stated in natural-language terms. Some, such as 'the autonomous stabilisation controls must keep the aircraft's attitude within its performance envelope', are capable of being operationalised into detailed specifications. Many others, however, use terms that are ambiguous, context-dependent, value-loaded and/or culturally relative. Great challenges are faced when endeavouring to produce specifications for, say, 'identify acts of violence and follow the individuals who have committed them', or even 'identify and report suspected sharks' (Hasham 2013).

A further issue is that values are embedded in any design, but they are often implicit, and often incidental, rather than arising through conscious intent. Aspects that appear likely to be important in drone design include attitudes to safety, and the operational definitions of concepts such as 'aggressive behaviour'.

4.4 Robot Autonomy

One of the recurrent phrases in dictionary definitions of robotics is 'the performance of physical acts suggested by reason'. This could be seen as a quite minor extension of the longstanding principle of delegation of authority and responsibility to make decisions and take actions, in that the categories of recipient of the delegation now include not only humans and organisations, but also programmed devices.

Examples of robots operating with high degrees of autonomy have included various spacecraft, and 'rovers' landed on the Moon, first by the USSR in 1970, and on Mars by the USA on several occasions since 1997. A significant consideration with robots on spacecraft and Mars is that the distances involved, and the resulting signal-latency, preclude close control from Earth, and hence a high degree of autonomy is essential. However, terrestrial deployment of a substantially autonomous rover is entirely feasible (e.g. Cowan 2013).

The implementation of robotic devices and systems capable of fully-autonomous decision-making and action has been perceived by some to represent a very substantial step for humankind. Asimov depicted a variety of (fictional) circumstances in which "[Robots] have progressed beyond the possibility of detailed human control" (Asimov 1950). A later work encapsulated the condition that pilots find themselves in during automated landing: "For now I must leave you. The ship is coasting in for a landing, and I must stare intelligently at the computer that controls it, or no one will believe I am the captain" (Asimov 1985). Humans lose control to machines where any of the following conditions is fulfilled:

An oft-repeated joke has the world's computers being finally inter-connected and asked the question 'Is there a God?', eliciting the response 'There is now'. The earliest occurrence of the joke that is readily found is over 50 years old (Farley 1962), but the expression in that paper makes no claim of originality. It was therefore clearly coined very soon after computers emerged. In the culminating works in their robotics threads, both Arthur C. Clarke in 'Rendezvous with Rama' (1973), and Asimov in 'Robots and Empire' (1985), envisaged societies in which robots dominate homo sapiens, and in Clarke's case had entirely replaced the species. (Ironically, Asimov's 1985 vision differs from those of the jeremiahs that he decried in the 1940s only in that robots did not need to use force in order to achieve ascendancy over humans). The sci-fi film 'The Matrix' envisaged an in-between state, in which successors to contemporary robots - virtualised, sentient beings - retained humans only because of their usefulness as a source of energy.

That these visions are not limited to comics and artists is attested to by a quotation of a generation earlier, from the founder of the field of cybernetics, Norbert Wiener: "[T]he machines will do what we ask them to do and not what we ought to ask them to do. ... [I]f we move in the direction of making machines which learn and whose behavior is modified by experience, we must face the fact that every degree of independence we give the machine is a degree of possible defiance of our wishes. The genie in the bottle will not willingly go back in the bottle, nor have we any reason to expect them to be well disposed to us" (Wiener 1949, quoted in Markoff 2013).

Writers in creative literatures regard the delegation to machines of decision-making about humans as demeaning to the human race. Social historians, meanwhile, perceive other forces pointing in much the same direction. The heritage of centuries of humanism in 'western' societies is under threat not only from the strong collectivist values of 'eastern' societies that are now enjoying both economic and cultural growth, but also from posthumanist values (Bostrom 2005) and even from the possibility of a technology-driven singularity (Vinge 1993, Moravec 2000). Robots, some in the form of drones, might represent a tipping-point in the transition, threatening at least the meaningfulness of human existence, and even the survival of homo sapiens, due to the rise of roboticus sapiens (a race of 'intelligent robots', as envisaged by Arthur C. Clarke and Asimov) and/or homo roboticus (a race of 'roboticised humans', discussed in the following section). If technological development does continue unabated, the question arises as to whether humans of the mid-to-late 21st century will retain control, cede it in a measured manner, or sleepwalk their way into ceding it unconditionally.

4.5 Conclusions

Discussions of the implications of robotics abound, e.g. Wallach & Allen (2008), Veruggio & Operto (2008) and Lin et al. (2012). Asimov investigated the possibilities of imposing controls in the form of his Three Laws of Robotics, and a deep study of his corpus of works identified several additional laws implicit in his tales (Clarke 1993). However, those propositions, well-known though they are among roboticists and the public more generally, appear to have had little or no effect on the design of robots, on industry or professional codes of conduct, or on regulatory frameworks, anywhere.

As humankind seeks to exploit the potentials that drones offer, a set of principles is needed to provide protection against the potentially very serious harm that can arise from their uncontrolled design and application. Application of the preceding discussion to airborne robots gives rise to the following proposals:

  1. Delegate to drones only structured decisions
    One reason for humans retaining responsibility for unstructured decision making is an arational preference by humans to submit to the judgements of their peers rather than of machines: 'If someone is going to make a mistake costly to a human, better for it to be an understandably incompetent human than a mysteriously incompetent machine'. A second reason, however, is rational: in unstructured contexts, appropriately educated and trained humans may more often make acceptable decisions and/or less often make unacceptable decisions, than would a machine. Using common sense, humans can recognize when conventional approaches and criteria do not apply, and they can introduce conscious value judgements
  2. Ensure that humans remain legally responsible for the consequences of actions, whether or not the actions themselves have been delegated to a drone
    The information technology industry has succeeded in avoiding the extension of product liability laws to software, whereas machines that use software have to date been generally subject to product liability laws. That may come under challenge as the level of drone autonomy increases
  3. Require a human-accessible rationale for decisions made by drones
    This is a vital enabler of human responsibility, and a means of denying accidental delegation of decisions to machines in such a manner that they cannot be understood, and hence cannot be controlled
  4. Mandate design features whereby humans retain sufficient authority over drones' behaviour to fulfil their responsibilities
    This requires that all drones detect boundary-conditions and hand control back to humans, and that humans have the means to revoke any authority delegated to drones. Where no capacity exists to perform a function through means other than a robotic system, a human decision-maker must have the capacity to willingly forego the performance of that function
  5. Mandate design features that achieve fallback, fail-safe operation under all circumstances
    This requires that all drones detect equipment and communications malfunctions, and default to actions that have been planned to minimise the likelihood of harm
  6. Educate humans to appreciate the limitations of robotic systems
    Humans responsible for drones need to keep in mind such key concepts as:

A further body of critical literature is relevant, not directly to drones, but to their remote pilots and facilities operators. The following section considers relevant aspects of technological enhancements to human beings.


5. Cyborgism

This section considers a further body of work, still emergent, which relates to the interfacing and/or integration of computing and machine-elements to humans, and in particular to humans who exercise control over drone functions. Currently, most technological tools are external to humans, but some are very close to the physical person, and interventions with, and implantation into, the human body are emergent.

5.1 Prosthetes, Orthots and Cyborgs

The term 'cyborg' originally referred to an enhanced human being who could survive in extraterrestrial environments (Clynes & Kline 1960). The notion can be traced further back, of course, e.g. "humans involved in colonizing space should take control of their evolutionary destiny through genetic engineering, prosthetic surgery, and hard-wired electric interfaces between humans and machines that would allow them to attach a new sense organ or ... a new mechanism ..." (Bernal 1929, p. 26, quoted in Gray 1999).

Drawing on sources as diverse as the OED, Mann & Niedzviecki (2001) and FIDIS (2008), the definitional features of a cyborg are that:

In order to establish a sufficient basis for analysis of the impacts of cyborgisation, Clarke (2005, 2011) proposed that a number of distinctions be made. The first is a cluster of definitions relating to prostheses:

The second set of terms relates to the more challenging circumstances in which the intervention goes beyond merely making good a deficiency in comparison with human norms:

Building on those two definitions, it is possible to bring some precision to the notion 'cyborg':

Endo-orthoses were for a time only evident as a staple element in sci-fi short stories and novels; but that has ceased to be the case. The first (temporary) pacemaker was inserted into a patient in Australia in 1928, and the first pacemaker implantation was performed in Sweden in 1958. Imposition of RFID-enabled exo-orthoses (initially, anklets) began in the USA in 1983. As late as 1991, chip implantation in humans was widely seen as being fanciful, and discussion of it in the technical literature was regarded by at least one journal editor as unprofessional. Yet implantation in animals had already commenced in about 1990, and the first in humans occurred no later than 1998 (Masters & Michael 2006, Michael & Michael 2009). A mere 15 years after being dismissed as irresponsible speculation, RFID chips were set to be applied as endo-orthoses (Foster & Jaeger 2007).

The intellectual distinction between 'making good' and 'making better' is arguably valuable; but it is not always easily operationalised. The most-celebrated cyborg, Oscar Pistorius, having established that he was a prosthete and not an orthot, and hence qualified to compete in the Olympic Games, complained that a similarly equipped amputee, who competed against him in the Paralympics, was an orthot - on the basis that his competitor's somewhat longer artificial legs made his stride longer than that of a person of equivalent (presumed) height (Bull 2012).

The following section shows how the cyborg notion is directly applicable to individuals who perform the functions of drone pilot and facilities operators.

5.2 Drone Pilot Cyborgisation

Drone pilots rely on data-feeds, but the large volume of data may be pre-processed, analysed and presented in various forms, to assist in the visualisation of the physical reality. Examples include multi-screen displays, split-screen displays, head-up displays, window-size manipulation, image-zoom functions, 3D projections of the space around the drone, image enhancement, and overlays. To compose and communicate their commands to the drone, pilots and operators may use keyboards, buttons, knobs, pointing devices such as mice, roller-balls and track-pads, joysticks, gesture-based interfaces and wired gloves. These are all variously external orthoses and exo-orthoses. Under the definitions presented in the previous section, drone operators are cyborgs.

A longstanding area of exo-orthosis development is wearable computing and in particular wearcams (Mann 1997). This has resulted in both a significant experimental base in academe, and a substantial socio-technical following that self-identifies as cyborg-loggers or 'gloggers'. (The term appears to have been perceived by at least Wikipedia's editors to be so notorious that the Wikipedia page has been deleted). A latecomer to the wearcam field, Google Glass, has the advantage of capital resources, and a followerdom whose motivations are more strongly economic than social, and whose attitude is even less circumspect than Mann's most committed devotees.

Although conceived as a means to enhance the experience of an individual within their current environment, many of the capabilities of wearcams involve feeds of remote data, and some extend to the ability to influence remote activities. What began as very cumbersome, external orthoses have become highly convenient, form-fitting (and even stylish) exo-orthoses. In the quintessential cyberpunk novel, Neuromancer, Molly Millions' eye-lids were replaced with screens that provide vision-enhancements (Gibson 1984). A variety of less dramatic endo-orthoses may not be far away.

Also not far away may be workable combinations of the human-computer interface features mentioned above which deliver comprehensive `tele-presence' or `virtual reality' capabilities to drone pilots and facilities operators. As long ago as 1994, Bruce Sterling's sci-fi novel 'Heavy Weather' built on the notion of 'storm followers', blending the motivations of environmentalists and thrill-seekers, and in the process describing future weather-surveillance drones. His 'ornithopters' were "hollow-boned winged flying drones with clever foamed-metal joints and a hide of individual black plastic 'feathers' ... [and] binocular videocams at the width of human eyes ... [that also] measure temperature, humidity, and wind-speed" (Sterling 1994, pp. 72-73, 88). The pilot's controls comprised "goggles, headphones, his laptop, and a pair of ribbed data gloves ... [enabling] aerial telepresence ... gently flapping his fingertips ... groping at empty air like a demented conjurer" (pp.75, 82). An engineering text published less than a decade later discussed devices with "the ability to 'feel' virtual or remote environments in the form of haptic interfaces" (Bar-Cohen & Breazeal 2003, p. xiv). The principles can be applied not only to gloves, but also to conventional control devices, in particular joysticks. On both the receptor and effector sides of remote control of drone behaviour, a transition is in train from external to at least exo-orthoses, with some forms of endo-orthoses in short-term prospect.

This section has described the cyborgisation of drone controllers through physical enhancements; but the psychological dimension is relevant as well. A remote pilot who remains close to the drone they are controlling has 'less skin in the game' than an onboard pilot. A distantly remote pilot operates at an even further remove, not only in space, but also in the sense of being detached from the local context and cultures. In both cases, elements of a 'computer games mentality' may creep in: operators perform in contexts similar to those used in 'shoot 'em up' games. There is a considerable risk that the psychological and social constraints that operate in the person's real world will be dulled, and hence that some degree of de-humanisation will occur. An interview with a retired USAF drone observer and pilot reported that "when flying missions, he sometimes felt himself merging with the technology, imagining himself as a robot, a zombie, a drone itself" (Power 2013).

5.3 Cyborg Rights

Discussion has already commenced about whether robots might gain legal rights and responsibilities. A cyborg is a human at the outset, and does not cease to be a human as a result of prosthetic function-recovery or of orthotic enhancement. It is therefore to be expected that a cyborg would have all of the rights and responsibilities of persons generally.

An investigation was undertaken into the kinds of rights that humans might claim in order to protect themselves against cyborgisation, and that cyborgs might seek, above and beyond those of a normal human (Salleh 2010, Clarke 2011). Also in 2011, but independently, the 'Cyborg Foundation' was established in Catalunya to, among other things, "defend cyborg rights".

Some claims of rights are justified on the basis of achieving an equitable outcome for otherwise disadvantaged individuals. For example, there are already circumstances in which a human can claim a right to a quality-of-life external prosthesis such as a hearing-aid, and there is an even stronger argument for a right to a life-and-death endo-orthosis such as a pacemaker. A likely further cyborg claim is for legal authority to make use of enhanced functionality, such as night-vision, faster running-speed or stronger grip. The claim might be advanced in the first instance by, or on behalf of, people engaged in warfare or law enforcement. It is a very small step from there to civilian security roles and civilian investigations (e.g. by 'private detectives', and by journalists); and on to environmental, animal rights, consumer and other activists, sportspeople and hobbyists.

The significance of this for drone operators is that decisions about, for example, pursuit or arrest, or about whether or not to fly along a path or into a space, may be made by a drone operator while under the influence of a virtual reality environment. People affected by the drone's behaviour may seek to sue or prosecute the drone operator (e.g. for negligence, harassment, false imprisonment, trespass, or unlawful interference with an authorised service such as search and rescue or fire-fighting); whereas the drone operator may counter that they have a right to rely on data-feeds, pattern-based inferencing software, image-enhancement and visualisation software.

One further literature is considered in the following section because of its relevance to the impact of drones on behavioural privacy.


6. Surveillance

From the outset, a prominent category of applications of both military and civilian drones has been observation and data gathering. The literature on surveillance technology and practices may therefore also enhance the framework within which drones' impacts and implications, and the regulation of drones, can be analysed.

Surveillance is a general concept that involves a watch kept over some target, over a period of time. The target may be a location or one or more objects, including human beings. A great many highly valuable applications exist, and many of them may have limited and manageable negative impacts, or might be readily designed with controls and mitigating features that satisfactorily address the negative impacts. Surveillance targeted at humans, usefully defined as "the systematic investigation or monitoring of the actions or communications of one or more persons" (Clarke 1988), can also have considerable value, but the negative impacts are very likely to be much more substantial, and much more difficult to manage (Lyon 1994, 2008).

Surveillance has taken various forms (Clarke 2009, 2013b). Until recent centuries, monitoring people was of necessity a human-intensive activity, involving watching and listening, and its resource-intensiveness limited its use. Technologies were developed to enhance visual surveillance (such as telescopes) and aural surveillance (such as directional microphones). Photography emerged to provide visual recordings, and parallel developments enabled sounds to be stored and accessed retrospectively.

Communications surveillance began with interception of the post: the copying of the outside of a mailed item, known as a 'mail cover', has since 2001 been implemented in the USA as a mass surveillance method. Electronic surveillance developed in lock-step with the telegraph in the mid-19th century (Peterson 2012), and has been embedded in each new development, through telephone and telex, and on to the Internet. Whereas the physical post has enjoyed very strong protections for the contents of an envelope, the succession of electronic surveillance techniques has shown decreasing respect for human rights. The facts of message transmission (originally 'call data', now sometimes misleadingly called 'metadata') have been treated by national security and law enforcement agencies as though their interception had no civil liberties implications. Moreover, these agencies have in many countries taken advantage of the accidental conversion of communications from ephemera to stored messages to gain access to a great deal of the content of human communications.
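
To illustrate why traffic data is far from innocuous, the following sketch builds a crude contact tally from call-data records. The record layout and the data are hypothetical.

    # A minimal sketch of why 'metadata' has civil liberties implications:
    # even without any message content, traffic records support the
    # construction of a social graph. The record layout is hypothetical.
    from collections import Counter

    call_records = [
        # (caller, callee, timestamp, duration_seconds)
        ("A", "B", "2014-01-05T09:12", 340),
        ("A", "B", "2014-01-06T09:15", 280),
        ("A", "C", "2014-01-06T23:50", 60),
        ("B", "C", "2014-01-07T00:05", 45),
    ]

    # Tally who communicates with whom, and how often.
    contacts = Counter((caller, callee) for caller, callee, _, _ in call_records)
    for (caller, callee), n in contacts.most_common():
        print(f"{caller} -> {callee}: {n} calls")
    # Repeated A->B morning calls, and a late-night A->C call followed
    # minutes later by B->C, already suggest relationships and coordination.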

Dataveillance emerged from the mid-twentieth century onwards, as computing was applied to the management of personal data (Clarke 2013). It has grown dramatically in intensity and extensiveness. Real-time data about individuals' locations has become readily available only since the emergence of ATMs in the 1970s and EFTPOS in the early 1980s. Location monitoring became massively more intensive and intrusive following the adoption of the mobile phone, in analogue form during the 1980s, in digital form from the beginning of the 1990s, and especially since the widespread reporting of GPS-derived coordinates from 2005 onwards and the advent of wifi network-based location techniques such as Skyhook from about 2007 (Clarke & Wigan 2011, Michael & Clarke 2013, Stanley 2013).

A further form of surveillance technology involves direct observation and tracking of a feature of a person, of an artefact that is very closely associated with a person, or of an artefact that is embedded within the person's body. Increasingly, the prospect of 'überveillance' needs to be considered, in several senses (Michael & Michael 2007). The scope exists for surveillance to be applied across all space and all time (omni-presence), enabling an organisation to become all-seeing and even all-knowing (omniscience), at least relative to some object, place, area or person. Location monitoring, combined with data from other forms of surveillance, provides 'big data' collections to which 'data mining' techniques can be applied, in order to draw a wide array of inferences about the behaviour, interests, attitudes and intentions of individuals, of which some are reasonably accurate, some inaccurate, and some simply spurious (Wigan & Clarke 2013). A visual representation of the level of intrusiveness that is already being achieved is provided in Stanley (2013).
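
A minimal sketch of such inference-drawing follows: given only timestamped location fixes, a likely home location emerges as the modal overnight position. The data, the grid resolution and the choice of hours are all hypothetical.

    # A minimal sketch of location-data mining: given timestamped location
    # fixes, infer a likely home location as the modal overnight position.
    from collections import Counter

    fixes = [
        # (hour_of_day, latitude, longitude)
        (1, -35.2810, 149.1290), (2, -35.2811, 149.1289),
        (3, -35.2810, 149.1291), (14, -35.3075, 149.1244),
    ]

    def infer_home(fixes, night_hours=range(0, 6)):
        # Round coordinates to roughly 100 m cells, count overnight presence.
        cells = Counter(
            (round(lat, 3), round(lon, 3))
            for hour, lat, lon in fixes if hour in night_hours
        )
        return cells.most_common(1)[0][0] if cells else None

    print(infer_home(fixes))  # -> (-35.281, 149.129)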

Impact analysis of surveillance activities needs to reflect key aspects, including 'of what?', 'for whom?', 'by whom?', 'why?', 'how?', 'where?' and 'when?' (Clarke 2009). Impacts on individuals need to take into consideration the various dimensions of privacy (Clarke 2006). The discipline of Surveillance Impact Analysis is emerging to address these issues (Wright & Raab 2012).

Most discussions of privacy protections are limited to the privacy of personal data, which in many countries is subject to at least some limited form of data protection law. Privacy of personal communications is also subject to laws, but, particularly in the context of computer-based networks, these are continually shown to be very weak. Privacy of the physical person has come under increasing assault in recent decades, with impositions such as fingerprinting and other forms of biometrics, and body-fluid acquisition for such purposes as substance-abuse testing and genetic testing. Involuntary or coerced imposition of exo-orthoses (such as RFID anklets) and implantation of endo-orthoses add to the threats, as does exploitation of orthoses adopted willingly for other purposes (such as mobile phones, tablets and wrist-worn appliances). Pursuit drones can be designed to take advantage of RFID tags planted in vehicles, in items commonly carried by the individual, and in the physical person.

Privacy of personal behaviour is of particular relevance to this analysis, because drones enable a substantial change in surveillance capabilities, significantly altering the nature and reach of visual surveillance.

Drones can be applied not only to visual surveillance, but also to the monitoring of many characteristics of a wide range of phenomena, using near-human-visible image and video (in particular in the infra-red spectrum), radio and other electronic transmissions, sound in the human-audible spectrum, air-pressure waves of other frequencies, biological measures, magnetic and other geophysical data, and meteorological data.

The scope exists for correlation with other sources, such as government and private sector databases, human communications, social networking services, and interests evident from text that an individual has (electronically) read and images that they have viewed. Opportunities exist not only for the pursuit of people who have arguably committed a criminal act, but also for the interception of people who are inferred to intend the commission of a criminal act, en route to the inferred scene of their inferred intended crime. Many false inferences are inevitable, giving rise to a great deal of collateral harm to individuals' rights and more generally to social and political processes.
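
The inevitability of false inferences follows from simple base-rate arithmetic, as the following worked sketch shows. The sensitivity, specificity and base-rate figures are hypothetical, and deliberately generous to the surveillance system.

    # A worked base-rate example of why false inferences are inevitable.
    # Assume (hypothetically) a classifier that flags intending offenders
    # with 99% sensitivity and 99% specificity, and that 1 person in
    # 10,000 actually intends an offence.
    population = 1_000_000
    base_rate = 1 / 10_000
    sensitivity = 0.99   # P(flagged | intends offence)
    specificity = 0.99   # P(not flagged | innocent)

    offenders = population * base_rate                           # 100
    true_flags = offenders * sensitivity                         # 99
    false_flags = (population - offenders) * (1 - specificity)   # ~9,999

    precision = true_flags / (true_flags + false_flags)
    print(f"Flagged in error: {false_flags:,.0f}")
    print(f"Chance a flagged person is an offender: {precision:.1%}")  # ~1%

Even with accuracy figures far beyond anything achievable in practice, roughly ninety-nine out of every hundred people flagged would be innocent.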

One category of impact will be on people who are wrongly accused and whose quiet enjoyment is unjustifiably interfered with. Further, with such massive intrusions into human rights, it is inevitable that some individuals will become sullen acceptors of their constrained fate in a collectivist surveillance society. Others will seek ways to avoid, subvert and fight back against the impositions. Steve Mann's wearcam innovations were specifically intended to enable monitoring of the powerful by the weak. Rather than 'sur'veillance from above, Mann writes about 'sous'veillance, from below, reflecting the bottom-up nature of citizen and consumer use of wearcams (Mann et al. 2003, Mann 2009). As the industrialisation of wearcams begins, initiated by Google Glass, applications to both sur- and sous-veillance will abound, and there will be loud demands for a new balance to be established (Clarke 2014).

Substantial surveillance threats to free society already exist, and drones are adding a further dimension. By looming above people, and by following them relatively unhindered in comparison with terrestrial stalking, drones could well usurp the CCTV camera as the popular symbol for the surveillance society. The prospect of surveillance drones being human-controlled is chilling enough. The idea that drones may operate as autonomous guardian angels-and-devils, making decisions independently of human operators, adds to the aura of dehumanisation.


7. Conclusions

A drone is a computer that flies, and has the capability to take action in the real world. In addition to having potentials unique to themselves, drones therefore inherit limitations from computers, from data communications, and from robots. Drone pilots and facilities operators, meanwhile, are cyborgs, dependent on devices to enable them to visualise their drone's context and exercise control over its behaviour. Surveillance is a secondary function of drones generally, and the primary function of many of them. Insights from the surveillance literature assist in appreciating drones' impacts and implications.

Computers have great strengths in dealing with structured decision-making. On the other hand, unstructuredness has many dimensions. The architecture of digital computers cannot reflect the subtleties and ambiguities of natural languages. It cannot cope with the existence of multi-stakeholder contexts, which are incompatible with a simple objective function that can be (mathematically) optimised or at best satisficed. The richness of complex realities cannot be adequately reduced to simplistically-structured models. Hence the technocratic approach of 'the machine thinks, therefore I should let it make decisions' results in decision-making processes that are unacceptably deficient. Except where decision-making is highly structured, computing must be conceived as a complement to human decision-making and not as a substitute for it. This applies to the computers in drones as it does to all other computers.
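
A minimal illustration of the multi-stakeholder point follows: each choice of weights in a composite objective function yields a different 'optimal' action, and the weights are themselves a contested value judgement rather than a technical datum. The options and scores are hypothetical.

    # Each choice of weights yields a different 'optimal' flight path, and
    # the weights themselves are a value judgement, not a technical datum.
    options = {
        # path: (operator benefit, residents' privacy, bystander safety)
        "direct over houses": (9, 2, 4),
        "detour along road":  (6, 7, 6),
        "no flight":          (0, 9, 9),
    }

    def best(weights):
        return max(options,
                   key=lambda k: sum(w * s for w, s in zip(weights, options[k])))

    print(best((1.0, 0.1, 0.1)))  # operator-weighted  -> "direct over houses"
    print(best((0.1, 1.0, 1.0)))  # community-weighted -> "no flight"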

During the early decades of computing, an algorithm was essential before a computer could be applied to a category of problems. Although highly complex problems require concentration, humans can 'play computer' and thereby re-construct the reason for any particular decision. With 5th generation approaches, on the other hand, the focus has shifted from a problem and its solution to a model of a problem-domain, and no clear specification of the problem exists. With 6th generation approaches, even the model of the problem-domain is implicit. Under those circumstances, no rationale underlies a decision. If determinations are made in such ways, they can be neither explained nor meaningfully justified. The delegation of important decisions affecting people to a device that functions in such ways is fraught with dangers, including where the device is on board a drone or part of a drone's supporting infrastructure.
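
The contrast can be made concrete with a small sketch: the rule-based decision below can be traced and justified clause by clause, whereas the stand-in for a trained model yields only a score, with no articulable rationale. Both examples are hypothetical.

    # A sketch of the explainability contrast between algorithmic and
    # model-based (5th/6th generation) decision-making.

    def rule_based_decision(altitude_m, over_crowd):
        # Each clause is an inspectable, citable reason for the outcome.
        if over_crowd:
            return "DENY: operation above assembled people is prohibited"
        if altitude_m > 120:
            return "DENY: above the 120 m ceiling"
        return "PERMIT"

    def learned_decision(features, weights):
        # A stand-in for a trained model: the weights were fitted from
        # examples, so no clause exists that 'explains' the score.
        score = sum(f * w for f, w in zip(features, weights))
        return "DENY" if score > 0.5 else "PERMIT"

    print(rule_based_decision(90, over_crowd=False))            # PERMIT, with reasons
    print(learned_decision([0.3, 0.9, 0.1], [0.4, 0.5, 0.2]))   # DENY, but why?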

Drones are utterly dependent on local data-feeds from their sensors, and remote data-feeds and control-feeds. To cope with interference and loss of data, it is essential that drones have contingent fail-soft operations designed-in.
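
A minimal sketch of such designed-in fail-soft behaviour follows: on loss of the control link, the drone degrades gracefully rather than continuing blind. The thresholds and recovery actions are hypothetical design choices, not a description of any particular product.

    # Fail-soft handling of control-link loss, as a simple state function.
    import time

    LINK_TIMEOUT_S = 2.0    # no command packets for this long -> degraded
    LOITER_LIMIT_S = 30.0   # loiter this long awaiting link recovery

    def failsafe_state(last_packet_time, now=None):
        now = time.monotonic() if now is None else now
        silence = now - last_packet_time
        if silence < LINK_TIMEOUT_S:
            return "NORMAL"          # obey remote commands
        if silence < LOITER_LIMIT_S:
            return "LOITER"          # hold position, attempt reconnection
        return "RETURN_TO_HOME"      # autonomous recovery to launch point

    print(failsafe_state(last_packet_time=0.0, now=1.0))   # NORMAL
    print(failsafe_state(last_packet_time=0.0, now=45.0))  # RETURN_TO_HOME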

Robotics can be seen either as extending a machine by adding computational capabilities, or as extending a computer by adding mechanical capabilities. Beyond standalone devices, many systems that manage manufacturing processes, logistics and energy reticulation need to be analysed as forms of distributed robotics. The many design challenges for effective robotic systems include the need for robot behaviour to be subject to constraints, and robot autonomy to be subject to supervision, testing for violations of boundary-conditions, and a switch whereby manual control can be quickly and smoothly resumed. The need for these features is greater for mobile robots than for static ones, and all the more important in the case of aerially mobile drones.
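
Two of these features, boundary-condition testing and a switch for prompt resumption of manual control, are sketched below. The geometry and class structure are hypothetical.

    # A geofence test applied to every commanded move, plus a manual
    # override that suspends autonomy immediately.

    class SupervisedDrone:
        def __init__(self, fence_centre, fence_radius_m):
            self.cx, self.cy = fence_centre
            self.radius = fence_radius_m
            self.manual_override = False

        def request_move(self, x, y):
            if self.manual_override:
                return "MANUAL"      # autonomy suspended; human has control
            # Reject any autonomous command that violates the geofence.
            if ((x - self.cx) ** 2 + (y - self.cy) ** 2) ** 0.5 > self.radius:
                return "REFUSED: outside geofence"
            return "MOVING"

        def take_manual_control(self):
            # Must take effect immediately, regardless of autonomous state.
            self.manual_override = True

    d = SupervisedDrone(fence_centre=(0, 0), fence_radius_m=500)
    print(d.request_move(100, 200))   # MOVING
    print(d.request_move(400, 400))   # REFUSED (distance ~566 m)
    d.take_manual_control()
    print(d.request_move(100, 200))   # MANUAL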

Cyborgisation involves humans being subject to interventions that replace missing functionality (prostheses) or provide new or enhanced functionality (orthoses). Drone pilots are heavily dependent on external orthoses, and increasingly exo-orthoses, to deliver data-feeds and render them in human-usable forms, and to enable them to exercise control over their vehicle. Endo-orthoses, with tighter integration with human cognition and effectors, are emerging. These facilities create a dream-world, whose intensity, and whose detachment from physical reality, are exacerbated by the use of similar facilities for games. This creates a serious risk of drone pilot behaviour lacking the natural controls of personal conscience, professional responsibility and social mores.

Surveillance has become increasingly extensive and intensive, approaching pervasiveness and continuousness. Drones greatly expand the scope for surveillance in the visual spectrum and beyond, and for contributing streams of content to support data surveillance. Such applications of drones threaten substantial negative impacts on personal, social, economic and political behaviour. Technological ingenuity has broken the natural controls of availability and expense. It would be naive to anticipate that 'sous'veillance, from beneath, can provide a sufficient degree of counterbalance against 'sur'veillance by powerful organisations.

Together, the perspectives offered by these critical literatures provide a depth of appreciation of the nature of drones. The third and fourth articles in the series build on the first two by addressing the two most serious clusters of threats arising from drone design and deployment - to public safety and to behavioural privacy - and assessing the extent to which current and prospective regulatory arrangements appear likely to manage the threats.


References

Asimov I. (1950) 'The Evitable Conflict' (originally published in 1950), reprinted in Asimov I. 'I, Robot' Grafton Books, 1968, pp. 183-206

Asimov I. (1983) 'The Robots of Dawn' Grafton Books, 1983

Asimov I. (1985) 'Robots and Empire' Grafton Books, 1985

Bar-Cohen Y. & Breazeal C. (eds.) (2003) 'Biologically-Inspired Robots' SPIE - The International Society for Optical Engineering, 2003

Beer S. (1973) 'Fanfare for Effective Freedom: Cybernetic Praxis in Government' Lecture, 14 February 1973, reprinted in Beer S. 'Platform for Change' Wiley, 1975, pp. 421-452

Bolter J.D. (1986) 'Turing's Man: Western Culture in the Computer Age' University of North Carolina Press, 1984; Pelican, 1986

Bostrom N. (2005) 'Transhumanist Values' Review of Contemporary Philosophy 4 (May 2005), at http://www.nickbostrom.com/ethics/values.pdf

Bull A. (2012) 'Oscar Pistorius angry at shock Paralympics 200m loss' The Guardian, 3 September 2012, at http://www.theguardian.com/sport/2012/sep/03/paralympics-oscar-pistorius-angry-loss

Clarke A.C. (1973) 'Rendezvous with Rama' Victor Gollancz, 1973

Clarke R. (1988) 'Information Technology and Dataveillance' Comm. ACM 31, 5 (May 1988), re-published in Dunlop C. & Kling R. (eds.) 'Controversies in Computing' Academic Press, 1991, at http://www.rogerclarke.com/DV/CACM88.html

Clarke R. (1989) 'Knowledge-Based Expert Systems: Risk Factors and Potentially Profitable Application Areas' Xamax Consultancy Pty Ltd, January 1989, at http://www.rogerclarke.com/SOS/KBTE.html

Clarke R. (1991) 'A Contingency Approach to the Application Software Generations' Database 22, 3 (Summer 1991) 23-34, at http://www.rogerclarke.com/SOS/SwareGenns.html

Clarke R. (1992) 'Extra-Organisational Systems: A Challenge to the Software Engineering Paradigm' Proc. IFIP World Congress, Madrid (September 1992), at http://www.rogerclarke.com/SOS/PaperExtraOrgSys.html

Clarke R. (1993) 'Asimov's Laws of Robotics: Implications for Information Technology' IEEE Computer 26, 12 (December 1993) 53-61 and 27, 1 (January 1994) 57-66, at http://www.rogerclarke.com/SOS/Asimov.html

Clarke R. (2005) 'Human-Artefact Hybridisation: Forms and Consequences' Proc. Ars Electronica 2005 Symposium on Hybrid - Living in Paradox, Linz, Austria, 2-3 September 2005, at http://www.rogerclarke.com/SOS/HAH0505.html

Clarke R. (2006) 'What's 'Privacy'? Workshop Presentation for the Australian Law Reform Commission, Xamax Consultancy Pty Ltd, July 2006, at http://www.rogerclarke.com/DV/Privacy.html

Clarke R. (2009) 'A Framework for Surveillance Analysis' Xamax Consultancy Pty Ltd, August 2009, at http://www.rogerclarke.com/DV/FSA.html

Clarke R. (2011) 'Cyborg Rights' IEEE Technology and Society 30, 3 (Fall 2011) 49-57, at http://www.rogerclarke.com/SOS/CyRts-1102.html

Clarke R. (2013) 'From Dataveillance to Ueberveillance' Interview, in Michael K. & Michael M.G. (eds.) 'Uberveillance: Social Implications', IGI Global, 2013, at http://www.rogerclarke.com/DV/DV13.html

Clarke R. (2014) 'The Regulation of Point of View Surveillance: A Review of Australian Law' Forthcoming in IEEE Technology & Society 33, 2 (Jun 2014), PrePrint at http://www.rogerclarke.com/DV/POVSRA.html

Clarke R. & Wigan M.R. (2011) 'You Are Where You've Been: The Privacy Implications of Location and Tracking Technologies' Journal of Location Based Services 5, 3-4 (December 2011) 138-155, at http://www.rogerclarke.com/DV/YAWYB-CWP.html

Clynes M.E. & Kline N.S. (1960) 'Cyborgs and Space' Astronautics, September 1960, pp. 26-27 and 74-75; reprinted in Gray, Mentor, and Figueroa-Sarriera, eds. 'The Cyborg Handbook' New York: Routledge, 1995, pp. 29-34

Cowan P. (2013) 'CSIRO looks to robots to transform satellite accuracy' it News, 30 September 2013, at http://www.itnews.com.au/News/358605,csiro-looks-to-robots-to-transform-satellite-accuracy.aspx

Davis J.M. (2001) 'An Ambient Computing System' University of Kansas, 2001, at http://fiasco.ittc.ku.edu/research/thesis/documents/jesse_davis_thesis.pdf

Dreyfus H.L. (1972) 'What Computers Can't Do' Harper & Row, 1972, at http://archive.org/stream/whatcomputerscan017504mbp/whatcomputerscan017504mbp_djvu.txt

Dreyfus H.L. (1992) 'What Computers Still Can't Do: A Critique of Artificial Reason' MIT Press, 1992. A revised and extended edition of Dreyfus (1972)

Dreyfus H.L. & Dreyfus S.E. (1986) 'Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer' Free Press, 1986

Farley E. (1962) 'The Impact of Information Retrieval on Law Libraries' U. Kan. L. Rev. 11, 3 (1962-1963) 331-342

Feigenbaum E. & McCorduck P. (1983) 'The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World' Michael Joseph, 1983

FIDIS (2008) 'A Study on ICT Implants' D12.6, Future of Identity in the Information Society, 30 September 2008, at http://www.fidis.net/fileadmin/fidis/deliverables/fidis-wp12-del12.6.A_Study_on_ICT_Implants.pdf

Foster K.R. & Jaeger J. (2007) 'RFID Inside' IEEE Spectrum, March 2007, at http://www.spectrum.ieee.org/mar07/4939

Frude N. (1984) 'The Robot Heritage' Century Publishing, 1984

Geduld H.M. & Gottesman R. (eds) (1978) 'Robots, Robots, Robots' New York Graphic Soc., 1978

Gershenfeld N., Krikorian R. & Cohen D. (2004) 'The Internet of Things' Scientific American, October 2004, p. 76

Gibson W. (1984) 'Neuromancer' Grafton/Collins, London, 1984

Goleman D. (1996) 'Emotional intelligence: Why it can matter more than IQ' Random House Digital, Inc.

Gray C.H. (2001) 'Cyborg Citizen' Routledge, 2001

Hartree D.R. (1949) 'Calculating Instruments and Machines' New York, 1949 (as cited in Turing 1950)

Hasham N. (2013) 'NSW government looks to drones for shark attack prevention' The Sydney Morning Herald, 11 December 2013, at http://www.smh.com.au/nsw/nsw-government-looks-to-drones-for-shark-attack-prevention-20131210-2z44y.html

Hossain M.A., Atrey P.K. & El Saddik A. (2007) 'Modeling Quality of Information in Multi-sensor Surveillance Systems' Proc. IEEE 23rd International Conference on Data Engineering Workshop, April 2007, pp. 11 - 18, at http://www.mcrlab.uottawa.ca/index.php?option=com_docman&task=doc_download&gid=242&Itemid=66

Kurzweil R. (2005) 'The Singularity is Near' Viking Books, 2005

Lin P., Abney K. & Bekey G.A. (eds.) (2012) 'Robot Ethics: The Ethical and Social Implications of Robotics' MIT Press, 2012

Lyon D. (1994) 'The Electronic Eye: The Rise of Surveillance Society' University of Minnesota Press, 1994

Lyon D. (2008) 'Surveillance Society' Talk for Festival del Diritto, Piacenza, Italia, 28 September 2008, at http://www.festivaldeldiritto.it/2008/pdf/interventi/david_lyon.pdf

McCorduck P. (1979) 'Machines Who Think' W.H. Freeman, 1979

Mann S. (1997) 'An historical account of the 'WearComp' and 'WearCam' inventions developed for applications in 'Personal Imaging'' Proc. ISWC, 13-14 October 1997, Cambridge, Massachusetts, pp. 66-73, at http://www.wearcam.org/historical/

Mann S. (2009) 'Sousveillance: Wearable Computing and Citizen 'Undersight' - Watching from Below Rather Than Above' h+ Magazine, 10 July 2009, at http://www.hplusmagazine.com/articles/politics/sousveillance-wearable-computing-and-citizen-undersight

Mann S. & Niedzviecki H. (2001) 'Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer' Random House, 2001

Mann S., Nolan J. & Wellman B. (2003) 'Sousveillance: Inventing and Using Wearable Computing Devices...' Surveillance & Society 1, 3 (2003) 331-355, at http://www.surveillance-and-society.org/articles1(3)/sousveillance.pdf

Markoff J. (2013) 'In 1949, He Imagined an Age of Robots' The New York Times, 20 May 2013, at http://www.nytimes.com/2013/05/21/science/mit-scholars-1949-essay-on-machine-age-is-found.html?

Masters A. & Michael K. (2006) 'Lend me your arms: the use and implications of humancentric RFID' Electronic Commerce Research and Applications 6,1 (2006) 29-39, at http://works.bepress.com/kmichael/40

Medina E. (2006) 'Designing Freedom, Regulating a Nation: Socialist Cybernetics in Allende's Chile' J. Lat. Amer. Stud. 38 (2006) 571-606, at http://elclarin.cl/web/images/stories/PDF/edenmedinajlasaugust2006.pdf

Michael K. & Clarke R. (2013) 'Location and Tracking of Mobile Devices: Überveillance Stalks the Streets' Computer Law & Security Review 29, 3 (June 2013) 216-228, at http://www.rogerclarke.com/DV/LTMD.html

Michael M.G. & Michael K. (2007) 'Überveillance: 24/7 x 365 People Tracking and Monitoring' Proc. 29th International Conference of Data Protection and Privacy Commissioner, at http://www.privacyconference2007.gc.ca/Terra_Incognita_program_E.html

Michael M.G. & Michael K. (2009) 'Uberveillance: Microchipping People and the Assault on Privacy' Quadrant LIII, 3 (March 2009) 85-89, at http://ro.uow.edu.au/cgi/viewcontent.cgi?article=1716&context=infopapers

Miller D.W. & Starr M.K. (1967) 'The Structure of Human Decisions' Prentice-Hall, 1967

Moravec H. (2000) 'Robot: Mere Machine to Transcendent Mind' Oxford University Press, 2000

Newell A. & Simon H.A. (1972) 'Human Problem-Solving' Prentice-Hall, 1972

NS (1980) 'Computers That Learn Could Lead to Disaster' New Scientist, 17 January 1980, at http://www.newscientist.com/article/dn10549-computers-that-learn-could-lead-to-disaster.html

Peterson J.K. (2012) 'Understanding Surveillance Technologies: Spy Devices, Privacy, History, and Applications' Auerbach Publications, 2007, 2012

Power M. (2013) 'Confessions of a Drone Warrior' GQ Magazine, 23 October 2013, at http://www.gq.com/news-politics/big-issues/201311/drone-uav-pilot-assassination

Roszak T. (1986) 'The Cult of Information' Pantheon 1986

Salleh A. (2010) 'Cyborg rights 'need debating now'' ABC News, 4 June 2010, at http://www.abc.net.au/science/articles/2010/06/04/2916443.htm

Samsonovich A.V. (2012) 'An Approach to Building Emotional Intelligence in Artifacts' Technical Report on Cognitive Robotics WS-12-06, Association for the Advancement of Artificial Intelligence, 2012, at http://www.aaai.org/ocs/index.php/WS/AAAIW12/paper/viewFile/5338/5581

Segor F., Bürkle A., Kollmann M. & Schönbein R. (2011) 'Instantaneous Autonomous Aerial Reconnaissance for Civil Applications: A UAV based approach to support security and rescue forces' Proc. 6th Int'l Conf. on Systems (ICONS), at http://link.springer.com/article/10.1007/s10846-010-9492-x

SEP (2010) 'Connectionism' Stanford Encyclopedia of Philosophy, 27 Jul 2010, at http://www.science.uva.nl/~seop/entries/connectionism/

Simon H.A. (1960) 'The Shape of Automation' reprinted in various forms, 1960, 1965, quoted in Weizenbaum J. (1976), pp. 244-245

Simon H.A. (1996) 'The Sciences of the Artificial' 3rd ed. MIT Press. 1996

Stanley J. (2013) 'Meet Jack. Or, What The Government Could Do With All That Location Data' American Civil Liberties Union, 5 December 2013, at https://www.aclu.org/meet-jack-or-what-government-could-do-all-location-data

Sterling B. (1994) 'Heavy Weather' Phoenix, 1994

Turing A.M. (1950) 'Computing Machinery and Intelligence' Mind 59, 236 (Oct 1950) 433-460

Ulam S. (1958) 'Tribute to John von Neumann' Bulletin of the American Mathematical Society 64, 3 Part 2 (May 1958), at https://docs.google.com/file/d/0B-5-JeCa2Z7hbWcxTGsyU09HSTg/edit?pli=1

Veruggio G. & Operto F. (2008) 'Roboethics: Social and Ethical Implications of Robotics' Chapter in Springer Handbook of Robotics, 2008, pp 1499-1524

Vinge V. (1993) 'The Coming Technological Singularity: How to Survive in the Post-Human Era' Whole Earth Review (Winter 1993), at http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

von Neumann J. (1958) 'The Computer and the Brain' Yale University Press, 1958

Wallach W. & Allen C. (2008) 'Moral Machines: Teaching Robots Right from Wrong' Oxford University Press, 2008

Weber R.H. (2009) 'Internet of things - Need for a new legal environment?' Computer Law & Security Review 25, 6 (Nov-Dec 2009) 522-527

Weber R.H. (2010) 'Internet of Things - New security and privacy challenges' Computer Law & Security Review 26, 1 (Jan-Feb 2010) 23-30

Weiser M. (1991) 'The Computer for the 21st Century' Scientific American, September 1991, p. 94

Weizenbaum J. (1976) 'Computer Power and Human Reason' W.H. Freeman & Co., 1976; Penguin, 1984

Wigan M.R. & Clarke R. (2013) 'Big Data's Big Unintended Consequences' IEEE Computer 46, 6 (June 2013) 46-53, at http://www.rogerclarke.com/DV/BigData-1303.html

Wright D., Friedewald M., Gutwirth S., Langheinrich M., Mordini E., Bellanova R., de Hert P., Wadhwa K. & Bigo D. (2010) 'Sorting out smart surveillance' Computer Law & Security Review 26, 4 (July 2010) 343-54

Wright D. & Raab C.D. (2012) 'Constructing a surveillance impact assessment' Computer Law & Security Review 28, 6 (December 2012) 613-626

Wyndham J. (1932) 'The Lost Machine', originally published in 1932, reprinted in Wells A. (ed.) 'The Best of John Wyndham' Sphere Books, London, 1973, pp. 13-36


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in the Cyberspace Law & Policy Centre at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.


