
Would the European Commission's Proposed Artificial Intelligence Act
Deliver the Necessary Protections?

Review Draft of 31 August 2021

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2021

Available under an AEShareNet Free for Education licence or a Creative Commons 'Some Rights Reserved' licence.

This document is at http://www.rogerclarke.com/DV/EC21.html


Abstract

Alarm about the human impact of artificial intelligence (AI) has given rise to many expressions of 'ethical principles for responsible AI'. In April 2021, the European Commission (EC) proposed a regulatory instrument for what it refers to as 'AI systems', but which are more appropriately described as data analytics tools. This article reports on the results of an assessment of the extent to which the EC's proposal implements a super-set of 50 Principles that was developed by consolidating 30 published documents. The conclusion is that the proposed Artificial Intelligence Act is an abject failure, imposing no safeguards whatsoever in respect of a great many uses of data analytics, and extremely weak safeguards on only a proportion of even those 'AI systems' that the Commission acknowledges as being 'high-risk'.


Contents

1. Introduction
2. Overview of the EC Proposal
3. The Evaluation Process
4. The Results
5. Conclusions
References

1. Introduction

Data protection law now encompasses a vast collection of statutes and determinations of regulatory agencies, tribunals and courts, across the world (Greenleaf 2021a). Data protection law and practice are affected by a variety of technologies, including techniques variously referred to as data mining, data analytics and data science, and artificial intelligence (AI), particularly in such forms as machine learning (AI/ML), by applications of those technologies by corporations and government agencies, and by embedding them in information and communications technology (ICT) artefacts and services.

To the extent that laws exist, or are enacted, that regulate such technologies and their uses, those laws either fall within the orbit of data protection law, or at least intersect in material ways with data protection law. This article examines the proposal of the European Commission (EC) for what it refers to as an Artificial Intelligence Act (AIA), but which, it is argued below, would be more appropriately called a Data Analytics Act. Whether or not the EC's Proposal is enacted, it has considerable potential to influence directions internationally, not only among the 27 members of the European Union, but also far beyond them (Greenleaf 2021b). It is therefore an important document to submit to critical evaluation.

During the period 2015-2020, Artificial Intelligence (AI) was promoted with particular vigour. The nature of the technologies, combined with the sometimes quite wild enthusiasm of the technologies' proponents, has given rise to a considerable amount of public concern about AI's negative impacts. In an endeavour to calm those concerns, many organisations have published lists of 'ethical principles for responsible AI'.

Some of these have been short lists published by corporations, and nakedly designed to serve their self-interest. Other sources have included industry associations, public interest advocacy organisations, individual researchers, and government agencies whose primary concern is innovation or economic development. A small number of them, primarily published by organisations with an ethical or law reform orientation, have adopted a balanced approach to the issues. Notable within this last category are the document of a High-Level Expert Group of the European Commission (EC) called 'Ethics Guidelines for Trustworthy AI' (EC 2019), and a Report of the Australian Human Rights Commission (AHRC 2021).

The EC Expert Group's 'Ethics Guidelines for Trustworthy AI' was launched in April 2019. Exactly two years later, in April 2021, the European Commission announced "new rules and actions for excellence and trust in Artificial Intelligence", with the intention to "make sure that Europeans can trust what AI has to offer". The document's title was a 'Proposal for a Regulation on a European approach for Artificial Intelligence' (EC 2021), and the proposed statute is termed the Artificial Intelligence Act (AIA).

The announcement stimulated some enthusiasm among the public and public interest advocacy groups, which had been very concerned about the attempts by AI industry proponents to implement dangerous technologies. This article reports on an evaluation of the EC's Proposal against a consolidated set of Principles for Responsible AI. It commences with an overview of the EC Proposal, and then explains the assessment process adopted. The results of the assessment are presented, with access provided to a series of Annexes that contain the detailed analysis whereby the conclusions were reached.


2. Overview of the EC Proposal

The body of the EC's Proposal is long, and comprises multiple sections: an Explanatory Memorandum (16 pp.), a Preamble (22 pp.) and the proposed Articles (51 pp.), together with a separate document of Annexes (16 pp.).

The EC's Proposal is stated to be "a balanced and proportionate horizontal regulatory approach to AI that is limited to the minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market" and "a proportionate regulatory system centred on a well-defined risk-based regulatory approach that does not create unnecessary restrictions to trade, whereby legal intervention is tailored to those concrete situations where there is a justified cause for concern or where such concern can reasonably be anticipated in the near future" (EM, p.3).

The motivation is quite deeply rooted in economics in general and innovation in particular. For example the purpose is expressed as being "to improve the functioning of the internal market by laying down a uniform legal framework" (Pre(1)). Compliance with human rights and data protection laws, and public trust, are treated as constraints to be overcome, e.g. the EC's Proposal is depicted as a "regulatory approach to AI that is limited to the minimum necessary requirements ..., without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market" (EM1.2, p.3).

The document is formidable. The style is variously eurocratic and legalistic. The 51 pp. of Articles require careful reading and cross-referencing, and need to be read within the context of the 22 pp. of Preamble, and taking into account the 16 pp. of (legally relevant) Explanatory Memorandum. Complicating the analysis, the underlying philosophy and the origins of the features are both somewhat eclectic. Serial readings and thematic searches were complemented by reference to publications that provide reviews and critiques of the document, in particular Veale & Borgesius (2021) and Greenleaf (2021b).

The EC's Proposal is for a regulatory instrument, and hence its coverage is broader than that of documents whose focus is solely on 'ethical principles'. In particular, it distinguishes four segments, establishes a prohibition on one of them, and applies differential principles to two of the other three. The distinctions are summarised in Table 1. It was therefore necessary to conduct four assessments, one in respect of each segment.

Table 1: The EU Proposal's Four 'Levels of Risk'

(1) Prohibited AI Practices (Art. 5, Pre 26-69)

(a) Subliminal techniques

(b) Exploitation of vulnerabilities of the disadvantaged

(c) Social scoring ("the evaluation or classification of the trustworthiness of natural persons ...")

(d) 'Real-Time' remote biometric identification in public places for law enforcement

(2) High-Risk AI Systems (Arts. 6-7, 8-51, Annexes II-VII, Pre 27-69, 84)

A 'high-risk AI system' is defined, in a complex manner, in Art. 6 and Annex III, as:

(a) Product or product safety components - but subject to the presumably very substantial exemptions declared in Art. 2.1. It appears that the drafters assume that the 6 nominated Regulations and 2 Directives are technologically-neutral and by some untested process can deliver equivalent protections to those in the 'AIA' provisions; and

(b) 21 very specific categories of AI systems within 8 "areas" (Annex III).

(3) 'Certain AI Systems' (Art. 52, Pre 70), also referred to in EM s.2.3 (p.7), ambiguously, as "non-high-risk AI systems"

The categories are declared and described as follows:

  1. "AI systems intended to interact with natural persons" (Art. 52.1), but with particular reference to categories of customer-facing automata currently referred to as "chatbots" (EM p.3)
  2. "an emotion recognition system" (Art. 52.2, but referred to in EM4 on p.14 as an AI system "used to detect emotions" - which is arguably a rather different description), except where "permitted by law to detect, prevent and investigate criminal offences, unless those systems are available for the public to report a criminal offence"
  3. "a biometric categorisation system" (Art. 52.2, but referred to in EM4 on p.14 as "an AI system to determine association with (social) categories based on biometric data" - which is distinctly different from the Art. 52 definition, because of 'determine' cf. 'categorise' and 'social categorisation' cf. categorisation in the abstract) , except where "permitted by law to detect, prevent and investigate criminal offences"
  4. "an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful ('deep fake')" except where it is:

It is alarming that the EC Proposal appears to embody legal authorisation of 'deep fake' techniques to "detect, prevent, investigate" (presumably as a means of misleading witnesses and suspects), and, even worse, to "prosecute" (which would represent the creation of false evidence).

(4) All Other AI Systems (Art. 69, EM 5.2.2, Pre 81-82)

These are referred to variously as "AI systems other than high-risk AI systems" (Art. 69), "low or minimal risk [uses of AI]" (EM 5.2.5), and "non-high-risk AI systems" (EM 5.2.7, Pre 81).

_______________

The term used in respect of Categories (2)-(4) is 'AI Systems', which is defined in Article 3(1). A different term is used in relation to category (1), Prohibited: 'AI Practices'. The term 'AI practices' appears not to be defined. It is unclear what purpose the drafters had in mind in using a distinctly different term. It is even more unclear how the term will be interpreted by organisations, and, should it ever be relevant, by regulators or the courts. It can reasonably be anticipated that the use of the term results in loopholes, which may or may not have been intended.

In addition, the details of each category of 'AI practices' embody significant qualifiers, which have the effect of negating the provisions in respect of a considerable number of them.

For example, the first is subject to a series of qualifying conditions: "subliminal techniques beyond [beneath?] a person's consciousness [i] in order to [ii] materially [iii] distort a person's behaviour [iv] in a manner that causes or is likely to cause that person or another person physical or psychological harm" (Art. 5.1(a)).

The second (exploitation of vulnerabilities) embodies an even more impressive suite of escape-clauses: "[i] exploits any of the vulnerabilities [ii] of a specific group of persons [iii] due to their age, physical or mental disability, [iv] in order to [v] materially [vi] distort the behaviour of a person [vii, and impenetrably] pertaining to that group [viii] in a manner that causes or is likely to cause that person or another person physical or psychological harm" (Art. 5.1(b)).

The third (social scoring) contains 121 words, and even more, and even more complex, qualifying conditions than the second (Art. 5.1(c)).

The fourth (biometric identification) excludes (i) retrospective identification, (ii) proximate rather than remote identification, (iii) authentication (1:1 matching), (iv) non-public places, and (v) all uses other than for law enforcement (Art. 5.1(d), 5.2-5.4).
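
To make the cumulative effect of such qualifiers concrete, the following minimal sketch expresses the Art. 5.1(b) conditions as a conjunction, on the reading set out above. The attribute names are paraphrases introduced here purely for illustration, not terms defined in the Proposal.

    # Illustrative sketch only: on the reading above, Art. 5.1(b) prohibits a
    # practice only if every qualifying condition holds, so the negation of any
    # one condition operates as an escape-clause. Attribute names are paraphrases
    # introduced for illustration, not terms defined in the Proposal.
    from dataclasses import dataclass

    @dataclass
    class Practice:
        exploits_vulnerability: bool          # [i] exploits any of the vulnerabilities
        of_specific_group: bool               # [ii] of a specific group of persons
        due_to_age_or_disability: bool        # [iii] due to age, physical or mental disability
        intent_to_distort: bool               # [iv] in order to distort behaviour
        distortion_is_material: bool          # [v]-[vi] materially distorts behaviour
        person_pertains_to_group: bool        # [vii] person pertaining to that group
        causes_physical_or_psych_harm: bool   # [viii] causes or is likely to cause harm

    def prohibited_under_art_5_1_b(p: Practice) -> bool:
        return all([
            p.exploits_vulnerability,
            p.of_specific_group,
            p.due_to_age_or_disability,
            p.intent_to_distort,
            p.distortion_is_material,
            p.person_pertains_to_group,
            p.causes_physical_or_psych_harm,
        ])

    # Example: exploitative targeting that causes only financial harm (not
    # physical or psychological harm) would not be prohibited on this reading.
    print(prohibited_under_art_5_1_b(
        Practice(True, True, True, True, True, True, False)))   # -> False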

Further, many more biometric identification applications are expressly excluded from the scope, if they are "strictly necessary" for (Art. 5.1(d)):

In all three cases, the EC Proposal authorises mass biometric surveillance, and does so in order to find needles in the haystack. Such applications are nominally subject to prior authorisation, but that can be retrospective if the use is claimed to be urgent. So mass biometric surveillance can be conducted at will, subject to the possibility of an ex post facto day of reckoning.

Some of the vast array of exceptions that escape categorisation as (1) 'Prohibited AI Practices' may fall within (2) 'High-Risk AI Systems', but many others likely will not, and hence fall within (4) 'All Other Systems' and hence escape the regulatory scheme entirely.

In relation to (2) High-Risk AI Systems, many of the 8 areas and 21 specific categories are subject to multiple and wide-ranging exclusion criteria, whose effect is to greatly reduce the number of AI systems that will actually be within-scope.

One example of an "area" is "Biometric identification and categorisation of natural persons" (III-1). The category is limited by the following criteria: "[i] intended to be used for [ii] the `real-time' and [iii] `post' [iv] remote biometric [v] identification [vi] of natural persons" (III-1(a)). This excludes systems used without intent, either 'real-time' or 'post' but not both, proximate rather than 'remote', and for authentication (1-to-1) rather than identification (1-among-many).

Another "area" is "Access to and enjoyment of essential private services and public services and benefits" (III-5), but with the categories limited to "AI systems [i] intended to be used [ii] by public authorities or on behalf of public authorities [iii] to evaluate the eligibility of natural persons for public assistance benefits and services, [iv] as well as to grant, reduce, revoke, or reclaim such benefits and services (III-5(a)). This excludes systems used without intent, 'private services and benefits' in their entirety (e.g. privately-run school-bus services, even if 'essential') - despite the nominal inclusion of "private services" in the "area" - and use for evaluation but without at least one of grant, reduce, revoke, or reclaim (due to the use of the conjunction 'as well as'). Further, AI systems to evaluate creditworthiness or establish a credit score exclude "small scale providers for their own use" (Art. 5(b)).

For such AI systems as do not fit into the array of escape-clauses, the statutory obligations involve (Art. 8):

Although the EC is required to "assess the need for amendment of the list in Annex III once a year following the entry into force of this Regulation" (Art. 84.1), the wording of Art. 84.7 may or may not create an obligation to actually propose amendments to the list, and the EC may or may not comply, and needs may or may not give rise to changes in the law of the EU.

The question is complicated by Art. 7, which empowers the EC to add further high-risk AI systems into Annex III, but does not require it to do so. In any case, the categories of AI systems are limited to the eight areas already specified in Annex III, and the criteria to be applied are complex and long, providing ample reasons to avoid adding AI systems to the list.

In the case of category (3) 'Certain AI Systems', the descriptions of the four sub-categories are lengthy, and are readily interpreted as being definitional, and some specific exceptions are declared. Hence many such AI systems are likely to be excluded from scope.

This category is orthogonal to 'levels' (2) and (4). Hence any particular AI system in category (3) may fall into (3) only, or (3) and (2), or (3) and (4).
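
The way the four 'levels' interact can be pictured with a minimal classification sketch; the predicate functions below stand in for the Proposal's complex tests and are assumptions introduced purely for illustration.

    # Minimal sketch of how the four 'levels' of Table 1 interact, on the reading
    # above: category (3) is an additional transparency tier that can co-exist
    # with (2) or (4). The three predicates are placeholders for the Proposal's
    # complex tests, assumed here purely for illustration.
    from typing import Callable, Set

    def classify(system: object,
                 is_prohibited_practice: Callable[[object], bool],
                 is_high_risk_system: Callable[[object], bool],
                 is_certain_system: Callable[[object], bool]) -> Set[int]:
        if is_prohibited_practice(system):
            return {1}                        # (1) Prohibited AI Practices
        levels = {2} if is_high_risk_system(system) else {4}
        if is_certain_system(system):         # Art. 52 transparency duty applies in addition
            levels.add(3)
        return levels

    # A chatbot that is not high-risk would classify as {3, 4}: the Art. 52
    # notification duty would be its only regulatory requirement.
    print(classify("chatbot",
                   is_prohibited_practice=lambda s: False,
                   is_high_risk_system=lambda s: False,
                   is_certain_system=lambda s: True))           # -> {3, 4}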

In relation to category (4) All Other AI Systems, the EC Proposal imposes no regulatory requirements whatsoever on categories of AI or its application that fall outside 'levels' (1)-(3). It merely requires 'encouragement and facilitation' of voluntary industry and organisational codes: "Providers of non-high-risk AI systems should be encouraged to create codes of conduct intended to foster the voluntary application of the mandatory requirements applicable to high-risk AI systems. Providers should also be encouraged to apply on a voluntary basis additional requirements" (Pre para. 81).

This is non-regulation. Further, the reference-point against which the appropriateness of such codes might be measured is indicated as "the mandatory requirements applicable to high-risk AI systems". As the following analysis shows, this is a very limited set of principles. It is unclear why the reference-point is not instead the EC's own 'Ethics Guidelines for Trustworthy AI' (EC 2019). The cumulative effect of these factors is further discussed in s.4.1.


3. The Evaluation Process

In order to conduct an evaluation of the EC's Proposal, a benchmark is needed. The approach adopted comprises three elements. At the heart of the work is an assessment of the substantive content of the Proposal against a consolidated set of Principles for Responsible AI. As a complement to that assessment of the substantive protections contained in the Proposal, it is necessary to take into account the multi-dimensional scope of applicability of the provisions. In addition, evaluative comments are offered on the likely effectiveness of the regulatory regime that the statute would bring into existence.

The consolidated set of 50 Principles which is used as the benchmark derives from prior work of the author, published in Clarke (2019a) and available in HTML, and in PDF. This drew on 30 documents that were available in April 2019, published by a diverse array of organisations. Each set was analysed, and similar principles were blended together, resulting in a consolidated set of 10 Themes and 50 Principles.

The first use of the consolidated set was as a basis for evaluating each of the 30 publications it was drawn from. The set was subsequently used to evaluate the OECD's AI Guidelines of 22 May 2019 (Clarke 2019b) and the 'AI Ethics Principles' of the Australian Department of Industry of November 2019 (Clarke 2019c). This evaluation of the EC Proposal was accordingly the 33rd conducted, but the first of a draft statute.

The text of the EC Proposal was examined, with primary focus on the proposed legislation (pp. 38-88), but with reference also to the Explanatory Memorandum (pp. 1-16) and the Preamble (pp. 17-38). Elements were noted whose coverage relates to those in the consolidated set of 50 Principles. The details are provided in supporting Annexes, one for each of the categories addressed by the Proposal.

The primary category addressed is referred to as 'High-Risk AI Systems'. For this category, a further supporting Annex displays, for each of the 50 Principles, the text in the proposed legislation that is relevant to that Principle. The following section presents the results.

Previous assessments of 32 different sets of 'principles for responsible AI' considered each set against each of the 50 consolidated Principles, and scored either 1 point or 0 points for each. During those assessments, the analysis "scored documents liberally, recognising them as delivering on a Principle if the idea was in some way evident, even if only some of the Principle was addressed, and irrespective of the strength of the prescription" (Clarke 2019a, p.416). This intentional liberality of scoring appeared to be appropriate given that the documents were aspirational 'ethical guidelines', and were not intended to be enacted into law.

The EC Proposal under consideration here, on the other hand, is expressly a draft statute, intended to become EU law. Whereas sets of unenforceable 'ethical' guidelines can be reasonably evaluated in isolation, statutory requirements are situated within a legal context. The EU's relatively strong human rights and data protection laws need to be factored in - although it is far beyond the scope of the study reported here to delve deeply into questions of their scope and effectiveness in the context of 'AI' 'practices' and 'systems'.

Because the liberal scoring approach used previously is less appropriate to the formal regulatory context, this assessment varied the scoring approach. Rather than a binary value (0 or 1), a score was assigned for each Principle within an 11-point range {0.0, 0.1, 0.2, ..., 0.9, 1.0}. In order to maintain some degree of comparability with the previous 32 assessments, a second, binary score was assigned by rounding up, i.e. 0.0 rounds to 0, but all scores in the range 0.1 to 1.0 round to 1.

No genuinely 'objective' procedure is practicable. Each score was assigned subjectively, based on a combination of the author's perception of the extent of coverage, the scope of exemptions, exceptions and qualifications, and the apparent practicability of enforcement. The scores were assigned by the author, and reviewed by the author again some weeks afterwards in order to detect instances of inappropriate scores, and adapted accordingly. Needless to say, each person who applies this approach to the data will come up with variations, which may result in both systematic and stochastic differences in assessments.

In order to provide a quantitative summation, the 32 previous assessments added the individual scores (with a maximum of 50), and expressed the result as a percentage. This of course implicitly weights each of the 50 Principles identically, and implicitly weights each of the 10 Themes according to the number of Principles within that Theme. The same approach is adopted here. The rounded / binary scores (0 or 1) result in a percentage that enables a reasonable degree of comparability of the EC Proposal's coverage against the 32 'ethical guidelines'. Addition of the scores on the more granular, 11-point scale provides an estimate of the EC Proposal's efficacy. As no other proposal for legislation has as yet been assessed (or, indeed, seen), there is not yet any other draft law against which this measure can be compared.
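
The scoring and aggregation procedure just described can be summarised in a few lines. The scores below are hypothetical placeholders, not the values reported in the Annexes.

    # Minimal sketch of the scoring procedure described in this section.
    # The granular scores are hypothetical placeholders for 5 of the 50
    # Principles, not the values actually assigned in the Annexes.
    granular = [0.0, 0.3, 1.0, 0.1, 0.0]

    # Rounding up for comparability with the 32 earlier assessments:
    # 0.0 rounds to 0, anything in the range 0.1 to 1.0 rounds to 1.
    binary = [0 if score == 0.0 else 1 for score in granular]

    # Each Principle is weighted identically; results expressed as percentages.
    coverage_pct = 100 * sum(binary) / len(binary)       # comparable 'coverage' measure
    efficacy_pct = 100 * sum(granular) / len(granular)   # granular estimate of strength

    print(f"coverage {coverage_pct:.0f}%, efficacy {efficacy_pct:.0f}%")
    # -> coverage 60%, efficacy 28%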


4. The Results

The main body of the assessment, reporting on the EC Proposal's coverage of the 50 Principles, is in section 4.2. Because the Proposal is for law to be enacted, it is necessary to precede that with an examination of the law's scope of applicability. The final sub-sections provide some observations on the proposed law's effectiveness, both as a regulatory instrument and in relation to the primary objective of stimulating economic activity.

4.1 Scope of Applicability

The definition of what it is that the law is to apply to is the subject of several somewhat tortuous segments of the EC's Proposal. They are extracted in Annex 5. For the proposed law to apply:

The sub-term 'AI' is used in a pragmatic manner, but aspects of the definition may be unfamiliar to many readers. It is defined to include several sub-sets of the field of AI as that somewhat vague term is conventionally understood. One of those is "logic-based approaches", and another is "knowledge-based approaches", including rule-based expert systems. A further sub-set is what has in recent years been referred to as AI/ML (machine learning). No other aspects of AI (such as pattern recognition, natural language understanding, robotics or cyborgisation) are included in the definition.

However, the term AI is defined to also include further categories of 'approaches and techniques' that are not conventionally referred to as AI: "Statistical approaches, Bayesian estimation, search and optimization methods". These pre-date the coinage of the term 'AI' in 1955, and have been more commonly associated with operations research and data mining / 'data analytics'.

No definition is provided for the key term used in 'level' (1), 'AI Practices', so no additional scope limitations are apparent apart from those discussed in section 2 above. For the other three 'levels', however, the term 'AI system' is defined as "software that is developed [using AI approaches and techniques, in the sense just discussed] ..." (Art. 3(1)). The term 'system' conventionally refers to a collection of interacting elements that are usefully treated together, as a whole. Moreover, those elements, while they include software, may also include other categories, particularly hardware, and, at least in the socio-technical context, people and perhaps organisations as well. It is feasible to conceive the term, as used in the EC Proposal, as being intended to encompass all forms in which software presents, whether or not integrated into a system, and whether or not the system comprises elements other than software. On the other hand, there is ample scope for legalistic debate in and out of the court-room as to what is and is not an 'AI system', and hence for loopholes to be exploited.

A further limitation in Art. 3(1) is that the artefact must be able to "generate outputs", and it appears that these must reflect or fulfil "human-defined objectives". It may also prove to be somewhat problematic that the examples of "outputs" extend to "decisions influencing the environments they interact with", but no mention is made of actuators, nor of any other term that relates to autonomous action in the real world. It would seem both incongruous and counterproductive for AI-driven robotics to be excluded from the regulatory regime; yet, on the surface of it, that may well be the case.

The field of view adopted in the EC proposal has a considerable degree of coherence to it, in that it declares its applicability to a number of categories of data analytics techniques and approaches, whose purpose is to draw inferences from data. Unfortunately, it uses the term 'AI systems' to refer to that set. This is misleading, and potentially seriously so. Misunderstandings are bound to arise among casual readers, the popular media and the general public, but quite possibly also among many people in relevant professional and regulatory fields.

Moreover, the term is arguably so misleading that the use of the term 'AI Act' could be seen as a materially excessive claim in relation to the scope of the Proposal, because a great deal of AI is out-of-scope. Added to that (and beneficially), a considerable amount of non-AI is within-scope. A more descriptive term would be 'Data Analytics Act'.

The primary source used in the present study as a basis for evaluating the EC Proposal is The 50 Principles (Clarke 2019a). These relate to AI as a whole, and do not go into detail in relation to data analytics. Because the EC Proposal defines 'AI Systems' not in a conventional manner but as something similar to '{advanced} data analytics', it is necessary to supplement The 50 Principles with an additional reference-point in the form of 'Guidelines for the Responsible Application of Data Analytics'. This was developed from the literature and appeared in the same journal as The 50 Principles (Clarke 2017).

Further scope-exemptions arise in the case of 'High-Risk AI Systems' (see s.2 and Table 1), by virtue of the complex expressions in Arts. 2.1(a), 2.1(b), 2.2 (which intersects with Art. 6), 2.3 and 2.4. These possibly release from any responsibility a great many instances of AI systems that are recognised as embodying high risk.

Two relevant categories of entity are defined. The notion of 'provider' is defined in a sufficiently convoluted manner that scope may exist for some entities that provide relevant software or services into the EU to arrange their affairs so as to be out-of-scope.

The term 'user of an AI system' is less problematic. A 'natural person' is included, but the exclusion clause for "used in the course of a personal non-professional activity" would appear to have the effect that a considerable amount of abusive behaviour by individuals is out-of-scope.

The geographical location scope-definition is convoluted, and hence the law might apply in circumstances in which any of the provider, the provider's action or the system is within the EU, and in which any of the user, the system or the use of the output is within the EU. However, that may remain unclear unless and until the courts have ruled on the meanings of the provisions.
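
One possible reading of that disjunctive test can be expressed as a small predicate. This is an interpretation offered for illustration only; the parameter names are paraphrases rather than defined terms, and the courts may read the provisions quite differently.

    # A sketch of one possible reading of the territorial-scope provisions: the
    # law is engaged where any provider-side connection with the EU exists, or
    # where any use-side connection exists. Parameter names are paraphrases
    # introduced for illustration, not defined terms.
    def within_territorial_scope(provider_in_eu: bool,
                                 provider_action_in_eu: bool,
                                 system_in_eu: bool,
                                 user_in_eu: bool,
                                 output_used_in_eu: bool) -> bool:
        provider_side = provider_in_eu or provider_action_in_eu or system_in_eu
        use_side = user_in_eu or system_in_eu or output_used_in_eu
        return provider_side or use_side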

The timeframe scope-definition is unclear. The law presumably applies from a date some yet-to-be-defined period after enactment, and is otherwise without constraint.

An express exclusion exists where the purposes are exclusively military.

4.2 Coverage of the Principles

This sub-section summarises the results of the assessment of the extent to which the EC Proposal satisfies the reference-point, the consolidated 50 Principles. Because the EC Proposal divides AI Systems into 4 'levels', this analysis is of necessity divided into 4 sub-sections.

(1) Prohibited AI Practices

In the case of Prohibited AI practices, the Principles are by definition not relevant. However, a great many AI systems that appear to fall into a Prohibited category are not prohibited, because of the many exclusions and exemptions whose presence is drawn to attention in section 2 above.

All of those excluded and exempted AI systems fall into either category (2) or (4), and may fall into category (3) as well. Hence the extent to which the EC Proposal satisfies the 50 Principles in respect of those AI systems depends on which category/ies they fall into.

(2) High-Risk AI Systems

The majority of the relevant passages are in Chapter 2 (Requirements) Arts. 8-15 and Chapter 3 (Obligations) Arts. 16-29, but a number of other, specific Articles are also relevant.

The following is a summary of the lengthy assessment which is available in Annex 1 and Annex 4, and the statistical analysis in Annex 6:

As described in section 3, one assessment adopts a liberal approach, scoring 1 point for each of the 50 Principles if the idea is in some way evident, even if only partially or weakly:

The other assessment assigns a subjective score for the extent of the coverage of each of the 50 Principles, on an 11-point scale: {0.0, 0.1, 0.2, ... 1.0}:

The disjunction between the EC Proposal (EC 2021) and the earlier 'Ethics Guidelines for Trustworthy AI' (EC 2019) is striking. Key expressions in the earlier document, such as 'Fairness', 'Prevention of Harm', 'Human Autonomy', 'Human agency', 'Explicability', 'Explanation', 'Well-Being' and 'Auditability', are nowhere to be seen in the body of the EC Proposal. The term 'stakeholder participation' occurs a single time (and even then as a merely optional feature in organisations' creation process for voluntary codes). The term 'auditability' occurs, but not in a manner relating to providers or users, i.e. not as a regulatory concept.

The magnitude of the shortfall may be gauged by considering key Principles that are excluded from the protections, even for the (possibly quite low) proportion of high-risk AI systems that the EC Proposal has defined to be within-scope. Table 2 lists key instances.

Table 2: Missing Principles for High-Risk AI Systems

Excerpts from The 50 Principles (Clarke 2019a)

1. Assess Positive and Negative Impacts and Implications

1.6 Conduct consultation with stakeholders and enable their participation in design

1.7 Reflect stakeholders' justified concerns in the design

1.8 Justify negative impacts on individuals ('proportionality')

2. Complement Humans

2.1 Design as an aid, for augmentation, collaboration and inter-operability

2.2 Avoid design for replacement of people by independent artefacts or systems, except in circumstances in which those artefacts or systems are demonstrably more capable than people, and even then ensuring that the result is complementary to human capabilities

3. Ensure Human Control

3.6 Avoid deception of humans

3.7 Avoid services being conditional on the acceptance of AI-based artefacts and systems

4. Ensure Human Safety and Wellbeing

4.3 Contribute to people's wellbeing ('beneficence')

4.6 Avoid the manipulation of vulnerable people, e.g. by taking advantage of individuals' tendencies to addictions such as gambling, and to letting pleasure overrule rationality

6. Deliver Transparency and Auditability

6.3 Ensure that people are aware of inferences, decisions and actions that affect them, and have access to humanly-understandable explanations of how they came about

7. Embed Quality Assurance

7.1 Ensure effective, efficient and adaptive performance of intended functions

7.3 Justify the use of data, commensurate with each data-item's sensitivity

7.6 Ensure that inferences are not drawn from data using invalid or unvalidated techniques

7.8 Impose controls in order to ensure that the safeguards are in place and effective

7.9 Conduct audits of safeguards and controls

8. Exhibit Robustness and Resilience

8.3 Conduct audits of the justification, the proportionality, the transparency, and the harm avoidance, prevention and mitigation measures and controls

______________

Despite the acknowledgement by the EC Proposal that substantial harm may arise from these AI systems, those responsible for the development and deployment of such systems do not need to consult with affected parties, nor enable them to participate in the design, nor take any notice even where concerns are expressed, nor even ensure that negative impacts are proportionate to the need.

AI systems can be implemented with decision-making and action-taking capabilities, without consideration as to whether they should instead be conceived as complementary technology, and as decision aids. Unless they fall within the narrow terms of 'certain systems', as discussed in section 4.2(3) immediately below, high-risk AI systems are permitted to deceive humans. Organisations can impose AI systems on individuals as a condition of contract, and as a condition of receiving a private benefit and even a public benefit, irrespective of the risks the person thereby faces.

There is no obligation to contribute to human wellbeing. There is no obligation to avoid the manipulation of the vulnerable. There is not only no need to explain decision rationale to people, but also no need to even be capable of doing so. The quality assurance requirements are limited, and lack even a need to ensure that the techniques used are legitimate to apply to the particular kind of data that is available, or that the data is of sufficient quality to support decision-making. No safeguards are needed, no controls to ensure the safeguards are working, and no audits to ensure the controls are being applied. Yet this purports to be effective regulation of high-risk applications of powerful technologies, many of which are new, experimental and poorly-understood.

AI systems of course do not exist in a legal vacuum, and organisational behaviour that exploits the enormous laxness of the EC's Proposal may in due course be found to be in breach of existing laws. That, however, will require litigation, considerable resources, and considerable time, and, during that time, potentially very substantial harm can be inflicted on people, without any ability for them to understand, avoid or obtain redress for, unjustified decisions, actions and harm. Moreover, these deficiencies apply to (a sub-set of) AI systems that the EC acknowledges to be 'high-risk'.

(3) 'Certain AI Systems'

A single Article, of about 250 words, imposes a very limited transparency requirement on four highly-specific categories of AI systems (Art. 52). It applies only to AI systems that interact with humans, 'emotion recognition systems', 'biometric categorisation systems' and 'deep fake systems', but in each case with substantial qualifications and exceptions outlined in Table 1.

The analysis in Annex 2 shows that the Article merely requires that the people subjected to such AI systems be informed that it is an AI system of that kind. This is a very limited contribution to Principle 6.1 (Ensure that the fact that a process is AI-based is transparent to all stakeholders), and adds an infinitesimal score to the totals summarised in the previous section. For a great many AI systems, on the other hand, this would be the sole requirement of the regulatory regime.

(4) All Other AI Systems

The very last Article in the EC Proposal, and hence presumably an afterthought, is a requirement that the EC and EU member states "encourage and facilitate the drawing up of codes of conduct intended to foster the voluntary application to AI systems other than high-risk AI systems of the requirements set out in Title III, Chapter 2" (Art. 69.1). The analysis in Annex 3 suggests that this is close to valueless.

In section 4.2(2) above, it was noted that even within-scope high-risk AI systems are free from any requirements in respect of a wide range of what various sets of 'Ethical Guidelines for Responsible AI' declare as being Principles. The myriad AI systems that fall outside the scope of the EC Proposal suffer not only from all of those inadequacies, but also from the absence of the quite basic protections that are applicable to high-risk systems, as indicated in Table 3.

Table 3: Additional Missing Principles for All Other AI Systems

Excerpts from The 50 Principles (Clarke 2019a)

1. Assess Positive and Negative Impacts and Implications

1.4 Conduct impact assessment, including risk assessment from all stakeholders' perspectives

3. Ensure Human Control

3.1 Ensure human control over AI-based technology, artefacts and systems

3.2 In particular, ensure human control over autonomous behaviour of AI-based technology, artefacts and systems

3.5 Ensure human review of inferences and decisions prior to action being taken

4. Ensure Human Safety and Wellbeing

4.1 Ensure people's physical health and safety ('nonmaleficence')

4.4 Implement safeguards to avoid, prevent and mitigate negative impacts and implications

6. Deliver Transparency and Auditability

6.1 Ensure that the fact that a process is AI-based is transparent to all stakeholders

6.2 Ensure that data provenance, and the means whereby inferences are drawn from it, decisions are made, and actions are taken, are logged and can be reconstructed

7. Embed Quality Assurance

7.2 Ensure data quality and data relevance

7.7 Test result validity, and address the problems that are detected

8. Exhibit Robustness and Resilience

8.1 Deliver and sustain appropriate security safeguards against the risk of compromise of intended functions arising from both passive threats and active attacks, commensurate with the significance of the benefits and the potential to cause harm

8.2 Deliver and sustain appropriate security safeguards against the risk of inappropriate data access, modification and deletion, arising from both passive threats and active attacks, commensurate with the data's sensitivity

8.4 Ensure resilience, in the sense of prompt and effective recovery from incidents

9. Ensure Accountability for Obligations

9.1 Ensure that the responsible entity is apparent or can be readily discovered by any party

9.2 Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred

______________

In respect of most AI systems, user organisations are not required to conduct an assessment of the impacts on and implications for affected parties. They are subject to no requirement to ensure human control, even over autonomous behaviour. They have no obligations to avoid, prevent or mitigate harm. AI systems are permitted to be used in secret, and there is no requirement even that the rationale for decisions can be reconstructed, so as to enable the basis of a decision to be explained, for example, to a tribunal or a court in camera, let alone to those who suffer harm as a result. Quality assurance measures are optional, and no testing in real-world contexts is needed. There is no requirement for robustness even against passive threats, let alone active attacks. No accountability mechanisms are imposed.

If AI systems (qua data analytics techniques) were to deliver even a small proportion of the magic that their proponents promise, existing laws would be grossly inadequate to cope with their ravages. The law is slow, expensive, unpredictable and unadaptive. In any case, many of the Principles that the drafters of 'ethical principles' envisage as being needed to cope with the impacts of AI are not currently established in law. Previous technologies were much less powerful, artefact autonomy was in the realm of sci-fi, and inferences drawn by technology were intermediated by human beings rather than being acted on by devices. It will require long debate in abstruse terms before a series of courts, contested expert evidence, deep misunderstandings, and many false starts and appeals before some semblance of order can be restored, and protections against unreasonable designs, uses, decisions and actions, established.

4.3 Effectiveness as a Regulatory Instrument

The analysis immediately above suggests that the EC's Proposal:

The EC's Proposal is not a serious attempt to protect the public. It is very strongly driven by economic considerations and administrative convenience for business and government, with the primary purpose being the stimulation of the use of AI systems. The public is to be lulled into accepting AI systems under the pretext that protections exist. The social needs of the affected individuals have been regarded as a constraint, not an objective. The many particularities in the wording attest to close attention being paid to the pleadings of advocates for the interests of providers of AI-based goods and services, and of government agencies and corporations that want to apply data analytics techniques with a minimum of interference. The draft statute seeks the public's trust, but fails to deliver trustworthiness.

4.4 Effectiveness as a Stimulatory Instrument

The assessment reported in this article has been conducted with a primary focus on the EC Proposal's regulatory impact, but mindful of its "twin objective of promoting the uptake of AI and of addressing the risks associated with certain uses of such technology" (EM 1.1, p.1). The very substantial failure of the EC Proposal from a regulatory perspective makes clear that its role as a stimulant for technology and economic activity has dominated the design. The strong desire to achieve the public's trust is apparent throughout the EM and Preamble; but the shallowness of the offering is apparent throughout the analysis.

There are many indicators of the importance accorded to innovation, in the form of the entire omission of even quite basic regulatory measures, in the vast array of exemptions from scope even in the case of so-called 'high-risk AI systems', in the completeness of the exemption of AI systems that are defined to be outside scope, in the manifold exceptions that apply even to those systems that are within-scope, and in the large numbers of loopholes provided, whether by error or intent.

The Proposal even contains an explicit authorisation in support of providers and user organisations, which compromises existing legal protections: "Personal data lawfully collected for other purposes shall be processed for the purposes of developing and testing ... innovative AI systems ... for safeguarding substantial public interest [in relation to crime, public safety and public health and environmental quality] ... where ... requirements cannot be effectively fulfilled by processing anonymised, synthetic or other non-personal data" (Art. 54.1).

This appears to authorise what would otherwise be expropriation of personal data in breach of data protection laws. It is in a draft law whose purpose is ostensibly to "facilitate the development of a single market for lawful, safe and trustworthy AI applications" (EM1.1, p.3), based on "Article 114 of the Treaty on the Functioning of the European Union (TFEU)" (Pre(2), p.17), which allows the EU to regulate those elements of private law that create obstacles to trade in the internal market.

Remarkably, even if the overriding purpose of the EC Proposal is to stimulate the business of data analytics, it is still very likely a failure. The deficiencies it evidences are so great that serious harm would arise, and would be readily attributable to data analytics. Trust by the public, rather than being achieved, would be destroyed. The misjudgement inherent in the EC Proposal invites the onset of the next 'AI Winter' (Dickson 2018).


5. Conclusions

This article has presented the results of an assessment of the EC's 2021 Proposal. The primary benchmark used was a set of '50 Principles for Responsible AI'. This was consolidated from a large sample of 30 such documents in mid-2019. The '50 Principles' have previously been used to assess many 'ethical guidelines', including those published by the EC in 2019. A customised assessment method was necessary, however, partly because the EC's Proposal is the first draft statute to have been assessed, but also because the central notion of 'AI system' is defined in an unusual way, to mean 'data analytics application'.

The EC's announcement in April 2021 (EC 2021) proclaimed that "The new AI regulation will make sure that Europeans can trust what AI has to offer". The results of the assessment do not support the proposition that the EC Proposal can deliver trustworthiness - although it could have public relations value to AI proponents by misleading the public into trusting untrustworthy AI technology and applications. The effectiveness of the EC Proposal is undermined by a wide array of features that render it almost entirely valueless as a regulatory measure. The fatal weaknesses include scope-limitations; unconventional uses of key terms; the absence of any protections whatsoever in respect of potentially very large numbers of 'AI systems'; a single, very weak requirement imposed on a limited set of systems that are intended to interact directly with people; pitifully weak, vague and in many cases probably unenforceable protections in respect of a limited range of what the EC refers to as 'high-risk' systems; no requirements that inferences drawn by AI systems be capable of rational explanation; and no capacity for individuals or their advocates to seek corrections and redress for faulty inferencing.

The gulf that exists between the EC's 'Ethics Guidelines' of 2019 and the EC's Proposal of 2021 suggests that pleading by the AI industry, supported by government agency and corporate users, has resulted in the Expert Group being entirely ignored, and such a substantial emphasis being placed on industry stimulation that social considerations have been relegated to the role of constraints, resulting in a largely vacuous draft statute.

It would be seriously harmful to the EU if such a grossly inadequate Proposal were to become part of the law of the EU. There would also be serious implications outside the EU if it were enacted, or even if it were used as a reference-point in other countries. It is important that appropriate regulatory regimes emerge, both to protect people from irresponsible applications of technology and to guide and enable the use of such applications as have significant benefits and manageable side-effects and risks. However, such regimes need to move far beyond current, highly inadequate and largely ad hoc sets of 'ethical guidelines', and effectively operationalise substantive protections that encompass the full suite of 50 Principles for Responsible AI.


References

AHRC (2021) 'Human Rights and Technology Final Report' Australian Human Rights Commission, 2021, at https://humanrights.gov.au/our-work/rights-and-freedoms/publications/human-rights-and-technology-final-report-2021

Clarke R. (2017) 'Guidelines for the Responsible Application of Data Analytics' Computer Law & Security Review 34, 3 (May-Jun 2018) 467-476, PrePrint at http://www.rogerclarke.com/EC/GDA.html

Clarke R. (2019a) 'Principles and Business Processes for Responsible AI' Computer Law & Security Review 35, 4 (July-August 2019) 410-422, PrePrint at http://www.rogerclarke.com/EC/AIP.html.
See also the current versions of The 50 Principles in HTML, and The 50 Principles in PDF

Clarke R. (2019b) 'The OECD's AI Guidelines of 22 May 2019: Evaluation against a Consolidated Set of 50 Principles' Xamax Consultancy Pty Ltd, May 2019, at http://www.rogerclarke.com/EC/AI-OECD-Eval.html

Clarke R. (2019c) 'The Australian Department of Industry's 'AI Ethics Principles' of September / November 2019: Evaluation against a Consolidated Set of 50 Principles' Xamax Consultancy Pty Ltd, November 2019, at http://www.rogerclarke.com/EC/AI-Aust19.html

Dickson B. (2018) 'What is the AI winter?' Techtalks, 12 November 2018, at https://bdtechtalks.com/2018/11/12/artificial-intelligence-winter-history/

EC (2019) 'Ethics Guidelines for Trustworthy AI' High-Level Expert Group on Artificial Intelligence, European Commission, April 2019, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60419

EC (2021) 'Proposal for a Regulation on a European approach for Artificial Intelligence' European Commission, 21 April 2021, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788

* Primary Document (107 pp.) at https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF

* Annexes (16 pp.) at https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_2&format=PDF

* Press Release at https://ec.europa.eu/commission/presscorner/detail/en/IP_21_1682

Greenleaf G. (2021a) 'Global Tables of Data Privacy Laws and Bills' 7th Ed, January 2021, Privacy Laws & Business International Report 169 (February 2021) 6-19, at https://ssrn.com/abstract=3836261 or http://dx.doi.org/10.2139/ssrn.3836261

Greenleaf G. (2021b) 'The 'Brussels effect' of the EU's 'AI Act' on data privacy outside Europe' Privacy Laws & Business 171 (June 2021) 1-7, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3898904

Veale M. & Borgesius F.Z. (2021) 'Demystifying the Draft EU Artificial Intelligence Act' SocArXiv, 6 July 2021, at https://osf.io/preprints/socarxiv/38p5f/

Wachter S., Mittelstadt B. & Floridi L. (2017) 'Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation' International Data Privacy Law 7, 2 (May 2017) 76-99, at https://academic.oup.com/idpl/article/7/2/76/3860948


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.


