
Roger Clarke's 'Evaluation of EC's AI Regulation'

Would the European Commission's Proposed Artificial Intelligence Act
Deliver the Necessary Protections?

Annex 1: High-Risk AI Systems, and The 50 Principles

This is an Annex to the article of the above name.

Version of 31 August 2021

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2021

Available under an AEShareNet Free
for Education licence or a Creative Commons 'Some
Rights Reserved' licence.

This document is at http://www.rogerclarke.com/DV/EC21-Ann1.html


This Annex contains extracts from (EC 2021), interpretive comment, and [ cross-references to the corresponding elements of The 50 Principles, enclosed within square brackets ].

These provisions apply only to those High-Risk AI Systems that do not enjoy an exemption, including by virtue of any of Arts. 2.1(a), 2.1(b), 2.2, 2.3 and 2.4, although the meaning of Art. 2.2 intersects with the provisions of Art. 6 in ways that appear to be difficult to determine.

Note also that the, possibly many, exemptions of AI systems are exemptions-of-the-whole; that is to say, none of the provisions apply, and hence exempted high-risk AI systems are subject only to the non-regulation of level (4) 'All Other', plus the possibility of the very limited transparency requirement for 'level' (3) 'Certain AI Systems'.

Risk Management System (Art. 9)

"A risk management system shall be established, implemented, documented and maintained ... " (Art. 9.1). A lengthy recipe is expressed, broadly reflecting contemporary risk assessment and risk management practices. The responsibility is implicitly addressed to providers and appears not to be addressed to users. It does not expressly require any testing in real-world contexts.

In addition, the Art. 9 provisions are subject to Art. 8.2, which heavily qualifies the provisions by making compliance with them relative to "the intended purpose". Not only is this sufficiently vague as to represent a very substantial loophole, it entirely ignores any other possible uses, including already-known actual uses, and their impacts and implications.

Fundamentally, however, the provision is misdirected. The notions of risk assessment and risk management are understood by organisations as adopting the perspective of the organisation itself, i.e. this is about 'risk-to-the-organisation' assessment and management. The notions of Multi-Stakeholder Risk Assessment and Management are not unknown (see, for example, s.3.3 of Clarke 2019), but they remain at this stage a long way from mainstream understanding and use.

The term 'impact assessment' is commonly used where organisations are required to consider the dis/benefits and risks faced by users and usees of IT systems generally. The term 'impact assessment' is used 13 times in the EC Proposal, but none of those uses relates to the responsibilities of providers and users. All but one relate to the process conducted by the EC in developing the proposal, and the remaining one relates to Data Protection Impact Assessments under the GDPR. Moreover, the mentions of the term 'impact' in relation to affected individuals and fundamental rights are all in discursive prose or in obligations on the EC itself. The term does not occur in passages that relate to risk assessment and management by providers or user organisations.

A few passages could be read as requiring examination of broader interests. The statement is made that "Risks ... should be calculated taking into account the impact on rights and safety" (EM3.1, p.8); but this is only in a report on the EC's own stakeholder consultations. A single relevant instance of the expression "risk to" appears in the EC's Proposal: "Title III contains specific rules for AI systems that create a high risk to the health and safety or fundamental rights of natural persons" (EM 5.2.3, p.13). An exception relates to impacts on children: "When implementing the risk management system ..., specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children" (Art. 9.8).

Nothing else in Art. 9 suggests to the reader that the work is to reflect the interests of multiple stakeholders, and in particular the interests of the affected parties. It is very difficult to believe that one sentence in the Explanatory Memorandum and a single children-specific clause would result in providers dramatically shifting their understanding of the scope of risk assessment and management. It would require a very broad reading of the EC Proposals by the courts for 'risk assessment and risk management' to be interpreted as requiring providers and user organisations to conduct 'impact assessments' on individuals affected by use of AI systems.

In short, if the objective was to protect the public, this is a highly ineffectual provision.

[ Art. 9 generally makes some contribution to P1.4 ("Conduct impact assessment, including risk assessment from all stakeholders' perspectives"), but only a small contribution, because risk assessment is conducted from the perspective of the provider, perhaps with the interests of user organisations also in mind. An exception exists in relation to reflecting the interests of children, by virtue of Art. 9.8. ]

[ Art. 9.4 corresponds to P4.4 ("Implement safeguards to avoid, prevent and mitigate negative impacts and implications"). ]

Data Governance and Management Practices (Art. 10)

"Appropriate data governance and management practices shall apply for the development of high-risk AI systems [generally]...[concerning in particular] (Art. 10.6, 10.2):

(a) "the relevant design choices;

(b) data collection;

(c) relevant data preparation processing operations, such as annotation, labelling, cleaning, enrichment and aggregation;

(d) the formulation of relevant assumptions, notably with respect to the information that the data are supposed to measure and represent;

(e) a prior assessment of the availability, quantity and suitability of the data sets that are needed;

(f) examination in view of possible biases;

(g) the identification of any possible data gaps or shortcomings, and how those gaps and shortcomings can be addressed".

[ Art. 10.2 articulates the first part of P7.2 ("Ensure data quality ..."), but does not address the second part ("Ensure ... data relevance"). It makes no contribution to P7.4 (re data security safeguards), nor to P7.3 ('justification for the use of sensitive data'). ]

[ Art. 10.2(f) fails to fulfil even the data-related aspects of P5.1 ("Be ... impartial ... avoid unfair discrimination and bias ..."), because it requires only examination, and fails to actually require that bias be avoided. ]

[ When compared with the Guidelines for the Responsible Application of Data Analytics (Clarke 2017), the articulation in the Art. 10.2 sub-sections is vague, and provides coverage of few of the 9 data-related Guidelines 2.2-2.8, 3.3 and 3.5. ]

Some additional requirements are imposed on a sub-set of "high-risk AI systems", those that "make use of techniques involving the training of models with data" (Art. 10.1).

"Training, validation and testing data sets" are required to "be relevant, representative, free of errors and complete" and to "have the appropriate statistical properties" (Art. 10.3).

[ This further articulates the first part of P7.2 ("Ensure data quality ..."), and an element of G3.5 ("satisfy threshold tests ... in relation to data quality"). ]
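
By way of illustration only - the EC Proposal specifies nothing of the kind - the following sketch suggests what minimal checks against the Art. 10.3 criteria of completeness, representativeness and 'appropriate statistical properties' might look like in practice. The dataset, column names, reference shares and thresholds are all hypothetical, and real assessments would need to be considerably richer.

    # Illustrative sketch only: simple checks loosely corresponding to the Art. 10.3
    # wording ("relevant, representative, free of errors and complete", "appropriate
    # statistical properties"). All column names and thresholds are hypothetical.

    import pandas as pd
    from typing import Any, Dict, Optional

    def check_training_set(df: pd.DataFrame,
                           protected_attr: str = "gender",          # hypothetical column
                           reference_shares: Optional[Dict[str, float]] = None,
                           max_missing_rate: float = 0.01,
                           max_share_deviation: float = 0.05) -> Dict[str, Any]:
        """Return pass/fail indicators for a handful of basic data-set checks."""
        results: Dict[str, Any] = {}

        # 'Free of errors and complete': proportion of missing values per column.
        missing_rates = df.isna().mean()
        results["completeness_ok"] = bool((missing_rates <= max_missing_rate).all())
        results["worst_missing_rate"] = float(missing_rates.max())

        # 'Representative': compare observed shares of a protected attribute with
        # reference shares for the population the system is intended to serve.
        if reference_shares is not None:
            observed = df[protected_attr].value_counts(normalize=True)
            deviations = {group: abs(float(observed.get(group, 0.0)) - share)
                          for group, share in reference_shares.items()}
            results["representativeness_ok"] = all(
                dev <= max_share_deviation for dev in deviations.values())
            results["share_deviations"] = deviations

        # 'Appropriate statistical properties': record basic descriptive statistics
        # for numeric columns, so that a reviewer can examine them.
        results["numeric_summary"] = df.describe().to_dict()

        return results

Nothing in Art. 10 indicates where thresholds of this kind would come from; presumably they would be left to harmonised standards or to each provider's own quality management system.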

"Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used" (Art. 10.4).

[ This addresses G2.1 ("Understand the real-world systems about which inferences are to be drawn and to which data analytics are to be applied"). ]

Art. 10.5 contains a very complex 99-word sentence whose core expression appears to be "may process special categories of personal data ... subject to appropriate safeguards ... where anonymisation may significantly affect the purpose pursued". The effect appears to be to authorise, in some contexts, for some purposes, highly sensitive personal data to be used without anonymisation, and merely subject to some other, less effective protections ("security and privacy-preserving measures, such as pseudonymisation, or encryption"). This appears to be a designed-in loophole to override protections in other laws - and potentially a quite substantial loophole.

[ This appears to authorise what would otherwise be breaches of existing safeguards implementing P7.3 ('justification for the use of sensitive data') and G2.5 ("ensure ... de-identification (if the purpose is other than to draw inferences about individual entities)"). Because it provides an exemption, it also appears to be a negation of P1.8 ("Justify negative impacts on individuals ('proportionality')"). ]

Technical Documentation (Art. 11)

"[T]echnical documentation ... shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date [and] drawn up in such a way [as] to demonstrate that the high-risk AI system complies with the requirements set out in [Arts. 8-15]" (Art. 11.1)

[ This is merely an enabler of parts of P7 ("Embed quality assurance"). It addresses a small part of P10.2 ("Comply with [regulatory processes] ..."). ]

Record-Keeping (Art. 12)

"High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (`logs') ... [conformant with] recognised standards or common specifications ... [ensuring an appropriate] level of traceability ... " (Arts. 12.1-3). Cross-references to Arts. 61 and 65 appear to add nothing to the scope or effectiveness of the provision.

For remote biometric identification AI systems, some specific capabilities are listed, generally consistent with logging conventions (Art. 12.4).

[ This implements only a small element within P6.2, which states "Ensure that data provenance, and the means whereby inferences are drawn, decisions are made, and actions are taken, are logged and can be reconstructed". Instead of "means", the EC Proposal requires only "events", and it omits the vital criterion: that the inferencing and decision-making processes can be reconstructed. ]

[ The provision appears not to make any contribution to G4.6, which requires that "mechanisms exist whereby stakeholders can access information about, and if appropriate complain about and dispute interpretations, inferences, decisions and actions", because it is expressed in the passive voice, and the only apparent rights of access are by the provider themselves and, under limited circumstances explained in Art. 23 and lengthily in Art. 65, the relevant national competent authority and/or national market surveillance authority. ]
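
To illustrate the distinction being drawn here - this is not drawn from the EC Proposal, and all names are hypothetical - the following sketch contrasts the event-level logging that Art. 12 appears to contemplate with the reconstruction-oriented logging that P6.2 calls for.

    # Illustrative contrast only, with hypothetical names. Art. 12 requires the
    # recording of "events"; P6.2 requires that the means whereby inferences are
    # drawn can be reconstructed, which implies recording considerably more.

    import json
    import time
    from typing import Any, Dict

    def log_event(log_path: str, event_type: str) -> None:
        """Event-level logging of the kind Art. 12 appears to contemplate."""
        record = {"timestamp": time.time(), "event": event_type}
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    def log_inference(log_path: str,
                      model_version: str,
                      data_provenance: Dict[str, Any],
                      inputs: Dict[str, Any],
                      output: Any,
                      rationale: str) -> None:
        """Reconstruction-oriented logging of the kind P6.2 calls for: the inputs,
        their provenance, the model version and the rationale are all retained,
        so that the inference can later be re-derived and explained.
        (Inputs and output are assumed to be JSON-serialisable.)"""
        record = {"timestamp": time.time(),
                  "model_version": model_version,
                  "data_provenance": data_provenance,
                  "inputs": inputs,
                  "output": output,
                  "rationale": rationale}
        with open(log_path, "a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

A log confined to the first form could satisfy the letter of Art. 12 while providing no basis for the reconstruction that P6.2 requires.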

Transparency to Users (Art. 13)

"High-risk AI systems shall be designed and developed in such a way [as] to ensure that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately" (Art. 13.1).

"High-risk AI systems shall be accompanied by instructions for use ...", including "its intended purpose" and its risk profile (Arts. 13.2-3). Nothing in the provision appears to require the user to apply the instructions for use, or constrain the user to only use it for the system's "intended purpose".

[ Art. 13.1 makes a limited contribution to P6.2 ("Ensure that ... the means whereby inferences are drawn ... can be reconstructed"). The contribution is very limited, in that transparency extends only to the user organisation and not to other stakeholders, and "sufficiently transparent" is far less than a requirement that "the means whereby inferences are drawn [and] decisions made" be communicated even to the user organisation. ]

[ Similarly, and very importantly, it fails to require the provider to enable the user organisation to comply with G4.9: "the rationale for each inference is [to be] readily available to [those affected] in humanly-understandable terms". ]

[ Arts. 13.2-3 merely facilitate communication of information along the supply-chain. They enable the possibility of the user organisation applying the tool in a manner that manages the risks to the people affected by its use, but in themselves the provisions do nothing to ensure responsible use of AI systems in the terms of The 50 Principles.]

[ The provision of information makes a contribution to the capability of the user organisation to fulfil G3.2: "Understand the ... nature and limitations of data analytic tools that are considered for use". ]

Human Oversight (Art. 14)

"High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons" (Art. 14.1). The term "effectively overseen" may (but may not) encompass detection of "anomalies, dysfunctions and unexpected performance" (14.4(a)), awareness of the possibility of "automation bias" (14.4(b)), an unclear expression 'correct interpretation of the output' (14.4(c)), and an ability "to decide ... not to use the high-risk AI system or [to] otherwise disregard, override or reverse the output" (14.4(d)) - but without any obligation to ever take advantage of that ability.

[ These provisions are enablers for P3.1 ("Ensure human control over AI-based artefacts and systems"), but whereas the inclusion of such features is a (qualified) obligation of providers, this provision alone does nothing to ensure the features are effectively designed, effectively communicated to user organisations and thence to users, and applied, appropriately or even at all. ]

The possible need for an "ability to intervene ... or interrupt through a 'stop' button or a similar procedure" (Art. 14.4) - again without any obligation to ever apply it - appears to contemplate automated decision-making and action. This is in contrast to the apparent exclusion of robotic action from scope, noted in section 4.1 as arising from the failure of the Art. 3(1) definition of AI system to go beyond "[data] outputs" to encompass actions in the real world.

Art. 14.5 applies solely to "AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons". It appears to state that measures required by Art. 14.3 in those cases are to "ensure that ... no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons". It is unclear what the expression "the identification resulting from the system" is intended to mean, let alone what it will be taken to mean by the many people who are expected to read and apply the provisions and/or other people's interpretations of the provisions.

[ These provisions may make small contributions to P3.1 ("Ensure human control over AI-based artefacts and systems") and P3.2 ("ensure human control over autonomous behaviour"). One reason that the contributions are so limited is that they relate only to the existence of means of human oversight, and do not actually require user organisations to apply those means. Another is that the poor standard of the drafting results in an absence of clarity about what, if anything, this Article requires to be done in particular circumstances. ]

[ Art. 14.5, which is applicable to only 1 of the 20 forms of 'high-risk AI system', fails to impose any responsibility on user organisations relating to human review. So these provisions do not satisfy P3.5 ("Ensure human review of inferences and decisions prior to acting on them"). (Art. 29 relating to user responsibilities is addressed below). ]

Accuracy, Robustness and Cybersecurity (Art. 15)

"High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity ..." (Art. 15.1) and are to be "resilient as regards errors, faults or inconsistencies" (Art. 15.3).

[ These provisions appear to be based on an inadequate understanding of the terms used. Accuracy is an attribute of data rather than of inferences or decisions, for which the criteria are validity and reasonableness. The term 'robustness' refers to the ability to function effectively despite environmental perturbance, and 'resilience' refers to the capacity to recover, and/or the rapid achievement of recovery, from loss of service as a result of environmental perturbance. The vague term 'cybersecurity' may refer to any, some or all of: assurance of service, assurance of sustained quality of service or of data, and assurance of access to data only by authorised entities and for authorised purposes. ]

[ These Articles could be interpreted to mean a very wide range of things, but in practice can be largely ignored by providers, because they are not sufficiently coherent to contribute to any of the P7 requirements ("Embed quality assurance"), nor even to the P8 requirements ("Exhibit robustness and resilience"). ]

"High-risk AI systems that continue to learn ... shall be developed in such a way [as] to ensure that possibly biased outputs due to outputs used as an input for future operations (`feedback loops') are duly addressed with appropriate mitigation measures" (Art. 15.3).

[ The sentiment expressed is positive, but the expression is so clumsy and unclear, and the scope so limited, that ample excuse exists for ignoring the requirement on the basis that it is unclear how it could be operationalised. It accordingly contributes only a little to P4.4 ("Implement safeguards to avoid, prevent and mitigate negative impacts and implications"), P7.1 ("Ensure effective, efficient and adaptive performance of intended functions"), and P7.6 ("Ensure that inferences are not drawn from data using invalid or unvalidated techniques"). It similarly offers little in relation to G3.2 and G3.4. ]

Procedural Provisions Imposed on Providers (Arts. 16-28)

A range of procedural obligations are imposed on providers, including a small number that appear to be additional to the limited requirements noted above.

Providers are required to have a quality management system in place, and it is to include "examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system" (Art. 17.1(d)).

However, the quality of the quality management system is qualified, in that it "shall be proportionate to the size of the provider's organisation" (Art. 17.2). This creates a substantial loophole whereby arrangements can be made for risk-prone AI systems to be provided by small organisations with less substantial obligations than larger providers.

[ This contributes to P7 ("Embed quality assurance"), but generally without articulation. It does address P7.7 ("Test result validity"), but without expressly requiring that problems that are detected are addressed, and it is subject to a vague qualification in relation to the provider's size. ]

"Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system which they have placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary corrective actions to bring that system into conformity, to withdraw it or to recall it, as appropriate" (Art. 21).

However, the provision does not make clear who determines whether an AI system is non-compliant, nor how and to what extent the requirement is enforceable.

[ The provision makes no material contribution to P9.1 ("Ensure that the responsible entity is apparent or can be readily discovered by any party"), because none of the parties affected by the AI system, and no regulatory agencies, appear to be involved. Nor does it contribute in any way to P9.2 ("Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred"). ]

Moreover, the provider is absolved of all responsibilities, despite the fact that the system may continue to be used for the original "intended purpose" as well as the 'modified intended purpose' (Art. 28.2).

[ This appears to seriously compromise such protections as were afforded by the provisions noted above. ]

A Provider's Responsibilities May Shift to Users (Art. 28)

"[A] user ... shall be subject to the obligations of the provider under Article 16 [if] they modify the intended purpose of a high-risk AI system ... [or] make a substantial modification to the high-risk AI system" (Art. 28.1).

The effect of Art. 16 appears to be to apply many of the above obligations, arising from Arts. 8-26. However, the drafting quality is deplorable, and years of litigation would be necessary to achieve reasonable clarity about what Art. 16 does and does not require of user organisations.

[ To the extent that the user organisation has a less distant relationship with the individuals affected by the use of the AI system (although not necessarily a close one), this may improve the contribution of some of the provisions to some of The 50 Principles and some of the Guidelines. However, the scope for user organisations to dodge responsibilities is enormous, even greater than that afforded to providers. ]

User Responsibilities (Art. 29)

"Users of high-risk AI systems shall use such systems in accordance with the instructions of [sic: should be 'for'?] use accompanying the systems, pursuant to paragraphs 2 and 5" (Art. 29.1).

[ Multiple weaknesses in the Art. 13 provisions in relation to 'instructions for use' were noted earlier. These weaknesses are compounded by the failure to impose any obligations on the user organisation in relation to transparency to those affected by use of the AI system. This provision accordingly appears to add little to the paltry protections contained in the earlier Articles. ]

"[T]o the extent the user exercises control over the input data, that user shall ensure that input data is relevant in view of the intended purpose of the high-risk AI system" (Art. 29.3).

[ Superficially, this seems to make a contribution to the second part of P7.2 ("Ensure ... data relevance"). However, it creates a massive loophole by permitting a user organisation to avoid responsibility for only using data that is relevant, on the illogical basis that they don't 'exercise control' over that data. The text is vague, and hence fulfilment of the condition is likely to be very easy to contrive. So the EC Proposal permits irrelevant data to cause harm to affected individuals, without any entity being liable for that harm. ]

User organisations are to monitor and report on risks as defined in Art. 65(1) (Art. 29.4). However, the scope of the designated EU Directive, and hence of Art. 65(1), appears to be limited to "harm" to "health and safety", neither of which appears to be defined in that Directive.

User organisations are also to report on "any serious incident or any malfunctioning ... which constitutes a breach of obligations ... ", as per Art. 62 (Art. 29.4). The scope of this reporting obligation is anything but clear, as is its enforceability.

[ It is not clear that these provisions make a material contribution to P9.1 ("Ensure that the responsible entity is apparent or can be readily discovered by any party"), because none of the parties affected by the AI system are involved. Nor do they contribute in any way to P9.2 ("Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred"). ]

"Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control" (Art. 29.5).

[ The corresponding obligation imposed on providers was noted above as being inadequate. The imposition on user organisations makes even less contribution to P6.2 ("Ensure that the means whereby inferences are drawn, decisions made and actions are taken are logged and can be reconstructed"), because it enables the user organisation to absolve itself of any responsibility to have control over the logs. ]

Machinery Provisions (Arts. 30-51)

Arts. 30-51 specify eurocratic processes for the processing of notifications, standards, conformity assessment, certificates and registration. They are machinery provisions, and do not appear to contain any substantive contributions to protections.

Governance and Enforcement (Arts. 56-68)

Arts. 56-68 and 71-72 specify governance arrangements.

"Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system" (Art. 61). Providers are to provide access to market surveillance authorities under specified conditions (Art. 64). The market surveillance authority can require a provider to "withdraw the AI system from the market or to recall it ..." (Arts. 67-68). Obligations under Arts. 62 and 65.1 were referred to above (because they were referenced by Art. 29.4).

Art. 71.1 requires that there be "[effective, proportionate, and dissuasive] penalties ... [and] all measures necessary to ensure that they are properly and effectively implemented".

[ The creation of the possibility that some relevant parties may become aware of who the responsible entity is represents only a small contribution to P9.1 ("Ensure that the responsible entity is apparent or can be readily discovered by any party"). ]

[ Arts. 63-68 and 71-72 make a more substantial contribution to P9.2 ("Ensure that effective remedies exist ..."), but affected individuals and their advocates are excluded from the scheme. ]

[ The provisions within the EC Proposal do not appear to satisfy P10.1 ("Ensure that complaints, appeals and redress processes operate effectively"). ]

[ The contribution to P10.2 ("Comply with processes") is only modest. The market surveillance authority has powers, but the regime appears to create no scope for enforcement of any aspects of the EC Proposals by individuals or by public interest advocacy organisations. ]

____________

A generic issue that arises is that the provisions are generally procedural in nature, rather than outcome-based. They read more like a generalised ISO standards document, written by industry, for industry, than a regulatory instrument.


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.


