Review Draft of 3 May 2025
© Xamax Consultancy Pty Ltd, 2025
Available under an AEShareNet licence or a Creative Commons licence.
This document is at http://rogerclarke.com/EC/RRE-AIA.html
The Artificial Intelligence Act (AIA) of the European Union has been portrayed as a world-leading regulatory regime that will protect the public against the technological threats inherent in AI, and encourage adoption of beneficial applications. A great deal of analysis has been published since the law was proposed in 2021. This has continued following its passage in 2024, and the coming into force of the first of its provisions in 2025. This contribution evaluates the AIA not as law, but as the underpinnings of a regulatory regime -- or, more specifically, of four separate regimes for different categories of application.
Legal analyses have identified complexities arising from the application of multiple bodies of law in unusual ways. The first contribution of this article is to identify strangenesses also in the key terms and their definitions, which represent barriers to understanding by AI practitioners. A second contribution is examination of the four regulatory regimes as interventions into complex socio-technical systems, resulting in the identification of a great many ambiguities and exceptions likely to undermine attempts to achieve compliance, to reduce the incidence of negative impacts and implications, and to manage risks.
The work's third contribution is the application of an evaluation framework for regulatory regimes that provides a comprehensive view comprising 16 criteria for an efficacious scheme. The scoresheets provide a quantitative indicator of the extent to which all of the AIA's regulatory regimes fail against those criteria. The factor-weightings and scores are of course dependent on subjective judgements. On the other hand, the provision of a considerable degree of structure to the assessment of regime performance against defined criteria enables others to refine the analysis, or to conduct their own evaluations against the standard, or against an enhanced or alternative version of the criterion-set. The results of this and subsequent evaluations should assist in the emergence of some future scheme that, unlike the AIA, will succeed in bringing order to the current chaos of AI applications.
Artificial Intelligence (AI) technologies are widely acknowledged to have substantial negative impacts and implications and to carry risks (Blauth et al. 2022). Widespread concern exists about AI's impacts, and there is demand worldwide for regulatory action (Gillespie et al. 2025). The EU Artificial Intelligence Act (AIA) is claimed to be the first substantive endeavour to regulate applications of AI technologies, and is progressively coming into force between March 2024 and August 2027. The AIA seeks to encourage development, deployment and use of AI, in order to extract economic benefits from it. Many writers see the AIA as being likely to exhibit the 'Brussels Effect' (Bradford 2020), by influencing the practice of AI well beyond the European Union (Greenleaf 2021, 2024) -- although some conclude differently (e.g. Ebers 2024 pp.18-20).
The AIA has been the subject of a vast amount of analysis since its path towards enactment began in April 2021 (EC 2021). It is large and complex, and contains about 90,000 words in 100 pp. of dense text. The Recitals alone contain 34,500 words in 180 numbered paragraphs over 31 pp. This has since been augmented by guidance totalling a further 135 pp. and 60,000 words, relating to only the first provisions that came into effect (EC 2025). There are 80,000 words in the Torah, 64,000 in the Gospels, and 77,000 in the Quran. In its own way, the AIA may give rise to as rich a diversity of interpretations as those Holy Books.
The AIA uses a number of key terms in novel ways. The phrase 'AI System' occurs 1,080 times, 'risk' qualified in several ways occurs 776 times, 'high-risk' 472 times and 'systemic risk(s)' 80 times. The usages of the key terms, combined with the richness of their contexts of use in the document, present to regulatees as a blizzard of words, and provide researchers and consultants alike with an intellectual banquet on which to feast.
The analysis presented in this article is conducted from the perspective of a consultant and researcher in strategic and policy aspects of transformative and disruptive information technologies. It reflects insights in the substantial prior literature in the field of technology law, including early analyses such as Edwards (2022), optimistic ones such as Gstrein et al. (2024) and Cancela-Outeda (2024), and particularly critical works in the business-oriented information systems literature such as Vainionpaa et al. (2023) and Woersdoerfer (2024), and in law, such as Veale & Borgesius (2021) and Wachter (2024).
The work reported here is intended to complement the many deeper analyses of specific aspects of the Act. The analysis does not consider the AIA as law, but rather as the underpinnings of a regulatory regime. It applies a framework for the evaluation of regulatory regimes which is presented in detail in the companion article (Clarke 2025b). Its focus is on key factors that are likely to determine the overall efficacy of the AIA.
The article commences by outlining the AIA, and drawing attention to aspects of it that are of significance to an assessment of its efficacy as a regulatory instrument. Section 3 identifies and describes five categories of objects, of which four are subject to separate regulatory regimes of varying intensity. In each case, a wide range of uncertainties and exceptions are noted, and questions also arise in relation to enforcement arrangements. In section 4, an evaluation framework for regulatory regimes is outlined, by reference to a companion article. Section 5 then applies the evaluation framework to each AIA regulatory regime in turn, enabling subjective scores to be assigned against each of 16 criteria. The score-sheets bring into focus the extent to which the uncertainties and exceptions undermine the efficacy of the AIA's provisions. Conclusions are drawn in section 6.
The AIA was prepared during 2021-24 by the European Commission (EC, which is the bureaucracy of over 30,000 public service employees), passed in March 2024 by the EU Parliament (which represents citizens), and adopted in May 2024 by the Council of the EU (which represents the governments of the member states). The AIA is formally an EU Regulation, 2024/1689, and hence is a law that applies to and within all member states, as part of each member state's law. (EU Directives, on the other hand, set goals for member states to implement). The AIA became EU law on 1 August 2024, with various elements taking effect from 2 February 2025, 2 August 2025, 2 August 2026 and 2 August 2027 (Article 113).
The description in this and the following section draws on the AIA and the EU's high-level summary (EC 2024), which is the source of those quotations below for which express citations are not provided. Reference has also been had to a variety of secondary sources in the refereed literature, which are cited where they are relied upon. The remainder of this section outlines the AIA's purposes and discusses definitional aspects of the key terms it uses. The following section 3 shifts the focus to the regulatory regimes it establishes.
AIA's objective is declared as (AIA, Recital (1), emphases added, and reformatted for clarity):
The purpose of this Regulation is
The primary motivation is therefore declared as being the stimulation of economic development by supporting AI innovation and promoting its uptake. Control over negative impacts, implications and risks is expressed firstly as a constraint on the economic objective ("while ensuring") and then as an apparently secondary objective ("to protect"). Hence, although the large majority of the provisions relate to protections, the ultimate purpose of those protections is to overcome barriers to adoption of AI technologies. Veale & Borgesius (2021) note the strangeness of this approach (p.98):
"The proposal mixes reduction of trade barriers with broad fundamental rights concerns in a structure unfamiliar to many information lawyers ... [which] brings a range of novelties and tensions" (p.98)
On the other hand, whereas many regulatory regimes focus on the achievement of public trust (i.e. by whatever means), the AIA contains few mentions of 'trust' and 19 of 'trustworthiness', implying an endeavour to earn public trust rather than relying largely on 'public education', in the sense of public relations and marketing.
The drafters avoided having to define AI, and instead chose as the focal-point systems that are claimed to have characteristics that together make 'AI' a suitable matter for the imposition of regulatory measures. An "AI system" means (Art. 3(1)):
a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
The argument can be advanced that AI means too many things to too many people and hence skirting around its meaning is an appropriate strategy. On the other hand, AI researchers, developers, vendors and adopters need to understand the AIA's technological scope in order to gauge the extent to which their technology has the attributes that the definition of "AI system" refers to. Achieving a reliable interpretation turns out to be quite challenging. Multiple features are optional, including 'a high level of autonomy' ("varying levels"), 'adaptiveness' ("may exhibit"), 'explicit objectives' ("explicit or implicit") and any particular form(s) of 'output' ("such as"). The remaining, definitional attributes appear to be (i) 'machine-based', (ii) '(some) autonomy', (iii) 'objectives' and (iv) 'inference from input to generate output'.
It would seem that 'machine' is not intended in its original sense (paraphrasing OED IV.6.b) of 'a device with multiple parts with defined functions, involving mechanical or electrical power in the performance of defined work', but rather a device utilising electrical phenomena for the handling of data, without any requirement that force be applied to any real-world object, i.e. a computer. The following interpretation is therefore suggested as the scope of "AI system" in the AIA, operationally defined:
a computer-based set of interacting processes that has some level of autonomy and some sense of objectives, and that draws inference(s) from input to generate output
A first important consideration is that the notion of 'inference' does not depend on the involvement of a high-level intelligence. All that matters is that input-conditions exist that give rise to an output. The process whereby the output is generated can be expressed in various ways, including algorithmic or process form, or as antecedent-consequent / logical rules. The various generations of software development tool are described in Clarke (1991). Software developed using each of a machine-language, an assembler language, a procedural language, a rule-based expert system, and a purely empirical approach such as artificial neural networks, infers from input to generate output, i.e. each satisfies definitional attribute (iv). Any such item of software can be interpreted as having at least implicit objectives (iii). Each has a delegation to perform its pre-programmed functions and hence operates with a (perhaps low) level of autonomy (ii), and runs in one or more computing devices and is thereby machine-based (i). The AIA definition of an AI system is therefore not limited to any particular generation of development tool, and requires no particular attributes of coding techniques or outcomes that were not already apparent in (at the very latest) the first administrative system (bakery valuations) in November 1951 (Land 2022), which first ran some years before the OED's earliest recorded uses of the terms 'artificial intelligence' (1955) and 'AI' (1963).
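The breadth of that reading can be made concrete with a minimal sketch. The following deliberately trivial, human-authored rule set is hypothetical (the function, rules and values are invented for illustration), yet it arguably satisfies each of the four definitional attributes:

```python
# Illustrative only: a hypothetical, trivially simple rule-based program, written to
# test the AIA's four apparent definitional attributes of an "AI system".
# Nothing here uses machine learning; the rules are defined solely by a person.

def recommend_overdraft_limit(income: float, arrears_count: int) -> dict:
    """Infers an output (a recommendation) from input, using fixed antecedent-consequent rules."""
    # (iii) implicit objective: keep lending exposure low for riskier applicants
    if arrears_count > 2:
        decision = "refuse"          # (iv) inference: input conditions give rise to an output
        limit = 0.0
    elif income < 30_000:
        decision = "approve-low"
        limit = 500.0
    else:
        decision = "approve-standard"
        limit = round(income * 0.05, 2)
    return {"decision": decision, "limit": limit}

if __name__ == "__main__":
    # (i) machine-based: it runs on a computer
    # (ii) some level of autonomy: once invoked, it produces recommendations
    #      without further human intervention
    print(recommend_overdraft_limit(income=42_000, arrears_count=1))
    # e.g. {'decision': 'approve-standard', 'limit': 2100.0}
```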
The definition of AI system adopted in the AIA can therefore be argued to fail the test that the Commission set for it, viz.: "the definition should be based on key characteristics of AI systems that distinguish it from simpler traditional software systems or programming approaches and should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations" (AIA 2024, Recital (12)). Ebers (2024) similarly concludes "the AI Act applies not only to machine learning, but also to logic- and knowledge-based approaches (recital 12 AI Act). As a result, even deterministic software systems [may be] subject to the highest requirements" (p.12).
This stands in contrast with past and present understanding among AI practitioners of what the term AI means. Original-AI can be reasonably depicted, paraphrasing McCarthy et al. (1955, p.12, 1st para.) as:
accurate simulation on a computer of all aspects of learning and [human] intelligence more generally
AIA's definition of an "AI system" incorporates aspects of computing, autonomy, objectives, inferencing, outputs, decisions, and learning (a notion related to adaptability), but lacks any sense of a simulation of the integral whole implied by the term 'intelligence'.
McCarthy's original conception of AI has come to be referred to as 'artificial general intelligence' or 'strong AI', because it is recognised as 'aspirational', or, less charitably, motivational but unachievable. The large majority of work in the field has long since adopted the approach that human features that contribute to intelligence are 'inspirational' (Boden 2016, Lieto & Radicioni 2016). An exemplar of this is the metaphorical use of '(artificial) neural networks' to refer to the most common base for the branch of AI called machine learning (AI/ML). The following was proposed in Clarke (2023) as an operational definition of AI as practised in recent decades, paraphrasing multiple sources, including Albus (1991), Russell & Norvig (2003) and McCarthy (2007):
Intelligence is exhibited by an artefact if it:
The sense in which AIA's definition uses 'AI' is far removed from this more contemporary interpretation. AIA requires only weak forms of perception and goal-driven formulation of action, no element at all of cognition, and autonomy at most up to the point of decision, not action. The AIA notion is therefore both much narrower than conventional use of the term (as discussed immediately above), and much broader (as discussed earlier in this section).
These significant differences between the AIA notion and the conception common among researchers, developers, vendors and adopters of contemporary AI create a strong likelihood that the provisions of the statute will map poorly to the realities of the relevant artefacts' features. This makes it very likely that the legal requirements will not be comprehensible to the individuals who are intended to comply with them. The fact that there is a great deal of commonality between the AIA's definition and those of the Organisation for Economic Cooperation and Development (OECD 2024, p.4) and the Council of Europe (CoE 2024, p.3) suggests that this incomprehensibility problem may be becoming entrenched.
Generative AI (GenAI) burst into public prominence during 2022-23, during the middle part of the gestation and negotiation period for AIA, 2021-24. The EC felt the need to react swiftly, and, despite the fluidity of an immature technology, sought to ensure that a particular aspect of GenAI was addressed by the Act. The AIA defines the term "general-purpose AI (GPAI) system" to mean (Art.3(66)):
an AI system which is based on a general purpose AI model [GPAI model], that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems
"GPAI model" means (Art.3(63)):
an AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable to competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities
A further term, 'GPAI model with systemic risk' is discussed in section 2.5 below.
The very apparent desire for a 'black-box' / 'technologically neutral' definition has resulted in highly non-specific, even vacuous, expression, which invites an enormous range of interpretations. GenAI artefacts comprise deft combinations of Natural Language Understanding (NLU), Natural Language Generation (NLG) and a human-computer interface (currently referred to as a 'chatbot'), together with the key element of a Large Language Model (LLM) (Karanikolas et al. 2023, Clarke 2025). Given the immaturity of LLMs, of their combination with other technologies, and of their applications, there has to be considerable doubt about whether this approach can provide the basis for an efficacious regulatory regime.
AIA encompasses five categories of "AI systems", four of which are distinguished on the basis of what it refers to as their level of "risk". It specifies regulatory regimes in relation to four of the five categories, exempting from regulation what are referred to in this article as "Minimal risk AI systems". The AIA uses "Risk" in a very particular way, to mean (Art.3(2)):
the combination of the probability of an occurrence of harm and the severity of that harm
In the contexts of security, risk assessment and risk management, the term 'risk' is used in a wide variety of ways, referring variously to a general threat, a specific threat, an incident, harm, the likelihood of harm arising from an incident, or the residual likelihood of harm arising taking into account existing safeguards. Most coherently, it refers to:
the perceived likelihood of occurrence of harm arising to an asset as a result of a threatening event impinging on a vulnerability (Clarke 2015)
the likelihood that a source of hazard will turn into actual harm (Ebers 2024, p.3)
However, the US NIST Glossary entry recognises the blurring of the 'likelihood' criterion through some unclear merger with the notion of harm or 'adverse impacts' (NIST 2025):
A measure of the extent to which an entity is threatened by a potential circumstance or event, and typically a function of: (i) the adverse impacts that would arise if the circumstance or event occurs; and (ii) the likelihood of occurrence
It is noteworthy that the AIA's definition goes beyond all of these approaches by incorporating the novel and undefined criterion of 'the severity of harm'. It appears likely that the AIA notion of Risk will require a succession of judicial pronouncements before its meaning and application achieve adequate clarity. The AIA approach to risk assessment is critiqued in Novelli et al. (2024).
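The practical difference between these formulations can be illustrated with a minimal sketch. The events, the numerical values and the multiplicative 'combination' rule below are all assumptions, because the AIA does not state how probability and severity are to be combined:

```python
# Illustrative only: contrasting a likelihood-only reading of 'risk' with a reading that
# also weights the (undefined) 'severity of harm', as the AIA's Art.3(2) wording implies.
# The events, numbers and the multiplicative combination rule are assumptions for illustration.

events = {
    # event: (likelihood of harm, severity of harm on an assumed 0-10 scale)
    "minor mis-labelling of content": (0.30, 1),
    "wrongful denial of a social benefit": (0.05, 9),
}

for name, (likelihood, severity) in events.items():
    combined = likelihood * severity   # one plausible 'combination'; the AIA does not define one
    print(f"{name}: likelihood-only={likelihood:.2f}, likelihood x severity={combined:.2f}")

# A likelihood-only view ranks the mis-labelling as the greater risk (0.30 vs 0.05);
# weighting severity reverses the ordering (0.30 vs 0.45).
```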
The definition of one form of the fifth of the five categories addressed by the AIA, GPAI models, uses the novel term "systemic risk". AIA defines "systemic risk" as being (Art.3(65), emphasis added):
a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain
The term "GPAI model with systemic risk" is elsewhere defined as a GPAI model that either (Art.51-1, emphasis added):
(a) ... has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including indicators and benchmarks; or
(b) [is] based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII.
An attempt is made to operationalise the definition of 'high impact capabilities' in terms of the computational power used (Art.51-2), with the threshold measure adaptable by regulatory instrument (Art.51-3, emphasis added):
A general-purpose AI model shall be presumed to have high impact capabilities ... when the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25
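As a rough illustration of how the presumption might be tested in practice, the following sketch applies the widely cited rule of thumb that dense-transformer training compute is of the order of six times the parameter count times the number of training tokens. The approximation and the model sizes are assumptions for illustration, not part of the Act:

```python
# Illustrative only: a back-of-envelope check against the AIA's 10^25 FLOP presumption
# threshold (Art.51-2). The '~6 x parameters x training tokens' rule of thumb for dense
# transformer training compute is a rough approximation from the scaling-laws literature,
# and the model sizes below are hypothetical, not descriptions of any actual product.

AIA_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    return 6.0 * parameters * training_tokens

for params, tokens in [(7e9, 2e12), (7e10, 1.5e13), (1.8e12, 1.3e13)]:
    flops = estimated_training_flops(params, tokens)
    presumed_high_impact = flops > AIA_THRESHOLD_FLOPS
    print(f"{params:.0e} params, {tokens:.0e} tokens -> ~{flops:.1e} FLOPs; "
          f"presumed 'high impact capabilities': {presumed_high_impact}")
```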
The lengthy Recitals section within the AIA provides further explanation of "systemic risks" (plural), saying that they "include, but are not limited to, any actual or reasonably foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and economic security; the dissemination of illegal, false, or discriminatory content" (Recital 110), and "a general-purpose AI model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies, or significant impact on the internal market due to its reach. High-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models" (Recital 111, emphasis added). The circular nature of the final words of Recital 111 (effectively limiting it to the 'mostest advanced GPAI model(s)'?) creates challenges to understanding. A parallel endeavour to disentangle the semantic complexities of 'GPAI with systemic risks' is in Bygrave & Schmidt (2025, pp.8-10). The scope for diverse interpretations and uncertainties appears enormous.
The high-level summary (EC 2024) says that the AIA places "the majority of obligations" on "providers (developers)". In normal usage, the terms 'provider' and 'developer' are distinct: a developer performs functions early in a supply-chain, whereas a provider adds value further along that chain. The confusion is deepened by the reference to "Those that intend to place on the market" [which, in more common business dialect, appears to encompass developers, product-providers, re-sellers and service-providers] or "put into service" [which appears to encompass service-providers and "users (deployers)"] such that the "output is used in the EU" [irrespective of whether the "provider" is in the EU or elsewhere]. The occurrences in the summary of "user" appear to encompass firstly organisations, and secondly individuals applying the AI system in a self-employed professional capacity, i.e. as an independent contractor, sole trader or business partner. The term "end-user" (which appears only twice in each of the AIA and the EC's high-level summary) appears to apply to employees and contractors acting on behalf of an organisation, and individuals acting in a personal capacity.
More specifically, the categories of the entities on which the AIA imposes obligations are:
A "provider" (Art.3(3)): a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge
A "deployer" (Art.3(4)): a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity
An "operator" (Art.3(8)): a provider, product manufacturer, deployer, authorised representative, importer or distributor
The subsidiary terms "authorised representative", "importer" and "distributor" (but not "product manufacturer") are defined in conventional manner in Art.3(5)-(7). The Provider's "intended purpose" is defined, and distinguished from "reasonably foreseeable misuse" defined as use "not in accordance with its intended purpose" (Art.3(12)-(13)). Thereafter, the term "intended purpose" occurs many times throughout the Articles, whereas 'misuse' appears only in Arts.9, 10, 13 and 14. The few occurrences in the AIA Articles of "user" and "end-user" appear to impose no obligations on such an entity. This has since been confirmed (EC 2025, p.6, para. (18)).
Some uncertainty may arise about the scope of these definitions, and whether all categories of actor are encompassed. After a "provider" has "developed" a product and "placed it on the market", the product may be transacted through chains of organisations before being "used" by a "deployer" for an "intended purpose" and/or a "misuse", or solely for service-provision to other deployers. Clarity is needed about the obligations of, respectively, each organisation in supply chains or networks, and those "deployers" who make the "AI system" available only to "users" within their own organisation and/or "end-users" within or beyond its boundaries.
The following section describes the regulatory regimes created by the AIA, referring back to the definitional discussions in the present section where appropriate.
As outlined in Table 1, and discussed in sections 3.1-3.3 and 5.1-5.4 below, the AIA distinguishes four categories of "AI systems", depending on their level of "Risk", and claims to create distinct regulatory regimes for three of them.
A fifth category, and the fourth that is subject to a regulatory regime, is a "General purpose AI (GPAI) system", as defined in the previous section. This is addressed in sections 3.4 and 5.5. Enforcement aspects are considered in section 3.5.
"Unacceptable-Risk / Prohibited AI Systems" are defined in a closed-ended manner (Art.5). The eight items describe applications of "AI systems" rather than the characteristics of the artefact, or the process or technique(s) used to achieve an output or outcome. The Article contains 1,750 words in text of a moderate degree of complexity.
The eight items are subject to over 20 exclusions, based variously on the intended outcomes; the extent to which harm is caused; the attributes of the usees; the data involved; the source of the data; or the organisation performing the function or in which the function is performed. See Appendix 1.
The scope of exclusions is unclear and may well remain so, but appears to be very broad. For example, to be Prohibited under each of item (1) deception and (2) vulnerability exploitation, a very high bar is set, because an AI system must both "distort behaviour", and "cause" "significant harm". Similarly, (3) biometric categorisation only applies to a closed-ended list of "inferred sensitive attributes", and even then not if the biometric dataset was "lawfully acquired" (which would doubtless be claimed to be the case with most declared uses), and in all cases where the use is (by? for?) "law enforcement". Use for (5), which appears to define 'predictive policing', is permitted if it "augments" "human assessment".
Item (6) "compiling facial recognition databases is permitted, provided that it is "untargeted" and from any source other than "the internet" (which is not a source but a means of communication with sources) or "CCTV footage". Item (7) "inferring emotions" is permitted in most contexts, and even in "workplaces and educational institutions" if "for medical or safety reasons". Item (8) "'real-time' remote biometric identification (RBI)" appears to be permitted, both generally and for all law enforcement activities in relation to serious safety matters and serious crime. Legal advisors and consultants are obliged to draw the vast array of disqualifying conditions to the attention of their clients, and to assist them in the event that they seek to escape the prohibition by utilising any of the exceptions.
Neuwirth (2023), considering an early version of the AIA's prohibition provisions, drew attention to both their importance and the enormous challenges involved in defining boundaries and achieving desired regulatory effects. The Article came into effect on 2 February 2025. A very substantial guidance document was published 2 days later, still in draft (EC 2025). This appears to generally confirm the impression of the exceptions having very broad scope (e.g. PW 2025). It is far from clear that Art.5 will result in many activities ceasing or not proceeding through design to deployment.
Even if an AI system escapes prohibition, it may be a "High-Risk / Regulated AI System". This is also defined in terms of applications of "AI systems" rather than the characteristics of the artefact or the process or technique(s) used to achieve an output or outcome. The included AI system applications fall under two headings, expanded on in Appendix 2:
Four generic exceptions are declared, in relation to "a narrow procedural task", "improv[ing] the result of a previously completed human activity", adjuncts to "a completed human assessment" and "[being] a preparatory task to an assessment" that is relevant to an Annex III use-case (Art.6.3).
In addition to the four generic exceptions, the eight categories defined in Annex III are subject to a dozen exceptions within their detailed descriptions, and in some cases closed-ended lists. Lists are likely to have the effect of excluding unmentioned sub-categories of applications. Some forms of exception, such as "except when detecting financial fraud", create the possibility of 'laundering' systems by including that feature. The 'predictive policing' provision is very difficult to comprehend, which creates the risk it will be ignored.
There are four further, substantial and largely uncontrolled exceptions in the second bullet, supplemented by heavy qualification in the fourth bullet, whereby "providers" are authorised to self-assess whether any particular AI system is "high-risk", subject to the uncontrolled proviso "must document such an assessment before placing it on the market or putting it into service".
On the other hand, "an AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons" (Art.6.3). For this purpose, "profiling" means "profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679", commonly known as the General Data Protection Regulation (GDPR) (Art.3(52)). The GDPR definition of "Profiling" is (emphasis added):
any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements
This corresponds to the preparation and use of a model (a 'digital persona') associated with a particular entity or identity (Clarke 1994, 2014):
[A digital persona] is a model of [a particular] individual's public personality based on data and maintained by transactions, and intended for use as a proxy for [that] individual
This is quite different from another longstanding use of the term 'profiling' to refer to the generation and use of an abstract model of the key characteristics of a category of identities, such as a 'drug-mule', or a student who would benefit from a particular form of intervention (Clarke 1993, emphasis added)
[Profiling is] a technique whereby a set of characteristics of a particular class of person is inferred from past experience, and data-holdings are then searched for individuals with a close fit to that set of characteristics
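The distinction can be made concrete with a minimal sketch, in which the data structures, field names and matching rule are hypothetical. The first sense operates on a model of a particular person; the second searches data-holdings for people who fit an abstract class-profile:

```python
# Illustrative only: the two senses of 'profiling' distinguished above, sketched as data
# structures. The field names, values and threshold rule are invented for illustration.

# Sense 1 (GDPR Art.4(4), adopted by the AIA): a model of a *particular* person -
# a 'digital persona' built from that person's data and used as a proxy for them.
digital_persona = {
    "subject_id": "person-0172",
    "predicted_credit_reliability": 0.62,
    "predicted_health_risk": "elevated",
}

# Sense 2 (Clarke 1993): an abstract profile of a *class* of persons, against which
# data-holdings are then searched for close fits.
class_profile = {"min_cash_deposits_per_month": 8, "max_declared_income": 25_000}

def fits_class_profile(record: dict, profile: dict) -> bool:
    return (record["cash_deposits_per_month"] >= profile["min_cash_deposits_per_month"]
            and record["declared_income"] <= profile["max_declared_income"])

holdings = [
    {"subject_id": "person-0172", "cash_deposits_per_month": 2, "declared_income": 48_000},
    {"subject_id": "person-0455", "cash_deposits_per_month": 11, "declared_income": 19_000},
]
print([r["subject_id"] for r in holdings if fits_class_profile(r, class_profile)])
# -> ['person-0455']
```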
The unequivocal nature of the stipulation that 'all Annex III AI systems that perform profiling are High-Risk' is challenging to interpret. Of the eight categories in Annex III, only '2. Critical infrastructure AI systems' appear unlikely to perform profiling to 'evaluate aspects of a natural person'; whereas it appears that the other seven generally do so. The question arises as to whether the stipulation that they are automatically High-Risk overrides all, some, or even any, of the many exceptions built into the Annex III descriptions.
For those "AI systems" for which an escape-route does not exist, providers are subject to requirements defined in Arts.8-39. AIA Chapter III (High-Risk AI Systems) Section 2 (Requirements for high-risk Al systems) is written in a manner that creates interpretation challenges from regulatory and compliance perspectives. It comprises eight Articles, commencing with an introduction (Art.8), followed by substantive requirements in relation to a Risk management system (Art.9), Data and data governance (Art.10), Technical documentation (Art.11), Record-keeping (Art.12), Transparency and provision of information to deployers (Art.13), Human oversight (Art.14), and Accuracy, robustness and cyber-security (Art.15). Art.8.1 declares that "High-risk AI systems shall comply with the requirements laid down in this Section", but the clause uses the passive voice, there appears to be no universal statement that all compliance responsibilities rest with providers, and there are only incidental mentions of "provider" in Arts. 9-10 and 13-15, and none at all in Arts.11 and 12.
Art.13 re Transparency and provision of information to deployers is indicative of how what seem at first to be requirements melt away to nothingness. Providers of high-risk AI systems are nominally required to deliver 'transparency'. However, the relevant clauses are qualified out of existence by omitting any requirement of the AI system to explain the rationale underlying any inference it draws, decision it makes, or action it performs:
3. The instructions for use shall contain at least the following information:
(b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including: ...
(iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information that is relevant to explain its output; ...
(vii) where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it appropriately
The associated Art.86 re Right to explanation of individual decision-making similarly avoids delivering any such right. The active words are "the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken" (Art.86-1, emphases added). Neither the 'role' nor the 'main elements' represents an explanation of the rationale. Despite the emptiness of the requirement, the Article also incorporates a string of exceptions, some unclear and extensible. The complete absence of a right of access to the underlying rationale is reaffirmed by the absence from the 1160 words of Art.26 re Obligations of deployers of high-risk AI systems of any requirement to explain inferences, decisions and actions to individuals adversely affected by them. See also Varosanec (2022).
The Recitals contain fine words ("The deployer should also inform the natural persons about their right to an explanation provided under this Regulation" in 93, and "Affected persons should have the right to obtain an explanation where a deployer's decision is based mainly upon the output from certain high-risk AI systems" in 171, emphases added). As in many other places, the fine words in the Recitals are a facade, not to be relied upon.
Section 3 (Obligations of providers and deployers of high-risk Al systems and other parties) is, on the other hand, a little clearer. Art.16's title refers to "Obligations of providers". Providers have obligations under all of Arts.16-25. Deployers may have obligations arising from Art.20, and do in the case of Art.25 (where, rather confusingly, a deployer is "considered to be a 'provider'"), Art.26 (which is headed "Obligations of deployers of high-risk Al systems", and is, as noted earlier in relation to the absence of a right to an explanation, full of words, but far less full of regulatory effect) and Art.27. There are also various procedural obligations under Articles 28-39. Appendix 3 summarises the obligations arising from Arts.8-39.
The "High-Risk / Regulated AI Systems" provisions under Annex III come into effect on 2 August 2026, and those under Annex I on 2 August 2027. Longer deadlines are set for "AI systems that have been already placed on the market / put into service before the AI Act entered into force". It is possible that the fog currently surrounding these provisions may be a little less thick by the time they come into force.
Art.50 is headed "Transparency obligations for providers and deployers of certain AI systems" (emphasis added). The AIA uses the terms 'non-high risk' and 'not high-risk'. The high-level summary uses the term 'Limited-Risk AI Systems', and that is adopted here. There are five categories, expressed in four sub-paras. of Art.50. A careful analysis of these 'non-high risk' types of AI system is presented in Bygrave & Schmidt (2025, pp.2-11).
The first category is "AI systems intended to interact directly with natural persons" (Art.50-1). Examples used in the EC's high-level summary are "chatbots" (a term in widespread use to refer to human-computer interfaces) and "deepfakes" (which is a pejorative term for synthetic media creations that are designed to appear real, do so, and are or may be intended to deceive observers into thinking they are real). However, the sole obligation arising is on "providers" of "AI systems intended to interact directly with natural persons", and is that "the natural persons concerned are informed that they are interacting with an AI system, unless this is obvious ...". On the other hand, the high-level summary states that the obligation is intended to apply to "deployers": "developers and deployers must ensure that end-users are aware that they are interacting with AI (chatbots and deepfakes)" (EC 2024, emphasis added).
Based on the assessment of the meaning of "AI system" in section 2.2, any computer-based system is "obviously" an 'AI system'. If so, then all notices such as 'You are using an AI system' are arguably redundant, and hence the provision can be safely ignored. In addition, an exception is provided for "AI systems authorised by law to detect, prevent, investigate or prosecute criminal offences ...".
Further, the transparency obligations (Wachter 2024, p.683-684, emphasis added):
... leave much to be desired given the well-established harms that such systems may cause. For example, in the past, chatbots have advised users to take their own lives, given dieting tips to people battling eating disorders, and produced reputation-damaging outputs (e.g., false sexual assault charges against innocent people). Transparency alone is insufficient to address these issues
The second category is "AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text content" (Art.50-2). However, the sole obligation is on "providers", to "ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated". In addition, four exceptions exist, including "an assistive function for standard editing" (which appears to create an opportunity for avoidance that is very easily exploited), and "AI systems" "authorised by law to detect, prevent, investigate or prosecute criminal offences".
The third category is "an emotion recognition system or a biometric categorisation system" (Art.50-3). The sole obligation in this case is on "deployers", to "inform the natural persons exposed thereto of the operation of the system", which is presumably a requirement to inform them that the system is in operation, rather than to inform them how it works. A (potentially large) exception exists for "AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences". This appears to provide authorisation not only to law enforcement agencies, but also to various categories of public and private sector organisations. If so, the feature is not a protection, but rather a substantial contribution to the burgeoning surveillance society (Clarke 2022).
The fourth category is "an AI system that generates or manipulates image, audio or video content constituting a deep fake" (Art.50-4). The obligation is again on "deployers", but only to "disclose that the content has been artificially generated or manipulated". Exceptions include where "authorised by law" as expressed in a preceding paragraph, where "the content forms part of an evidently artistic, creative, satirical, fictional or analogous work or programme", and "where the AI-generated content has undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility for the publication of the content".
The fifth category is "an AI system that generates or manipulates text ..." (Art.50-4) for other than a law enforcement purpose, whose "deployers ... shall disclose that the text has been artificially generated or manipulated". However, this does not apply if the system's output has "undergone a process of human review or editorial control and where a natural or legal person holds editorial responsibility". This exception, applicable to both the fourth and fifth categories, invites such organisations as may be nominally subject to the provision to circumvent it.
The AIA envisages that the EC will "encourage and facilitate the drawing up of codes of conduct", in two cases "to facilitate the effective implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content" (Arts.50-7, 56), and in another case "including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements set out in Chapter III, Section 2" (Art.95). The efficacy of such measures as a regulatory mechanism is in considerable doubt.
During the 3-year period in which the AIA was negotiated through the EU's bureaucratic and legislative processes, Generative AI (OICT 2023, WIPO 2024, Clarke 2025a) burst on the scene, being "adopted more rapidly than both [PCs and the Internet]" (Bick et al. 2024). Central to these services have to date been 'Large Language Models' (LLMs). The EC responded to this innovation by adding provisions relating to "General purpose AI (GPAI) systems" and "GPAI models".
Obligations are imposed on providers of "general-purpose AI systems, generating synthetic audio, image, video or text content" (Art.50-2). The obligation is to "ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated". Four exceptions exist, including "an assistive function for standard editing" (which appears to create an opportunity for avoidance), and "[AI systems] authorised by law to detect, prevent, investigate or prosecute criminal offences". It does not appear that any obligations are placed on "deployers" of "GPAI systems", nor on "end-users" of them.
Obligations are also placed on "GPAI model providers" generally, and to a lesser extent on their "authorised representative[s]" in the EU (Arts.53-54, Annexes XI-XIII), to:
These are not unduly onerous, and are in any case a cost of doing business. Some providers will presumably express the required documentation very carefully, so as to be seen to have achieved compliance while minimising the publication of information that may compromise what the provider perceives to be competitive advantage.
Entities that are "providers of GPAI models with systemic risks", as discussed in section 2.5, and as determined by the EC under Art.51, are subject to additional obligations (Art.55), to:
These are also, at least to some extent, appropriate business practice, and for the most part internal matters subject to limited external disclosure. Further, it is envisaged that compliance will be facilitated by Codes of Practice and in due course a European standard (Arts.55-2, 56). On the other hand, Wachter (2024, p.690-691, 692) notes that
... standards bodies ... CEN and CENELEC do not have direct democratic legitimacy ... this lack of democratic legitimacy is even more worrying due to the far-reaching legal, ethical, political, and economic consequences of the widespread deployment of AI. Standards bodies will be tasked with creating frameworks that interpret the AIA
Providers are not only heavily involved in writing the harmonized standards to which they must adhere but also tasked with assessing whether they comply with those standards. This approach creates a major legal loophole
The "GPAI models" provisions come into effect on 2 February 2026. Longer deadlines are set for "AI systems and "GPAI models that have been already placed on the market / put into service before the AI Act entered into force". See also Art.113.
The AIA envisages that the EC will "encourage and facilitate the drawing up of codes of conduct, including related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, of some or all of the requirements", and para.2 envisages that these relate to "deployers" as well as "providers" (Art.95). For deployers as for providers, the efficacy of such measures as a regulatory mechanism is in considerable doubt.
In Laux (2023), six principles for the design of oversight mechanisms for AI are proposed: Justification sufficient to deliver legitimacy, Periodical review of compliance, Collective decisions to address corruptibility of individuals, Distributed institutional competence / separation of powers, Justiciability and Accountability, and Transparency. The governance structures and processes created by the AIA deliver on some of those principles, but fall seriously short on others, very importantly Justiciability and Accountability, and Transparency.
The AIA's Enforcement, Remedies and Penalties (Arts.74-101) are complex, with provisions extending across about 9,000 words. Even positive reviews anticipate diversity and uncertainty among interpretations and implementations (e.g. Gstrein et al. 2024, pp.13-16). Generally, such enforcement powers as exist are to be the responsibility of each member-nation's 'market surveillance authority'. These powers are exercised primarily under an existing EU Regulation (EU 2019). The responsibilities in relation to some matters are exercised by "National public authorities or bodies which supervise or enforce the respect of obligations under Union law protecting fundamental rights" (Art.77). Some limited capability is also in the hands of beneficiaries of regulation (Wachter 2024, p.693):
Article 85 enables individuals, or groups of individuals, to launch a complaint with a market surveillance authority if an AI system relating to them infringes the regulation. Article 99(10) grants effective judicial remedies and due process against the actions of a market surveillance authority. And under Article 86, individuals also have the right to receive an explanation about the output of a high-risk AI system that produced legal or similarly significant effects to the health, safety, socio-economic, or any other fundamental rights. None of these individual rights were conceived of in the original text
Further, GPAI model providers are subject to some measures that are the responsibility of an EU-level AI Office (Art.65). The following statement about their "governance" appears in the high-level summary:
How will the AI Act be implemented?
The argument is put by Wachter (2024, p.699) that:
Very limited obligations apply to providers and deployers of GPAI systems ... Governance of GPAI providers overwhelmingly and problematically relies on transparency mechanisms. While it is essential that providers of GPAI models and systems make certain information and documentation available, this is only the first step in adequate governance
The outlines provided above have identified large numbers of weaknesses in the AIA's provisions, in the institutional arrangements, and in the enforceability of the various measures. The origins of some of these weaknesses arise from the EU's complex governance structures and processes, and the framing of AI regulation within the existing market supervision arrangements (Veale & Borgesius 2021, p.112):
... the Draft AI Act ... has severe weaknesses. It is stitched together from 1980s product safety regulation, fundamental rights protection, surveillance and consumer protection law ... these pieces and their interaction may leave the instrument making little sense and impact. The prohibitions range through the fantastical, the legitimising, and the ambiguous ... Counterintuitively, the Draft AI Act may contribute to deregulation more than it raises the regulatory bar
Other weaknesses, however, are argued to derive from (Wachter 2024, p.672):
... the strong lobbying efforts of big tech companies and member states [which] were unfortunately able to water down much of the AIA. An overreliance on self-regulation, self-certification, weak oversight and investigatory mechanisms, and far-reaching exceptions for both the public and private sectors are the product of this lobbying
The remainder of this article reports on an evaluation of the four regulatory regimes established by the AIA, utilising the understanding of it outlined above. The following section first explains the framework used to conduct the evaluation.
The perspective that the author brings to this matter is that of an information systems professional and researcher much of whose consultancy career has had as its focus strategic and policy aspects of transformative and disruptive information technologies. This has involved assessment of regulatory regimes applicable to many technologies and their applications, as diverse as data matching, drones, wearable cameras, 'big data' analytics, and electronic markets such as those delivered by tech platforms such as Uber ride-sharing. From this work, conducted over a 30-year period, a framework has been progressively developed, which is consolidated in a companion paper, Clarke (2025b).
The framework utilises the following definitions:
Regulation is the process whereby a socio-technical system adapts its structure and processes in order to accommodate disturbances or damage that it undergoes, so that it operates and adapts as an integrated whole
A Regulatory Regime is a set of mechanisms that influence or control the way entities behave within a socio-technical system, and that thereby contribute to the achievement of economic, social and/or environmental policy objectives
A regulatory regime can be applied to various categories of objects. In the case of the AIA, the primary focus is artefacts that embody what the AIA refers to as AI technology ("an AI system"), but the regulatory measures apply differentially depending on the particular purposes to which they are applied and the risk-level to which the Act assigns them, and the obligations arising from the regulatory regime are assigned to particular enterprise-categories (primarily "providers", but to some extent also "deployers").
The purpose of the framework applied here is to support the evaluation of the efficacy of regulatory regimes generally. Statutory, code and/or case law is likely to at least influence the process of exercising control over behaviour, and in many cases there will be a substantial body of relevant formal law. The focus of the evaluation framework is not the law, however, but the efficacy of the regulatory regime as a whole, of which the law is only one element. The term 'efficacy' is used as an overall term to encompass all of the desirable elements, including effectiveness, efficiency, flexibility and adaptability.
The mechanisms are ordered into a hierarchical model of seven layers, with formal law (featuring 'government' and 'compliance') in the top two, self-regulation (using the catchwords 'governance', 'safeguards' and 'mitigation') in the middle three, and systemic governance (comprising infrastructural and natural regulatory features) as the bottom two. Three generic entities are distinguished: Regulators, Regulatees and Beneficiaries, with a more detailed model of the players involved presented in Clarke (2025b, Figure 3).
An Evaluation Template is provided, which reflects the set of 16 criteria in Clarke (2025b, Table 2). The evaluation process involves assessing the particular regulatory regime's delivery against each of the criteria, assigning two scores: a simple ternary indicator ('Yes', 'Some', 'No'), and a subjective score on a scale of 0-5 for each of the 6 Process factors and 0-7 for each of the 10 Product and Outcome factors, giving an overall score out of 100. The framework emphatically makes no claim of validity for the resulting scores. The scoring process is intended as a means of encouraging evaluators to focus on the criteria and the extent to which the particular regime does and does not satisfy them, and then to provide an indicative but debatable overall score.
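The arithmetic behind the overall mark can be sketched as follows. The individual scores shown are placeholders rather than the scores assigned in the Appendices:

```python
# Illustrative only: how the score-sheets appear to aggregate to a mark out of 100.
# Six Process criteria are scored 0-5 (max 30) and ten Product/Outcome criteria 0-7 (max 70).
# The individual scores below are invented placeholders, not the article's actual scores.

process_scores = [3, 3, 2, 3, 2, 3]                        # 6 criteria, each scored 0-5
product_outcome_scores = [2, 1, 2, 3, 1, 2, 2, 1, 2, 2]    # 10 criteria, each scored 0-7

assert len(process_scores) == 6 and all(0 <= s <= 5 for s in process_scores)
assert len(product_outcome_scores) == 10 and all(0 <= s <= 7 for s in product_outcome_scores)

total = sum(process_scores) + sum(product_outcome_scores)  # out of 6*5 + 10*7 = 100
print(f"Overall indicative score: {total}/100")
```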
In the following section, the framework and the scoring sheets are applied to the AIA's four regulatory regimes.
This section reports on assessments of each of the three separate regulatory regimes established by the AIA, conducted by reference to the evaluation framework outlined in the previous section. (The provisions relating to the fourth, GPAI models, after review were found to not qualify as a regulatory regime). The subjective scoring against each of the 16 criteria is conducted by reference to the outlines provided in section 3 above and further sources in the refereed literature. Some of those sources were published during the AIA's gestation period 2021-24 and hence need some care in their use because of the considerable changes in the draft statute arising from very active lobbying by powerful stakeholders including Ministers and government agencies at both EU and member-nation levels, industry associations and major AI providers.
Some aspects of the AIA are common to the four regimes, in particular the definitions discussed in sections 2.2-2.5 above, and aspects of the enforcement measures in section 3.5. Some of the evaluation is abstract and generic. That approach was complemented by keeping in focus a number of specific test-applications, listed in Table 2. These were drawn from a wide range of informal sources that describe real or proposed real-world uses, overlaid by key features of the regimes as outlined above. An attempt was consciously made to achieve diversity among the instances. Given the enormously broad scope of applications involved, however, it was infeasible to achieve anything resembling a representative set.
An outline of this, the strongest regime, is in section 3.1. In principle, the eight items in Appendix 1 are all prohibited. In practice, there are a great many exclusions, and they are sufficiently vague that organisations can readily design around the nominal prohibition. For example:
The shortfalls in the prohibition category are strongly criticised by Wachter (2024, p.679-680):
After a thirty-six-hour negotiation marathon, a compromise was reached ... The final list of prohibited systems leaves much to be desired. This would have been a good opportunity to ban, for instance, biometric categorization systems, 'real-time' and ex-post remote biometric identification in public spaces, predictive policing, and emotion recognition in high-risk areas
In Veale & Borgesius (2021), several serious concerns are identified, including:
" ... a range of problematic loopholes [such that a] cynic might feel the Commission is more interested in prohibitions/rhetorical value than practical effect" (p.99)
" ... the prohibitions concerning manipulative AI systems may have little practical impact [and] the EU legislator has some work to do to make [the social scoring] provision clearly applicable to anything" (p.100)
In Appendix 4A, features of this regime are outlined and scored. The cluster of criteria associated with Process achieved the barest of Passes, and those associated with Product and Outcomes were each scored about 30%, giving an overall Failing grade of 37%. Even allowing for the vagaries of subjective scoring, given that the AIA's prohibition measures are targeted at what were assessed by the EC to be the highest-risk category of "AI systems", it is difficult to see how this could be regarded as a fit-for-purpose regulatory regime.
An outline of this regime is in section 3.2. It applies to AI systems that are a safety product or a safety component of a product, within the meanings of 11 existing EU-wide laws, together with 9 EU Directives that are subject to somewhat varying implementations within individual EU nation-states. The evaluation of the efficacy of that provision requires a very substantial familiarity with many laws of many countries, applied in many contexts, and is not attempted here.
The regime also applies to eight categories of "areas [of application of] AI systems referred to in Annex III" (Art.6-2), reproduced in Appendix 2. Four generic exceptions are declared, plus a further dozen each relevant to specific areas of application. An overriding criterion of "performs profiling of natural persons" is declared (Art.6-3), but it is unclear what it overrides, and the meaning of 'profiling' depends on confusing text in the GDPR, which may simply mean 'infers an attribute of a person from data about that person'.
"Providers" of "AI systems" encompassed by these definitions are subject to a substantial and somewhat unclear suite of provisions in Arts.8-39, which amount to 12,000 words. "Deployers" of "AI systems" are subject to a smaller set in Arts.16-25. The investment needed to establish whether an entity is subject to these provisions, and the effort and cost involved if the impositions apply, are so substantial that it can be confidently expected that a substantial industry segment of advisers well-versed in the minutiae will rapidly emerge to ply their trade, and that the exceptions will be very well exercised. For example:
A central concern drawn attention to in section 3.2 is the absence of any obligation on deployers of high-risk AI systems to provide explanations for inferences, decisions and actions unfavourable to people's interests, and of any obligation on providers to enable deployers to do so. See Hacker & Passoth (2022) for a typology of explanations. If a deployer of an AI system had to present a rationale for a contested inference, decision or action to a review conducted by the deployer, a tribunal or a court, then those AI systems that are materially dysfunctional, illegally biased or fraudulent would be quickly exposed, and the problems addressed. The absence of that necessary pre-condition for accountability, alone, shows the scheme to be seriously inadequate.
Further, in Veale & Borgesius (2021), it is noted that:
... a leaked version of the Draft AI Act required providers to specify organisational measures ... [but] the final Proposal instead emphasises the 'user's discretion' ... Statements about the need for 'competence, training and authority' only make the recitals (p.104)
For most standalone high-risk systems (and eventually, all such systems), providers can mark the systems as in conformity using only self-assessment (p.106)
In Appendix 4B, features of this regime are outlined and scored. The cluster of criteria associated with Process was scored, as for the previous regime, at a bare Pass, but those associated with Product and Outcomes each achieved below 15%, giving an overall grade of a Very Low Fail at 30%. This cannot be regarded as a fit-for-purpose regulatory regime.
An outline of this regime is in section 3.3. It imposes very limited, transparency-only obligations on providers and deployers of five categories of "AI systems".
The first instance, relating to systems that "interact with natural persons", probably requires no action at all. The second requirement is that providers of synthetic content mark it as such (with broad exceptions). The third is that deployers of "emotion recognition systems" and "biometric categorisation systems" declare to the people subject to them that such systems are in operation. The fourth requires that deployers of 'deep fake' image, audio or video content disclose that the content has been artificially generated or manipulated. The fifth creates a similarly limited obligation in relation to generated text.
Both the coverage and the efficacy of these provisions appear likely to be very low. For example:
In Appendix 4C, features of this regime are outlined and scored. All three clusters of criteria score dismally low, giving an overall grade of a Very Low Fail at 24%. This is so inadequate that it can barely be called a regulatory regime.
All "AI systems" that escape the prohibited, high-risk and limited-risk categories in the preceding three sections are not subject to any provisions of the AIA. The term 'minimal risk' does not appear in the AIA, but does in the 'High-level summary' (EU 2024), and is widely used for this category. The technologies have enormously broad applicability, the criteria for inclusion in regulated categories are very narrow, and a considerable array of loopholes are designed into each regime. In addition, many application-categories are pioneered in defence, national security and law enforcement context, which are almost entirely free from any obligations under the AIA. So it appears that Minimal-Risk will be by far the largest category.
Of the 12 Test-Applications outlined in Table 2, it appears that 9 escape completely from any obligations under the AIA. Test 11 (diagnosis) is among them, although it may be subject to generic regulation relating to medical devices. Tests 9 and 10 are subject to trivial transparency obligations, without any safeguards, mitigation measures or avenues for recourse in the event of harm being done. The highly-intrusive Test-Application 7, biometrics and pseudo-lie-detection at national borders, appears to be subject to a complex and confusing set of obligations, but in return has been granted at least legitimation, and perhaps legal authorisation.
On the one hand, the Test-Applications can be criticised as being contrived to exaggerate the inadequacies of the regime. On the other hand, they demonstrate the ease of avoiding the need to comply with even these limited requirements, and hence the attractiveness to providers and deployers of ignoring or merely paying lip-service to the requirements.
An outline of this regime is in section 3.4. The first requirement, a mere transparency obligation on providers in relation to "general-purpose AI systems, generating synthetic audio, image, video or text content" (Art.50-2) was outlined in section 4.3 above.
Some limited substantive obligations are imposed on all "GPAI model providers", but only in relation to documentation for provision to, and guidance of, downstream providers and deployers (Art.53). Additional substantive obligations are imposed on all "providers of general-purpose AI models with systemic risk" (Art.55), but these are at most minimum requirements of good business practice. In short, it is very difficult to regard these provisions as being even a very light-handed regulatory regime, and hence the score-sheet has not been applied to them.
In contrast with the previous three regimes, it could be argued that the maturity-level of the GenAI field is low enough to justify a very preliminary, 'watch and be prepared to act further' approach to the regulation of upstream model providers. On the other hand, Wachter (2024, pp.694-695) argues that what was nominally a justification was in fact entirely an exercise in market and institutional power:
Regulation of GAI was another big sticking point during the negotiations that almost caused the AIA not to pass. Even though political agreement was reached in October 2023, Germany, Italy, and France started a coordinated effort in November and December 2023 to remove most provisions on GAI. The three nations even threatened to vote against the whole Act if these provisions were left unchanged
Similar extortionary approaches were used by national governments against the European Parliament to achieve very wide exemptions not only for military and national security matters, but also for law enforcement agencies (Bertuzzi 2023). Those agencies are heavily dependent on large, secretive corporate providers of software and services, and those corporations migrate the techniques and capabilities developed for their military, national security and law enforcement clients into the general market. In short, the almost complete absence of any AI regulatory regime in those sectors guarantees that such boundaries as are created by the AIA will be tested, dented, bruised and circumvented.
The purpose of the work reported in this article was declared as examination of the AIA as the underpinnings of a regulatory regime, assessment of it by means of a separately-published framework for the evaluation of regulatory regimes, and the delivery of a comprehensive view of the AIA's efficacy.
The literature evidences a wide range of views on the many facets of the EC's work. Seen through the lens of the selected evaluation framework, the rosiness pales, the uncertainties and ambiguities pile up, the obligations are exposed as being very limited, the avoidability of obligations through the exploitation of the vast numbers of exceptions becomes very apparent, and the credibility of enforcement appears to be very low.
A cynical view would be that the EC's objectives of economic progress and the stimulation of innovation entirely dominate its concern for the interests of individuals. The 'Ethics Guidelines for Trustworthy AI' of its Expert Group (EC 2019) were seen as being inconsistent with economic progress, and a very different approach to design was adopted, with protections treated not as objectives but as enabling factors or constraints. The EC was vindicated in that national governments demanded an even weaker scheme, and forced the European Parliament to abandon its attempts to strengthen the provisions.
A systematic literature review of business-oriented literature, reported in Vainionpaa et al. (2023), identified many concerns about negative impacts of the AIA on innovation, summarised as premature regulation, excessive scope, ambiguous expression, unclear requirements, incompatibility with existing regulatory regimes, and onerous compliance obligations. On the contrary, the findings of the analysis in the present paper suggest that compliance obligations will be readily avoided by the large majority of providers and deployers of AI systems. The negative impacts are far more likely to arise from the failure of the EU and its member-nations to exercise control over harmful impacts, leading to public distrust, thereby creating the risk of public backlash, and of non-adoption and even abandonment of technologies which, with appropriate care, have benefits to offer.
The analysis presented above suggests that, contrary to the hopeful tone of many authors, the dysfunctionality of the AIA as a regulatory instrument is deeply embedded in terminology and definitions, and in structures, processes and convoluted expressions. Attempts to adapt individual elements of the regimes in order to deliver effective protections would be piecemeal and pointless. The more appropriate approach is to treat the AIA as a promotional tool for dangerous technologies, with bureaucratic camouflage obscuring its ineffectiveness.
A recent worldwide survey of AI sentiment concluded that "There is a strong public mandate for AI regulation, with 70% believing regulation is necessary. However, only 43% believe current laws are adequate. People expect international laws (76%), national government regulation (69%), and co-regulation with industry (71%)" (Gillespie et al. 2025, p.5). So the question of what can be done about the situation needs to be addressed. An approach is available whereby a far better balance between macro-economic and human interests could be achieved, and reasonable protections against harmful impacts and implications could be devised that constrain business and government only to the extent justifiable, thereby greatly reducing the risks of non-adoption, backlash, luddite behaviour by disaffected publics, and negative return on AI investment.
"Co-Regulation ... refers to a regulatory model in which [all] stakeholders have significant input to a set of requirements, and even draft them, but do so within a statutory context that exercises control over the process, and makes the requirements enforceable (Hepburn 2006). A useful term to distinguish such instruments from mere industry codes is 'Statutory Codes'. Elements [of] Formal Regulation are essential, to establish generic legal protections. ... Co-regulation can also be the most effective approach in dealing with the ravages of specific technologies, particularly during a technology's early years of dynamism and opacity" (Clarke 2021). A fuller description of co-regulation, in that case with its focus on Internet privacy, is in Clarke (1999). A specific proposal for a co-regulatory approach to AI is in Clarke (2019). See also Varosanec (2022).
It has been argued that an element of the co-regulatory approach is embedded in the AIA, in the form of codes of practice, and that it would be beneficial for that code mechanism to be further developed. In Bygrave & Schmidt (2025, p.1, pp.12-25), an important distinction is drawn between codes of practice under Arts. 56, plus 53-4 and 55, and (voluntary) codes of conduct under Art.95:
... codes of practice may be regarded as instruments of meta-regulation that are truly embedded in the Act, whereas codes of conduct are simply instruments for potential meta-regulation. As such, codes of practice will likely play a much more crucial role under the Act than will codes of conduct.
The contention of this article, on the other hand, is that the evidence demonstrates that the AIA cannot, and will not, deliver the necessary protections for human interests, and hence cannot assure investors, business and government of public acceptance of AI systems. Commentary on the AIA needs to mature beyond the early excitement about the possibility of effective structures, processes, obligations and enforcement, and hence of public confidence and successful AI investments. The AIA needs to be seen in the cold, hard light of day, and its inherent inadequacies and impending failure recognised and factored into discussions about a constructive way ahead.
Adapted extract from EC (2024), re Art.5, emphases added. See section 3.1 in this article
The following types of AI system are "Prohibited" according to the AI Act.
Adapted extract from EC (2024), re Art.6, emphases added. See section 3.2 in this article
High risk AI systems are those:
There are 11 Regulations and 9 Directives relating to machinery, toys, lifts, medical devices, and artefacts used in transportation (Annex I).
The eight Use Cases are (Annex III):
Adapted extracts from AIA Arts.8-27. See section 3.2 in this article
Identify the objectives, the object subject to regulation, the regulatory mechanisms that make up the overall regime, the extent of exceptions and exemptions, the key players with particular emphasis on the Regulator, Regulatees and Beneficiaries, the Principles and Rules that apply to Regulatees, and the extent to which the Rules have been articulated through co-regulatory processes.
In col.A, insert Yes, Some or No (coverage); In col.C, insert a subjective evaluation of coverage.
Identify the objectives, the object subject to regulation, the regulatory mechanisms that make up the overall regime, the extent of exceptions and exemptions, the key players with particular emphasis on the Regulator, Regulatees and Beneficiaries, the Principles and Rules that apply to Regulatees, and the extent to which the Rules have been articulated through co-regulatory processes.
In col.A, insert Yes, Some or No (coverage); In col.C, insert a subjective evaluation of coverage.
Identify the objectives, the object subject to regulation, the regulatory mechanisms that make up the overall regime, the extent of exceptions and exemptions, the key players with particular emphasis on the Regulator, Regulatees and Beneficiaries, the Principles and Rules that apply to Regulatees, and the extent to which the Rules have been articulated through co-regulatory processes.
In col.A, insert Yes, Some or No (coverage); In col.C, insert a subjective evaluation of coverage.
AIA (2024) Regulation (EU) 2024/1689, European Union, August 2024, at https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
Albus J.S. (1991) 'Outline for a theory of intelligence' IEEE Trans Syst, Man Cybern 21, 3 (1991) 473-509, at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.410.9719&rep=rep1&type=pdf
Bertuzzi L. (2023) 'European Union squares the circle on the world's first AI rulebook' Euractiv, 9 Dec 2023, at https://www.euractiv.com/section/tech/news/european-union-squares-the-circle-on-the-worlds-first-ai-rulebook/
Bick A., Blandin A. & Deming D.J. (2024) 'The Rapid Adoption of Generative AI' [US] National Bureau of Economic Research, Working Paper 32966, September 2024, at https://static1.squarespace.com/static/60832ecef615231cedd30911/t/66f0c3fbabdc0a173e1e697e/1727054844024/BBD_GenAI_NBER_Sept2024.pdf
Blauth T.F., Gstrein O.J. & Zwitter A. (2022) 'Artificial Intelligence Crime: An Overview of Malicious Use and Abuse of AI' IEEE Access 10 (2022) 77110-77122, at https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9831441
Boden M. (2016) 'AI: Its Nature and Future' Oxford University Press, 2016
Bradford A. (2020) 'The Brussels Effect: How the European Union Rules the World' Oxford University Press, 2020
Bygrave L.A. & Schmidt R. (2025) 'Regulating Non-High-Risk AI Systems under the EU's Artificial Intelligence Act, with Special Focus on the Role of Soft Law' University of Oslo Faculty of Law Legal Studies Research Paper Series No. 2024-10, 29 January 2025, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4997886
Cancela-Outeda C. (2024) 'The EU's AI act: A framework for collaborative governance' Internet of Things 27 (October 2024) 101291, at https://www.investigo.biblioteca.uvigo.es/xmlui/bitstream/handle/11093/7442/2024_cancela_eu_ai.pdf
Clarke R. (1991) 'A Contingency Approach to the Application Software Generations' Database 22,3 (Summer 1991) 23-34, PrePrint at http://www.rogerclarke.com/SOS/SwareGenns.html
Clarke R. (1993) 'Profiling: A Hidden Challenge to the Regulation of Data Surveillance' Journal of Law and Information Science 4,2 (December 1993), PrePrint at http://rogerclarke.com/DV/PaperProfiling.html
Clarke R. (1994) 'The Digital Persona and its Application to Data Surveillance' The Information Society 10,2 (June 1994) 77-92, PrePrint at http://rogerclarke.com/DV/DigPersona.html
Clarke R. (1999) 'Internet Privacy Concerns Confirm the Case for Intervention' Commun. ACM 42, 2 (February 1999) 60-67, PrePrint at http://www.rogerclarke.com/DV/CACM99.html
Clarke R. (2014) 'Promise Unfulfilled: The Digital Persona Concept, Two Decades Later' Information Technology & People 27, 2 (Jun 2014) 182 - 207, PrePrint at http://rogerclarke.com/ID/DP12.html
Clarke R. (2015) 'The Prospects of Easier Security for SMEs and Consumers' Computer Law & Security Review 31, 4 (August 2015) 538-552, PrePrint at http://www.rogerclarke.com/EC/SSACS.html
Clarke R. (2019) 'Regulatory Alternatives for AI' Computer Law & Security Review 35, 4 (2019) 398-409, PrePrint at http://www.rogerclarke.com/EC/AIR.html
Clarke R. (2021) 'A Comprehensive Framework for Regulatory Regimes as a Basis for Effective Privacy Protection' Proc. 14th Computers, Privacy and Data Protection Conference, Brussels, 27-29 January 2021, PrePrint at http://rogerclarke.com/DV/RMPP.html
Clarke R. (2022) 'Responsible Application of Artificial Intelligence to Surveillance: What Prospects?' Information Polity 27, 2 (Jun 2022) 175-191, PrePrint at http://rogerclarke.com/DV/AIP-S.html
Clarke R. (2023) 'The Re-Conception of AI: Beyond Artificial, and Beyond Intelligence' IEEE Transactions on Technology & Society 4,1 (March 2023) 24-33, PrePrint at http://rogerclarke.com/EC/AITS.html
Clarke R. (2025a) 'Principles for the Responsible Application of Generative AI' Forthcoming, Computer Law & Security Review, April 2025, at http://rogerclarke.com/EC/RGAI-C.html
Clarke R. (2025b) 'Regulatory Regimes for Disruptive IT: A Framework for Their Design and Evaluation' Working Paper, Xamax Consultancy Pty Ltd, March 2025, at http://rogerclarke.com/EC/FRR.html
CoE (2024) 'Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law' Council of Europe, 5 September 2024, at https://rm.coe.int/1680afae3c
Ebers M. (2024) 'Truly Risk-based Regulation of Artificial Intelligence: How to Implement the EU's AI Act' European Journal of Risk Regulation (November 2024) 1-20, at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4870387
EU (2019) 'Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products', European Commission, 2019, at https://eur-lex.europa.eu/eli/reg/2019/1020/oj/eng
EC (2021) 'Proposal for a Regulation on a European approach for Artificial Intelligence' European Commission, 21 April 2021, at https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788
EC (2024) 'High-level summary of the AI Act' European Union, 27 February 2024, at https://artificialintelligenceact.eu/high-level-summary/
EC (2025) 'Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act)' European Commission, Draft of 4 February 2025, at https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act
Edwards L. (2022) 'The EU AI Act: a summary of its significance and scope' Ada Lovelace Institute, April 2022, at https://www.adalovelaceinstitute.org/wp-content/uploads/2022/04/Expert-explainer-The-EU-AI-Act-11-April-2022.pdf
Gillespie N., Lockey S., Ward T., Macdade A. & Hassed G. (2025) 'Trust, attitudes and use of artificial intelligence: A global study' The University of Melbourne and KPMG, 29 April 2025, DOI 10.26188/28822919, at https://kpmg.com/au/en/home/insights/2025/04/trust-in-ai-global-insights-2025.html
Greenleaf G. (2021) 'The 'Brussels effect' of the EU's 'AI Act' on data privacy outside Europe' Privacy Laws & Business 1 (2021) 3-7, at https://papers.ssrn.com/sol3/Delivery.cfm?abstractid=3898904
Greenleaf G. (2024) 'EU AI Act: The second most important data privacy law' Privacy Laws & Business, June 2024, at https://papers.ssrn.com/sol3/Delivery.cfm?abstractid=4913686
Gstrein O.J., Haleem N. & Zwitter A. (2024) 'General-purpose AI regulation and the European Union AI Act' Internet Policy Review 13,3 (2024) 1-26, at https://policyreview.info/pdf/policyreview-2024-3-1790.pdf
Hacker P. & Passoth J.-H. (2022) 'Varieties of AI Explanations Under the Law. From the GDPR to the AIA, and Beyond' Chapter in Holzinger A. et al. (eds) 'xxAI - Beyond Explainable AI' Springer Nature, 2022, pp.343-373, at https://link.springer.com/chapter/10.1007/978-3-031-04083-2_17
Hepburn G. (2006) 'Alternatives To Traditional Regulation' OECD Regulatory Policy Division, undated, apparently of 2006, at http://www.oecd.org/gov/regulatory-policy/42245468.pdf
Karanikolas N., Manga E., Samaridi N., Tousidou E. & Vassilakopoulos N. (2023) 'Large Language Models versus Natural Language Understanding and Generation' Proc. PCI, 24-26 November 2023, Lamia, Greece, pp.278-290, at https://dl.acm.org/doi/pdf/10.1145/3635059.3635104
van Kolfschooten H. (2022) 'EU Regulation of Artificial Intelligence: Challenges for Patients' Rights' Common Market Law Review 59,1 (2022) 81-112, at https://www.researchgate.net/profile/Hannah-Van-Kolfschooten/publication/367818376_EU_Regulation_of_Artificial_Intelligence_Challenges_for_Patients'_Rights/links/66e421ccb1606e24c22779b9/EU-Regulation-of-Artificial-Intelligence-Challenges-for-Patients-Rights.pdf
Land F. (2012) 'Remembering LEO' in A. Tatnall (ed.) 'Reflections on the History of Computing: Preserving Memories and Sharing Stories', AICT-387, Springer, 2012, pp.22-42, at https://inria.hal.science/hal-01526811/document
Laux J. (2023) 'Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act' AI & Society 39 (2024) 2853-2866, at https://link.springer.com/content/pdf/10.1007/s00146-023-01777-z.pdf
Lieto A. & Radicioni D.P. (2016) 'From Human to Artificial Cognition and Back: New Perspectives on Cognitively Inspired AI Systems' Cognitive Systems Research 39 (September 2016) 1-3, at https://philpapers.org/archive/LIEFHT.pdf
McCarthy J. (2007) 'What is artificial intelligence?' Department of Computer Science, Stanford University, 2007, at http://www-formal.stanford.edu/jmc/whatisai/node1.html
McCarthy J., Minsky M.L., Rochester N. & Shannon C.E. (1955) 'A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence' Reprinted in AI Magazine 27, 4 (2006), at https://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1904/1802
Neuwirth R. J. (2023) 'Prohibited artificial intelligence practices in the proposed EU artificial intelligence act (AIA)' Computer Law & Security Review 48 (2023) 105798
NIST (2025) 'NIST Glossary' Computer Security Resource Center, National Institute of Standards and Technology, accessed 25 March 2025, at https://csrc.nist.gov/glossary/term/risk
Novelli C., Casolari F., Rotolo A., Taddeo M. & Floridi L. (2024) 'AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act' Digital Society 3,13 (2024) 1-29, at https://link.springer.com/content/pdf/10.1007/s44206-024-00095-1.pdf
OECD (2024) 'OECD Explanatory Memorandum on the Updated OECD Definition of an AI System' OECD Artificial Intelligence Paper No.8, Organization for Economic Cooperation and Development, March 2024, at https://www.oecd.org/content/dam/oecd/en/publications/reports/2024/03/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_3c815e51/623da898-en.pdf
OICT (2023) 'Generative AI Primer' UN Office of Information and Communications Technology, 29 Aug 2023, at https://unite.un.org/sites/unite.un.org/files/generative_ai_primer.pdf
PW (2025) 'European Commission Publishes Guidance on Prohibited AI Practices Under the EU AI Act' Paul, Weiss, 25 February 2025, at https://www.paulweiss.com/practices/litigation/artificial-intelligence/publications/european-commission-publishes-guidance-on-prohibited-ai-practices-under-the-eu-ai-act?id=56629
Russell S.J. & Norvig P. (2003) 'Artificial intelligence: a modern approach' 2nd edition, Prentice Hall, 2003, 3rd ed. 2009, 4th ed. 2020
Vainionpaa F., Vayrynen K., Lanamaki A. & Bhandari A. (2023) 'A Review of Challenges and Critiques of the European Artificial Intelligence Act (AIA)' Proc. Int'l Conf. Infor. Syst., 2023, 14, at https://oulurepo.oulu.fi/bitstream/handle/10024/47651/nbnfioulu-202402061598.pdf
Varosanec I. (2022) 'On the path to the future: mapping the notion of transparency in the EU regulatory framework for AI' International Review of Law, Computers and Technology 36,2 (2022) 95-117, at https://www.tandfonline.com/doi/pdf/10.1080/13600869.2022.2060471
Veale M. & Borgesius F.Z. (2021) 'Demystifying the Draft EU Artificial Intelligence Act' Computer Law Review International 22,4 (2021) 97-112, at https://arxiv.org/pdf/2107.03721
Wachter S. (2024) 'Limitations and Loopholes in the EU AI Act and AI Liability Directives: What This Means for the European Union, the United States, and Beyond' Yale Journal of Law & Technology 26,3 (2024) 671-718, at https://yjolt.org/sites/default/files/wachter_26yalejltech671.pdf
WIPO (2024) 'Patent Landscape Report: Generative Artificial Intelligence' World Intellectual Property Organization, 2024, at https://www.wipo.int/web-publications/patent-landscape-report-generative-artificial-intelligence-genai/en/index.html
Woersdoerfer M. (2024) 'Mitigating the adverse effects of AI with the European Union's artificial intelligence act: Hype or hope?' Global Business and Organizational Excellence 43,3 (January 2024) 106-126, at https://papers.ssrn.com/sol3/Delivery.cfm?abstractid=4630087
Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professorial Fellow associated with UNSW Law & Justice, and a Visiting Professor in Computing in the College of Systems & Society at the Australian National University.