
Roger Clarke's 'Evaluation of EC's AI Regulation'

Would the European Commission's Proposed Artificial Intelligence Act
Deliver the Necessary Protections?

Annex 4: The 50 Principles and High-Risk AI Systems

This is an Annex to the article of the above name.

Version of 31 August 2021

Roger Clarke **

© Xamax Consultancy Pty Ltd, 2021

Available under an AEShareNet Free
for Education licence or a Creative Commons 'Some
Rights Reserved' licence.

This document is at http://www.rogerclarke.com/DV/EC21-Ann4.html


This Annex contains an annotated extract of The 50 Principles (Clarke 2019). A PDF version of the extract is also available.

The relevant segments of text from the EC Proposal are shown in italics, followed by cross-references to the locations in which that text occurs.

Principles that are evident in the EC Proposal (even if only partly covered, or weakly expressed) are in bold-face type.

The majority of the relevant passages in the EC Proposal are in Chapter 2 (Requirements) Arts. 8-15 and Chapter 3 (Obligations) Arts. 16-29, but a number of other, specific Articles are also relevant.

The following Principles apply to each entity responsible for each phase of AI research, invention, innovation, dissemination and application.

1. Assess Positive and Negative Impacts and Implications

1.1 Conceive and design only after ensuring adequate understanding of purposes and contexts

"Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used" (Art. 10.4).

Art. 10.4 contains a reference to context, but it applies only to "training, validation and testing data sets", and not to broader aspects of the design, nor to the operation or use of AI systems. A score of 0.1 is assigned.

1.2 Justify objectives

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

1.3 Demonstrate the achievability of postulated benefits

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

1.4 Conduct impact assessment, including risk assessment from all stakeholders' perspectives

"A risk management system shall be established, implemented, documented and maintained ... " (Art. 9.1), and "When implementing the risk management system ..., specific consideration shall be given to whether the high-risk AI system is likely to be accessed by or have an impact on children" (Art. 9.8)

Due to the provisions' vagueness, mis-direction to organisational risk assessment rather than impact assessment from the perspective of the people affected, and limitation of consideration of impact to that on children, a score of 0.3 is assigned.

1.5 Publish sufficient information to stakeholders to enable them to conduct their own assessments

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

1.6 Conduct consultation with stakeholders and enable their participation in design

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

1.7 Reflect stakeholders' justified concerns in the design

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

1.8 Justify negative impacts on individuals ('proportionality')

Mentions of proportionality abound in relation to the potential impacts of regulation on organisations, but nothing was found that imposes any such requirement in relation to potential impacts on people affected by High Risk AI Systems, including in Art. 9 (Risk Management).

1.9 Consider alternative, less harmful ways of achieving the same objectives

Nothing was found that imposes any such requirement, including in Art. 9.4 (Risk Management).

2. Complement Humans

2.1 Design as an aid, for augmentation, collaboration and inter-operability

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

2.2 Avoid design for replacement of people by independent artefacts or systems, except in circumstances in which those artefacts or systems are demonstrably more capable than people, and even then ensuring that the result is complementary to human capabilities

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

3. Ensure Human Control

3.1 Ensure human control over AI-based technology, artefacts and systems

"High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons"(Art. 14.1), possibly encompassing detection of "anomalies, dysfunctions and unexpected performance" (14.4(a)), awareness of the possibility of "automation bias" (14.4(b)), 'correct interpretation of the output' (14.4(c)), and an ability "to decide ... not to use the high-risk AI system or [to] otherwise disregard, override or reverse the output" (14.4(d)).

Because this is a (qualified) obligation only of providers, this provision alone does nothing to ensure the features are effectively designed, effectively communicated to user organisations and thence to users, and applied, appropriately or even at all. A score of 0.5 is assigned.

3.2 In particular, ensure human control over autonomous behaviour of AI-based technology, artefacts and systems

"High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons"(Art. 14.1).

"The measures ... shall enable the individuals to whom human oversight is assigned ... as appropriate to the circumstances ... to ... (e) be able to intervene on the operation of the high-risk AI system or interrupt the system through a 'stop' button or a similar procedure" (Art. 14.4).

The first provision could be interpreted as observation without control. The second adds control to the requirement, but is subject to the unclear and potentially substantial qualification "as appropriate to the circumstances". A score of 0.7 is assigned.
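As a purely illustrative sketch of the kind of oversight capability that Arts. 14.4(d)-(e) appear to contemplate, the following Python fragment gates a high-risk system's output behind a human decision to accept, disregard or stop it. The names and structure are assumptions introduced here for illustration; the Proposal prescribes nothing of the sort in technical terms.

    from enum import Enum

    class OversightAction(Enum):
        ACCEPT = "accept"        # act on the system's output
        DISREGARD = "disregard"  # disregard, override or reverse the output (cf. Art. 14.4(d))
        STOP = "stop"            # interrupt via a 'stop' button or similar procedure (cf. Art. 14.4(e))

    def apply_with_oversight(output, decision: OversightAction, interrupt_system):
        """Hypothetical gate: no downstream action proceeds unless a natural person accepts the output."""
        if decision is OversightAction.STOP:
            interrupt_system()   # hand control back to the overseer
            return None
        if decision is OversightAction.DISREGARD:
            return None          # the output is set aside
        return output            # only an explicit ACCEPT lets the output be acted upon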

3.3 Respect people's expectations in relation to personal data protections, including:

The EC Proposal asserts that "The proposal is without prejudice and complements the General Data Protection Regulation (Regulation (EU) 2016/679)" (EM s.1.2).

However, "To the extent that it is strictly necessary for the purposes of ensuring bias monitoring, detection and correction in relation to the high-risk AI systems, the providers of such systems may process special categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725, subject to appropriate safeguards for the fundamental rights and freedoms of natural persons, including technical limitations on the re-use and use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued" (Art. 10.5).

Nothing in the Requirements and Obligations draws the attention of providers and users to the GDPR. Nothing in the GDPR requires public visibility, public design consultation or public design participation. A score of 0.9 is assigned.
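By way of illustration only, the kind of pseudonymisation that Art. 10.5 names as a safeguard might look like the following minimal sketch, which replaces direct identifiers with keyed hashes before special-category data is used for bias monitoring. The field names and the use of HMAC-SHA-256 are assumptions for the example, not anything specified by the Proposal or the GDPR.

    import hmac
    import hashlib

    # Assumption: the key is held separately from the data set, so re-identification
    # requires access to both.
    SECRET_KEY = b"held-under-separate-access-control"

    def pseudonymise(record: dict, identifier_fields=("name", "id_number")) -> dict:
        """Replace direct identifiers with keyed hashes so that bias monitoring can
        proceed without revealing whom each record concerns."""
        out = dict(record)
        for field in identifier_fields:
            if field in out:
                digest = hmac.new(SECRET_KEY, str(out[field]).encode(), hashlib.sha256)
                out[field] = digest.hexdigest()
        return out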

3.4 Respect each person's autonomy, freedom of choice and right to self-determination

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

3.5 Ensure human review of inferences and decisions prior to action being taken

"For [AI systems intended to be used for the 'real-time' and 'post' remote biometric identification of natural persons] ... ensure that ... no action or decision is taken by the user on the basis of the identification resulting from the system unless this has been verified and confirmed by at least two natural persons" (Art. 14.5).

The clause applies only to 1 of the 20 categories of High-Risk AI systems, and not at all to any of the great many other-than-high-risk AI systems, and it is qualified by the unclear expression "[action or decision taken] on the basis of the identification resulting from the system". A score of 0.2 is assigned.

3.6 Avoid deception of humans

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

3.7 Avoid services being conditional on the acceptance of AI-based artefacts and systems

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

4. Ensure Human Safety and Wellbeing

4.1 Ensure people's physical health and safety ('nonmaleficence')

"[I]nstructions for use ... shall specify ... any known or foreseeable circumstance, related to the use of the high- risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety ..." (Art. 13.2-3).

"Human oversight shall aim at preventing or minimising the risks to health, safety ... that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter" (Art. 14.2).

"Any ... user or other third-party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: ...
(b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service;
(c) they make a substantial modification to the high-risk AI system" (Art. 28.1).

However, the complexity of wording ensures the existence of considerable uncertainty, a great deal of scope for regulatory prevarication, and large numbers of loopholes. A score of 0.7 is assigned.

4.2 Ensure people's psychological safety, by avoiding negative effects on their mental health, emotional state, inclusion in society, worth, and standing in comparison with other people

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

4.3 Contribute to people's wellbeing ('beneficence')

Nothing was found that imposes any such requirement.

4.4 Implement safeguards to avoid, prevent and mitigate negative impacts and implications

"In identifying the most appropriate risk management measures, the following shall be ensured ... elimination or reduction of risks ... adequate mitigation ..." (Art. 9.4).

"High-risk AI systems that continue to learn ... shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (`feedback loops') are duly addressed with appropriate mitigation measures" (Art. 15.3).

The second provision is clumsy and unclear, and may have little impact. However, the first of the two provides coverage of the Principle, so a score of 1.0 is assigned.

4.5 Avoid violation of trust

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

4.6 Avoid the manipulation of vulnerable people, e.g. by taking advantage of individuals' tendencies to addictions such as gambling, and to letting pleasure overrule rationality

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management).

5. Ensure Consistency with Human Values and Human Rights

5.1 Be just / fair / impartial, treat individuals equally, and avoid unfair discrimination and bias, not only where they are illegal, but also where they are materially inconsistent with public expectations

The term 'discrimination' occurs 26 times in the Explanatory Memorandum and Preamble, but not at all in the Articles. Similarly, the terms 'just' and 'justice' are used in preliminary text but not in the Articles.

"Appropriate data governance and management practices shall apply ...[concerning in particular] ... (f) examination in view of possible biases" (Art. 10.2).

" ... enable the individuals to whom human oversight is assigned to do ... as appropriate to the circumstances (b) remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (`automation bias')" (Art. 14.4).

"High-risk AI systems that continue to learn ... shall be developed in such a way to ensure that possibly biased outputs due to outputs used as an input for future operations (`feedback loops') are duly addressed with appropriate mitigation measures" (Art. 15.3).

The first two passages require only vigilance, not avoidance of bias, and in the second case even that is qualified by "as appropriate to the circumstances". The third applies only to machine learning (ML) applications, and appears to authorise bias, in that it requires only mitigation, not prevention.

A score of 0.3 is assigned.

5.2 Ensure compliance with human rights laws

"[I]nstructions for use ... shall specify ... any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to ... fundamental rights" (Art. 13.2-3).

"Human oversight shall aim at preventing or minimising the risks to ... fundamental rights that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter" (Art. 14.2).

"Any ... user or other third-party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: ...
(b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service;
(c) they make a substantial modification to the high-risk AI system" (Art. 28.1).

However, the complexity of wording ensures the existence of considerable uncertainty, a great deal of scope for regulatory prevarication, and large numbers of loopholes. Further, nothing in the Requirements and Obligations reminds providers and users of their obligations in relation to human rights. A score of 0.7 is assigned.

5.3 Avoid restrictions on, and promote, people's freedom of movement

As per P5.2, a score of 0.7 is assigned.

5.4 Avoid interference with, and promote privacy, family, home or reputation

As per P5.2, a score of 0.7 is assigned.

5.5 Avoid interference with, and promote, the rights of freedom of information, opinion and expression, of freedom of assembly, of freedom of association, of freedom to participate in public affairs, and of freedom to access public services

As per P5.2, a score of 0.7 is assigned.

5.6 Where interference with human values or human rights is outweighed by other factors, ensure that the interference is no greater than is justified ('harm minimisation')

"Human oversight shall aim at preventing or minimising the ... that may emerge when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable misuse, in particular when such risks persist notwithstanding the application of other requirements set out in this Chapter" (Art. 14.2).

"Any ... user or other third-party shall be considered a provider for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the following circumstances: ...
(b) they modify the intended purpose of a high-risk AI system already placed on the market or put into service;
(c) they make a substantial modification to the high-risk AI system" (Art. 28.1).

Nothing requires any evaluation to be undertaken as to whether interference with human values or human rights is outweighed by other factors. Moreover, the complexity of wording ensures the existence of considerable uncertainty, a great deal of scope for regulatory prevarication, and large numbers of loopholes. A score of 0.5 is assigned.

6. Deliver Transparency and Auditability

6.1 Ensure that the fact that a process is AI-based is transparent to all stakeholders

Art. 13 requires transparency of the operation of AI systems to users, but not to those affected by the system. Users have no obligations relating to transparency. A small proportion of High Risk AI systems are also subject to the Art. 52 transparency provisions. A score of 0.3 is assigned.

6.2 Ensure that data provenance, and the means whereby inferences are drawn from it, decisions are made, and actions are taken, are logged and can be reconstructed

"High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (`logs') ... [conformant with] recognised standards or common specifications ... [ensuring an appropriate] level of traceability ... " (Arts. 12.1-3).

"Users of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system, to the extent such logs are under their control" (Art. 29.5).

These provisions address only one small part of the Principle, logging, and even then only in respect of "events", not of "means whereby inferences are drawn from it, decisions are made, and actions are taken". Further, the criterion applied is only "traceability", which is far less stringent than "means ... can be reconstructed", and user organisations are invited to make arrangements such that the logs are not under their control.

"High-risk AI systems shall be designed and developed in such a way [as] to ensure that their operation is sufficiently transparent to enable users to interpret the system's output and use it appropriately" (Art. 13.1).

This provision requires sufficient transparency to enable interpretation and appropriate use, but this is far less stringent than "ensure ... means ... can be reconstructed".

A score of 0.3 is assigned.
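The gap between logging "events" and recording enough for an inference to be reconstructed can be pictured with a hypothetical sketch; neither function reflects anything the Proposal actually specifies, and the field names are assumptions made for illustration.

    import json
    import time

    def log_event(log_file, event_type: str, **details):
        """An Art. 12-style record: notes that something happened, with a timestamp."""
        log_file.write(json.dumps({"ts": time.time(), "event": event_type, **details}) + "\n")

    def log_inference(log_file, input_ref, model_version, features, output, rationale):
        """What P6.2 asks for goes further: enough provenance (input reference, model
        version, features used, output and rationale) to reconstruct the inference later."""
        log_file.write(json.dumps({
            "ts": time.time(),
            "input_ref": input_ref,
            "model_version": model_version,
            "features": features,
            "output": output,
            "rationale": rationale,
        }) + "\n")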

6.3 Ensure that people are aware of inferences, decisions and actions that affect them, and have access to humanly-understandable explanations of how they came about

Nothing was found that imposes any such requirement, including in Art. 9 (Risk Management) and Art. 12 (Record-Keeping).

GDPR Art. 13.2(f), Art. 14.2(g) and Art. 15.1(h) require personal data controllers to "provide the data subject with ... information ... [including, in the case of] the existence of automated decision-making ... meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject". Further, Art. 22 creates a qualified "right not to be subject to a decision based solely on automated processing". Some commentators read the combination of these provisions as somehow implying a right to an explanation for decisions whether made soon after or long after data collection, despite the absence of any such expression in the Article.

The drafting complexity is such that a wide array of qualifications and escape clauses exist, and analysis suggests that such an optimistic interpretation is unjustified (Wachter et al. 2017). In addition, these GDPR articles apply only to "a decision based solely on automated processing", and not at all where the decision is reviewed (however cursorily) by a human being. It also needs to be appreciated that, in the case of decisions based on opaque and a-rational AI/ML inferencing techniques such as neural networks, it is not possible to undertake a meaningful review.

In this assessment, the position is adopted that existing EU law does not provide any right to a humanly-understandable explanation for decisions arising from AI systems, and hence P6.3 is not satisfied.

7. Embed Quality Assurance

7.1 Ensure effective, efficient and adaptive performance of intended functions

Nothing was found that imposes any such requirement, because Art. 17 (Quality Management System) does not relate quality to the AI system's intended functions, and hence an AI system can perform very poorly but not be in breach of Art. 17.

7.2 Ensure data quality and data relevance

"Appropriate data governance and management practices shall apply [whether or not the AI system makes use of] techniques involving the training of models with data ..." (Art. 10.2, 10.6) .

Where the system makes use of "techniques involving the training of models with data", "training, validation and testing data sets" are required to "be relevant, representative, free of errors and complete" and to "have the appropriate statistical properties" (Art. 10.3).

Art. 10.2 articulates the first part of P7.2 ("Ensure data quality ..."), but, when compared with Guidelines for the Responsible Application of Data Analytics, the articulation is not comprehensive. Art. 10.3 provides a further, very specific extension to that articulation.
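As a hedged illustration of what checking a training set against Art. 10.3's criteria ("relevant, representative, free of errors and complete") might involve in practice, the following sketch applies some crude proxies. The thresholds, column names and use of pandas are assumptions for the example; a real assessment would need domain-specific criteria.

    import pandas as pd

    def basic_dataset_checks(df: pd.DataFrame, required_columns, group_column,
                             max_missing_share=0.01, min_group_share=0.05):
        """Very rough proxies for 'complete', 'free of errors' and 'representative'."""
        issues = []
        missing = [c for c in required_columns if c not in df.columns]
        if missing:
            issues.append(f"missing columns: {missing}")
        if df.isna().mean().max() > max_missing_share:
            issues.append("at least one column has too many missing values")
        if df.duplicated().any():
            issues.append("duplicate records present")
        shares = df[group_column].value_counts(normalize=True)
        if (shares < min_group_share).any():
            issues.append(f"under-represented groups in '{group_column}'")
        return issues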

A score of 0.4 is assigned (60% of 0.7 of the Principle, relating to data quality).

"[T]o the extent the user exercises control over the input data, that user shall ensure that input data is relevant in view of the intended purpose of the high-risk AI system" (Art. 29.3).

Because the obligation to use only relevant input data applies merely "to the extent the user exercises control over the input data", a user organisation can avoid that responsibility, and irrelevant data is thereby permitted to cause harm to affected individuals, without any entity being liable for that harm.

A score of 0.1 is assigned (1/6th of 0.3 of the Principle, relating to relevance).

Overall, a score of 0.5 is assigned.
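Reading the bracketed weightings literally, the arithmetic behind the overall figure appears to be:

    \begin{align*}
    \text{data quality component:}\quad & 0.6 \times 0.7 = 0.42 \approx 0.4 \\
    \text{data relevance component:}\quad & \tfrac{1}{6} \times 0.3 = 0.05 \text{ (recorded as 0.1)} \\
    \text{overall:}\quad & 0.4 + 0.1 = 0.5
    \end{align*}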

7.3 Justify the use of data, commensurate with each data-item's sensitivity

Nothing was found that imposes any such requirement. "Training, validation and testing data sets shall be subject to appropriate data governance and management practices [including] (a) the relevant design choices" (Art. 10.2) is too limited and too vague to contribute to the Principle.

7.4 Ensure security safeguards against inappropriate data access, modification and deletion, commensurate with its sensitivity

"High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of ... cybersecurity ... appropriate to the relevant circumstances and the risks" (Art. 15-1,4).

The vague term 'cybersecurity' may refer to assurance of service, or to assurance of any, some or all of sustained quality of service or of data, or to assurance of access to data only by authorised entities for authorised purposes. A score of 0.7 is assigned.

7.5 Deal fairly with people ('faithfulness', 'fidelity')

Nothing was found that imposes any such requirement.

7.6 Ensure that inferences are not drawn from data using invalid or unvalidated techniques

Nothing was found that imposes any such requirement, including in Art. 17 (Quality Management System), which requires only "written policies, procedures and instructions ... [regarding] (b) techniques, procedures and systematic actions ..." (Art. 17.1), and imposes no actual quality requirements in relation to the techniques used to draw inferences.

7.7 Test result validity, and address the problems that are detected

"Training, validation and testing data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that are particular to the specific geographical, behavioural or functional setting within which the high-risk AI system is intended to be used" (Art. 10.4).

Art. 10.4 contains an oblique reference to validity in a context, but it applies only to "training, validation and testing data sets", and not to the overall design, nor to the operation or use of AI systems, let alone to the validity of specific inferences.

"Providers ... shall put a quality management system in place [including] examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system ..." (Art. 17.1(d)). However, "the implementation ... shall be proportionate to the size of the provider's organisation" (Art. 17.2).

Because the first provision is weakened by the second, and there is no express requirement that problems that are detected are addressed, the net effect is less than full correspondence with the Principle. An overall score of 0.8 is assigned.

7.8 Impose controls in order to ensure that the safeguards are in place and effective

Nothing was found that imposes any such requirement. The EC Proposal uses the term 'controls' to refer to 'safeguards', and contains nothing about control arrangements to ensure that the intended safeguards are in place, operational and effective.

7.9 Conduct audits of safeguards and controls

Nothing was found that imposes any such requirement.

8. Exhibit Robustness and Resilience

8.1 Deliver and sustain appropriate security safeguards against the risk of compromise of intended functions arising from both passive threats and active attacks, commensurate with the significance of the benefits and the potential to cause harm

"High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of ... robustness ... and perform consistently in those respects throughout their lifecycle" (Art. 15.1). A score of 1.0 is assigned.

8.2 Deliver and sustain appropriate security safeguards against the risk of inappropriate data access, modification and deletion, arising from both passive threats and active attacks, commensurate with the data's sensitivity

"High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of ... cybsersecurity ... and perform consistently in those respects throughout their lifecycle" (Art. 15.1).

The vague term 'cybersecurity' may refer to assurance of service, or to assurance of any, some or all of sustained quality of service or of data, or to assurance of access to data only by authorised entities for authorised purposes. A score of 0.7 is assigned.

8.3 Conduct audits of the justification, the proportionality, the transparency, and the harm avoidance, prevention and mitigation measures and controls

Nothing was found that imposes any such requirement.

8.4 Ensure resilience, in the sense of prompt and effective recovery from incidents

"High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular due to their interaction with natural persons or other systems ... [and against] attempts by unauthorised third parties to alter their use or performance by exploiting the system vulnerabilities ... appropriate to the relevant circumstances and the risks" (Art. 8.4-3-4). A score of 1.0 is assigned.

9. Ensure Accountability for Obligations

9.1 Ensure that the responsible entity is apparent or can be readily discovered by any party

"Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the nature of the artificial intelligence technologies and the risks of the high-risk AI system" (Art. 61).

Because the system is for the provider and the market surveillance authority alone, and no provision is made for accessibility even by user organisations, let alone people affected by the system and their advocates, this makes only a small contribution to the Principle. A score of 0.3 is assigned.

9.2 Ensure that effective remedies exist, in the form of complaints processes, appeals processes, and redress where harmful errors have occurred

Although Ch.3, Arts. 63-68 is headed "Enforcement", Arts. 71-72 are also relevant. "Where, in the course of ... evaluation, the market surveillance authority finds that the AI system does not comply with the requirements and obligations laid down in this Regulation, it shall without delay require the relevant operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from the market, or to recall it within a reasonable period, commensurate with the nature of the risk, as it may prescribe" (Art. 65.2 et seq. See also Art. 67).

"Member States shall lay down the rules on penalties, including administrative fines, applicable to infringements of this Regulation and shall take all measures necessary to ensure that they are properly and effectively implemented. The penalties provided for shall be effective, proportionate, and dissuasive" (Art. 71.1). (Maximum) penalties are prescribed (Arts. 71.3-5). See also Art. 72.

However, nothing in the EC Proposal requires providers or users to even receive and process complaints, let alone deal with problems that the public notifies to them. Moreover, no scope exists for affected individuals to initiate any kind of action, and hence the public appears to be entirely dependent on action being taken by each national 'market surveillance authority'.

An overall score of 0.7 is assigned.

10. Enforce, and Accept Enforcement of, Liabilities and Sanctions

10.1 Ensure that complaints, appeals and redress processes operate effectively

Nothing was found that imposes any such requirement.

10.2 Comply with external complaints, appeals and redress processes and outcomes, including, in particular, provision of timely, accurate and complete information relevant to cases

"[T]echnical documentation ... shall be drawn up before that system is placed on the market or put into service and shall be kept up-to date [and] drawn up in such a way [as] to demonstrate that the high-risk AI system complies with the requirements set out in [Arts. 8-15]" (Art. 11.1)

This makes a contribution to the provider's and user organisation's capability to participate in enforcement processes. However in itself it falls far short of an enforcement regime.

Under specified circumstances and conditions, "market surveillance authorities shall be granted access" by providers (Arts. 64.1-2). The market surveillance authority can require a provider to "withdraw the AI system from the market or to recall it ..." (Arts. 67-68).

Although the market surveillance authority has powers, the provisions create no scope for enforcement of any aspect of the EC Proposal by individuals, or by public interest advocacy organisations. A score of 0.4 is assigned.


Author Affiliations

Roger Clarke is Principal of Xamax Consultancy Pty Ltd, Canberra. He is also a Visiting Professor in Cyberspace Law & Policy at the University of N.S.W., and a Visiting Professor in the Research School of Computer Science at the Australian National University.


