The EU’s proposal for an AI Act – the world’s first initiative to regulate Artificial Intelligence

Against the backdrop of the EU’s soft-law initiatives providing guidance “on fostering and securing ethical and robust AI”, such as the Ethics Guidelines for Trustworthy AI issued by the European Commission’s High-Level Expert Group (HLEG),[1] in April 2021 the European Union commenced formal deliberations on the world’s first binding legal framework for AI, introducing the proposal for a Regulation on AI (“AI Act”),[2] designed to govern the development, placing on the market, and use of this fast-evolving and massively disruptive technology.

After more than two years of intense discussions within the Union’s legislative bodies on the pivotal provisions of the proposed AI Act (henceforth AIA), a fifth round of trilogue negotiations kicked off on 24 October in the EU Council, aimed at reaching political consensus on the most disputed areas of the forthcoming regulation.

As the Act reaches the final stages of the legislative procedure, it is of utmost importance for everyone involved with AI, from researchers to end users, to stay informed, understand the key aspects of its provisions, and prepare for its enactment. Here’s an overview of the AI Act’s main themes and current state.

Risk-based approach

AIA is built around a human-centric, risk-based approach. AI systems are classified according to the level of risk they pose to people’s fundamental rights, livelihoods, or safety, and the level of legal intervention is tailored to the corresponding level of risk. AI applications deemed to pose an unacceptable risk, such as social scoring by public authorities, exploitation of physical or mental disability, and real-time biometric identification systems in public spaces (with few exceptions), are banned. At the other end of the spectrum, applications such as video games and spam filters that present low or minimal risk are subject to no legal obligations, while limited-risk applications, like chatbots and deepfakes, are subject to mere transparency obligations to ensure users know they are interacting with AI.

The regulatory core of AIA thus concerns systems that are expected to have a significant impact on people’s safety or fundamental rights, which are subject to strict provisions. Unlike AIA as a whole, which applies to all AI systems, Title III on high-risk AI systems applies specifically to systems falling into one of two categories: AI systems that are products or safety components of products already covered by certain Union health and safety harmonization legislation (such as toys, cars, machinery, or medical devices); and AI systems to be used in eight specific areas, including the management and operation of critical infrastructure, education and vocational training, worker management and access to self-employment, access to essential private and public services, and law enforcement. Taking into account the pace of ongoing developments and the related uncertainty, this list of critical areas remains open to updates via new inclusions, in an attempt to future-proof the regulation.
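For illustration only, the tiered logic can be summarized in a short sketch; the tier names and consequences below paraphrase the proposal and are not statutory wording:

```python
from enum import Enum

# Purely illustrative mapping of the AIA's four risk tiers to the broad
# legal consequence attached to each (paraphrased from the proposal).
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright (e.g. public-authority social scoring)"
    HIGH = "strict requirements and prior conformity assessment"
    LIMITED = "transparency obligations (e.g. chatbots, deepfakes)"
    MINIMAL = "no new legal obligations (e.g. spam filters, video games)"

if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.name:>12}: {tier.value}")
```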

Requirements for high-risk AI systems

AIA establishes an extensive list of essential requirements and corresponding obligations, most of which fall on providers. The requirements that high-risk systems must adhere to concern aspects such as the quality of the data used to train the system; its technical robustness and cybersecurity; its transparency and traceability; the creation of adequate and continuously updated risk assessment and management frameworks; proper technical documentation; as well as the possibility of effective human oversight to minimize risks to health, safety, and fundamental rights. Compliance with the relevant requirements will be demonstrated via the providers’ prior conformity “self-assessment”.

Standards

Importantly, the requirements (vaguely stated in AIA) are left to the European Standardization Organizations to further specify via the development of harmonized standards. Providers can then choose either to interpret the requirements themselves or to follow the standards, enjoying a presumption of conformity. Given that the latter option is not only cheaper but also practically and legally a much “safer bet”, it’s safe to say that standardization is “where the real rule-making will occur”.[4] This has led to controversy and raised concerns of a problematic and unjustified delegation of rule-making power, due to the lack of meaningful democratic oversight and effective stakeholder participation in the standardization process.[4],[5]

Enforcement and sanctions

With infringement penalties even greater than the GDPR’s (up to €30 million or, for companies, up to 6% of total worldwide annual turnover, whichever is higher), (non-)compliance with AIA will be a game-changer. However, relying primarily on internal self-assessment by providers of high-risk systems and on harmonized standards developed by private bodies, AIA is criticized for not establishing effective enforcement structures.[4]
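As a back-of-the-envelope illustration, and assuming the Commission proposal’s “whichever is higher” mechanic (which mirrors the GDPR’s), the fine ceiling scales with company size:

```python
# Illustrative only: headline fine ceiling for the gravest AIA infringements
# under the Commission proposal: up to EUR 30 million or, for companies,
# up to 6% of total worldwide annual turnover, whichever is higher.
# The final figures may still change during the trilogue negotiations.

FIXED_CAP_EUR = 30_000_000
TURNOVER_SHARE = 0.06

def fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of the fine: the higher of the two caps."""
    return max(FIXED_CAP_EUR, TURNOVER_SHARE * worldwide_annual_turnover_eur)

# A firm with EUR 1 billion in turnover faces a ceiling of EUR 60 million,
# since 6% of its turnover exceeds the EUR 30 million fixed cap.
print(f"EUR {fine_ceiling(1_000_000_000):,.0f}")  # EUR 60,000,000
```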

Regulation at the cost of innovation? 

The well-known tension between innovation and overly stringent regulation is particularly salient when it comes to AI. To prevent innovation from stagnating, AIA (amongst other measures) provides for regulatory sandboxes, which would allow companies to develop and test AI applications in a controlled environment, under regulatory supervision and with some flexibility regarding obligations such as those under the GDPR. However, the impact that the proposed regulation will have on investment in and development of AI in Europe remains highly controversial, with the Centre for Data Innovation predicting a reduction in investments of almost 20% and a €31 billion cost for the European economy over the next five years.

Where we stand 

The Council agreed the EU Member States’ general position in December 2022, and the Parliament voted on its position in June 2023. With trilogue negotiations between the EU’s lawmakers coming to a head, AIA is expected to be finalized and ready for a vote by the end of the year, with substantial amendments to the Commission’s original proposal “including revising the definition of AI systems, broadening the list of prohibited AI systems, and imposing obligations on general-purpose AI and generative AI models such as ChatGPT”.[3] A major point of persistent contention concerns the prohibition of real-time biometric identification systems in public spaces (such as AI-powered facial recognition), with the Council pushing for exceptions for law enforcement and national security and the Parliament advocating a stricter approach to the privacy-security tradeoff.

Concluding Remarks

Despite a wide range of criticisms and persistent worries regarding its omissions, specificity, and practicability, AIA is rightly celebrated as humanity’s first attempt to legislatively ensure that AI is trustworthy and put at the service of people. And the timing seems right. AI’s trajectory of growth and integration, the looming threat of an arms race, and an ever-increasing number of AI-harm incidents (from glitches costing firms like Google hundreds of millions, to Australia’s robodebt failure, to Italy’s temporary ChatGPT ban over privacy concerns) ignite public concern and precipitate regulatory intervention. As more and more countries follow with AI regulation of their own, the EU’s forthcoming AI Act is expected to serve as a major point of reference.

Authors:

Nikos Koulierakis, Christina Nanou

Eunomia Limited (EUNL)

References

[1] High-Level Expert Group on AI, Ethics Guidelines for Trustworthy AI, European Commission, 2019.

[2] European Commission, Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206

[3] Madiega T., Artificial Intelligence Act, European Parliamentary Research Service (EPRS), June 2023.

[4] Ebers M. et al., The European Commission’s Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS), J 4(4): 589–603, October 2021.

[5] Veale M., Zuiderveen Borgesius F., Demystifying the Draft EU Artificial Intelligence Act, Computer Law Review International 22(4), July 2021.