EU AI Act Passes: Your AI Governance Preparation Guide

Guides
James Grieco
Mar 25, 2024
12 min read

The European Union’s Artificial Intelligence Act is here: the world’s first comprehensive law to regulate AI. First introduced in April 2021, the text of the law was tentatively agreed upon in December 2023 after months of breakneck negotiations in response to the AI hysteria spawned by ChatGPT’s release.

Now, in March 2024, the EU Parliament has officially voted to pass the AI Act into law, with the next step being its publication in the Official Journal of the EU sometime in early April. Once it enters the OJEU, we’ll be on a clear timeline for the law’s full effect, as the following approximate deadlines will apply:

  • October 2024 - Ban on AI systems within the “unacceptable risk” category
  • January 2025 - AI Act Codes of Conduct now apply
  • April 2025 - Governance rules and obligations for General Purpose AI (GPAI) become applicable
  • April 2026 - Start of application of the EU AI Act for AI systems

The full scope of the law will not kick in until two years after it enters into force, but as you can see, companies are already on the clock to get their AI governance operations up and running.

With that reality in mind, we’re here to break down the key aspects of the law, what compliance will look like, how other AI regulations globally might play out, and what organizations should do to upgrade data governance programs to encompass AI governance.

Risk and the AI Act Tiered Approach

The final text of the AI Act is dense, vague in places, and, according to some, unenforceable in practice.

For starters, this is how Article 3 defines an AI system: "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments."

While that is straightforward and borrows from pre-existing definitions used in European lawmaking bodies, tackling how to regulate AI became a major sticking point in negotiations, with the final compromise tied to a tiered approach pursuant to risk levels inherent to different use cases of AI.

The four categories of risk are:

  • Unacceptable risk: AI systems that pose a clear threat to the safety, livelihoods, and rights of people, such as social scoring and dark patterns. These systems are banned outright.
  • High risk: AI systems that include specific high risk use cases, factoring in both the severity of the possible harm and how likely it is to occur. These systems include resume-scanning or housing application tools and carry mandatory conformity assessments.
  • Limited risk: AI systems that pose limited risk to individuals, such as chatbots. These systems will carry transparency obligations so users know AI is present.
  • Minimal/no risk: AI systems that pose no or negligible risk, such as AI-enabled video games or spam filters. Per the EU Commission, the majority of AI systems will fall within this category and carry no additional compliance obligations.
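
To make the tiers concrete, here is a minimal sketch in Python of how the example use cases above might be labeled in an internal inventory. The names and the mapping are illustrative assumptions only, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # mandatory conformity assessments
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional obligations

# Illustrative mapping of the example use cases mentioned in this article.
# A real classification must follow the AI Act's annexes and legal review.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring system": RiskTier.UNACCEPTABLE,
    "resume-scanning tool": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.value}")
```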

As the AI Act will need to be fluid to react to the speed at which technology evolves, many of these definitions and use cases are subject to amendment. This is compounded by the fact that the law has focused on underlying technologies rather than use cases themselves, which has left many unsatisfied with the bill’s protections.

Of particular note is the public debate around the law not outright banning mass public surveillance, instead opting merely for limitations on live facial recognition usage. This has many, such as Amnesty International, worried about government surveillance.

Likewise, expect several technologies to shift up the pyramid as usage shows how harmful they can be. For example, deepfake technology currently falls under the “limited risk” category, but the potential for individual harm–especially for women–or misinformation on a wide scale will eventually force a decision to be made on the technology.

How the AI Act Applies & Stacks up Globally

The EU has gone out of its way to position itself as the adult in the room on various technologies, which explains why it often jumps at the chance to pass regulation before anyone else. As with the GDPR, the world’s first fully comprehensive data privacy & protection regulation, there is value in being first: an initially rocky rollout of the GDPR eventually gave way to clearer parameters and imitation data privacy laws around the globe, establishing the EU as the guiding force on the issue.

The EU is hoping that cycle repeats itself here with AI regulation, although governing generative AI is set to be far more complex than data privacy, especially considering data privacy itself is a major tenet of AI governance.

As was the case with GDPR, the EU AI Act applies globally as its applicability threshold simply covers EU markets and citizens. This means that any company selling to even a single citizen of a European Union country must comply, which has set off a chain reaction of global enterprises beginning to prioritize AI governance.

While the drive to comply is admirable and a step in the right direction, it presents a complicated undertaking as the AI Act seems imperfect.

Kai Zenner, a Digital Policy Advisor for the European Parliament and a key player in the creation of the AI Act, noted its conceptual flaws, “[Conceptually] … mixing product safety and fundamental rights as well as using New Legislative Framework concepts such as ‘substantial modification’ is not working for evolving AI systems.”

Granted, this was an unprecedented law that took three years and thousands of contributors to hammer out, inevitably relying on innumerable compromises to cross the finish line. Still, an imperfect end result will lead to conflicting interpretations of the law, much as various EU member states have enforced GDPR differently. Zenner writes, “the AI Act is creating an overcomplicated governance system … As a result, Member States will designate very different national competent authorities, which will - despite the Union Safeguard Procedure in Article 66 - lead to very different interpretations and enforcement activities.”

Despite these criticisms, the EU approach is the favorite to emerge as the standard bearer, given the activity–or lack thereof–on AI globally. 

India is currently choosing to let generative AI developers self-regulate and label their own products, as updates out of the country recently dropped the requirement to obtain government permission to make products available to users within India. 

The UK is also taking a looser approach than the EU despite sharing a risk-based framework. Within the UK, it will be up to existing regulators to assess AI-specific risks as they see fit within their areas of expertise, guided by principles such as safety and transparency, effectively creating sector-specific AI oversight rather than a single AI regulator.

Much of what the US will do on AI regulation remains stuck at the posturing level, with guidelines and endless commentary on how to approach AI but little in the way of legislative progress. With how data privacy has unfolded within the country, expect a decentralized approach that many countries will likely not look to emulate.

AI Act Requirements and Compliance 

Part of what currently obscures the full scope of the AI Act is the vague or missing definitions around key terms and ideas. For example, the tiered approach relies on the risk and harms inherent to different systems, but ‘harm’ is not defined.

Likewise, the distinction between AI system providers, deployers, distributors, and importers is not always clear, a key issue given the varying scope of obligations each role carries (and one amplified by the fact that the AI Act differentiates general-purpose AI models from general-purpose AI systems, since both can be 'provided' but only a 'system' can be 'deployed').

The provider-deployer distinction is as relevant to this law as the controller-processor distinction is to GDPR, but the roles here are much more malleable, as developers of AI are not always the providers, and deployers of AI can become providers over time, as Article 25 notes, such as when a deployer makes a substantial modification to a high-risk AI system or repurposes a low-risk one for a high-risk purpose.

A simple use case of this would be an organization integrating ChatGPT into its platform via an API, which would in turn render the service an “AI System” and the organization a “Provider.” Further Provider requirements would then depend on the risk level the system falls under.
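
As a rough illustration of the Article 25 logic described above, the sketch below uses simplified, hypothetical conditions to flag when a deployer may have stepped into the provider role; it is a starting point for internal tracking, not legal analysis.

```python
def becomes_provider(
    rebrands_system_under_own_name: bool,
    substantially_modifies_high_risk_system: bool,
    repurposes_system_for_high_risk_use: bool,
) -> bool:
    """Simplified reading of the Article 25 triggers discussed above.

    Any one of these conditions can shift a deployer (or distributor/importer)
    into the provider role, with the wider provider obligations that follow.
    """
    return (
        rebrands_system_under_own_name
        or substantially_modifies_high_risk_system
        or repurposes_system_for_high_risk_use
    )

# Example: an organization wraps a third-party model via API and rebrands it.
print(becomes_provider(True, False, False))  # True -> treat as a provider
```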

Regardless, with these rules in place, any organization creating or using AI in any capacity must monitor this distinction vigilantly.

Providers of high-risk systems have these baseline obligations (along with a much wider scope of overall obligations):

  • Implement risk management and quality management systems
  • Use only quality datasets
  • Meet transparency obligations to Deployers and end users
  • Maintain comprehensive technical documentation and logs
  • Ensure human oversight AND automatic event recording
  • Complete conformity assessments and declarations
  • Meet accessibility requirements

Deployer obligations include:

  • Use systems in accordance with instructions from the Provider
  • Conduct DPIAs and Fundamental Rights Impact Assessments
  • Ensure human oversight AND incident reporting
  • Verify that input data is relevant and representative
  • Retain logs
  • Notify end users of the use of AI systems where necessary

Exceptions that apply are:

  • AI models or systems used solely for the purpose of scientific research
  • Use of AI systems for purely household activities
  • AI systems used exclusively for defense/military purposes
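
One way to keep these role-based duties and carve-outs visible internally is to encode them as a simple checklist. A minimal sketch, with condensed wording and an assumed structure (the Act’s actual obligations are broader):

```python
# Illustrative checklist of the role-based obligations and exceptions above.
OBLIGATIONS = {
    "provider_high_risk": [
        "risk management and quality management systems",
        "quality datasets only",
        "transparency to deployers and end users",
        "comprehensive technical documentation and logs",
        "human oversight and automatic event recording",
        "conformity assessment and declaration",
        "accessibility requirements",
    ],
    "deployer": [
        "use per provider instructions",
        "DPIAs and Fundamental Rights Impact Assessments",
        "human oversight and incident reporting",
        "relevant and representative input data",
        "log retention",
        "notify end users where necessary",
    ],
}

EXEMPT_USES = {"scientific research", "household activities", "defense/military"}

def applicable_obligations(role: str, use: str) -> list:
    """Return the checklist for a role, or nothing if the use is out of scope."""
    if use in EXEMPT_USES:
        return []
    return OBLIGATIONS.get(role, [])

print(applicable_obligations("deployer", "customer support"))
```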

AI Act Enforcement and Penalties

Companies found to be deploying prohibited AI systems face a fine of up to €35,000,000 or 7% of annual worldwide turnover, whichever is higher. 

Companies found to be non-compliant with specific obligations within the regulation face a fine of up to €15,000,000 or 3% of annual worldwide turnover, whichever is higher.

Here is how that figure stacks up against other EU laws:

  • GDPR maximum fine is 4% of annual worldwide turnover
  • Digital Services Act maximum fine is 6% of annual worldwide turnover
  • Digital Markets Act maximum fine is 10% of annual worldwide turnover
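
The “whichever is higher” rule is straightforward arithmetic. A quick sketch, using a made-up turnover figure:

```python
def max_fine(fixed_cap_eur: int, turnover_share: float, annual_turnover_eur: int) -> float:
    """Return the higher of the fixed cap and the share of worldwide turnover."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Hypothetical company with EUR 2 billion in annual worldwide turnover.
turnover = 2_000_000_000

print(max_fine(35_000_000, 0.07, turnover))  # prohibited AI practices -> 140,000,000.0
print(max_fine(15_000_000, 0.03, turnover))  # other obligations       -> 60,000,000.0
```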

As for the enforcement bodies, they are still being set up, but at the EU level an AI Office and an AI Board will be established, with the European Data Protection Supervisor acting as the market surveillance authority for EU institutions. EU member states will need to set up notifying authorities and market surveillance authorities to carry out duties locally.

How to Prepare Your AI Governance

Much of the AI Act and how to interpret it is still up in the air, but the law presents some of the most monumental technical and legal challenges organizations will ever face. Taking the time to strengthen your privacy program, track down AI-associated data within your organization, and prepare for AI governance is a necessity if you don’t want to get left behind in both compliance and competition.

The philosophical baseline for the AI Act and any AI regulation is to establish visibility, awareness, and accountability. 

Here are the steps to accomplish that, and how MineOS’s new AI Asset Discovery and Risk Assessment module helps.

1. Do an AI mapping exercise

AI adds a new wrinkle to data mapping, but privacy best practices apply to AI governance just as they do to data privacy. Discover and document your complete inventory of AI systems in a centralized place to establish complete visibility.
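
For teams starting from scratch, here is a minimal sketch of what a single inventory record could capture; the field names are assumptions for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a centralized AI system inventory."""
    name: str
    owner: str                 # accountable team or person
    intended_purpose: str
    risk_tier: str             # e.g. "high", "limited", "minimal"
    data_categories: list = field(default_factory=list)     # personal data involved
    third_party_vendors: list = field(default_factory=list)
    deployed: bool = False

inventory = [
    AISystemRecord(
        name="Resume screening assistant",
        owner="HR Operations",
        intended_purpose="Rank inbound job applications",
        risk_tier="high",
        data_categories=["CVs", "contact details"],
        third_party_vendors=["External LLM API"],
        deployed=True,
    ),
]
print(inventory[0])
```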

2. Investigate AI systems

Scan your AI systems to document their nature, intended purposes, third-party vendors with access, and generated outputs. (MineOS offers two scanning options: Smart Data Sampling or a Full Scan, depending on how deeply you need to investigate a data system.)
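
Whatever scanning approach you use, the output should make it easy to spot systems that are still under-documented. A small, tool-agnostic sketch of that check (field names are assumptions):

```python
# Flag inventory entries that are missing the details a scan should surface.
REQUIRED_FIELDS = ["intended_purpose", "third_party_vendors", "generated_outputs"]

def missing_details(record: dict) -> list:
    """Return the required fields that are absent or empty in a record."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

scanned = [
    {"name": "Support chatbot", "intended_purpose": "Answer FAQs",
     "third_party_vendors": ["External LLM API"], "generated_outputs": ["chat replies"]},
    {"name": "Churn model", "intended_purpose": ""},
]

for record in scanned:
    gaps = missing_details(record)
    if gaps:
        print(f"{record['name']}: still needs {', '.join(gaps)}")
```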

3. Get buy-in from the organization

AI governance is going to be an additional layer stacked on top of pre-existing privacy programs, but a supercharged layer that needs extensive resources. Once you have the data you need on AI within your organization, ensure leadership is on the same page about the need to prioritize AI governance alongside product development, and every department understands their role in AI compliance.

(Reminder: this is where an intuitive and user-friendly interface will help align everyone quickly on their roles. Very few have time to become experts on AI and compliance, which is why the governance solution you’re using matters.)

4. Comb through existing compliance mechanisms

Although the AI Act introduces new compliance requirements, many are variations of existing data privacy requirements and templates. Do not fret about building a new compliance vertical from scratch! Go through your privacy program to identify existing data compliance practices and documentation that can be reused.
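
As a rough illustration, the pairings below sketch how common privacy artifacts can feed AI Act work items; they are indicative assumptions, not a formal legal crosswalk.

```python
# Indicative pairings of existing privacy artifacts with AI Act work items.
# These are assumptions for illustration; confirm against your own program.
REUSE_MAP = {
    "Records of processing activities (RoPA)": "AI system inventory and technical documentation",
    "Data Protection Impact Assessments (DPIAs)": "Fundamental Rights Impact Assessments",
    "Vendor due diligence questionnaires": "Provider/deployer role and obligation checks",
    "Retention and logging policies": "AI Act log retention requirements",
}

for existing_artifact, ai_act_item in REUSE_MAP.items():
    print(f"{existing_artifact} -> {ai_act_item}")
```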

5. Conduct an AI Act gap analysis

Once you can answer the following questions, you can round out your approach to AI governance with a gap analysis of what remains lacking:

  • What is the purpose of each AI model/system?
  • What type of data does the tool use/collect?
  • What does the model/system do?
  • Who has access and what data is accessible?
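
Turning those four questions into a per-system check can be as simple as the sketch below, where the field names are assumptions about how your inventory is structured.

```python
# The four gap-analysis questions, expressed as inventory fields to verify.
QUESTIONS = {
    "purpose": "What is the purpose of each AI model/system?",
    "data_types": "What type of data does the tool use/collect?",
    "behavior": "What does the model/system do?",
    "access": "Who has access and what data is accessible?",
}

def unanswered(system: dict) -> list:
    """List the questions a system's documentation cannot yet answer."""
    return [q for key, q in QUESTIONS.items() if not system.get(key)]

example = {"name": "Support chatbot", "purpose": "Answer FAQs", "data_types": ["chat logs"]}
for question in unanswered(example):
    print(f"{example['name']}: unanswered -> {question}")
```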

6. Complete an AI Impact Assessment

As an end output under the regulation, being able to complete an AI impact assessment will be vital for enterprises once the AI Act becomes enforceable.
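
A minimal sketch of the sections such an assessment might cover; the section names are assumptions drawn from common impact-assessment practice, not the Act’s prescribed template.

```python
# Skeleton of the sections an AI impact assessment might cover.
ASSESSMENT_SECTIONS = [
    "System description and intended purpose",
    "Risk tier and rationale",
    "Data sources, categories, and quality controls",
    "Affected individuals and potential harms",
    "Human oversight and incident response measures",
    "Residual risks and sign-off",
]

def blank_assessment(system_name: str) -> dict:
    """Create an empty assessment, ready to be filled in for one system."""
    return {"system": system_name, **{section: None for section in ASSESSMENT_SECTIONS}}

print(blank_assessment("Resume screening assistant"))
```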

Want to see how MineOS tackles the task more thoroughly than other solutions? Book a demo to see our new AI governance module in action.