
What the US can learn from the EU’s first significant act on AI

Following the precedent set by the EU AI Act, US compliance officials should consider a risk-based approach to balance innovation and protection.

In early December 2023, negotiators from the European Parliament and the Council of the EU reached a provisional agreement on the terms of the bloc's first major step in regulating artificial intelligence (AI). The EU AI Act, a sweeping law first proposed by the European Commission, takes a risk-based approach to regulating AI. Although the parliament and the Council have agreed on its terms, the act has not yet been formally adopted. Once both bodies adopt it, most of its requirements will become binding law for all member states within two years.

While the AI Act has been neither finalized nor passed, the current draft nevertheless presents an interesting approach to AI regulation from which US compliance officials could learn a great deal. The act's risk-based approach sorts AI systems into levels of risk according to their potential to impact the fundamental rights of EU citizens: limited-risk systems, general-purpose or generative systems, high-risk systems, and unacceptable-risk systems.

Overview of risk categorizations

The risk assessments and categorizations under the EU AI Act are designed to protect the rights of consumers, with special attention to the rights of children. For example, AI systems the EU deems “limited risk” would have to comply with basic transparency requirements that make users aware they are interacting with an AI and help them make informed decisions about using the model. An example of a limited-risk system would be a generative model that produces fake audio or video of politicians saying things they never said.

The next level covers general-purpose or generative AI systems. These systems would see less strict regulation than those in higher risk levels but would have to comply with several transparency requirements, such as disclosing that content is AI generated and publishing summaries of the copyrighted data used for training. General-purpose models include more robust generative models such as ChatGPT. Generative models the EU considers high-impact would be subject to thorough evaluations.

More powerful AI systems that the act deems high-risk and high-impact are subject to stricter regulation. High-risk systems range from AI embedded in products such as cars and medical devices to AI used in education, critical infrastructure, legal interpretation, and employment. The act would subject these systems to regular assessments throughout their lifecycle. Beyond high-risk systems, the act outright bans what it deems unacceptable-risk systems: those that directly threaten people’s rights and livelihoods, such as voice-activated toys that cognitively manipulate vulnerable groups into dangerous behavior, or social scoring systems that classify people based on their actions. A rough sketch of how a compliance team might model this taxonomy follows.
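To make the tiered structure concrete for compliance teams thinking about internal tooling, here is a minimal illustrative sketch in Python. The RiskTier enum, the OBLIGATIONS mapping, and the obligations listed are paraphrases of the categories described above, invented for illustration; they are not an official framework or the act's actual text.

    from enum import Enum

    class RiskTier(Enum):
        """Illustrative tiers paraphrasing the EU AI Act's categories (hypothetical names)."""
        LIMITED = "limited"
        GENERAL_PURPOSE = "general_purpose"
        HIGH = "high"
        UNACCEPTABLE = "unacceptable"

    # Hypothetical mapping of each tier to the example obligations described above.
    OBLIGATIONS = {
        RiskTier.LIMITED: ["disclose that users are interacting with an AI"],
        RiskTier.GENERAL_PURPOSE: [
            "label AI-generated content",
            "publish summaries of copyrighted training data",
        ],
        RiskTier.HIGH: ["undergo regular assessments throughout the lifecycle"],
        RiskTier.UNACCEPTABLE: ["prohibited outright"],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """Look up the example obligations attached to a risk tier."""
        return OBLIGATIONS[tier]

    if __name__ == "__main__":
        for tier in RiskTier:
            print(f"{tier.value}: " + "; ".join(obligations_for(tier)))

The point of such a structure is that obligations attach to the tier, not to individual systems, so reclassifying a system automatically changes what is required of it.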

Advantages of risk-based AI regulation

A risk-based approach is especially well suited to developing a safe AI future. It allows regulators to systematically identify and restrict powerful, potentially overreaching AI models that could infringe upon EU citizens’ rights. Such an approach may also assuage public concern over the misuse of AI to manipulate people or invade their privacy.

Yet, while the EU will regulate high-risk and unacceptable-risk systems severely, the risk-based approach still allows for advancement. Since the EU subjects weaker general-purpose models to looser restrictions than high-risk models, developers can experiment more easily and let the technology advance organically. The EU focuses compliance burdens where there is the greatest need for regulation: high-risk systems.

In contrast, a blanket regulatory model that does not account for the varying degrees of risk each model may pose would likely stifle advancement by placing heavy restrictions on all AI models regardless of their effect. Alternatively, such broad, universal regulation could fail to protect individuals’ rights by not sufficiently restricting the more powerful and harmful models.

Takeaways for US compliance officials

Through this act, the EU sets a strong precedent for effective regulation of AI, and the step toward uniformity and a risk-based approach is one the United States should consider following. Yet EU officials have not hammered out the particulars of the AI Act, as the act itself has not yet passed. The devil may still be in the details! Moreover, how compliance officials implement a risk-based approach will shape the act’s impact; implemented incorrectly, the approach may lose its effectiveness.


Which threat level each AI system should slot into is not necessarily an objective question. Officials in the EU AI Act trilogues spent months debating which AI applications to prohibit and restrict, and only now have they come to a tentative agreement. US compliance officials will likely have differing opinions on what counts as unacceptable risk or high risk. Further, while the EU act generally has four threat levels, that does not mean the US should have four as well. A greater number of levels could allow for more nuance in regulation. For instance, within the EU AI Act’s high-risk level, AI used in cars and AI used in education are subject to similar rules, yet one could argue those two uses vary significantly in their effect on the public.

It is here that the EU AI Act perhaps does not go far enough. US compliance officials may wish to take a more nuanced risk-based approach, both to avoid unduly hindering the advancement of AI and to protect citizens and industries where they truly need protection.

When applying a risk-based regulatory approach to AI, compliance officials should aim to make it as future-proof as possible. Part of the goal of a risk-based approach is encouraging technological advancement, but if compliance officials sweep too many kinds of AI systems into one category, then as those systems develop and their usage evolves, the threats they pose could come to vary widely from one another. Officials could address this with a greater number of nuanced risk levels, as mentioned above, or by regularly assessing each model for the number and nature of the threats it poses, which the EU AI Act does seem to do for high-risk systems. Subjecting lower-risk systems to similar, if perhaps less frequent, assessments could help catch problematic systems that fall through the cracks, as sketched below.
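As a rough illustration of that idea, recurring reassessment can be modeled as tracking when each system was last reviewed and flagging those that are overdue, with stricter tiers reviewed more often. The cadences and the is_reassessment_due helper below are invented for illustration; they are not drawn from the act or any regulator's guidance.

    from datetime import date, timedelta
    from typing import Optional

    # Hypothetical review cadences (in days): stricter tiers are reassessed more often.
    REVIEW_INTERVAL_DAYS = {
        "high": 90,
        "general_purpose": 180,
        "limited": 365,
    }

    def is_reassessment_due(tier: str, last_reviewed: date,
                            today: Optional[date] = None) -> bool:
        """Return True if a system in the given tier is overdue for another risk review."""
        today = today or date.today()
        return today - last_reviewed > timedelta(days=REVIEW_INTERVAL_DAYS[tier])

    # Example: a limited-risk system last reviewed 400 days ago is overdue.
    print(is_reassessment_due("limited", date.today() - timedelta(days=400)))  # True

The practical payoff is that a system whose usage has evolved gets pulled back for review on a schedule, rather than keeping its original classification indefinitely.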

Conclusion

The EU AI Act, while not finalized, is setting a new precedent for the regulation of AI, and US compliance officials should take note. The risk-based approach the act adopts appears robust and considered enough not to hinder the progress of AI too significantly. It may be in the best interest of US compliance officials to adopt a similar, perhaps even more nuanced, risk-based approach that both thoroughly protects the rights of individuals and allows for technological advancement. The US should also consider reassessing the risk level of AI systems on a regular basis; compliance officials must keep a finger on the pulse of AI’s technological advancement in order to protect individuals’ rights.


The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information.

1127899.1.0

Jasmin Sethi

Regulatory & Compliance Advisor to Saifr
Jasmin is a lawyer, economist, entrepreneur, and thought leader with over a decade of experience in the financial industry. She founded Sethi Clarity Advisers in 2018 and is a consultant to Saifr. Jasmin was a Vice President in BlackRock’s Legal and Compliance group, Special Counsel at the Securities and Exchange Commission’s Division of Trading and Markets in the Office of Market Supervision, and an adjunct professor of law at Georgetown University Law Center and SEC University. Earlier in her career, Jasmin was an Associate at Mayer Brown in Washington, D.C., where she practiced general litigation. Jasmin received her JD, PhD in economics, and undergraduate degrees from Harvard University. As a Fulbright Scholar in 2001, she earned an MSc in Economics from the London School of Economics and Political Science.
