
AI regulations in the EU and US: a tale of two views

The EU and US have different approaches to regulating AI. However, they are working together to develop international AI standards and improve governance.

The European Union (EU) and the United States (US) have the potential to shape the global governance of artificial intelligence (AI). Their approaches to AI risk mitigation are similar in that both seek to improve regulatory oversight and enable broader transatlantic cooperation. But while their objectives may align, each has its own response to AI and its challenges.

The EU and US strategies are conceptually aligned on a risk-based approach, agree on key principles of trustworthy AI, and endorse a central role for international standards. However, the specifics of their AI risk management regimes have more differences than similarities. For many specific AI applications, especially those related to socioeconomic processes and online platforms, the EU and US are on a potential path to significant misalignment. Let’s explore.

The EU approach

In June 2018, the EU convened a group of 52 AI experts to conduct an independent study1 on guidelines for trustworthy AI. The report’s executive summary listed three foundational requirements that should be met throughout an AI system’s life cycle:

  1. It should be lawful, complying with all applicable laws and regulations;
  2. It should be ethical, adhering to ethical principles and values; and
  3. It should be robust, from both a technical and a social perspective, since even well-intentioned AI systems can cause unintentional harm.

The group then provided seven considerations that expand on the foundational requirements, focusing on ethics and robustness more than legality. (Read more here in Jasmin Sethi’s blog.)

In addition, the EU’s approach to managing the risks of artificial intelligence is characterized by legislation tailored to specific digital environments. The EU plans to place new requirements on high-risk AI in socioeconomic processes, government use of AI, and regulated consumer products that use AI systems. Other EU legislation focuses on greater public transparency and influence over the design of AI systems in social media and e-commerce.

The US approach

The US approach to AI risk management differs from the EU’s in that the US has invested in non-regulatory infrastructure, such as a new focus on AI risk management, software evaluations, and extensive funding of AI research. The US focus centers more on the business purpose and advantages of AI than on regulatory impact.


In September 2023, industry titans such as Elon Musk, Mark Zuckerberg, Sam Altman, and other C-level tech executives met with US senators to discuss the future of AI. The attendees broadly agreed that the US government should play a bigger role in the regulation and governance of AI. Musk warned senators on Capitol Hill that AI poses a “civilization risk” to governments and society. Google CEO Sundar Pichai commented that “AI is too important not to regulate—and too important not to regulate well.” Meanwhile, various financial firms are arguing for less intervention on the one hand and more regulatory transparency on the other.

The US government has been trying to get its arms around formal regulation of AI. The SEC, for example, has stepped up its regulatory focus on AI with its latest proposed rule on conflicts of interest associated with predictive data analytics. (Read Mark Roszak's blog on the subject here.) The rule targets broker-dealers and investment advisers and addresses potential conflicts of interest associated with their use of predictive data analytics, including AI and similar technologies that directly interact with investors. Technologies covered by the rule include chatbots that use AI to predict, guide, forecast, or direct investment-related behaviors or outcomes. The rule aims to prevent firms from placing their own interests ahead of investors’ interests.

Working together

The US and EU, along with other countries, have seen the value of, and the need for, working together. As a result, the EU-US Trade and Technology Council was established at the June 2021 EU-US Summit by President Joe Biden, European Commission President Ursula von der Leyen, and European Council President Charles Michel. The Council’s goal was to develop a collective understanding of metrics and methodologies for trustworthy AI. Through these negotiations, the EU and US have agreed to collaborate on international AI standards and to jointly study emerging AI risks and applications of new AI technologies.2 In addition, the G7 has launched the “Hiroshima AI process,” with world leaders expected to present the results of discussions by the end of 2023. The Organization for Economic Co-operation and Development (OECD) has developed AI principles, and the United Nations has proposed a new UN AI advisory body to better include the views of developing countries.

I think more can be done to further EU-US alignment, while also improving each jurisdiction’s AI governance regime. Specifically:

  • The US should designate a federal agency, or a blend of existing agencies, to create an AI regulatory plan that provides strategic AI governance and oversight.
  • The US and EU should share information and learnings to help standardize how AI algorithms are applied and the rules that govern their use.
  • As regulations evolve and more countries consider formalizing AI regulation, greater standardization, globalization, and supervision of AI use should result.

More collaboration among countries will be crucial, as these governments are implementing policies that will be foundational to the democratic governance of AI.

Curious who's using AI and who isn't? Check out our ebook, AI insights survey: Adopters, skeptics, and why it matters.


1. European Commission press release, June 2018.

2. European Commission press release, October 2023.

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.

1109981.1.0

Allison Lagosh

Head of Compliance
Allison has extensive experience on financial services legal, compliance, risk, and marketing compliance teams, working on regulatory matters, disclosure design, and data validation and conversions. She has previously held management consulting, risk management, controls governance, and compliance positions at large financial firms.
