On December 8, 2023, the EU announced that it had reached a tentative agreement on the terms of the AI Act, which, if approved by member countries and the European Parliament, could set a global precedent for the regulation of artificial intelligence (AI). The proposed regulations take a risk-based approach and will likely affect AI in every industry, including financial services.
The EU’s risk-based structure
Given AI’s myriad potential use cases, a risk-based approach makes sense; a one-size-fits-all regulation scheme is unlikely to work. Wisely, the EU categorized AI systems into minimal-risk, high-risk, and unacceptable-risk tiers, and added transparency obligations.
According to the EU’s proposal, AI-enabled tools with minimal risk, like those sorting our emails, get a free pass (no regulatory oversight). High-risk systems, such as those affecting safety or health, will need to comply with requirements “including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, human oversight, and a high level of robustness and cybersecurity.” Any AI systems that threaten human rights pose unacceptable risks and will therefore be banned. Finally, the proposed rules call for transparency: users must be made aware when they are interacting with AI.
There are, of course, more details in the EU’s proposed rules, including a few notable exceptions and the creation of a new European AI Office, but this tiered structure forms the foundation and gives those implementing AI a model for moving forward.
Regulations enable healthy markets
Regulations are nothing new to the financial services industry, which must comply with rules from the SEC, FINRA, and other regulatory bodies. Because the industry provides the backbone of economic systems, regulatory oversight is essential to build trust and help markets function properly.
If you take a step back and imagine there were no regulations, what would you need to feel comfortable buying a financial product? You would want to know that what you are being told is 100% truthful and that the product will work as advertised. You would want full transparency into the risks, returns, and any conflicts of interest the provider might have. You would want to know that the transaction is fair and that no one is front-running you or receiving better execution. And you would want assurances of security: that your assets are safeguarded, that records are accurate, and so on. SEC and FINRA regulations help provide the environment you need to engage safely and confidently.
Just as financial regulations provide needed guardrails to help markets work, new regulations for AI should likewise help the nascent industry to grow while minimizing potential negative impacts.
New technology needs proper regulations
At its most basic, AI is just another technology integrating into financial services. Like any new technology, there are potential risks (bias, privacy, misinformation) and rewards (efficiency, cost savings, accuracy).
One example of a new technology to enter financial services over the past decade is cryptocurrency and blockchain. Many cryptocurrencies operate on decentralized networks, enabling secure peer-to-peer transactions, mostly without the need for third-party intermediaries, through the use of cryptography and distributed ledger technology.
This technology has been subject to regulatory uncertainty, which many would argue contributed to cryptocurrencies’ volatility. Yet the underlying blockchain technology shows potential to reshape many industries by enhancing transparency, efficiency, and user control.
Without regulations, some technologies struggle to get off the ground safely, especially when they’re being applied to another regulated industry.
AI guardrails can help
With some lessons learned from crypto, regulated AI can hopefully be more safely integrated into financial services.
AI can be used to help comply with many financial regulations, such as those governing trading, Know Your Customer (KYC) requirements, fraud detection, anti-money laundering, insider trading, content development, and more. And very often it won’t be a single AI doing the job but layers of AI working together.
For example, a firm like Saifr aims to act as an extra AI layer providing guardrails to help ensure that outputs from other AI systems used to develop content for marketing materials, chatbots, etc. comply with regulations. Large language models (LLMs) might excel at analyzing customer data and creating personalized, engaging content, but that content isn’t likely to meet regulations for communicating with the public, since the LLM doesn’t know or understand those regulations. Saifr’s additional AI layer reviews the content, flags possible compliance risks, and suggests corrections to help humans bring the text and images into compliance.
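To make the layered approach concrete, here is a very simplified sketch: a generation step (a stand-in for an LLM) followed by a separate review step that flags compliance risks before a human sees the draft. Every name here (the functions, the flagged-phrase list) is hypothetical and for illustration only; it is not Saifr’s actual API, and a real review layer would use trained models rather than a static phrase list.

```python
# Hypothetical two-layer pipeline: generate content, then review it.
# Nothing here reflects any real vendor's implementation.

FLAGGED_PHRASES = {
    "guaranteed returns": "Promissory claims are generally prohibited.",
    "risk-free": "Omits required risk disclosure.",
}

def generate_content(prompt: str) -> str:
    """Layer 1: stand-in for an LLM call that drafts marketing copy."""
    return f"Invest today for guaranteed returns on {prompt}!"

def compliance_review(text: str) -> list[tuple[str, str]]:
    """Layer 2: flag phrases in the draft that pose compliance risks."""
    lowered = text.lower()
    return [(phrase, reason) for phrase, reason in FLAGGED_PHRASES.items()
            if phrase in lowered]

draft = generate_content("our new index fund")
for phrase, reason in compliance_review(draft):
    print(f"FLAG: '{phrase}' -> {reason}")
```

The design point is separation of concerns: the generator optimizes for engagement, while an independent layer checks the output against the rules, with a human making the final call.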
The EU’s leadership in AI regulations is welcome and will hopefully lead other countries to follow suit. Regulations, as shown in financial services, can help to build the trust needed for innovation and growth. And AI itself can even be used to help meet regulations.
The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.