
AI and compliance: finding balance

As AI transforms the financial industry, regulators are working to keep up, balancing innovation and consumer protection.

As large language models (LLMs) and other artificial intelligence (AI) systems become more popular and powerful, market participants and regulators alike will continue to focus on their application to financial and compliance products. Regulators around the world are working to keep up with these developments while striking a balance between fostering innovation and protecting consumers. In this blog, we will explore some of the developments that have come, and will come, from the proliferation of LLMs and AI.

One of the most significant developments in 2023 and beyond will be the increased adoption of AI-powered tools by financial institutions. These tools will handle a number of functions, such as monitoring transactions in real time and flagging suspicious activity that may indicate money laundering or non-compliant advertising. By automating compliance, financial institutions can improve their risk management practices while reducing the operational costs of running a properly functioning compliance program. This could reduce the number of enforcement actions that often result from under-resourced compliance departments or simple human error.
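To make the transaction-monitoring idea concrete, here is a deliberately minimal sketch of one classic rule: flagging possible "structuring," i.e., repeated deposits just under a reporting threshold. All names, thresholds, and data below are hypothetical; production monitoring systems combine many such rules with far richer features and machine-learning models.

```python
from collections import defaultdict

# Illustrative only: a toy rule that flags possible "structuring" --
# several deposits just under a reporting threshold. The threshold,
# factor, and count here are assumptions for the example.
THRESHOLD = 10_000
NEAR_FACTOR = 0.9  # deposits above 90% of the threshold look suspicious
MIN_COUNT = 3      # flag an account after three near-threshold deposits

def flag_structuring(transactions):
    """transactions: list of (account_id, amount) tuples.
    Returns the set of account IDs with MIN_COUNT or more deposits
    in the [NEAR_FACTOR * THRESHOLD, THRESHOLD) range."""
    near_threshold = defaultdict(int)
    for account, amount in transactions:
        if NEAR_FACTOR * THRESHOLD <= amount < THRESHOLD:
            near_threshold[account] += 1
    return {a for a, n in near_threshold.items() if n >= MIN_COUNT}

txns = [
    ("A", 9_500), ("A", 9_800), ("A", 9_200),  # three near-threshold deposits
    ("B", 500), ("B", 12_000),                 # ordinary activity
]
print(flag_structuring(txns))  # {'A'}
```

Even a simple rule like this runs continuously and consistently, which is part of the cost and error-reduction argument above; the trade-off, as the next section discusses, is that the rules and models themselves must be scrutinized.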

However, LLMs and AI applications can also present new challenges and new risks. For example, inherent biases in compliance tools could undermine their efficacy. Several countries have introduced new regulations aimed at ensuring the responsible use of AI in the financial industry. The European Union, for example, has introduced a regulatory framework for AI that requires companies to be transparent about how their AI systems make decisions and sets rules on human oversight, data quality, and transparency, among other things.1 This is particularly important in the financial industry, where decisions made by AI systems can have significant consequences for consumers.
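The bias concern above can be made concrete with one very simple audit: comparing how often a compliance model flags members of different groups. The function, group labels, and data below are hypothetical; real fairness audits use established statistical tests and toolkits, and a rate gap alone does not prove bias, only that investigation is warranted.

```python
# Illustrative only: a minimal disparity check on a model's flag rates.
def flag_rate(decisions, group):
    """decisions: list of (group_label, was_flagged) pairs.
    Returns the fraction of the group's cases that were flagged."""
    hits = [flagged for g, flagged in decisions if g == group]
    return sum(hits) / len(hits) if hits else 0.0

# Hypothetical audit sample of model decisions.
decisions = [
    ("group_x", True), ("group_x", True),
    ("group_x", False), ("group_x", False),
    ("group_y", True), ("group_y", False),
    ("group_y", False), ("group_y", False),
]
rate_x = flag_rate(decisions, "group_x")  # 0.5
rate_y = flag_rate(decisions, "group_y")  # 0.25
# A large gap between the rates is a signal to investigate the model.
print(rate_x, rate_y)
```

Transparency requirements like the EU's make this kind of check possible in the first place: you cannot audit decision rates you cannot see.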

As another example of regulators looking to better understand and appropriately regulate new technologies, the Monetary Authority of Singapore (MAS) operates a regulatory "sandbox" in which AI-powered financial services can be tested.2 The sandbox allows fintech companies to test new AI-powered products and services in a controlled environment, without risk of an enforcement action from regulators, and with feedback from the regulators as new products are brought to market. This approach enables companies to push for innovation while ensuring that new products and services meet regulatory requirements.

Regulators are not the only potential source of guardrails for LLMs and AI. Industry players are working to develop their own ethical guidelines for the use of AI. For example, the Institute of Electrical and Electronics Engineers (IEEE) has developed a set of principles for autonomous and intelligent systems (i.e., AI) that include transparency, accountability, and fairness.3

In conclusion, 2023 marks an important year for the intersection of AI and financial regulation. As AI becomes more powerful and continues to transform the financial industry, regulators around the world are working to keep up with these developments, striking a balance between fostering innovation and protecting consumers. From the rise of AI-powered compliance tools to the introduction of new regulations for AI, the developments of this year demonstrate the importance of measured regulation as it relates to AI and financial products and compliance.

1. European Commission, Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final, 21 April 2021.

2. Fintech Regulatory Sandbox Guidelines, Monetary Authority of Singapore, January 1, 2022; available at https://www.mas.gov.sg/-/media/mas-media-library/development/regulatory-sandbox/sandbox/fintech-regulatory-sandbox-guidelines-jan-2022.pdf

3. See, for example: IEEE Standards Association, IEEE Standard for Transparency of Autonomous Systems, IEEE 7001-2021; available at https://ieeexplore.ieee.org/document/9726144

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.

1086825.1.0

Mark Roszak

Regulatory & Compliance Advisor to Saifr
Mark started his career in financial services regulatory roles in Washington DC, working both for the Financial Industry Regulatory Authority and as an Associate at K&L Gates. He then continued his career working with legal tech and fintech startups based out of San Francisco. Mark now runs 1121 Law, a boutique fintech law firm focusing on providing corporate and regulatory counseling to both capital allocators and operators.
