
AI and compliance: finding balance

As AI transforms the financial industry, regulators are working to keep up, balancing innovation and consumer protection.

As large language models (LLMs) and artificial intelligence (AI) grow more popular and powerful, both market participants and regulators will keep a close focus on their application to financial and compliance products. Regulators around the world are working to keep up with these developments while striking a balance between fostering innovation and protecting consumers. In this blog, we will explore some of the developments that have emerged, and will continue to emerge, from the proliferation of LLMs and AI.

One of the most significant developments in 2023 and beyond will be the increased adoption of AI-powered tools by financial institutions. These tools can handle a number of functions, such as monitoring transactions in real time and flagging suspicious activity that may indicate money laundering or non-compliant advertising. By automating compliance, financial institutions can improve their risk management practices while reducing the operational costs of running a properly functioning compliance program. This could, in turn, reduce the number of enforcement actions that often result from under-resourced compliance departments or simple human error.
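To make the idea of automated flagging concrete, here is a minimal, hypothetical sketch of how a monitoring tool might score transactions and route outliers to a human reviewer. The thresholds, field names, and the `flag_suspicious` helper are illustrative assumptions only, not a description of any actual compliance product; real systems rely on trained models and far richer data.

```python
# Hypothetical, simplified sketch of real-time transaction flagging.
# All thresholds and rules below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Transaction:
    account_id: str
    amount: float
    country: str

HIGH_RISK_COUNTRIES = {"XX", "YY"}   # placeholder jurisdiction codes
AMOUNT_THRESHOLD = 10_000            # illustrative large-amount threshold

def risk_score(txn: Transaction) -> float:
    """Combine simple heuristics into a score; production tools would
    use trained models and many more features."""
    score = 0.0
    if txn.amount >= AMOUNT_THRESHOLD:
        score += 0.5
    if txn.country in HIGH_RISK_COUNTRIES:
        score += 0.4
    return score

def flag_suspicious(txn: Transaction, threshold: float = 0.7) -> bool:
    """Flag a transaction for human compliance review."""
    return risk_score(txn) >= threshold

if __name__ == "__main__":
    txn = Transaction(account_id="A123", amount=12_500, country="XX")
    print(flag_suspicious(txn))  # True -> routed to a compliance analyst
```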

However, LLMs and AI applications also present new challenges and risks. For example, inherent biases in compliance tools could undermine their efficacy. In response, several countries have introduced new regulations aimed at ensuring the responsible use of AI in the financial industry. The European Union, for example, has introduced a proposed regulatory framework for AI (the AI Act) that requires companies to provide transparency about how their AI systems make decisions and sets rules on human oversight, data quality, and transparency, among other things.1 This is particularly important in the financial industry, where decisions made by AI systems can have significant consequences for consumers.

As another example of regulators working to understand and appropriately regulate new technologies, the Monetary Authority of Singapore (MAS) operates a regulatory “sandbox” in which AI-powered financial services can be tested.2 The sandbox allows fintech companies to trial new AI-powered products and services in a controlled environment, without the risk of an enforcement action from regulators, and it provides a channel for regulator feedback as new products are brought to market. This enables companies to innovate while ensuring that new products and services meet regulatory requirements.

Regulators are not the only potential source of guardrails for LLMs and AI. Industry players are also developing their own ethical guidelines for the use of AI. For example, the Institute of Electrical and Electronics Engineers (IEEE) has developed a set of principles for autonomous and intelligent systems (i.e., AI) that includes transparency, accountability, and fairness.3

In conclusion, 2023 marks an important year for the intersection of AI and financial regulation. As AI becomes more powerful and continues to transform the financial industry, regulators around the world are working to keep pace, striking a balance between fostering innovation and protecting consumers. From the rise of AI-powered compliance tools to the introduction of new AI regulations, this year’s developments demonstrate the importance of measured regulation of AI in financial products and compliance.

1. Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final, April 21, 2021.

2. Fintech Regulatory Sandbox Guidelines, Monetary Authority of Singapore, January 1, 2022; available at https://www.mas.gov.sg/-/media/mas-media-library/development/regulatory-sandbox/sandbox/fintech-regulatory-sandbox-guidelines-jan-2022.pdf

3. See, for example: IEEE Standards Association, IEEE Standard for Transparency of Autonomous Systems, IEEE 7001-2021; available at https://ieeexplore.ieee.org/document/9726144

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.

1086825.1.0

Mark Roszak

Regulatory & Compliance Advisor to Saifr
Mark started his career in financial services regulatory roles in Washington, DC, working both at the Financial Industry Regulatory Authority and as an associate at K&L Gates. He then continued his career working with legal tech and fintech startups based out of San Francisco. Mark now runs 1121 Law, a boutique fintech law firm focused on providing corporate and regulatory counseling to both capital allocators and operators.
