
Three takeaways from Compliance Week’s AI and Compliance Summit

Key topics in the FinServ compliance space include opportunities and challenges for professionals, generative AI supervision, and the US regulatory scene.

I recently attended Compliance Week’s AI and Compliance Summit in Boston, where compliance professionals gathered to discuss the current and future state of AI compliance, ethics, regulation, and governance. Complexity was the foremost concern on attendees’ minds: as AI grows more complex, so do the compliance concerns surrounding it. AI introduces new opportunities and challenges for compliance professionals seeking to establish AI policies, governance committees, and codes of ethics within their organizations.

Speakers covered many concerns, risks, opportunities, and statements from regulatory authorities that compliance professionals should be aware of, but three considerations stood out: compliance professionals are essential for, and challenged by, AI innovation; AI should be supervised the way one would supervise an intern; and the future of AI regulation is murky.

Let us dive deeper into each of these three areas.

1. Compliance professionals should brace for opportunities and challenges

AI innovation requires input from compliance professionals, so they should be at the table alongside data scientists when AI models are developed. Data scientists typically focus on making sure a model functions as intended; compliance professionals help shape that underlying intention so the model addresses as many use-case risks as possible. How a model is applied falls squarely within the domain of compliance.

Risk monitoring is vital to AI innovation: companies should not be comfortable scaling AI solutions until developers have addressed known risks and compliance professionals can assure the business that the model is safe. Compliance professionals may feel intimidated by the complexity of AI, but the technology can be simpler to evaluate than many assume. They do not have to understand every calculation and line of code to assess whether a data set contains biases or whether a model’s output is problematic.
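
To make that concrete, a basic fairness check can often be run at the data level without touching the model’s internals at all. The sketch below is illustrative only: the file name and the "group" and "approved" columns are hypothetical stand-ins for whatever fields an organization’s data actually contains.

```python
# A minimal sketch of a data-level bias check, assuming a hypothetical
# CSV of loan decisions with "group" and "approved" columns.
# It illustrates the kind of high-level review a compliance team can
# run without reading a single line of the model's code.
import pandas as pd

def approval_rate_gap(path: str) -> float:
    """Return the gap between the highest and lowest approval rates
    across groups (a rough demographic-parity check)."""
    df = pd.read_csv(path)
    rates = df.groupby("group")["approved"].mean()
    print(rates)  # approval rate per group
    return rates.max() - rates.min()

if __name__ == "__main__":
    gap = approval_rate_gap("loan_decisions.csv")  # hypothetical file
    # A large gap is a flag for further review, not proof of bias.
    print(f"Approval-rate gap across groups: {gap:.2%}")
```

A check like this does not settle whether a model is fair, but it shows how compliance professionals can surface questions for the data science team using nothing more than the training or outcome data itself.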

With a clear goal, compliance professionals can help guide AI development through AI policies, codes of ethics, and risk assessments. Innovation is exciting, but it is important to mitigate bias, protect user privacy, and ensure models are as accurate as possible.

In short, there is a clear need for compliance expertise in AI development, which means AI presents opportunities for compliance professionals as well as challenges.

2. Compliance and other professionals should supervise generative AI tools as they would interns

Generative AI has many potential uses for streamlining workflows. It can help build outlines and draft articles, communications, emails, and more. However, generative AI also comes with at least four key risks that make overreliance on the technology unwise: hallucinations (outputs that are incorrect or fabricated), biases, copyright violations, and data privacy breaches.

Hallucinations and biases are particularly concerning. Because generative AI is probabilistic rather than deterministic, generating whatever content is “most likely” to address the prompt, it is inherently unpredictable and almost guaranteed to include incorrect, irrelevant, or inaccurate information in its outputs at some point. As Vall Herard, CEO of Saifr, pointed out at the NSCP Conference in 2023, one way to curb these risks is to treat AI as an intern rather than as a full replacement for an employee’s work.
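
The toy example below, which is not based on any real model, shows why probabilistic generation behaves this way: the next token is sampled from a probability distribution, so the same prompt can yield different, and sometimes wrong, completions on different runs. The prompt and the token probabilities are invented for illustration.

```python
# A toy illustration (not any real model) of probabilistic generation:
# the next token is sampled from a probability distribution, so
# repeated runs of the same prompt can produce different outputs.
import random

# Hypothetical next-token probabilities after the prompt
# "The fund's expense ratio is ..."
next_token_probs = {
    "0.5%": 0.45,   # plausible and correct in this toy world
    "0.05%": 0.30,  # plausible but wrong: a potential hallucination
    "low": 0.20,
    "zero": 0.05,
}

def sample_token(probs: dict[str, float]) -> str:
    """Sample one token according to its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Five runs of the same prompt can yield five different completions,
# which is why every output needs human review.
for _ in range(5):
    print(sample_token(next_token_probs))
```

Even in this four-token world, the wrong answer comes up almost a third of the time, which is exactly why the intern analogy, with its insistence on supervision and review, is apt.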


Treating generative AI as an intern means using its outputs as drafts or outlines for a finished product. Just as a business would not trust an intern to produce content without supervision, workers relying on generative AI should not trust the models completely; they should review and revise any AI-generated content as needed.

3. The AI regulatory landscape in the US is murky

Currently, there is no nationwide AI regulation for organizations to follow. The National Institute of Standards and Technology (NIST), within the Department of Commerce, has published an extensive AI risk-management framework that companies could follow, but the framework is voluntary; many may adopt only parts of it or ignore it entirely.

Financial industry regulators, such as FINRA, have asserted that current rules and regulations apply to AI. FINRA released Regulatory Notice 24-09 in June 2024 asserting that its rules apply to AI models, highlighting FINRA Rules 2210 and 3110. The SEC stated in its proposed rule, Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, that existing obligations, such as those mandated by Regulation Best Interest, apply to predictive data analytics technologies used in investor interactions. Similarly, using AI-generated deepfakes for fraud is still fraud, and regulatory authorities will regulate accordingly.

In the current environment, individual states are left to determine their own AI regulation in many areas. This inconsistency presents challenges: models must comply with different rules depending on the state. On the state front, the most significant, comprehensive regulation is Colorado’s recent Artificial Intelligence Act, which takes a risk-based approach. The act specifically regulates what it calls “high-risk artificial intelligence systems,” defined as “any artificial intelligence system that, when deployed, makes, or is a substantial factor in making, a consequential decision,” while stipulating that high-risk systems do not include systems that “perform a narrow procedural task.” The act’s goal is to address “algorithmic discrimination” in AI models, a term it defines in detail.

The act is significant because, if federal AI regulation emerges, Colorado’s legislation could serve as a template for federal action. In the absence of federal legislation, AI regulation will continue state by state.

Conclusion

Compliance Week’s AI and Compliance Summit was an excellent learning experience. It covered how to use generative AI responsibly by treating it like an intern, illuminated the opportunities for compliance professionals in AI development, and clarified the current regulatory landscape, asserting that the same rules that apply to other technologies apply to AI. Compliance professionals should rise to the challenges AI presents and distinguish themselves by helping businesses innovate and harness AI’s potential for efficiency.

For insight into how compliance professionals in the financial services industry are using and thinking about AI, get the ebook: AI usage survey: Adopters, skeptics, and why it matters.

 

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.

1171464.1.0

Jasmin Sethi

Regulatory & Compliance Advisor to Saifr
Jasmin is a lawyer, economist, entrepreneur, and thought leader with over a decade of experience in the financial industry. She has held numerous legal and compliance positions, as well as been a professor, and is currently the CEO of Sethi Clarity Advisers.
