
Lessons learned in AI risk management from Compliance Week 2024

The hot topic at Compliance Week 2024 was AI risk management. To combat AI risks, compliance officers can implement frameworks, policies, monitoring, and training.

I had the pleasure of attending the Compliance Week 2024 conference in Washington, D.C. this April, where over 400 compliance professionals came together to discuss some of the most challenging topics in compliance. AI risk management was a particularly strong focus throughout the conference, and I learned much about the actions companies should consider taking to mitigate AI risks within their organizations. Here are some of my takeaways for compliance officers regarding AI risks and recommended steps to mitigate them.

AI risks

AI risks range in significance and include AI inaccuracy and drift, data privacy risks, and lack of model transparency.

Inaccuracy of AI

Organizations are more concerned about inaccuracy in AI models than about any other risk. Without strict human oversight, a model may drift over time and become less accurate. That erosion can have severe repercussions for an organization, especially if it makes important decisions based on the model's output.

Data-related risks

AI poses other risks, including data privacy concerns, bias in training data sets, misuse of AI by employees, and the inability to know when and where people are using AI within an organization. When an employee inputs confidential information into an AI model, that information may be compromised: others with access to the model could retrieve it through exploitation, which poses a data privacy risk to both individuals and the organization. It is difficult to know when an employee is using AI and whether they are handling sensitive information carefully, especially when the tool is run by an external party. That makes data privacy risks particularly hard to mitigate.

Transparency in third-party tools

When using AI from a third party, establishing transparency into the model training process and model oversight is vital. Transparency can help an organization more easily identify the potential risks of using a model. The same goes for vendors who use AI: it is important to know what they use on the back end. Ask where the models came from, who trained them, whether a human is in the loop, and whether the training data is representative of diverse groups.

How to mitigate risks

Compliance officers can help mitigate these risks by developing AI frameworks, creating company policies, monitoring AI use, and training employees on safe practices.

AI frameworks

In the absence of robust federal regulation of AI, organizations are vulnerable to the myriad risks AI poses. Regulations are coming, but the process is slow. In the meantime, it is up to organizations to protect themselves by developing an internal framework for the use of AI.


The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework is a strong example on which companies can base their own frameworks. NIST’s framework builds on and supports existing AI risk management frameworks and has four core functions. The govern function helps create an AI risk management culture within an organization. The map function enhances an organization’s ability to identify risks. The measure function uses analytical tools and techniques to monitor AI. Finally, the manage function allocates resources to identified risks in order to resolve them and adjusts risk management processes going forward based on the nature of each incident.

Company policies

Organizations should implement their own policies on AI and how it is used. These policies can go hand in hand with updating the organization’s code of ethics to address AI. By setting clear standards and goals for the use of AI, an organization can help prevent its misuse within its ranks.

To help enact these policies, an organization should consider establishing an AI oversight strategy that identifies which processes AI affects and who oversees those processes. Organizations should also assess the particular risks AI poses to them and center their policies on the risks most salient to their goals.

Monitoring and training

Organizations should regularly audit and monitor their AI to detect bias and drift. Establishing an AI ethics committee tailored to the needs of the organization can help streamline monitoring.

Further, organizations should train their employees on AI and how to use it. Formal training is an effective way to mitigate misuse and to help employees understand the need for, and purpose of, the AI policies the organization has set.

The future of AI risk management

I gained valuable insight into AI risk management from Compliance Week 2024. AI regulation is already underway both in the United States and abroad, but it will likely be years before the regulatory framework is mature enough to support companies. In the meantime, organizations can and should protect themselves by developing their own AI risk management frameworks and implementing their own AI policies. Internal policies are important because there is no one-size-fits-all approach; the risks posed by AI are too varied. By tailoring its mitigation measures to its particular risks, an organization can help see its goals to fruition.


The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. The information regarding AI tools provided herein is for informational purposes only and is not intended to constitute a recommendation, or development or security assessment advice of any kind.

1141791.1.0

Jasmin Sethi

Regulatory & Compliance Advisor to Saifr
Jasmin is a lawyer, economist, entrepreneur, and thought leader with over a decade of experience in the financial industry. She founded Sethi Clarity Advisers in 2018 and is a consultant to Saifr. Jasmin was a Vice President in BlackRock’s Legal and Compliance group, Special Counsel at the Securities and Exchange Commission’s Division of Trading and Markets in the Office of Market Supervision, and an adjunct professor of law at Georgetown University Law Center and SEC University. Earlier in her career, Jasmin was an Associate at Mayer Brown in Washington, D.C., where she practiced general litigation. Jasmin received her JD, PhD in economics, and undergraduate degrees from Harvard University. As a Fulbright Scholar in 2001, she earned an MSc in Economics from the London School of Economics and Political Science.
