Lessons learned in AI risk management from Compliance Week 2024

The hot topic at Compliance Week 2024 was AI risk management. To mitigate AI risks, compliance officers can implement frameworks, policies, monitoring, and training.

I had the pleasure of attending the Compliance Week 2024 conference in Washington, D.C., this April, where over 400 compliance professionals came together to discuss some of the most challenging topics in compliance. AI risk management was a particularly strong focus throughout the conference, and I learned much about what actions companies should consider taking to mitigate AI risks within their organizations. Here are some of my takeaways for compliance officers regarding AI risks, along with recommended steps to mitigate them.

AI risks

AI risks vary in significance and include model inaccuracy and drift, data privacy risks, and lack of model transparency.

Inaccuracy of AI

Organizations are more concerned about inaccuracy in AI models than any other risk. If an organization does not implement strict human oversight over the models it uses, the AI may drift over time and become less accurate. Such errors can have severe repercussions, especially if the organization makes important decisions based on the information an AI model provides.
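
As a concrete illustration, a compliance team could ask its technical counterparts to run a recurring accuracy check against human-labeled samples. The sketch below is a minimal, hypothetical Python example; the function names, the 92% baseline, and the five-point tolerance are illustrative assumptions, not a standard.

```python
# Minimal sketch of a drift check: compare a model's recent accuracy on
# human-labeled samples against a baseline and flag degradation.
# The five-percentage-point tolerance is an illustrative assumption.

def accuracy(predictions: list[str], labels: list[str]) -> float:
    """Fraction of predictions that match human-reviewed labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def check_for_drift(baseline_acc: float, recent_preds: list[str],
                    recent_labels: list[str], tolerance: float = 0.05) -> bool:
    """Alert if recent accuracy slips below baseline minus tolerance."""
    recent_acc = accuracy(recent_preds, recent_labels)
    drifted = recent_acc < baseline_acc - tolerance
    if drifted:
        print(f"ALERT: accuracy fell from {baseline_acc:.2%} to {recent_acc:.2%}")
    return drifted

# Example: a baseline accuracy of 92% was measured at deployment.
check_for_drift(0.92, ["approve", "deny", "deny"], ["approve", "approve", "deny"])
```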

Data-related risks

AI poses other risks, including data privacy concerns, bias within training data sets, misuse of AI by employees, and the inability to know when and where people are using AI within an organization. When an employee inputs confidential information into an AI model, that information is compromised: others with access to the model may be able to retrieve it through exploitation, which poses a data privacy risk to both individuals and organizations. It is difficult to know when an employee is using AI and whether they are being careful with sensitive information, especially when the AI is an external tool, so data privacy risks can be difficult to mitigate.
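
One common control is to screen prompts for obviously sensitive patterns before they reach an external model. The Python sketch below is illustrative only; the patterns are hypothetical and far from exhaustive, and a real deployment would need much broader coverage (names, account data, attached documents).

```python
import re

# Sketch of a pre-submission filter: redact obvious sensitive patterns
# before a prompt is sent to an external AI tool. Illustrative only.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label}]", prompt)
    return prompt

print(redact("Client John's SSN is 123-45-6789; email john@example.com."))
# -> Client John's SSN is [REDACTED SSN]; email [REDACTED EMAIL].
```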

Transparency in third-party tools

When using AI from a third party, establishing transparency into the model training process and model oversight is vital. Transparency helps an organization more easily identify the potential risks of using a model. The same goes for vendors who use AI: it is important to know what they use on the back end. Ask where the models came from, who trained them, whether there is a human in the loop, and whether the training data is representative of diverse groups.

How to mitigate risks

Compliance officers can help mitigate these risks by developing AI frameworks, creating company policies, monitoring AI use, and training employees on safe practices.

AI frameworks

In the absence of robust regulation of AI by the federal government, organizations are vulnerable to the myriad risks that AI poses. Regulations are coming, but the process is slow. In the meantime, it is up to organizations to protect themselves by developing an internal framework for the use of AI.

The National Institute of Standards and Technology's (NIST) AI Risk Management Framework serves as a strong example on which companies can base their own frameworks. NIST's framework builds on and supports existing AI risk management frameworks and has four core functions. The govern function creates an AI risk management culture within an organization. The map function enhances an organization's capacity to identify risks. The measure function uses analytical tools and techniques to monitor AI. Finally, the manage function allocates resources to identified risks in order to resolve them and adjusts risk management processes going forward based on the nature of each incident.
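
To make the map, measure, and manage functions concrete, here is a minimal sketch of an AI risk register in Python. The fields, scoring scheme, and entries are hypothetical and far simpler than what NIST's framework actually specifies.

```python
from dataclasses import dataclass

# Hypothetical AI risk register loosely following NIST's map, measure,
# and manage functions. Fields, scores, and entries are illustrative.

@dataclass
class AIRisk:
    name: str                  # map: identify the risk in context
    likelihood: int            # measure: 1 (rare) to 5 (frequent)
    impact: int                # measure: 1 (minor) to 5 (severe)
    owner: str = "unassigned"  # manage: who is accountable for resolution
    mitigation: str = ""       # manage: the planned response

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    AIRisk("Model drift in a screening tool", 3, 4,
           owner="Model oversight committee", mitigation="Quarterly accuracy audit"),
    AIRisk("Confidential data entered into prompts", 4, 5,
           owner="Information security lead", mitigation="Prompt redaction and training"),
]

# Manage: address the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name} -> {risk.owner}: {risk.mitigation}")
```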

Company policies

Organizations should implement their own policies on AI and how it is used. These policies can go hand in hand with adapting the organization's code of ethics to include AI. Through these policies, an organization can help prevent the misuse of AI within its ranks by setting clear standards and goals for the use of AI.

To help enact these policies, an organization should consider setting an AI oversight strategy that identifies which processes are impacted by AI and who oversees those processes. Organizations should also assess the particular risks AI poses to them and center their policies on the risks most salient to their goals.
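
One way to start is a simple inventory that records which processes are touched by AI, which system is involved, and who is accountable. The Python sketch below is hypothetical; the field names and entries are illustrative, not a prescribed schema.

```python
# Sketch of an AI oversight inventory: which processes use AI, which
# system, and who oversees it. Entries and fields are hypothetical.
AI_INVENTORY = [
    {"process": "Marketing copy review", "ai_system": "Third-party LLM",
     "owner": "Chief Compliance Officer"},
    {"process": "Customer email triage", "ai_system": "In-house classifier",
     "owner": ""},
]

def processes_without_owner(inventory: list[dict]) -> list[str]:
    """Flag AI-touched processes that lack an accountable overseer."""
    return [row["process"] for row in inventory if not row.get("owner")]

print(processes_without_owner(AI_INVENTORY))  # -> ['Customer email triage']
```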

Monitoring and training

Organizations should regularly audit and monitor their AI to catch bias and drift. Establishing an AI ethics committee tailored to the needs of the organization can help streamline monitoring.
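
As one example of what a recurring audit might include, the sketch below compares a model's approval rates across groups, a demographic-parity-style check. The data, group labels, and the ten-point review threshold are all hypothetical.

```python
from collections import defaultdict

# Sketch of one simple bias check: compare a model's approval rates
# across groups. Data, labels, and threshold are hypothetical.

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions holds (group, approved) pairs; returns approval rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += approved
        counts[group][1] += 1
    return {g: a / n for g, (a, n) in counts.items()}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")  # A: 0.67, B: 0.33 -> gap=0.33
if gap > 0.10:  # the review threshold is an assumption, not a standard
    print("Approval-rate gap exceeds threshold; escalate for human review.")
```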

Further, organizations should train their employees on AI and how to use it. Formal training is an effective way to mitigate misuse and to help employees understand the need for, and purpose of, the AI policies the organization has set.

The future of AI risk management

I gained valuable insight into AI risk management at Compliance Week 2024. AI regulation is already underway both in the United States and abroad, but it will likely be years before the regulatory framework is mature enough for companies to rely on. In the meantime, organizations can and should protect themselves by developing their own AI risk management frameworks and implementing their own AI policies. Internal policies matter because there is no one-size-fits-all approach; the risks posed by AI are too varied. An organization can help see its goals to fruition by tailoring its mitigation measures to its particular risks.

 

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. The information regarding AI tools provided herein is for informational purposes only and is not intended to constitute a recommendation, or development or security assessment advice of any kind.

1141791.1.0

Jasmin Sethi

Regulatory & Compliance Advisor to Saifr
Jasmin is a lawyer, economist, entrepreneur, and thought leader with over a decade of experience in the financial industry. She has held numerous legal and compliance positions, has been a professor, and is currently the CEO of Sethi Clarity Advisers.
