Seven elements of ethical AI to guide its implementation by compliance

Learn about the EU's seven principles of ethical AI and how they can help compliance officers responsibly implement AI tools and systems.

Artificial Intelligence (AI) continues to grow and evolve every day, changing the lives of millions. Companies increasingly face an imperative to innovate with AI or risk being left behind. The financial sector faces unique challenges due to the industry's high regulatory and compliance standards.

Fortunately, the European Commission convened a group of 52 AI experts to develop guidelines for trustworthy AI.1 Their report states three foundational requirements for AI: it should be lawful, ethical, and robust. The group then sets out seven key requirements that expand on these foundations, focusing more on ethics and robustness than on legality.

In this article, we unpack the guidelines’ seven elements of ethical AI and show examples of these ideas at work in the financial sector.

1. Human oversight

While technology is rapidly developing and capable of changing how we view investing, the EU recommends that humans play an active role throughout the AI development process to ensure responsible growth.

A case study of a machine learning technology company demonstrates the successes that can come from human oversight in the AI development process.2 A large multinational bank wanted to improve its AML (anti-money laundering) investigations to go beyond the regulatory minimums and become more proactive about risk management. The bank engaged an AI technology company that brought in data scientists and a project manager, who partnered with subject matter experts and staff from the bank. The bank's employees aided in data processing, while the AI company's data scientists conducted feature engineering and, once the data was properly prepared, ran it through AI models. The quality control and thoughtful design that came from pairing data scientists with business experts enabled the AI model to find data relationships undetectable by humans and to flag concerning transactions more accurately. After review by the bank's internal review board and the regulatory agency, the bank deployed the technology globally.
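
To make that workflow concrete, here is a minimal sketch of the kind of anomaly-flagging step such a partnership might build, with a human investigator kept in the loop. The features, model choice, and 1% outlier rate are illustrative assumptions, not details from the case study.

```python
# Hypothetical sketch: flag unusual transactions for human AML review.
# Features, model choice, and contamination rate are assumptions for
# illustration only, not the bank's actual system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Stand-in for engineered features (e.g., amount, velocity, counterparty risk).
X = rng.normal(size=(10_000, 3))

# Unsupervised model that treats roughly 1% of transactions as outliers.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(X)

# predict() returns -1 for anomalies; those go to a human investigator,
# keeping a person in the loop rather than auto-actioning the output.
flags = model.predict(X)
for_review = np.where(flags == -1)[0]
print(f"{len(for_review)} of {len(X)} transactions queued for human review")
```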

2. Technical robustness and safety

AI systems must be technically robust and safe. Errors, breaches, and bad inputs can arrive at any time, and AI systems need to be prepared for them. Technical robustness means a system can cope with errors or breaches during execution and handle inaccurate inputs. Data quality lapses in the financial sector can be particularly high-stakes, as shown by an incident at a credit reporting company that led to some credit scores being misreported by as much as 25 points.3

Creating fallback plans and implementing technical training programs that emphasize reliability and consistency can increase the robustness of AI. In addition, regular testing helps prevent breaches from creating new issues in this ever-changing landscape. Lastly, external solutions from AI technology companies can test systems for accuracy and provide analytics that help companies continually refine their AI programs.
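
As one hedged illustration of a fallback plan, the sketch below wraps a model call with input validation and a conservative default. The function, bounds, and routing labels are hypothetical; a real system would tune them to its own risk policies.

```python
# Hypothetical sketch of a fallback plan: validate inputs and fall back to a
# conservative default when the model fails or the input looks inaccurate.
# The bounds, labels, and routing rule are illustrative assumptions.

def score_with_fallback(model, features: dict) -> str:
    amount = features.get("amount")

    # Reject obviously inaccurate inputs before they reach the model.
    if amount is None or amount < 0 or amount > 1_000_000:
        return "manual_review"  # out-of-range data goes to a human

    try:
        risk = model.predict([[amount]])[0]
    except Exception:
        # If the model errors during execution, fail safe rather than silent.
        return "manual_review"

    return "flag" if risk == -1 else "clear"
```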

3. Privacy and data governance

To operate effectively, governance mechanisms need to be in place to help ensure privacy and data protection, and customers should be protected from data breaches and hackers. Companies can form their own data privacy teams or hire an outside firm for assistance, and many are now appointing a Chief AI Officer to ensure policies and procedures reflect industry best practices. Strong data governance policies and procedures tend to correlate with greater customer trust, which contributes to a successful business.
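
One concrete governance control, sketched below under assumed requirements, is pseudonymizing customer identifiers before data reaches an analytics or AI pipeline. The secret handling shown is illustrative; real deployments would manage keys in a vault and follow their own policies.

```python
# Hypothetical sketch: pseudonymize customer IDs before analytics.
# The hard-coded secret is for illustration only; production systems
# should load keys from a managed secret store.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(customer_id: str) -> str:
    """Return a stable, non-reversible token for a customer identifier."""
    return hmac.new(SECRET_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

# Downstream records carry the token, not the raw identifier.
record = {"customer": pseudonymize("C-12345"), "balance": 1000.0}
print(record)
```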


4. Transparency

Transparency means an AI system's decisions are accessible to customers and explainable by humans. In addition, users should know when they are interacting with a machine rather than another human. For instance, chatbots are becoming more common on investing platforms, and companies must make it very clear that consumers are speaking with a robo-advisor. AI recommendations accompanied by explanation and justification help promote the ethical and responsible use of AI.
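
As a minimal sketch of pairing a recommendation with a human-readable justification, the example below uses a simple linear model, where each coefficient times its input value approximates that feature's contribution to the decision. The feature names and data are invented; production systems often rely on dedicated explainability tooling instead.

```python
# Hypothetical sketch: attach a plain-language justification to a model score.
# Feature names and training data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "account_age"]
rng = np.random.default_rng(seed=1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] - X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(x: np.ndarray) -> str:
    # For a linear model, coefficient * value approximates each feature's
    # contribution to this particular decision.
    contributions = model.coef_[0] * x
    name, value = max(zip(features, contributions), key=lambda kv: abs(kv[1]))
    return f"Main driver of this recommendation: {name} ({value:+.2f})"

print(explain(X[0]))
```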

5. Diversity, non-discrimination, and bias mitigation

AI systems should be designed for a diverse clientele so that unintended bias is avoided. Unchecked, AI can lead to marginalization of, or prejudice toward, specific groups. Bias can be difficult to eliminate because not all groups are represented in the creation of AI, and not all groups use AI in the same way or to the same degree. Promoting inclusion and ensuring that everyone has the option to use these systems is an important first step in mitigating discrimination. Bias can further be counteracted with continuous assessments and adjustments throughout the development and deployment stages, and publishing data about model performance can create incentives to deploy unbiased AI. Using diverse data sets can also lower the risk of bias.
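
One continuous assessment a compliance team can run, sketched here with invented numbers, is a demographic parity check that compares favorable-outcome rates across groups. The review threshold noted in the comments is a common heuristic, not a regulatory standard.

```python
# Hypothetical sketch: a demographic parity check on model outcomes.
# Group labels and outcomes are invented; thresholds vary by policy.
outcomes = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],  # 1 = favorable decision
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}

rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
ratio = min(rates.values()) / max(rates.values())

print(rates)
# Many policies flag ratios below ~0.8 (the "four-fifths" heuristic) for review.
print(f"parity ratio: {ratio:.2f}")
```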

6. Societal and environmental well-being

Systems should benefit society and the environment as a whole and be created in a manner that promotes sustainability, so future generations can reap the benefits as well. Profits are important, but corporate social responsibility means being cognizant of the long-term impacts of AI decisions. Keeping up with global trends and key considerations about how companies run their businesses will take time and effort, but those burdens are outweighed by the benefits of responsibly developed AI across the financial sector.

7. Accountability

Lastly, companies should put accountability measures in place to correct any programs or systems that violate ethical AI principles, thereby helping to prevent undesirable and potentially harmful outcomes. Trust in AI systems is important, but a layer of protection is needed to sustain that trust, which means breaches of trust must carry consequences. Companies may state policies in employment contracts or company bylaws that clearly explain whether the company, a specific employee, a manager, or anyone else is liable when AI performs unethically.
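
As a hedged sketch of one such accountability measure, the snippet below appends an audit record for each AI decision so that responsibility can later be traced to a model version and, where applicable, a human reviewer. The field names are illustrative assumptions, not a regulatory standard.

```python
# Hypothetical sketch: an append-only audit record for each AI decision,
# so breaches can be traced and responsibility assigned afterward.
# Field names are illustrative assumptions.
import json
from datetime import datetime, timezone

def log_decision(path: str, model_version: str, inputs: dict,
                 decision: str, reviewer: str | None = None) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "decision": decision,
        "human_reviewer": reviewer,  # who signed off, if anyone
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("decisions.jsonl", "risk-model-1.2", {"amount": 250.0}, "clear")
```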

Conclusion

AI is here to stay, and many components are needed to provide AI with sufficient guidance to keep it productive and innovative while remaining unbiased, transparent, and sustainable. The EU’s guidelines provide an overview of the key issues, and all stakeholders should acquaint themselves with these important elements of a rapidly developing technology.

To learn how compliance professionals at U.S. financial institutions are using and thinking about AI, download our ebook, AI insights survey: Adopters, skeptics, and why it matters.


1. High-Level Expert Group on Artificial Intelligence. (2019, April 8). Ethics guidelines for trustworthy AI. Brussels: European Commission.

2. Faggella, D. (2018, December 12). Bank reduces money-laundering investigation effort with AI. Emerj Artificial Intelligence Research. https://emerj.com/ai-case-studies/bank-reduces-money-laundering-investigation-effort-with-ai/

3. Equifax. (2022, August 2). Equifax statement on recent coding issue. https://www.equifax.com/newsroom/all-news/-/story/equifax-statement-on-recent-coding-issue/

The information regarding AI tools provided herein is for informational purposes only and is not intended to constitute recommendation, development, or security assessment advice of any kind.

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.

1109254.1.0

Jasmin Sethi

Regulatory & Compliance Advisor to Saifr
Jasmin is a lawyer, economist, entrepreneur, and thought leader with over a decade of experience in the financial industry. She founded Sethi Clarity Advisers in 2018 and is a consultant to Saifr. Jasmin was a Vice President in BlackRock’s Legal and Compliance group, Special Counsel at the Securities and Exchange Commission’s Division of Trading and Markets in the Office of Market Supervision, and an adjunct professor of law at Georgetown University Law Center and SEC University. Earlier in her career, Jasmin was an Associate at Mayer Brown in Washington, D.C., where she practiced general litigation. Jasmin received her JD, PhD in economics, and undergraduate degrees from Harvard University. As a Fulbright Scholar in 2001, she earned an MSc in Economics from the London School of Economics and Political Science.
