Four ethical principles for developing trustworthy AI

Human involvement, transparency, technical robustness, and bias mitigation can help businesses minimize AI risks and ensure regulatory accountability.

Artificial intelligence (AI) is a powerful tool that can help businesses streamline their operations and improve worker productivity. Yet mishandling AI’s development, usage, or oversight can lead to flawed judgments. Overcoming these challenges is not easy, but businesses that establish ethical AI they can trust to function as intended can minimize financial and compliance risks.

The European Commission established an independent group of 52 AI experts that created principles for developing trustworthy AI. Of those principles, four stand out as especially important for developing ethical AI that is accountable to regulators.

1. Human involvement and governance

The European Commission stated that AI should empower humans to make informed judgments. Further, the commission noted that businesses need to ensure proper oversight of AI tools so that they function as intended.

While some workers are fearful that AI may replace them, most realize that AI can function as a tool to augment their work and act as a coach. Businesses should understand that human involvement in AI systems leads to better control over AI and allows workers to excel. Human partnership is one way to help AI produce trustworthy outputs that a business can rely on.

To that end, businesses can employ human-in-the-loop processes throughout the AI lifecycle to improve accuracy and reliability. A human-in-the-loop approach involves humans in the training and testing stages of constructing an algorithm. In all stages of AI development and deployment, humans can fine-tune AI to help prevent bias, avoid algorithmic flaws, and facilitate explainability.
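As a rough illustration, the Python sketch below shows one way a human-in-the-loop checkpoint might work at deployment time: predictions below a confidence threshold are routed to a human reviewer instead of being acted on automatically. The threshold, the review queue, and the toy model are illustrative assumptions, not a standard implementation.

```python
# Minimal human-in-the-loop sketch: low-confidence predictions are
# deferred to a human reviewer rather than auto-accepted.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def submit(self, record, prediction, confidence):
        # A human later inspects these items and corrects the labels.
        self.items.append((record, prediction, confidence))

def predict_with_oversight(model: Callable, record, queue: ReviewQueue,
                           threshold: float = 0.85):
    """Auto-accept confident predictions; defer uncertain ones to a human."""
    prediction, confidence = model(record)
    if confidence < threshold:
        queue.submit(record, prediction, confidence)
        return None  # decision deferred to human review
    return prediction

# Usage with a stand-in model that returns (label, confidence):
def toy_model(record):
    return ("approve", 0.62 if record["amount"] > 10_000 else 0.97)

queue = ReviewQueue()
print(predict_with_oversight(toy_model, {"amount": 15_000}, queue))  # None -> human review
print(predict_with_oversight(toy_model, {"amount": 500}, queue))     # "approve"
```

Corrections made by reviewers can then be fed back into the next training cycle, which is how the loop improves accuracy and reliability over time.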

2. Transparency and explainability

Transparency is another principle the European Commission emphasized, noting that AI systems and decisions should be transparent and explainable to stakeholders. Humans should understand the capabilities and limitations of the AI systems with which they interact.

Illustrating the logic of AI models is vital, especially when we trust AI with important data or, in medical use cases, even our lives. The main challenge businesses face with transparency is AI’s complexity: the more complex a model is, the more difficult it becomes to explain.

Explainability is also necessary to satisfy regulatory accountability. If a CFO or head of planning cannot explain why an AI uses a particular dataset or how it analyzes and interprets that data, they cannot act on the AI’s recommendations. That uncertainty can translate into financial risk, and stakeholders are unlikely to trust an AI system they do not understand.

One way to overcome that challenge is to reduce AI’s complexity. When an AI application is too complex, it becomes exceedingly difficult to explain to stakeholders. To reduce complexity, businesses should consider adopting white box AI models, which involve only a few rules and pathways on a decision tree and are therefore simpler to explain. AI models can still be complex, but managing that complexity helps both businesses and stakeholders understand an AI’s decisions.
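For instance, a shallow decision tree is a classic white box model whose full rule set can be printed and read directly. The sketch below, which assumes scikit-learn is available and uses made-up data, shows how capping tree depth keeps the rules simple enough to walk through with stakeholders.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in data: [age, income] -> whether an offer was approved.
X = [[25, 40_000], [47, 95_000], [35, 60_000],
     [52, 120_000], [23, 30_000], [41, 80_000]]
y = [0, 1, 0, 1, 0, 1]

# Capping depth keeps the rule set small enough to explain in a meeting.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders every decision rule as readable plain text.
print(export_text(model, feature_names=["age", "income"]))
```

The printed rules are the model: every decision can be traced to an explicit threshold, which is precisely what makes a white box approach easier to defend to regulators.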

3. Technical robustness and safety

The European Commission explained that AI should be robust and secure, urging that AI systems be reliable and have a fallback plan in case something goes wrong.
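As a minimal sketch of what such a fallback plan might look like in code, the example below (the function names and confidence threshold are illustrative assumptions) reverts to a conservative rule-based default whenever the primary model fails or reports low confidence.

```python
def rule_based_fallback(record) -> str:
    # Deliberately conservative default used whenever the model cannot be trusted.
    return "refer_to_analyst"

def classify_with_fallback(model, record, min_confidence: float = 0.8) -> str:
    try:
        label, confidence = model(record)
    except Exception:
        # Model failure (bad input, service outage, etc.) triggers the fallback path.
        return rule_based_fallback(record)
    if confidence < min_confidence:
        # Low confidence is treated the same as an outright failure.
        return rule_based_fallback(record)
    return label

# Usage with a stand-in model that is down:
def unavailable_model(record):
    raise RuntimeError("model service unavailable")

print(classify_with_fallback(unavailable_model, {"id": 1}))  # "refer_to_analyst"
```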

AI should also have protections in place to prevent data loss or theft. Without proper security measures, a data breach could lead to massive damages for both a company and its investors; the global average cost of a data breach in 2023 was $4.45 million.

Robustness and stability of AI are crucial for AI reliability and regulatory accountability. Businesses cannot afford to allow AI to make mistakes, break, or fail. Otherwise, the AI could provide erroneous outputs that pose financial and/or regulatory compliance risks.

To implement robustness in an AI application, companies should consider establishing an AI Center of Excellence that focuses on curating ethical AI through oversight of AI learning processes and enforcement of ethical principles. Many businesses building and utilizing AI have established such centers, underscoring the critical role data scientists play in developing ethical, robust AI systems. AI Centers of Excellence also help demonstrate an AI model’s trustworthiness to internal and external stakeholders alike.

4. Diversity, non-discrimination, and bias mitigation

The fourth ethical principle highlighted by the European Commission is avoidance of unfair bias when training an AI. AI biases that marginalize vulnerable minority groups or reinforce discrimination and prejudice can pose significant reputational problems.

Data outputs that favor a particular race or gender are inherently flawed. Such outputs can lead to compliance risks through discriminatory hiring practices or resource allocation. To that end, diversity in AI development is crucial.

When AI development teams are diverse, they are more likely to anticipate potential biases that may appear in the AI. Businesses can adjust hiring practices to create a diverse team. Several organizations are taking steps to empower diverse AI programmers through community networking, research, and education. One organization has created a membership directory that can help businesses find great, diverse minds in AI to connect with.

Bias mitigation is important for any business using AI: producing unbiased results can provide a competitive edge while protecting investors’ and consumers’ interests.
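As one small, illustrative example of what bias mitigation can involve in practice, the sketch below compares a model’s approval rates across demographic groups using made-up data. The four-fifths ratio used as a flag echoes disparate-impact screens; real fairness audits rely on richer metrics and statistical testing.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Made-up decisions: group A is approved twice as often as group B.
decisions = [("group_a", True), ("group_a", True), ("group_a", False),
             ("group_b", True), ("group_b", False), ("group_b", False)]

rates = approval_rates(decisions)
print(rates)  # roughly {'group_a': 0.67, 'group_b': 0.33}

# Flag when the lowest rate falls below 80% of the highest.
if min(rates.values()) / max(rates.values()) < 0.8:
    print("Warning: approval rates differ substantially across groups")
```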

Conclusion

With human involvement, transparency, security, and bias mitigation, a business can more effectively hone AI to function ethically and better ensure regulatory accountability. Businesses can use white box models and human-in-the-loop processes, along with diverse development teams and AI Centers of Excellence, to help achieve that goal. The European Commission’s principles offer tangible benefits for an organization’s operations and the protection of its reputation, while also giving investors clear information about those operations. As long as a business has trustworthy AI and the systems in place to support it, it can mitigate many of the risks AI poses.

Are you considering AI solutions for your business? Make sure you ask the right questions. 

 

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity. The information regarding AI tools provided herein is for informational purposes only and is not intended to constitute a recommendation, or development or security assessment advice, of any kind.

Jasmin Sethi

Regulatory & Compliance Advisor to Saifr
Jasmin is a lawyer, economist, entrepreneur, and thought leader with over a decade of experience in the financial industry. She has held numerous legal and compliance positions, has served as a professor, and is currently the CEO of Sethi Clarity Advisers.
