Skip to content
AI

New Executive Order impacts the training and testing of trustworthy AI

Learn how President Biden's executive order on AI aims to protect Americans from potential risks and what compliance officers can do in response.

On October 30, 2023, President Biden issued an executive order directing “the most sweeping actions ever taken to protect Americans from the potential risks of AI systems” in response to the Administration’s increasing concerns about artificial intelligence (AI). The order defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.”

The Order states “responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation.”

How does this Executive Order promote safe and trustworthy AI?

This Order’s directives build on a previously existing blueprint for an AI Bill of Rights, the National Institute of Standards and Technology (NIST) AI Risk Management Framework, and existing voluntary disclosures. It covers the intersection of AI and a broad variety of issues and fields, such as general cybersecurity and safety, intellectual property, privacy, competition, American leadership abroad, healthcare, the federal government’s use of AI, and civil rights.

In addition to establishing testing and reporting requirements for developers of the most powerful AI systems, the Order also empowers over a dozen agencies to use their authority to “protect American consumers from fraud, discrimination, and threats to privacy and to address other risks that may arise from the use of AI.” Compliance officers should expect rulemaking action as well as clarification on existing regulation as it applies to AI from a variety of agencies.

For instance, taking fraud as an area of focus, within 240 days of the Order, the Secretary of Commerce, in consultation with other relevant agencies, will submit a report to the Director of the Office of Management and Budget (OMB) and the Assistant to the President for National Security Affairs detailing standards, tools, practices, and methods of content authentication and watermarking to clearly detect and label synthetic content. The purpose of this report is to identify existing methods and potential future techniques of reducing AI-enabled fraud and other risks posed by synthetic or AI-generated content.

Despite the Executive Order’s stated purpose of addressing AI’s ability to “exacerbate societal harms such as fraud, discrimination, bias, and disinformation,” the actual Order sparingly mentions fraud. While the Department of Commerce is responsible for issuing a report detailing the best paths to detecting, labeling, and tracking synthetic content and the OMB with issuing guidance for labeling and authenticating content, explicit anti-fraud provisions are not detailed in the order.

In addition, following the issuance of the Order, Vice President Harris announced a new draft policy released by the OMB that would establish AI governance structures in federal agencies. A case of the government leading by example. The OMB requested comments on the policy with the comment period open until December 5, 2023.

Who must implement the Executive Order?

The White House AI Council, which consists of The Assistant to the President and Deputy Chief of Staff for Policy, and other agency heads such as the Secretary of State, will oversee the execution of the Executive Order. Furthermore, the Order tasks regulatory authorities, such as the Departments of Commerce, Energy, and Homeland Security, with establishing “guidelines and best practices, with the aim of promoting consensus industry standards, for developing and deploying safe, secure, and trustworthy AI systems” within 270 days of the Order. Therefore, much of the thrust of the Order will come from the resources, guidelines, and guidances that the Order tasks various members of the AI council to develop. Compliance officers should closely monitor relevant regulatory authorities and agencies for new requirements and obligations; the extent of coordination, however, is still to be seen as a lot of discretion will remain with agency heads.

White paper | Considering AI solutions for your business? Ask the right questions.

A few specific requirements are noteworthy. The Order directs several agencies to set the standards for red-team testing, a structured testing method that identifies flaws and vulnerabilities as well as harmful or unwanted behavior in AI systems. This testing should help ensure safety before public release of new AI systems and be set within 270 days of the Order. In addition, the Department of Homeland Security will establish a designated Artificial Intelligence Safety and Security Board comprised of industry experts in the private sector, government, and academia to provide the government advice, information, or recommendations for improving security, resilience, and incidence response relating to the usage of AI in critical infrastructure.

The timing of the actions ordered by the Order varies, with the most burdensome task, establishing four new Research Institutes, given up to 540 days. However, compliance officers should expect the majority of authorities to submit reports, assessments, and proposed advice and plans to their respective agency heads within 90-270 days of the Order, and rules and guidance to be proposed within 180-365 days of the Order. New regulations will go through proposal and adoption phases. Thus, a coherent, cross-agency AI framework may not emerge for at least one year, and how coordinated it is across the different governmental departments is to be seen.

What is the broader context for this Executive Order?

Interestingly, this executive order comes on the scene soon after the comment period closed for the recently proposed SEC rule on AI and conflicts of interest. This proposal has been met with significant criticism and Republican Senators grilled Chair Gensler over not just this rule, but also the pace of his rulemaking agenda. Concerns were expressed about the SEC’s lack of experience with AI and such concerns will likely be raised about future potential rulemakings in this area as other agencies enter the rulemaking arena. Some Senators described the SEC’s proposed rule as half-baked and criticized the agency for not addressing a market failure like the financial crisis of 2008. To this criticism Chair Gensler replied that if AI regulation isn’t taken seriously, then it is nearly unavoidable that AI will spark the next financial crisis as soon as the early 2030s.

The US is slowly moving in the direction of the EU which has already established a group of experts who have promulgated guidelines listing seven requirements that developers and deployers of AI systems should meet in order to be trustworthy. Getting clarity and uniformity around the use of AI in the US will be no doubt challenging but may also be helpful to industry when pursuing innovations in this space.

Conclusion

This Executive Order signifies an advancement in the regulation and standardization of AI. By striking a balance between the potential benefits and risks, the government is attempting to shape a future in which AI is robust and reliable. Regulatory bodies and agencies now face the challenging task of implementing the guidelines and standards laid out by the White House Council on AI. Going forward, compliance officers will play a crucial role in closely monitoring evolving requirements and obligations to help ensure adherence to the emerging frameworks. As technology continues to evolve, this order aims to create an environment where companies develop, deploy, and utilize AI responsibly.

If you're evaluating AI solutions for your business, download this white paper to learn the essential questions to ask AI vendors.

 

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information.

1121491.1.0

Jasmin Sethi

Regulatory & Compliance Advisor to Saifr
Jasmin is a lawyer, economist, entrepreneur, and thought leader with over a decade of experience in the financial industry. She founded Sethi Clarity Advisers in 2018 and is a consultant to Saifr. Jasmin was a Vice President in BlackRock’s Legal and Compliance group, Special Counsel at the Securities and Exchange Commission’s Division of Trading and Markets in the Office of Market Supervision, and was an adjunct professor of law at Georgetown University Law Center and SEC University. Earlier in her career, Jasmin was an Associate at Mayer Brown in Washington, D.C., where she practiced general litigation. Jasmin received her JD, PhD in economics, and undergraduate degrees from Harvard University. As a Fulbright Scholar in 2001, she earned an MSc in Economics from the London School of Economics and Political Science.

Check out our latest blogs

Compliance concerns: Workloads, knowledge gaps, and separate systems

Compliance concerns: Workloads, knowledge gaps, and separate systems

A Saifr survey revealed the top concerns of compliance departments in financial services: workloads, knowledge gaps, and internal systems.

Q1 2024 regulatory roundup: all things AI and enforcement

Q1 2024 regulatory roundup: all things AI and enforcement

Q1 saw continued regulatory focus on artificial intelligence, with recent enforcements indicating AI is a regulatory priority.

Lessons learned in AI risk management from Compliance Week 2024

Lessons learned in AI risk management from Compliance Week 2024

The hot topic at Compliance Week 2024 was AI risk management. To combat AI risks, compliance officers can implement frameworks, policies, a...