
Building a GenAI Governance Framework: Takeaways from FINRA’s 2026 Oversight Report

GenAI can present risks for financial services. Read this summary of key risks and considerations for ways firms can strengthen AI governance now.

Introduction

In financial services, Generative AI has quickly moved from experimental technology to operational reality. Firms are deploying GenAI for marketing campaigns, customer communications, anti-money laundering (AML) transaction monitoring, know your customer (KYC) verification processes, and much more. Though the efficiency gains are compelling, this tech transformation carries substantial regulatory implications.

FINRA’s 2026 Annual Regulatory Oversight Report sends a clear message to the industry: The regulatory framework that governs traditional business activities applies equally to GenAI-powered operations. Compliance teams must build governance structures that ensure GenAI deployment aligns with existing supervisory, compliance, communications, and recordkeeping obligations.

 

FINRA’s 2026 Report Identifies Key Risks of GenAI for Compliance Leaders

The report highlights several risk categories that should concern every compliance professional implementing or overseeing GenAI. Some of the risks discussed in the report include:

  • Accuracy and hallucinations present perhaps the most immediate threat. GenAI models can generate plausible-sounding but factually incorrect information with remarkable confidence. When these “hallucinations” appear in investor communications, marketing materials, or compliance recommendations, they can mislead customers, create unsuitable product recommendations, or result in incorrect interpretations of rules. A chatbot that fabricates performance data or an AI system that misinterprets regulatory requirements can potentially expose firms to increased scrutiny, enforcement actions, and investor harm.
  • Bias and concept drift introduce more subtle but equally serious challenges. AI models trained on historical data may perpetuate existing biases in marketing targeting, modeling and simulations, process intelligence, or risk assessments, producing skewed outputs. Concept drift compounds this problem: models trained on older data become less accurate over time, particularly in rapidly changing markets. An AML system trained on pre-pandemic transaction patterns, for example, may fail to identify emerging fraud schemes or generate excessive false positives that strain investigative resources.
  • Autonomy of AI agents represents an emerging frontier of risk. Advanced AI agents can independently execute tasks, make decisions, and take actions across multiple systems. While this autonomy promises efficiency, it also creates accountability gaps. Who is responsible when an AI agent initiates an unauthorized trade, sends non-compliant communications, or accesses restricted data? The regulatory supervisory model requires registered human decision-makers, which in turn requires firms to ensure a human touch at critical junctures.
  • Data sensitivity concerns permeate every GenAI application. The development of AI applications often requires access to vast amounts of data for training and operation, including proprietary trading strategies, customers’ personally identifiable information, and confidential business information. Inadequate data governance can lead to unauthorized disclosures, privacy violations, or other cybersecurity concerns.

Compliance Leaders Have Regulatory Obligations for GenAI Use 

FINRA's position leaves no room for ambiguity: Existing regulations apply to GenAI implementations without exception. Firms' obligations under FINRA Rule 3110 extend to supervising GenAI outputs and model behaviors and to ensuring that the personnel who deploy these systems are qualified. Firms cannot delegate supervisory responsibility to algorithms.

Rule 2210 governs GenAI-generated marketing content, social media posts, and customer service responses, all of which must meet the rule’s standards. The fact that content is machine-generated does not diminish the firm's responsibility for balance, accuracy, and appropriateness.

Recordkeeping obligations apply to GenAI systems as well. Firms must retain records of business-related communications, supervisory activities, and compliance reviews. This includes maintaining logs of AI prompts, outputs, model versions, training data sources, and human oversight actions. The ability to reconstruct decision-making processes and demonstrate supervisory review is sure to be critical during examinations or enforcement investigations.
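To make the logging requirement concrete, an audit trail for GenAI interactions might be captured as append-only structured records. The sketch below is purely illustrative: the field names and the JSONL format are assumptions, not a schema drawn from FINRA guidance or any specific firm's practice.

```python
import json
from datetime import datetime, timezone

def log_genai_interaction(log_path, *, user_id, model_version, prompt,
                          output, reviewed_by=None, review_action=None):
    """Append one GenAI interaction as an auditable JSONL record.

    Field names are hypothetical; actual recordkeeping schemas should be
    defined with compliance and legal stakeholders.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,              # who invoked the system
        "model_version": model_version,  # enables later reconstruction
        "prompt": prompt,
        "output": output,
        "reviewed_by": reviewed_by,      # human supervisor, if any
        "review_action": review_action,  # e.g., "approved" or "revised"
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

Because each record carries the model version and any human review action, the log supports the kind of decision-process reconstruction examiners may request.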

 

Compliance Leaders Could Consider Establishing Effective Practices to Manage GenAI Use 

Forward-thinking compliance programs are moving beyond reactive risk management to build comprehensive GenAI governance frameworks.

  • Governance framework development can begin with establishing a cross-functional committee to review and approve all GenAI use cases before deployment, evaluate ongoing performance, and maintain an enterprise-wide inventory of AI applications. Define clear roles, responsibilities, and escalation procedures in the governance structure for identified issues and include regular reporting to senior management and boards of directors.
  • Usage policies provide the foundation for consistent GenAI deployment. Policies should define acceptable use cases and prohibited applications, communicate both clearly, and train personnel to adhere to disclosure requirements when AI is used in customer interactions. Branch office supervision requires particular attention, as remote locations may adopt GenAI tools without proper approvals and oversight. Policies that specify who can authorize GenAI use, what training is required, and how branch managers must monitor AI-assisted activities are fundamental.
  • Testing protocols must go beyond basic functionality checks. Pre-deployment testing should evaluate accuracy across diverse scenarios, assess potential bias in outputs across different models, validate that the system performs reliably under stress conditions, and ensure auditability. Ongoing testing can detect concept drift, identify emerging bias patterns, and verify that model updates haven’t introduced new vulnerabilities. Firms should maintain comprehensive testing documentation, including prompt libraries, reconciliation of expected outputs to actual results, and remediation actions for identified deficiencies.
  • Human-in-the-loop oversight serves as a critical control against AI errors, drift, and overreach. In a regulated environment, high-risk decisions such as customer recommendations, AML alerts, complaint responses, and advertising approvals require prior review and approval by a qualified, licensed person, so building human checkpoints into these processes is essential. Reviewers must have sufficient expertise to evaluate AI outputs critically and must understand both how the application fits into the established supervisory system and how effectively that system can oversee its use. Supporting procedures include defined reviewer qualifications, review standards, documentation requirements, and override authority when human judgment conflicts with AI recommendations.
  • Cybersecurity integration demands updated security programs that address AI-specific vulnerabilities. Vendor due diligence must evaluate how third-party AI providers protect firm data, where processing occurs, what security certifications they maintain, and how the firm will be notified of any breaches. A vendor's threat detection systems should monitor for AI-specific attacks, such as prompt injection, data poisoning, or model extraction attempts. Incident response plans need to address GenAI breach scenarios, including unauthorized access to training data or malicious manipulation of model outputs.
  • Documentation requirements extend throughout the GenAI lifecycle. Consider maintaining model cards that describe each AI system's purpose, capabilities, limitations, training data sources, and known biases. Version control also becomes essential as models are updated or retrained. Additionally, supervisory records should capture who reviewed outputs, what deficiencies were identified, and what corrective actions were implemented. This documentation can provide the foundation for continuous improvement.
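To illustrate the model-card idea above, a firm's engineering team might keep a machine-readable card alongside each deployed system. This is a minimal sketch; the ModelCard fields and example values are hypothetical, not a standard format.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    """Illustrative model card; all fields and values are hypothetical."""
    name: str
    version: str
    purpose: str
    training_data_sources: list
    known_limitations: list = field(default_factory=list)
    known_biases: list = field(default_factory=list)
    approved_use_cases: list = field(default_factory=list)

    def to_record(self):
        # Serialize for the firm's enterprise-wide AI application inventory.
        return asdict(self)

card = ModelCard(
    name="marketing-drafter",  # hypothetical system name
    version="2.3.0",
    purpose="Draft first-pass retail marketing copy for human review",
    training_data_sources=["licensed text corpus (hypothetical)"],
    known_limitations=["may fabricate performance figures"],
    approved_use_cases=["internal drafts pending Rule 2210 review"],
)
```

Because the card is structured data rather than free text, it can feed directly into the enterprise-wide AI inventory the governance committee maintains.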


The Bottom Line: Don't Wait to Take Action

GenAI’s potential to enhance efficiency and process intelligence and to impact customer service is undeniable, but these benefits come with commensurate regulatory and operational risks. Firms that race to deploy AI without adequate governance create vulnerabilities that can result in customer harm, unintended operational consequences, and heightened regulatory scrutiny. A tailored approach, taking into account the firm’s clients, operations, and human resources, is essential to success.

Compliance leaders should consider strengthening their GenAI frameworks immediately. A firm might start by inventorying the GenAI applications in use or proposed for use, then conducting a comprehensive audit of the use cases across all business lines, with particular focus on marketing and AML/KYC applications where regulatory scrutiny is high. This inventory would identify who deploys each system, what data it accesses, what decisions it influences, and what oversight currently exists.

Update written supervisory procedures to explicitly address GenAI governance, including approval processes, testing requirements, human oversight standards, and documentation obligations. Ideally, these procedures would integrate with existing compliance programs rather than creating parallel structures that complicate supervision.

Implement ongoing monitoring and bias checks that regularly evaluate AI performance against established standards. Reconcile anticipated versus actual outputs on defined schedules, with increased frequency for high-risk applications and newly deployed systems.
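The reconciliation step above can be sketched as a simple regression check against a curated prompt library. Everything here is illustrative: the prompt library, the stubbed model, and the exact-match criterion are assumptions that a real monitoring program would replace with its own test sets and tolerance rules.

```python
def reconcile_outputs(prompt_library, run_model, match_fn):
    """Compare a model's actual outputs against expected outputs.

    prompt_library: list of (prompt, expected_output) pairs
    run_model: callable returning the model's output for a prompt
    match_fn: callable deciding whether actual matches expected
    Returns a list of exceptions for supervisory review.
    """
    exceptions = []
    for prompt, expected in prompt_library:
        actual = run_model(prompt)
        if not match_fn(expected, actual):
            exceptions.append({
                "prompt": prompt,
                "expected": expected,
                "actual": actual,
            })
    return exceptions

# Stubbed example: a dictionary stands in for the model, with one
# answer deliberately changed to simulate drift.
library = [("What is the settlement cycle?", "T+1"),
           ("Is this product FDIC insured?", "No")]
stub_model = {p: e for p, e in library}
stub_model["Is this product FDIC insured?"] = "Yes"  # simulated drift
exceptions = reconcile_outputs(library, stub_model.get,
                               lambda e, a: e == a)
```

Each exception record flags a divergence for a human reviewer, which is the behavior the scheduled checks are meant to surface before customers see a bad answer.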

Train staff and the sales force on GenAI risks and compliance expectations. Personnel must understand that AI-generated content remains subject to all traditional regulatory requirements and that they must adhere to the requirements of both regulators and the firm. Training staff to identify potential AI errors or bias, and to know when to escalate concerns, is an important facet of AI deployment and should be a focus of training. Tailoring training to specific roles, with more intensive programs for employees who directly interact with GenAI systems, is highly advisable.

The regulatory environment for GenAI will continue evolving, but the fundamental principle remains constant: Firms are responsible for their regulatory requirements regardless of whether humans or machines execute them. Building robust governance frameworks today can position compliance programs to adapt as both technology and regulation advance.

 

Source: 

  1. 2026 FINRA Annual Regulatory Oversight Report | FINRA  

     

 


The opinions provided are those of the author and not necessarily those of Saifr or its affiliates. The information is general and educational in nature, is for informational purposes only, and should not be construed as legal advice.

 1246840.1.0  

Lisa Roth

Regulatory Consultant to Saifr
Lisa Roth is an executive with three decades of leadership and entrepreneurial experience in the financial services industry. She is a regulatory compliance consultant and registered principal, has been a member of multiple FINRA committees and boards, and has served in executive capacities at broker-dealers and investment advisers.
