Blog

AI was top of mind at SIFMA C&L 2024

Written by Lisa Roth | Apr 18, 2024 8:19:23 PM

SIFMA’s C&L annual seminar is the preeminent compliance and legal event of the year for industry professionals, and in 2024 there was a buzz about artificial intelligence (AI). According to a survey of SIFMA members and attendees, 75% of respondents put AI at the forefront of their interests. Attendees got a good dose of what they sought: AI was a primary topic in the presentations, featured in 25% or more of the panels, and well represented among event sponsors.

And it could not have come at a better time. Survey results published by SIFMA showed that respondents were nearly evenly divided on the status of AI in their organizations: 39% acknowledged limited usage, 33% were testing but had not yet deployed, and 30% reported no implementation at all. With these statistics in mind, the audience at SIFMA C&L was very receptive to guidance from the panels, which were stacked with expertise.

Ebook download → AI insights survey: Adopters, skeptics, and why it matters.

Panels

I registered for the express purpose of keeping my AI knowledge up to date, and my investment was well spent. The questions I arrived with were answered by the panels, and I am confident the handouts will be useful resources going forward. Throughout the event, presenters challenged attendees with a broad range of considerations, from overarching queries to use-case-specific deployments. One panel asked about generative AI in the workplace: “Are you ready for it?” In the context of investment banking, the panel queried: “Is a ChatGPT response equal to a research report?”

In the context of compliance with prudential bank regulations, a panel addressed how AI and machine learning (ML) might impact first- and second-line supervision and monitoring. The panel also discussed the impact AI may have on the background, education, and training of compliance personnel.

A panel dedicated to AI technologies provided a meaningful framework for its discussion of regulatory and legislative actions, including federal actions (the White House’s Blueprint for an AI Bill of Rights and President Biden’s Executive Order on AI) and a number of state actions. It is noteworthy that, at the time of the seminar, 18 states had issued guidance or enacted legislation to govern AI use.

One panel cited an enforcement action resulting from a firm’s use of AI: a FINRA fine of more than $2M for system failures in automated data feeds that produced inaccurate disclosures. The example alerted attendees to the reality of the regulator’s reach relative to AI deployments.

One session’s polling question asked:

“Of the following AI use cases, which are the most interesting to you?

1) Tools that easily translate documents.

2) Tools that allow me to ask questions of policies and get back a plain English explanation.

3) Tools that allow me to summarize contracts, or create comparisons of contracts.

4) Tools that accompany meetings and create summaries of what was discussed.

5) Tools to let me take over the world.”

In all, I found that the panels delivered valuable information. As for the polling question above, I was not present to see the results, but I have to think at least one aspiring professional opted for response #5. It does seem that the sky may be the limit for AI applications in our industry.

Use cases

Panelists provided AI use cases that opened the door to a myriad of possibilities, very much in sync with the inquiries I hear from colleagues and clients. The sheer scope and variety demonstrated just how likely it is that AI will find its way into an enterprise, and how meaningful it is for firms to consider how and when AI might be relevant for their operations.

Attendee survey responses were equally varied. It was especially notable that, given common choices for AI usage such as ECOM, document review, AML, trade surveillance, and KYC, the highest-ranking response was “other”! ECOM and document review each trailed by more than 10%, making the point that a great number of applications outside the mainstream are in use or being contemplated by industry professionals.

One panel listed use cases on a department-by-department basis, categorizing AI deployment across administrative departments, including back-office compliance/legal departments, research departments, and others. Potential use cases within the compliance and legal department included automated customer onboarding, ECOM, document reviews, fraud detection, aggregation of regulatory data sources, and content generation, such as policies and procedures.

Another panel noted that, in one way or another, modern AI largely focuses on discrete problems or works within a specific domain. This panel introduced a characterization of AI as:

  • Symbolic/classical—providing predefined rules to detect potentially fraudulent behavior,
  • Connectionist—providing personalized investment advice and financial planning recommendations based on rules in an interconnected network, and
  • Generative—designed to produce human-like responses.

Some of the use cases identified by this panel included social media analytics, the use of computer vision techniques to analyze volatility surfaces, and identification of complex trading patterns in large data sets, among many others.

Another panel discussed use cases in research where AI demonstrates significant potential for creating efficiencies and helping analysts and clients sort and examine vast amounts of data. The panel listed automated updates to analysts’ models, enhanced means to summarize published research, and AI-generated analysis of companies’ earnings reports among the potential applications for AI in research departments.

One panel suggested considering use cases along the lines of non-generative AI and generative AI: non-generative AI is the traditional use, making predictions based on a given set of data, while generative AI creates or generates data based on prompts from the user. The panel explained that generative AI includes LLMs such as ChatGPT and DALL·E, which use deep learning techniques to understand, summarize, generate, and predict new content.


I was not surprised that panels varied in how they characterized AI. I found the variety reassuring, since each attendee likely has a different vision for AI deployment at their firm, and there seems to be plenty of room for customization in applications of these technologies. I certainly left with frameworks in mind that would be applicable to my clients.

Risk assessment and monitoring

Over the course of the event, attendees were exposed to practical guidance for risk monitoring.

One panel stressed that the stakes are particularly high with AI due to our often limited understanding of the models, which might make it difficult to correct course after deployment. The panel also discussed how flawed inputs into an AI model might change the way the model functions, resulting in supervisory challenges even after corrective action is taken.

One panel described how generative AI differs from more traditional types of AI, explaining that it can create new content, generate human-like text based on its understanding of data, and be predictive in nature; it may therefore not be entirely reliable. In a meaningful alert to attendees, the panel noted that generative AI delivers its responses with confidence, even when the response may not be accurate. Left to consider the nature of these risks, I must admit I raised a caution flag, to be lowered only on further consideration.

In light of recent regulatory guidance, proposed rulemaking, and federal and state actions and legislation, attendees were advised that their role may involve “explainability,” which is fundamental to establishing trust in a model’s output and a foundational component of meeting compliance expectations. I looked to the materials for hints and examples that might help form that foundation for compliance professionals. Though they were somewhat thin in that regard, I will continue to work with my clients to ensure that they understand and can effectively describe the technologies they are proposing, deploying, or tasked with monitoring.

In the way of practical guidance, several panels mentioned the recently released National Institute of Standards and Technology (NIST) AI Risk Management Framework (AI RMF), which is applicable to entities of all sizes across all sectors, including financial services. Several panels provided schematics, roadmaps, sample disclosures, charts, and graphs to demonstrate how risk assessment and risk monitoring might be accomplished.

Summary

C&L panelists were consistent in referencing prevailing rules governing the deployment of AI. Commonly cited were FINRA Rule 2210 (Communications with the Public), FINRA Rule 2010 (Standards of Commercial Honor and Principles of Trade), FINRA Rule 3110 (Supervision), and SEC Regulation Best Interest. I would consider these to be among the industry stalwarts, and I found this a compelling takeaway: deployment of AI may be fluid and evolving, but the compliance obligations are set in stone.

 

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. The information regarding AI tools provided herein is for informational purposes only and is not intended to constitute a recommendation, development or security assessment, or advice of any kind.

1141157.1.0