Surprising survey data on who is (and isn’t) using AI

Our research revealed a surprising difference in AI usage between top executives and junior managers. Learn how they're using the technology differently.

Artificial intelligence (AI) use in financial services is not new. Like other industries, financial services firms have been using machine learning and AI-driven analytics tools for a decade or more to crunch calculations and analyze data. But with the advancement of publicly available generative AI tools over the past year, artificial intelligence use has gone mainstream. Now, seemingly everyone from a CEO to a client service associate is looking for the best ways to use AI tools to offload tedious tasks, speed complex workflows, boost creativity, and uncover new customer insights. Have to write a customer letter? Just prompt an AI for a draft. Need an image for a marketing campaign? Use an AI-based generator. There is no doubt that AI use is accelerating.

In fact, McKinsey & Company noted in a recent report that experimentation with generative AI tools is “already relatively common,” and “60 percent of organizations with reported AI adoption are using gen AI.” So we wondered: how are financial services marketing and compliance teams using AI, generative and otherwise? To find out, we surveyed 107 marketing and compliance leaders at financial services firms across the U.S. Roughly two-thirds (66%) said they currently use AI in some of their processes. Drilling down into our survey data, however, we noticed something interesting: there is a big difference in who is using AI and how.

Junior managers are hands-on with AI

We asked junior-, mid- and senior-, and executive-level managers, “Are you currently using any AI in your processes or tools?” Half (50%) of the top executives surveyed said they were using AI, while 83% of junior managers responded in the affirmative. That’s a big discrepancy, suggesting that C-suite leaders’ perceptions of how AI is being used may differ from the reality of when and how workers are actually using it.

In fact, the top ways junior managers say they are using AI are for processing and analyzing data, including predicting customer behavior (20%), generating human-like text responses (20%), and personalization (10%). None of the executives surveyed told us their main use of AI is processing and analyzing data, and only 6% of top leaders said they were using AI primarily to generate human-like text responses or for personalization. That is a curious difference in responses.

Get the research → AI insights survey: Adopters, skeptics, and why it matters.

The comments we captured from our survey group also highlight these differences. For example, we asked respondents who are using AI, “You said you currently use AI in your processes or tools. What is the main reason you use AI?” Junior-level managers responded with detailed descriptions, such as:

“I can perform sentiment analysis on text inputs to understand the emotional tone or sentiment expressed in the text.”

– Junior Manager / Team Member, Risk / Compliance

“We utilize AI primarily to enhance our data analysis and customer insights. AI helps us make informed marketing decisions by crunching vast amounts of data, identifying patterns, and predicting consumer behavior. This, in turn, allows us to tailor our strategies and campaigns for maximum effectiveness.”

– Junior Manager / Team Member, Risk / Compliance

“AI powers my ability to generate human-like text responses.”

– Junior Manager / Team Member, Marketing

Responses from executives were a bit more general:

“Improved accuracy and decision making.”

– Managing Director, Marketing

“It helps in verifying a customer’s identity.”

– Managing Director, Marketing

“It reduces risk.”

– Board Member, Marketing

“AI-driven chatbots provide instant assistance to our users 24/7.”

– Managing Director, Risk / Compliance

The responses suggest that executive leaders have an overall understanding of why their organizations are using AI, but junior-level managers, who are more closely aligned with day-to-day operations, better understand the specifics of how and when AI is being used.
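
The first junior manager’s comment above is concrete enough to prototype in a few lines, which helps explain why hands-on staff experiment so readily. As a purely illustrative sketch (no respondent named a specific tool), sentiment analysis with an off-the-shelf library such as NLTK’s VADER analyzer might look like this:

```python
# Illustrative only: scoring the sentiment of customer text with NLTK's
# VADER analyzer. No survey respondent named this (or any) specific tool.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download

analyzer = SentimentIntensityAnalyzer()
for text in [
    "Thank you, the advisor resolved my issue quickly!",
    "I've been waiting two weeks for a response on my account.",
]:
    scores = analyzer.polarity_scores(text)  # neg/neu/pos plus compound in [-1, 1]
    label = ("positive" if scores["compound"] >= 0.05
             else "negative" if scores["compound"] <= -0.05 else "neutral")
    print(f"{label:8} {scores['compound']:+.2f}  {text}")
```

The point is less the particular library than how low the barrier to this kind of experimentation has become, which is exactly what makes the next finding worth executives’ attention.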

Individuals may be using AI outside of IT policy

The differences in data between junior and executive leaders also suggest junior managers are more apt to test out new AI capabilities in their day-to-day work, whether sanctioned by the organization’s IT department or not. It’s understandable. The output of today’s AI tools, especially generative AI tools, is sophisticated and human-like, and the ultimate end user of that output (a customer, for example) may not realize the work was created by AI. On the one hand, that’s progress. On the other, it’s a potential risk—especially for an organization. An article in Fast Company this past October called attention to the risk that “more and more workers use [AI] models that have not been authorized for safe use by their employer, and risk data security in the process.”

Individual trial and error with AI could ultimately benefit an organization through the discovery of newer, more efficient ways to work. But if junior managers or their teams are experimenting with AI tools without their IT department’s knowledge, they could also introduce shadow AI into the organization, which creates a number of risks, such as:

  • Operational: disruptions and errors in processes stemming from an inadequate understanding of AI systems. If AI algorithms are responsible for critical functions such as fraud detection or risk assessment, a lack of oversight could expose the organization to operational breakdowns.
  • Regulatory: violations of data privacy, security, and fair lending practices. Pasting proprietary customer or organizational data into a public generative AI model is one example (a simple guardrail against it is sketched after this list). Regulatory bodies are increasingly scrutinizing AI implementations, and non-compliance can result in severe penalties.
  • Ethical: AI models can carry inherent bias, and using them for decision-making could lead to unfair treatment of certain customer groups, which can result in reputational damage and legal consequences.
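
To make the regulatory point concrete, a common first guardrail is to scrub obvious identifiers from text before it ever reaches a public model. The Python sketch below is a minimal, hypothetical illustration of that idea; the patterns and placeholder labels are ours, and a real deployment would rely on a vetted PII-detection library with far broader coverage.

```python
# Hypothetical sketch: redact obvious identifiers before text is sent to a
# public generative AI model. Illustrative patterns only, not a vetted tool.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCT":  re.compile(r"\b\d{8,17}\b"),  # bare account-number-like digit runs
}

def redact(text: str) -> str:
    """Replace each pattern match with a bracketed [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = ("Draft a reply to jane.doe@example.com about account 123456789012; "
          "her SSN 123-45-6789 is on file.")
print(redact(prompt))
# Draft a reply to [EMAIL] about account [ACCT]; her SSN [SSN] is on file.
```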

Executive leaders need to close the learning gap on AI

Top executives typically don’t get involved in the minute details of day-to-day operations, and they may be unaware of the specifics of every software implementation in their organization. But they should be conversant in who is using AI, how it is being used, and for what purposes, so they can work with IT leaders to assess the risks of each use case and develop policies based on what they learn. McKinsey & Company identifies nine actions leaders can take to understand the value of new AI tools, such as generative AI. At the very least, executives should work across the organization to ensure collective visibility, education, and guidance on AI. C-suite leaders who don’t understand AI may not understand its risks either, and they may not be well positioned to make good decisions about adopting new AI-based technologies.

Gather your data and harness the power of AI across your operation

If our survey is any indication, a solid majority of junior-level financial services marketing and compliance managers are using AI, whether allowed or not. It’s the responsibility of top leadership to make sure they are using it ethically. But don’t just take a “no AI here” approach to the topic. Be pragmatic. Document who across the organization is currently using AI, how they are using it, and the ways people have found AI to be the most helpful. Be aware that some of this usage could be from AI that has been added to augment the tools already in use. The goal is not to punish users of shadow AI, but to level-set on exactly what AI capabilities people are relying on to help them with their work.

By talking with folks from all levels of the organization, leaders can get a more accurate view of how AI is being used today and its potential for the future. Bringing current users like junior managers into the AI roadmap conversation can also help ensure that organizations are working collaboratively across roles and departments to develop compliant and ethical AI-focused policies and plans that benefit workers, customers, and the overall organization.

Get more AI insights from our survey

Download our ebook, AI insights survey: Adopters, skeptics, and why it matters, for more details on how financial services marketing and compliance teams are using AI.

Allison Lagosh

Head of Compliance
Allison currently serves as the Compliance Advisor and Director for Saifr within Fidelity Labs. She previously was a management consultant focusing on data validation and conversions, disclosure design, and regulatory expertise for the Saifr team. Allison has extensive experience in the financial services industry with various legal, compliance, risk, and marketing compliance positions. Most recently, she was a Vice President for State Street Global Marketing, where she led the Risk Management and Controls Governance Program and advised on Marketing workflow tool management. Allison also worked at various senior compliance and marketing manager positions at Columbia Threadneedle, MFS, and Fidelity Investments.
