
Responsible innovation for AML/KYC compliance—have the courage to explore AI

Written by Jon Elvin | Jun 26, 2024 2:54:55 PM

The problem of financial crime and its social impacts are vast

The harsh reality is that long-standing estimates measure money laundering at 2-5% of global GDP. Additionally, certified fraud examiners (CFEs) estimate that organizations lose 5% of revenue each year to fraud. Fraud and money laundering also have negative social impacts that can threaten the overall stability of developing nations.

The Bank Secrecy Act (BSA) has been around since the 1970s, supplemented with new anti-money laundering (AML) rules and requirements each decade, yet money laundering still occurs on a startling scale. Financial institutions spend billions of dollars and employ thousands of professional staff to help combat financial crime. While the efforts of these dedicated crime fighters should be commended, discussion still centers on the overall effectiveness of such efforts and on whether we are achieving any measurable “impactful disruption.”

With so much at stake, it is often a game of leapfrog between bad actors and crime fighters. New technology appears and is wielded by both sides: every measure invites a countermeasure, and the same tools can serve either camp.

If you are an AML Risk Executive, a Chief Compliance/AML Officer, or a Fraud Executive, you likely want crime-fighting tools to advance by orders of magnitude. That can’t happen without some level of courage to try new solutions.

Solutions and new technology advance rapidly

Crime and crime fighting are not new. Advances in technology have delivered amazing solutions over the last 100 years. Yet, many seasoned practitioners have concluded that we’ve only achieved marginal success, akin to treading water. Therefore, it would be foolhardy to not continue to creatively explore available technologies to determine which are most impactful.

Remember, at one point, AML and fraud investigators used colored pencils, graph paper, and small teams of former wire room professionals to investigate. I’m sure they were amazed, yet hesitant, when they began to explore what link analysis databases, spreadsheets, and transaction monitoring algorithms could do to up their game. We’ve certainly come a long way, but the search for better tools and results should be never-ending.

I’ve seen industry practitioners accept years of “less-than-ideal” solutions across their ecosystem, some with an air of accepted complacency and outsized costs. With AI being adopted with alacrity by the bad guys, any lack of progress on the side of the good guys becomes more and more dangerous.

Much to learn

Of course, we have chosen, and must continue to choose, our technology solutions responsibly. Artificial intelligence (AI) is no different, and some might argue it matters even more, given that it can at times seem like a “black box,” with the challenge of explaining its underlying workings to regulators and executives. Buzzwords like “artificial intelligence,” “machine learning,” “large language models,” and “natural language processing” are relatively new to the AML compliance industry. Professionals at all levels are still learning and are sometimes slow to adopt.

I often hear current and former executives describe AI as merely “interesting,” sounding more skeptical of the marketing spin than convinced of truly impactful solutions. They tell me it is a common topic in senior management discussions of strategic technology and process improvement plans. Yet, from practitioners to board members, many do not completely understand AI, dabble in it only as a hobby, and are sometimes afraid to ask questions. This needs to change.

Certainly, the industry has a fair amount of marketing spin around proclaimed AI results. Know that while some of it may be illusory, tangible results have been achieved in several areas. The challenge is to educate yourself and your organization, identify opportunities, test within your own environment, keep a human in the loop, gain quick wins that can be expanded, and move on to solve the next pain point.

Common barriers to overcome

Many top executives say they want to pursue helpful AI initiatives, and most practitioners would acknowledge they should be doing so, but both groups can be hesitant for a variety of reasons.

Sometimes the mindset of the Chief BSA, AML, or Compliance Officer can inhibit the innovation journey. For the new or caretaker program leader, the instinct is often not to make waves, or to wait and earn regulators’ confidence before embarking on novel improvements. The problem is that point may never arrive: something new pops up or leadership changes, and the cycle starts all over again. I’ve also seen some longstanding, successfully administered programs grow complacent with a false sense of security, becoming slow to embrace process improvement and preferring a “safety in the pack” or “the regulators are comfortable with us” mentality. With this mindset, programs may not keep pace, and a program that isn’t moving forward eventually falls behind.

I often hear that compliance professionals are afraid to rock the boat with their current systems, fearing that change might signal some fault in their existing processes and trigger scrutiny. Some industry practitioners are also reluctant to admit that technology systems and practices that were adequate 5-10 years ago now need to be replaced or upgraded; they hesitate to ask for budget and often lose out in the prioritization roulette of multi-year technology plans.

Regulators should not be seen as a barrier to AI adoption. In fact, the regulatory regime encourages responsible innovation and, in some channels, has even created formal listening protocols and office hours for innovation forums. While regulators would undoubtedly never endorse a specific tool or product, they are open to new ideas and want to see responsible innovation. Experimenting, finding new ways to balance effectiveness and efficiency, or further validating that existing practices work is healthy.


Many firms intend to create a roadmap and generate new ideas and hypotheses, but often have difficulty completing the journey. The realities of administering an effective anti-money laundering and financial crimes program can slow their momentum. World events, budgets, organizational priorities, regulatory scrutiny, liability (for AML Officers), employee buy-in, and skepticism can all trigger apprehension.

With these challenges and uncertainty about where to start, many simply never begin; that has to change.

Experimentation can create confidence

Since there is no formal, proven recipe book, user guide, or paint-by-numbers template with all the answers, acknowledging the importance of experimenting is a key first step to moving forward. The industry is often hesitant to embrace experimentation. Yet challenger-versus-incumbent model experimentation, including effort curve analysis, is a healthy activity financial crime executives should pursue. In addition to identifying significant opportunities for advances in control effectiveness and efficiency, the results and summaries can be strong proof points to share with regulators and senior management. Board members should be asking more probing questions and should expect to see conclusions from these types of exercises. With proper measurement, everyone involved can gain greater comfort in how the AML/fraud programs are working and whether additional investments are needed.

Confident leaders should willingly embrace the “challenger versus incumbent solution process.”
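To make that concrete, below is a minimal sketch, in plain Python, of the kind of side-by-side summary a challenger-versus-incumbent exercise can produce. Every record, name, and metric here is a hypothetical placeholder rather than a prescribed method; a real exercise would draw on the institution’s own alert and disposition history and on whatever effectiveness and efficiency measures its model risk framework requires.

```python
# A minimal, illustrative sketch of a challenger-versus-incumbent comparison.
# All records below are hypothetical; a real exercise would use the
# institution's own alert history and case dispositions.

# Each tuple: (entity_id, flagged_by_incumbent, flagged_by_challenger, case_was_productive)
alert_history = [
    (1, True,  True,  True),
    (2, True,  False, False),
    (3, True,  True,  True),
    (4, True,  False, False),
    (5, True,  True,  False),
    (6, False, True,  True),
    (7, False, False, False),
    (8, False, False, False),
]

def summarize(records, flag_index):
    """Alert volume, productive alerts captured, and hit rate for one system."""
    flagged = [r for r in records if r[flag_index]]
    volume = len(flagged)
    productive = sum(1 for r in flagged if r[3])
    hit_rate = productive / volume if volume else 0.0
    return volume, productive, hit_rate

for name, idx in (("incumbent", 1), ("challenger", 2)):
    volume, productive, hit_rate = summarize(alert_history, idx)
    print(f"{name:<10} alerts={volume}  productive={productive}  hit rate={hit_rate:.0%}")

# Effort and risk view: review workload the challenger removes, and any
# productive cases it would have missed relative to the incumbent.
dropped = [r for r in alert_history if r[1] and not r[2]]
missed_productive = [r for r in dropped if r[3]]
print(f"alerts the challenger would drop: {len(dropped)}")
print(f"productive cases among them:      {len(missed_productive)}")
```

Even a simple summary like this, built on real data with proper measurement, becomes the kind of proof point for regulators, senior management, and the board described above.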

I’ve seen successful firms establish a financial crime and compliance innovation team. Consider standing up a sandbox utility specifically to explore and incubate solutions to common problems or pain points. Include cross-functional expertise from compliance, technology, operations, and data professionals; give them broad guardrails, and let creativity drive direction. Don’t overburden the function with rigid controls while it is in the lab. Give the team broad sway in ideation; production control safeguards and operationalization can come later. Let those charged with administering a financial crimes program know that responsible innovation is the expectation.

Final thoughts

Extraordinary results (effectiveness, efficiency, and employee experience) that also balance customer experience and privacy are possible and worth pursuing. I encourage those in the industry to have the courage to explore responsible AI and innovation. The best time to start was yesterday, but tomorrow is not too bad either. Don’t wait for an enforcement order, negative reputation event, or financial loss to be the catalyst for innovation.

Realize that you don’t have to be an expert in AI to begin the journey. Take comfort that not every experiment will work exactly right the first time, but the journey can lead to incremental improvements and new creative discoveries.

Truly impactful solutions with better outcomes, effectiveness, efficiency, and controlled cost are being discovered and will become mainstream. Be part of that process.

Learn what to look for in AI-based adverse media screening tools: Mitigating risk in the digital age: a roadmap for AI-enhanced adverse media screening.

 

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. The information regarding AI tools provided herein is for informational purposes only and is not intended to constitute a recommendation, development, or security assessment advice of any kind.

1148554.1.0