Trends in AI: What’s next for LLMs?

As LLMs continue to advance at warp speed, I am following trends in marketing impact, domain-specific versions, better alignment, and more.

Wherever I go lately, ChatGPT (the best-known large language model, or LLM) is dominating conversations. While it is interesting to hear the topic discussed vigorously over dinner by a college freshman and a retiree, both making great points, I think there is a lot more to consider as we strive to improve the technology. Here are some LLM trends that I see.

Marketing impact

LLMs have the potential to greatly advance marketing efficiency and effectiveness due to their ability to provide more precise qualitative assessments of customer interactions. Marketers will be able to make more informed decisions about the marketing mix, targeting, and other factors that impact their success in engaging and selling to prospects and retaining clients. 

LLMs are poised to have a significant, positive impact on content creation and are on track to help deliver better content more efficiently. Given well-written prompts, they can provide a good first draft. Editors can then add personality, tone, and details about the brand. An LLM is like a junior copywriter with no distinct voice who gets confused every so often.

Domain-specific versions

Most current models were trained on large volumes of public internet data. When used for very specific purposes, such as in the medical, pharmaceutical, or financial fields, they hit their limits and can sometimes hallucinate. Remember, LLMs are just using advanced math to guess the next most likely word or phrase—think of them as an advanced version of the predictive text system on your phone. When the training data is shallow in an area, LLMs can predict poorly.
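The predictive-text analogy can be made concrete with a toy sketch. This is not how a real LLM works internally—it is a minimal bigram counter standing in for the model's statistics, with a made-up ten-word "corpus" chosen purely for illustration:

```python
from collections import Counter, defaultdict

# Toy "training data": the model only knows which word followed
# which, and how often--like a phone's predictive-text bar.
corpus = "the cat sat on the mat and the cat ran".split()

# Count, for each word, the words that followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Guess the most likely next word seen after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" -- it followed "the" most often
```

Ask it about a word the corpus barely covers and the guess gets unreliable—the same failure mode, in miniature, as an LLM hallucinating in a domain where its training data is shallow.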

Many industry players are working to create models based on data that is more industry specific and incorporates private data. These domain-specific LLMs will be better suited to tailored tasks within those industries and companies.


Another example is in regulated industries, where the more general LLMs might produce language that doesn't comply with the regulations. For example, an LLM could write about Roth IRAs and do a good job of describing their benefits and their differences from Traditional IRAs, but it might conclude by saying, "Roth IRAs are a good investment." That statement would not comply with communications rules that are designed to protect the public. Domain-specific LLMs will be able to create content that is more in line with regulations.

Better alignment

Current LLMs aren’t necessarily aligned with core human morals and values. AI models don’t have judgment or reasoning the way humans do. They don’t understand what they are doing or saying, and therefore don’t understand when they might have crossed a line or when they might be taking a path that could ultimately be dangerous to humans. 

Ethics and safety will always need to be addressed when developing AI models. Some alignment issues are a function of biases in the training data. But others might be harder to address.

I believe coming advances in LLMs will bring better alignment to help models avoid direct or indirect negative impacts on economics, policies, or society. This will be increasingly important as AI becomes involved in more and more tasks.

Self-critiquing versions

One of the complaints about LLMs is that they can just get it wrong, as mentioned above. But what if LLMs could check each other's work? I think that in the next few iterations, LLMs will have a rudimentary ability to perform statistical reflection. They will evolve to be capable of self-critiquing. We will be able to use an LLM to examine whether the output of another generative model is "on the right track" during generation. This should help lead to better LLMs and reduce hallucinations and safety/ethical concerns.
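The generate-then-critique pattern can be sketched in a few lines. Everything below is hypothetical scaffolding: `generate()` and `critique()` are stubs standing in for calls to two real models, and the acceptance rule is a toy assumption, not an actual critic:

```python
def generate(prompt: str) -> str:
    # Stand-in for the generator LLM; a real system would call a model here.
    return f"Draft answer to: {prompt}"

def critique(text: str) -> bool:
    # Stand-in for the critic LLM, which flags drafts it judges off track.
    # Toy acceptance rule for illustration only.
    return text.startswith("Draft")

def generate_with_critic(prompt: str, max_attempts: int = 3) -> str:
    """Regenerate until the critic model approves the draft."""
    draft = ""
    for _ in range(max_attempts):
        draft = generate(prompt)
        if critique(draft):
            return draft
    return draft  # fall back to the last attempt
```

The key design point is the loop: the critic gates the output, so a hallucinated or non-compliant draft gets another pass instead of going straight to the user.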

Modular models

Current LLMs are unable to solve complex tasks that humans find easy. Modular models are the next step: they chain sub-models together to complete a single task. Think of the "are you a human" authentications that ask how many bridges or crosswalks you see. These have historically stumped machines because solving them requires chained steps—identify, count, and sum. That is changing.
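Using the CAPTCHA example above, the chaining idea can be sketched as follows. Each function is a stub standing in for a specialized sub-model, and the tile labels are invented for illustration:

```python
def identify(tile: str) -> str:
    # Sub-model 1: classify what an image tile contains.
    # Here a string label stands in for a real vision model's output.
    return "bridge" if "bridge" in tile else "other"

def count_matches(tiles, target: str) -> int:
    # Sub-models 2 and 3: count matching tiles and sum the result,
    # chained on top of the identification step.
    return sum(1 for t in tiles if identify(t) == target)

# Chain the sub-models to answer "how many bridges do you see?"
tiles = ["bridge_1", "road", "bridge_2", "tree"]
print(count_matches(tiles, "bridge"))  # 2
```

No single step is hard; the difficulty—and the promise of modular models—lies in composing the steps into one pipeline.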

There are many trends happening at warp speed, so this list will change in six months, or maybe six weeks! What are the trends that you see? I would love to discuss.

Are you considering AI solutions for your business? Make sure to ask the right questions.

 

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. Fidelity and any other third parties are independent entities and not affiliated. Mentioning them does not suggest a recommendation or endorsement by Fidelity.

1082759.1.0

Vall Herard

CEO
Vall specializes in the intersection of financial markets and technology and has a mastery of emerging methods like AI, machine learning, blockchain, and micro-services. He has a proven track record of taking companies from ideation to scale on a global basis within FinTech and financial services.
