
Dispelling common misperceptions about AI

Misperceptions about AI abound. Will it take our jobs? Replace human creativity? Have superhuman intelligence? (Spoiler alert: the answer is no!)

Artificial intelligence (AI) has captured the imagination of everyone, from college students to retirees, from early to late adopters of tech. AI often evokes scenes from movies—intelligent robots, futuristic sci-fi scenarios, and dramatic changes in the way we live and work. Luckily, these perceptions don't really align with reality. This blog explores and debunks some of the most common misperceptions about AI.

Misperception 1: AI will replace all human jobs.

The biggest and scariest misperception is that AI will take all our jobs. It won't, and as you read through the misperceptions below, it will become clear why. Yes, AI will likely automate certain tasks and streamline numerous processes, but it's unlikely to take our jobs and replace us. Instead, AI is more likely to augment human capabilities by handling repetitive and data-intensive tasks, allowing us to focus more on the creative, strategic, and emotionally intelligent aspects of the work we do. Ideally, we get rid of the boring parts and keep the fun stuff.

Instead of replacing human jobs, widespread adoption of AI is likely to lead to the creation of new job roles related to AI development, maintenance, and oversight. AI has the potential to transform industries and jobs as well as create new jobs.

Misperception 2: AI is super intelligent and autonomous.

One of the most common misperceptions about AI is that it possesses superhuman intelligence and autonomy. If you read the press, AI seems capable of just about anything, including taking over the world. However, the reality is that AI algorithms are trained for very specific tasks such as image recognition, natural language processing, or data analysis. Funnily enough, current AI models often can't tell the time or even reliably multiply two numbers. AI models' capabilities are limited to the tasks they were trained for, and they don't possess the general intelligence that we as humans naturally have. AI can't teach itself to do anything it wants—it is limited to where humans direct it via data and training.

AI systems are also not sentient beings capable of independent thought. They lack understanding, consciousness, and self-awareness. AI is a tool driven by data and algorithms that requires human guidance and oversight to function properly.

Misperception 3: AI is infallible.

AI systems are not infallible, despite what some may think. Nor are they immune to bias. The accuracy of AI models depends heavily on the data used in their training—garbage in, garbage out. There are several factors that should be considered when evaluating the data used to build AI models: quantity, accuracy, bias, diversity, curation, and timeliness, among others. If the training data is biased or incomplete, the AI system can produce biased results. Additionally, AI systems may exhibit what are called “hallucinations” in novel situations not encountered during training, or where training data was thin, and produce unexpected behaviors or errors. The one thing you can reliably expect is that AI can be confidently wrong—don’t be fooled.

Bias in AI can be a significant concern. If humans are not in the loop to carefully monitor and help mitigate it, AI systems can perpetuate and even exacerbate existing societal and other biases present in the training data. It's essential to implement rigorous curation, testing, validation, and bias mitigation strategies when developing and deploying AI systems.

Misperception 4: AI understands.

Some believe that AI systems understand and comprehend information in the same way humans do. AI seems to do things we do, so we automatically assign it human traits. However, AI operates on mathematical algorithms and statistical models and lacks emotion, intention, desire, and even basic understanding. Think of AI as a bigger, better version of the text suggestion feature on your phone—it is simply using advanced statistics to predict the next most likely words or phrases.
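To make that intuition concrete, here is a minimal, purely illustrative sketch of next-word suggestion built from simple word-pair counts. The tiny corpus, function name, and outputs are invented for this example; real language models are vastly larger and use neural networks, but the underlying idea of predicting a statistically likely continuation is the same.

```python
# A toy next-word "suggester" built from bigram (word-pair) counts.
# Purely illustrative: real models use huge corpora and neural networks,
# but the core idea is picking a statistically likely continuation.
from collections import Counter, defaultdict

corpus = "the market was up today and the market closed higher today".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest_next(word):
    """Return the word most often seen after `word` in the corpus, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest_next("the"))     # -> "market" (it followed "the" twice)
print(suggest_next("market"))  # -> "was" ("was" and "closed" tie; first seen wins)
```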

AI doesn’t understand the text it is outputting and, as mentioned earlier, it can very confidently create incorrect outputs without knowing it. AI systems can process, analyze, and see trends in vast amounts of data, but they don't understand the information in the way humans do. For example, image recognition AI doesn't "see" images; it is a math exercise of identifying patterns in pixel data.
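Here is a small, invented sketch of that idea: "recognizing" a 3x3 image by comparing its pixel numbers against two reference patterns. The patterns and labels are made up for illustration; real image models learn millions of parameters, but the decision is still arithmetic on numbers, not seeing.

```python
# Purely illustrative: "recognizing" a tiny 3x3 image is arithmetic on pixel
# values, not seeing. The classifier picks whichever reference pattern is
# numerically closest to the input.

# Hypothetical reference patterns (1 = dark pixel, 0 = light pixel).
PATTERNS = {
    "vertical line":   [0, 1, 0,
                        0, 1, 0,
                        0, 1, 0],
    "horizontal line": [0, 0, 0,
                        1, 1, 1,
                        0, 0, 0],
}

def classify(pixels):
    """Return the label whose pattern differs least from the input pixels."""
    def distance(pattern):
        return sum((p - q) ** 2 for p, q in zip(pixels, pattern))
    return min(PATTERNS, key=lambda label: distance(PATTERNS[label]))

# A slightly noisy vertical line still lands on the right label, purely by math.
noisy_image = [0.1, 0.9, 0.0,
               0.0, 1.0, 0.2,
               0.1, 0.8, 0.0]
print(classify(noisy_image))  # -> "vertical line"
```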

Misperception 5: AI is a black box.

Another misperception is that AI systems are always impenetrable black boxes with decision-making processes too complex for humans to understand. In truth, AI is a blanket term that covers a variety of techniques that enable machines to mimic human behavior. The simplest AI includes rule- and logic-based programming, in which humans define the patterns, making it easier to understand how decisions are made.
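As a concrete (and entirely hypothetical) illustration of rule-based AI, the sketch below triages email subjects with a handful of human-written keyword rules. The keywords and labels are invented, but the point stands: when people define the patterns, every decision is easy to trace.

```python
# Hypothetical rule-based "AI": human-written keyword rules for triaging email.
# Because the rules are explicit, it is obvious why each decision was made.

RULES = [
    ("urgent", "high priority"),
    ("invoice", "billing"),
    ("unsubscribe", "marketing"),
]

def triage(subject):
    """Apply the human-defined rules in order and return the first matching label."""
    lowered = subject.lower()
    for keyword, label in RULES:
        if keyword in lowered:
            return label
    return "general"

print(triage("URGENT: server down"))     # -> "high priority"
print(triage("Your invoice for March"))  # -> "billing"
print(triage("Lunch on Friday?"))        # -> "general"
```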

Some AI models, like deep neural networks, can be much more complex. Efforts are being made to make AI systems more transparent and explainable by providing insights into how they arrive at decisions. Explainability techniques such as feature visualization, attention maps, and decision attribution are helping researchers and developers shed light on AI's decision-making processes, making it easier to understand and trust AI systems.
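To get a feel for decision attribution, one crude but intuitive approach is an omission test: zero out one input at a time and see how much a model's output moves. The toy weighted-sum "score" below is invented purely for illustration; production techniques (attention maps, SHAP-style attribution, and the like) are far more sophisticated, but the intuition of asking which inputs drove the decision is similar.

```python
# A crude "decision attribution" sketch for a hypothetical toy scoring model:
# zero out one input at a time and measure how much the output changes.
# Inputs that move the score the most contributed the most to the decision.

def toy_score(features):
    """Hypothetical model: a simple weighted sum of the inputs."""
    weights = {"income": 0.5, "debt": -0.3, "years_employed": 0.2}
    return sum(weights[name] * value for name, value in features.items())

applicant = {"income": 80, "debt": 40, "years_employed": 5}
baseline = toy_score(applicant)  # 0.5*80 - 0.3*40 + 0.2*5 = 29.0

for name in applicant:
    without = {k: (0 if k == name else v) for k, v in applicant.items()}
    contribution = baseline - toy_score(without)
    print(f"{name}: {contribution:+.1f}")

# Output: income: +40.0, debt: -12.0, years_employed: +1.0
```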

Misperception 6: AI can replace human creativity.

Creativity, the ability to imagine what isn’t and come up with an original idea, is uniquely human. Remember, AI uses existing data, determines patterns, and uses those patterns to produce outputs—it doesn’t create something original.

AI can serve as a tool for creativity, helping artists, writers, and musicians explore new ideas or automate certain aspects of their work. But it doesn't possess the depth of imagination, emotion, or experience to compete with human creativity. It is a useful aid to the creative process, not a replacement for it.

AI is a powerful and transformative technology, yet misperceptions about its capabilities persist. Understanding the realities of AI is essential for responsible development and informed decision-making in an increasingly AI-driven world.

The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates. Fidelity does not assume any duty to update any of the information. The information regarding AI tools provided herein is for informational purposes only and is not intended to constitute a recommendation, development, security assessment, or advice of any kind.

1109568.1.0

Vall Herard

CEO
Vall’s expertise is at the intersection of financial markets and technology with extensive experience in FinTech, RegTech, InsurTech, capital markets, hedge funds, AI, and blockchain. Vall previously worked at BNY Mellon, BNP Paribas, UBS Investment Bank, Numerix, Misys (now Finastra), Renaissance Risk Management Labs, and Barrie + Hibbert (now Moody’s Analytics Insurance Solutions). He holds an MS in Quantitative Finance from New York University and a BS in Mathematical Economics from Syracuse and Pace Universities, as well as a certificate in big data & AI from MIT.
