Being willing to cannibalize the underlying methods of your own AI models is critical for an AI product to succeed in the marketplace. But why is it so important to constantly adapt AI products? It has to do with how AI products are made.
One of the major differences between traditional software engineering and AI development is when the development process is considered complete. In traditional software development, the goal is to solve for a functional specification through logical coding. Development is considered complete once the software is built.
AI development, on the other hand, requires a shift in mindset. In AI, the goal is to optimize for a specific business metric by learning from data. The term optimize signals an ongoing process of improvement through iterative development. When a model is evaluated to be 91% accurate, it means there are still opportunities to fine-tune it. Making those improvements becomes a continuous process.
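The "optimize, don't finish" mindset can be sketched in a few lines: evaluation is a recurring gate rather than a one-time sign-off. This is a minimal illustration; the metric, the 95% target, and the function names are assumptions, not part of any particular product.

```python
# Illustrative sketch: development is "done" in traditional software once
# the spec is met, but in AI the evaluation loop never closes. The target
# threshold here (0.95) is an arbitrary example of a business metric.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def needs_iteration(predictions, labels, target=0.95):
    """True while the model still falls short of the business target."""
    return accuracy(predictions, labels) < target

# Toy evaluation data for illustration only.
preds = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]
truth = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]
print(accuracy(preds, truth))         # 0.8 -> still room to improve
print(needs_iteration(preds, truth))  # True -> keep iterating
```

A model scoring 91% against a 95% target would keep this loop open, which is exactly why AI development is never "complete" in the traditional sense.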
| | TRADITIONAL SOFTWARE DEVELOPMENT | AI DEVELOPMENT |
|---|---|---|
| GOAL | Meet a functional specification | Optimize a business metric |
| QUALITY | Depends only on code | Depends on input data, training method, and tuning parameters |
| SOFTWARE STACK | Typically pick one software stack | Compare many libraries, models, and algorithms for the same stack |
Regularly cannibalize your models
Machine learning (ML) models often undergo transformative changes over the years. Sometimes the change is simply retraining once sufficient new data become available. But sometimes it is more than retraining: a new algorithm or architecture may outperform previous methods on accuracy, latency, or generalizability, necessitating a deeper change.
The algorithms behind ML models are changing rapidly. Even if you have invented a method that currently delivers the best available accuracy (though not 100%), that novel method carries no guarantee of lifelong success. One should continually evaluate and explore possibilities for improvement, cannibalizing existing models to realize gains in accuracy, latency, and generalizability. If you don't upgrade your AI models, someone else will, and it might be your competitor.
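The continual-evaluation habit is often operationalized as a champion/challenger check: before cannibalizing the current model, confirm the candidate actually wins on the metrics that matter. The sketch below is illustrative; the metric names, the minimum-gain threshold, and the latency budget are all assumptions.

```python
# Hedged sketch of a champion/challenger decision. A challenger replaces
# the champion only if it improves accuracy by a meaningful margin and
# stays within an optional latency budget. All thresholds are examples.

def challenger_wins(champion, challenger,
                    min_accuracy_gain=0.01, max_latency_ms=None):
    """Return True if the challenger should replace the champion."""
    gain = challenger["accuracy"] - champion["accuracy"]
    if gain < min_accuracy_gain:
        return False  # not enough improvement to justify a swap
    if max_latency_ms is not None and challenger["latency_ms"] > max_latency_ms:
        return False  # faster-moving metric, but too slow to serve
    return True

champion = {"accuracy": 0.91, "latency_ms": 40}
challenger = {"accuracy": 0.94, "latency_ms": 35}
print(challenger_wins(champion, challenger, max_latency_ms=50))  # True
```

Running this comparison on a schedule, rather than only when a model visibly degrades, is one way to make "cannibalize regularly" a process instead of a crisis response.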
The moral of the story? Don’t get attached and don’t get complacent. If you want your AI product to live, be ready to scrap your existing methods when the time comes and upgrade to more capable methods. It is one way to stay competitive and disruptive in the ever-evolving AI world.
AI is constantly being disrupted
To give you some idea of how disruptive the field of AI is, here are just a few examples of newer techniques that superseded previous methods:
- Neural network-based solutions replaced statistical methods for machine translation, rendering decades of research in statistical approaches less relevant.
- Neural networks also replaced traditional ML methods for unstructured data, making handcrafted feature engineering far less useful in most cases.
- Transformer-based models, especially BERT and GPT, replaced long short-term memory networks (LSTMs) after 2018, particularly for text data.
- Diffusion-based methods replaced generative adversarial network (GAN)-based methods for image generation, marking one of the biggest innovations in 2022.
- With zero-shot and few-shot capabilities, building a custom model is becoming unnecessary in many cases. This is driving adoption of OpenAI's GPT-3 APIs.
The above list is only a small sample of the disruptions of the past several years, but it gives you some idea of how quickly the field changes and how much prior work can be rendered obsolete. It is crucial to stay flexible. Owners of AI models need to decide when, not if, to retire old methods; replacing them with new ones could be the key to your product's success.
The opinions provided are those of the author and not necessarily those of Fidelity Investments or its affiliates.