GENERAL AI

20 Reasons Why AI Projects Fail

HARI THAPLIYAL | December 30, 2021

Why do AI projects fail?

I've worked in IT project management for over two decades, and in that time I've watched project success trends rise and fall across industries. Over the past few years I've followed AI initiatives and their failure rate closely. Here I attempt to describe, based on my experience, why AI projects fail. I'm confident that many of these points will strike a chord with you.

1) Complex Systems:

We aspire to emulate complex human behavior, but we are only at the beginning of our AI journey. Foundational components and hardware are still works in progress. For example, detecting cancer does not rely on a single scan or X-ray, and we cannot conclude that someone is a thief based solely on a few video feeds. Many inputs and human judgments are required to make such decisions.

2) Not Breaking Down Complex Systems:

We are unable to decompose the complex human behaviors we want to mimic to the point where we can say exactly what hardware, software, edge devices, and algorithms are required. Take a text-to-speech converter, for example. We all know how intricate human speech is in terms of speed, tone, pronunciation, pitch, and so on. It is hard to give text to a system and expect human-grade reading or recitation. We need to break the problem down into smaller pieces, and each piece is a complex project in itself.

3) Expectations:

We fail to persuade users that AI is neither a replacement for their intelligence and senses nor an extension of them. We need to convince people that, as of today, AI systems exist to help them work more efficiently, productively, and enjoyably. Whether it's self-driving cars, border security, or medical surgery, AI is there to assist humans, not replace them.

4) Human Emotion:

When a human fails, a manager has options: coach them, scold them, warn them, fire them, or hand them over to the authorities. This relieves the manager's frustration, and he moves on. When an AI system fails, however, managers feel helpless; at most, they can stop using the product.

5) Data Volume & Cost of Data:

A large amount of data is required to train a model to high accuracy. However, due to data protection and privacy rules, such data is not widely accessible, and acquiring it can be quite costly.

6) Unstructured Data:

Data is often dispersed or in poor condition. Obtaining it from various sources, cleaning it, and testing it, before you can even determine whether the objective can be met, takes a significant amount of money and effort.

7) Missing Skills:

Many data engineers and data analysts do not understand statistics well. As a result, they follow mechanical steps but cannot look inside the dataset and grasp the business or statistical significance of the data.

8) Poor ROI justification:

Sponsors are unclear about the impact, the trade-offs, the changes to established systems, and the reliance on humans that will be required. Salespeople paint lovely visions, and reality is frequently far from the sales presentation. Sometimes reality is quite good, but the sponsor believes it is too expensive.

9) Volatile AI Market:

Because of the major players' significant investments in AI research, cloud services from Google, Microsoft, Amazon, IBM, and others are evolving at a rapid pace. This market is extremely volatile; we don't know whether a product developed today will still be valuable tomorrow.

10) Expensive Hardware:

GPUs and TPUs are still very expensive, and not everyone can afford them to experiment with ideas.

11) Models not deployed for consumption:

Model deployment, monitoring, and performance feedback require a diverse set of skills. Most of the time, once a model is created, it is handed over to the software development and DevOps teams. This creates a gap: model performance deteriorates without the AI team's knowledge, and the project fails.
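A lightweight safeguard against this silent decay can be sketched in a few lines. This is a hypothetical illustration, not a prescription from the article: track rolling accuracy on labelled feedback and raise a flag when performance drops, so the AI team hears about degradation before users do.

```python
# Minimal monitoring sketch (hypothetical names; no specific tooling is
# implied): track rolling accuracy on labelled feedback and flag when the
# deployed model's performance degrades, instead of failing silently.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window=100, threshold=0.8):
        self.results = deque(maxlen=window)  # recent correct/incorrect flags
        self.threshold = threshold

    def record(self, prediction, actual):
        self.results.append(prediction == actual)

    def degraded(self):
        # Too little feedback collected yet: no alert.
        if len(self.results) < self.results.maxlen:
            return False
        return sum(self.results) / len(self.results) < self.threshold

monitor = AccuracyMonitor(window=10, threshold=0.8)
for pred, actual in [(1, 1)] * 7 + [(1, 0)] * 3:  # 70% correct
    monitor.record(pred, actual)
print(monitor.degraded())  # 0.7 < 0.8, so True
```

In practice the alert would feed a dashboard or paging system, but even this much closes the feedback loop between the DevOps team running the model and the AI team that built it.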

12) Model Retraining:

Depending on the utility and nature of the model, it must be retrained on a regular basis. Some need to be retrained every week, some monthly, some quarterly, and some not at all. When the model is not performing as expected, it must be retrained. Retraining and redeployment of the model necessitate additional sophisticated stages, and there aren't enough experienced personnel available.

13) AI or not AI:

Sponsors are often unsure whether a problem can be solved with rule-based application programming or truly requires an AI application. AI applications need retraining; rule-based applications do not. For a supplier it may be a business opportunity, but for the sponsor it is a waste of money.

14) Missing holistic approach:

Before recommending any AI system or component within an organization's complex system, system analysts, proposal makers, and decision-makers must view the big picture. An AI system is never a panacea; it addresses a specific issue in a massive process flow. We need to know where the bottlenecks are, what's coming down the pike, and which process or user will consume the model's output.

15) Corrupt input data for prediction:

The project team performs extensive data cleansing, data transformation, and feature engineering before training a model. When the model is implemented in production, we must ensure that it is fed data transformed in exactly the same way at prediction time. Many times the transformation is not done correctly, or the transformation stages cannot handle outlier values. As a result, predictions are inaccurate.
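One common way to avoid this train/serve mismatch, shown here as a minimal sketch assuming scikit-learn (the article names no specific stack), is to bundle the transformations and the model into a single pipeline, so production inputs always pass through the identical preprocessing that the model was trained with.

```python
# Minimal sketch (assumes scikit-learn; illustrative data only): bundling
# preprocessing and model in one Pipeline guarantees the same transformation
# runs at training time and at prediction time.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X_train = np.array([[1.0, 200.0], [2.0, 180.0], [3.0, 240.0], [4.0, 210.0]])
y_train = np.array([0, 0, 1, 1])

model = Pipeline([
    ("scale", StandardScaler()),   # fitted on training data only
    ("clf", LogisticRegression()),
])
model.fit(X_train, y_train)

# In production, raw inputs go through the identical scaling step,
# so there is no train/serve transformation mismatch.
pred = model.predict(np.array([[3.5, 230.0]]))
print(pred)
```

Serializing the whole pipeline as one artifact, rather than the bare model, is what keeps the deployed preprocessing in lockstep with what the AI team built.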

16) Scale issues:

Models can perform well when they are required to make only a few hundred predictions per day. But if the right algorithms are not used to train the model, no hardware architecture will save it when it must serve 1,000 predictions per second.

17) MLOps Processes and Tools:

Throughout model development, thousands of hyperparameter combinations are tried, producing thousands of candidate models, and the data passes through various transformation steps. Finally, we choose and implement the best model. There aren't enough good tools for versioning features, data, hyperparameters, and models, and for comparing the performance of different model versions.
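Even without a full MLOps platform, the core idea can be sketched simply. This is a hypothetical illustration (the names and scheme are mine, not from any particular tool): derive a deterministic version id from each hyperparameter set and log it with its metric, so runs can be compared and reproduced.

```python
# Hypothetical sketch of lightweight experiment versioning: a deterministic
# id derived from the hyperparameter set ties each recorded metric back to
# the exact configuration that produced it.
import hashlib
import json

def version_id(hyperparams: dict) -> str:
    # Sorted-key JSON makes the id independent of dict insertion order.
    blob = json.dumps(hyperparams, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:12]

registry = {}

def log_run(hyperparams: dict, metric: float) -> str:
    vid = version_id(hyperparams)
    registry[vid] = {"hyperparams": hyperparams, "metric": metric}
    return vid

log_run({"lr": 0.1, "depth": 3}, metric=0.82)
log_run({"lr": 0.01, "depth": 5}, metric=0.87)

# Pick the best-performing version for deployment.
best = max(registry.items(), key=lambda kv: kv[1]["metric"])
print(best[0], best[1]["metric"])
```

Dedicated tools add data and feature versioning on top of this, but the principle is the same: every deployed model should trace back to the exact configuration that produced it.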

18) In-house Development:

When data is highly sensitive, such as credit card numbers, health records, income, or security-related information, and we lack in-house model development and maintenance expertise, the data must be supplied to third parties. Before handing anything over, you must do a great deal of planning and data crunching. This is costly and time-consuming, and many projects fail to take off.

19) Sponsors need to think beyond cost-cutting and increasing profit:

We must also look to AI technologies to boost customer loyalty and satisfaction, reduce employee stress, make it more convenient to do business, and so on. If these are not among the metrics, we may start an AI project with the wrong goal and pronounce it dead after a while.

20) Overselling:

End-users, customers, and sponsors do not care which technology is employed to tackle the problem. Whether or not AI is involved is a technical detail; it is for the engineering team to decide whether to address a problem with or without AI. If a system helps an organization without the term AI ever being used, that is sufficient. People's expectations rise dramatically when the AI tag is affixed to a system, and once expectations are that high, they are difficult to manage even when the best system is delivered.


