
Why AI Projects Fail and What This Means for the Future of Customer Experience


16 September 2025

By Lianne Dehaye
Senior Vice President - TDCX AI

For all the business problems that AI and generative AI (GenAI) have been hyped to solve, a recent report from MIT’s Project NANDA confirms a painful truth hitting corporate balance sheets: 95% of pilot projects failed to deliver value.

The report also reveals a paradox. The issue isn’t a lack of investment or slow adoption; both are higher than ever. Companies are increasingly using the technology, but they often lack the strategic insight to operationalize that adoption and translate it into tangible value.

This is a reality we’ve already seen firsthand in our industry. Many brands are slapping an “AI-powered” or “GenAI-enabled” label on their digital customer experience (CX) solutions in the frantic rush to transform. However, this superficial chase for the badge of innovation, without building the necessary foundations, is also why the promised ROI never arrives.

The failures aren’t random. They trace back to foundational pillars that companies often neglect in their rush to innovate. 

Data quality makes or breaks AI for CX

An AI or GenAI tool is only as effective as the data it learns from. While many leaders treat data as a technical checklist item, it’s the bedrock of the entire strategy for AI. If that foundation is cracked, the initiative is built to fail.

The problem often begins with inaccuracy. Imagine training a conversational AI tool on thousands of past chat logs filled with transcription errors or miscategorized issues. The AI system doesn’t know the data is flawed. Instead, it learns from these mistakes and bakes these errors into its core logic, resulting in a chatbot or virtual agent that confidently misunderstands customers.

Just as damaging is outdated data. Customer behaviors and market trends are not static. An AI tool for customer service trained on historical data that no longer reflects current realities will be perpetually out of sync with the people it’s supposed to serve. 

Then there’s bias. When training data underrepresents certain demographics or contains historical prejudices, the AI system learns to perpetuate and even amplify those flaws. The result is a tool that can alienate entire customer segments, demolish brand trust, and create significant legal exposure.

When AI and GenAI for CX lack data and hallucinate

Beyond data quality, AI initiatives also face challenges with the quantity of information and the risk of the model generating its own facts.

The first issue is data scarcity. AI models require vast amounts of data to learn effectively. If the business operates in a niche market or is launching a new product, it might not have enough data to train a robust model. The result is an AI tool that generalizes poorly and provides generic responses.

A more complex problem is hallucinations, which occur when a model confidently generates false information. This happens because these systems are inherently designed to match patterns, not to check facts. 

There are techniques that help mitigate this. There’s Retrieval-Augmented Generation (RAG), which forces the AI to use, for example, a company-approved knowledge base as a fact-checker. There’s also Reinforcement Learning from Human Feedback (RLHF), where human experts act as coaches to continually correct and improve the AI system’s responses. In CX, these methods prevent chatbots and other AI tools for customer service from responding with incorrect or made-up answers. These techniques, however, also demonstrate the need for human oversight to ensure that the AI systems stay grounded in reality.
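To make the RAG idea above concrete, here is a minimal sketch of the grounding step: retrieve the most relevant entry from a company-approved knowledge base and instruct the model to answer only from it. The knowledge base entries, the keyword-overlap retriever, and the prompt wording are all simplified assumptions for illustration; production systems typically use semantic (embedding-based) retrieval instead.

```python
# Minimal RAG sketch: ground answers in a company-approved knowledge
# base instead of the model's internal parameters alone.
# The knowledge_base entries and the naive retriever are hypothetical.

knowledge_base = [
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available 24/7 via live chat.",
    "Orders can be cancelled within 1 hour of purchase.",
]

def retrieve(question: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Constrain the model to the retrieved context to curb hallucination."""
    context = "\n".join(retrieve(question, knowledge_base))
    return (
        "Answer using ONLY the context below. If the answer is not "
        "in the context, say you don't know.\n"
        f"Context: {context}\nQuestion: {question}"
    )

prompt = build_prompt("How long do refunds take?")
print(prompt)
```

The key design choice is the explicit fallback instruction: when retrieval returns nothing relevant, the model is told to admit it doesn’t know rather than invent an answer.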

The costs of integration issues and technical debt

The AI model itself requires constant monitoring for performance issues. It can overfit to past data, making it unable to handle new customer scenarios. It can also underfit, making it too simplistic to capture important nuances in feedback. Over time, all models suffer from model drift, where their performance degrades as customer or user behaviors change, rendering the initial investment obsolete if not actively maintained.
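Drift monitoring of the kind described above can be as simple as comparing a model’s rolling accuracy on live traffic against its accuracy at launch. The baseline figure, threshold, and outcome labels below are illustrative assumptions, not values from the report.

```python
# Hypothetical drift check: flag when rolling accuracy on recent
# interactions falls well below the accuracy measured at deployment.

BASELINE_ACCURACY = 0.92  # assumed accuracy at launch
DRIFT_THRESHOLD = 0.05    # assumed tolerated absolute drop

def check_drift(recent_outcomes: list[bool]) -> bool:
    """recent_outcomes: True where the model handled a case correctly."""
    if not recent_outcomes:
        return False  # no traffic yet, nothing to compare
    recent_accuracy = sum(recent_outcomes) / len(recent_outcomes)
    return (BASELINE_ACCURACY - recent_accuracy) > DRIFT_THRESHOLD

# 80% accuracy on the last ten interactions exceeds the tolerated drop
print(check_drift([True] * 8 + [False] * 2))  # True: drift flagged
```

In practice a flagged drift check would trigger retraining or human review, which is what “actively maintained” means operationally.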

These challenges are amplified when integrating the AI with existing systems. For example, plugging a sophisticated AI tool for customer support into outdated, legacy customer relationship management (CRM) systems is a common reason AI projects fail, as older infrastructure often can’t handle the processing demands. Similarly, data silos prevent the AI system from getting a complete, unified view of the customer, which is essential for delivering personalized experiences.

The rush to launch also creates technical debt, the long-term cost of short-term development shortcuts. As this debt accumulates, the system becomes brittle and difficult to update, stalling future innovation and eroding the AI solution’s effectiveness over time.

The role of organizational culture and the human factor

Beyond technology and data, an AI initiative can be brought to a grinding halt by the most complex variable: people.

The challenge begins at the top with C-level buy-in. Without a unified vision and a concrete strategy for ROI, AI projects become expensive experiments that are quickly starved of the significant financial and human resources they need to succeed.

This disconnect in leadership creates a ripple effect throughout the organization and breeds cultural resistance. Employees understandably fear their roles are on the line, a legitimate concern when leaders push for adoption faster than people are comfortable with. Addressing this requires transparent communication and training that reframes AI not as a replacement, but as a collaborator and an enabler.

Even with willing employees, a critical skills gap often undermines progress. Many organizations lack the in-house AI proficiency and data literacy to properly manage these new tools. For instance, if a team can't interpret an AI tool’s customer sentiment analysis or question its outputs, the technology becomes an opaque “black box” that nobody trusts or uses effectively.

This underscores the necessity of keeping humans in the loop. An AI tool cannot be held accountable, understand complex emotional nuance, or make a judgment call on a sensitive issue. Human oversight is essential to counter bias, ensure accountability, and handle the interactions where expert judgment makes the difference.

Is your AI and GenAI for CX solving a real business problem?

Impressive technology and tangible business value are not the same thing. Many executives are captivated by dazzling product demos that showcase incredible capabilities. This wow factor, however, often causes them to skip the most fundamental question: Does this actually solve a core business problem for our organization?

When a solution is adopted without a clear problem to solve, it becomes an expensive experiment. It magnifies the very foundational challenges of poor data, technology maturity, and a lack of human readiness. 

AI is not a plug-and-play technology. Successfully deriving value from it requires an operational discipline that, in turn, needs the very capabilities that define digital customer experience: rigorous data curation, constant process optimization, and a deep bench of skilled, human talent. The most important question for leaders, then, is not which AI-powered CX solution to buy, but who to partner with to enrich a human-in-the-loop ecosystem that helps deliver real business value. The future of AI and GenAI for CX won’t be led by the technology alone, but will be driven by the experts who know how to wield it.
