Why AI agents stumble at the starting line and how to get them on their feet

By Kirsty Biddiscombe, EMEA Business Lead AI, ML & Data Analytics, NetApp.

Over the past year, many of my industry peers have spoken excitedly about the promise of agentic AI: tools now capable of reasoning, planning and executing tasks autonomously, freeing teams to focus on higher-value work. AI agents were meant to take work off our plates, but many of us are now watching those plates drop instead. In fact, research shows that most agents (65%) struggle to get from A to B without taking a wrong turn along the way.

However, it's important to stress that this isn't a failure of ambition, or even of AI's capabilities; it's a failure of how we set up these agents. You can't run a restaurant with just a microwave – you need skilled chefs, fresh high-quality ingredients, and excellent recipes. Similarly, AI agents need to be integrated into organisations effectively, trained on high-quality data, and aligned with clear governance frameworks.

No recipes or ingredients means no results

Many organisations treat AI agents like plug-and-play appliances: something that can be switched on, pointed at a problem, and expected to produce results immediately. That approach may explain such a high failure rate. Agents cannot perform as expected when they are dropped into environments where processes are undocumented, data ownership is unclear, and success metrics are vague. If a task requires judgement calls that aren't codified, or decisions that rely on tacit knowledge locked in people's heads, an agent will fall short.

As a result, organisations need to do the hard systems work before implementation. This includes getting their data infrastructure in order by mapping out workflows end-to-end, defining the hand-off points between humans and agents, and establishing clear failure modes. Agent orchestration, fallback logic and escalation paths need to be designed at the very start, not bolted on as an afterthought.
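
To make that concrete, here is a minimal sketch of what designed-up-front fallback logic and an explicit escalation path might look like. Everything here is a hypothetical illustration – the `AgentResult` type, the `orchestrate` function and the 0.8 confidence threshold are assumptions for the example, not a reference to any particular framework.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class AgentResult:
    output: str
    confidence: float  # 0.0 to 1.0, as reported or estimated for the agent

def orchestrate(task: str,
                primary: Callable[[str], AgentResult],
                fallback: Callable[[str], AgentResult],
                escalate: Callable[[str, AgentResult], None],
                threshold: float = 0.8) -> Optional[str]:
    """Run the primary agent; fall back once; otherwise hand off to a human."""
    result = primary(task)
    if result.confidence >= threshold:
        return result.output      # success criterion met: accept the output

    result = fallback(task)       # fallback logic, designed up front...
    if result.confidence >= threshold:
        return result.output

    escalate(task, result)        # ...with an explicit human escalation path
    return None                   # no confident answer: nothing is faked
```

The point is structural: the acceptance criterion, the fallback path and the human hand-off are decisions made at design time, not behaviours discovered in production.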

Take care to avoid overfeeding your agents

When agents underperform, the instinctive response is to feed them more data. More documents, logs, historical context, dashboards and whatever else may be to hand. There is an understandable logic to this, but the reality is that AI agents do not benefit from data abundance in the same way that humans do.

Agents are only as reliable as the signal-to-noise ratio of the data they consume. Flooding them with redundant, stale, or low-confidence sources increases the likelihood of hallucinations or inaccurate outputs. The human equivalent would be asking for directions and being presented with every map ever printed.

Instead, agents need precision. And in turn, organisations need to shift the focus from "more data" to "the right data" by curating inputs with deliberate intent. This includes identifying authoritative data sources and ensuring that they are specific to the agent's task. For example, an agent that supports employee onboarding probably does not need the company's patent information included in its training. That said, even when organisations get the data selection right, a final barrier remains: confidence in the output.
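
As a rough illustration of what curating inputs with deliberate intent could mean in practice, the sketch below filters a source catalogue down to authoritative, fresh, task-relevant entries. The `Source` fields and the 180-day freshness cut-off are assumptions made for the example, not prescriptions.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Source:
    name: str
    tags: set[str]        # tasks this source is authoritative for
    authoritative: bool   # formally owned and maintained
    last_updated: date

def curate(catalogue: list[Source], task: str,
           max_age_days: int = 180) -> list[Source]:
    """Keep only authoritative, fresh sources relevant to the agent's task."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [s for s in catalogue
            if s.authoritative             # drop low-confidence sources
            and task in s.tags             # drop irrelevant ones (e.g. patents
                                           # for an onboarding agent)
            and s.last_updated >= cutoff]  # drop stale ones
```

Even this toy filter encodes the three failure modes above: low-confidence sources, irrelevant sources and stale ones.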

Towards trust, visibility and agents we can count on

Leaders hesitate to rely on agentic decisions because they can’t see how those decisions were made, or where the data came from. That hesitation is justified.

This is where robust data management becomes non-negotiable. End-to-end data lineage, lifecycle management and metadata visibility can help turn agentic decisions from black boxes into auditable processes. With this visibility in place, IT leaders can set policy-based constraints, monitor behaviour, and validate outcomes against known inputs. Trust becomes something we deliberately engineer, rather than something we hope for.
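
A minimal sketch of an auditable, policy-constrained decision record follows. The field names and the banned-terms check are illustrative assumptions, standing in for whatever lineage store and policy engine an organisation actually runs.

```python
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for an append-only lineage store

def within_policy(output: str, banned_terms: set[str]) -> bool:
    """A toy policy-based constraint: flag outputs containing banned terms."""
    return not any(term in output.lower() for term in banned_terms)

def record_decision(agent: str, input_sources: list[str], output: str) -> dict:
    """Log every agentic decision with its inputs, so it can be audited later."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "input_sources": input_sources,  # where the data came from
        "output": output,                # what the agent decided
        "within_policy": within_policy(output, {"confidential"}),
    }
    AUDIT_LOG.append(entry)
    return entry
```

With records like these, validating outcomes against known inputs becomes a query over the log rather than a forensic exercise.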

AI agents are best thought of less like digital colleagues and more like highly specialised sous-chefs. Give them spoiled ingredients and a cluttered workspace, and even the most talented sous-chef will struggle to produce a gourmet result. The same is true of the data we feed agentic AI: the quality of the output is strictly limited by the quality of our pantries.

For IT leaders, success with agentic AI won't come from piling on more data or chasing ever more capable models. It will come from engineering the foundations properly, with clear workflows, curated data, and visibility baked in from day one. AI agents can become reliable force multipliers for organisations, or they can remain expensive experiments; fortunately, the difference is entirely within an organisation's control.
