How enterprises can deploy agentic AI workforces without data limitations

By Justin Borgman, CEO of Starburst.

It’s only human to be daunted by both the scale and speed of the shift to agentic AI. We are moving past simple chatbots and copilots toward a model where humans and AI agents will collaborate seamlessly to reason, decide, and act.

This is a defining moment. According to IDC, just under half of organizations will be orchestrating AI agents “at scale by 2030, embedding them across business functions.” The risk, however, is substantial. Gartner predicts that 60% of AI projects will be abandoned because their data isn’t ready, and that 40% of agentic AI projects will fail due to cost, unclear value, or inadequate controls and governance.

In the race for AI, companies that fail to “prioritize high-quality, AI-ready data” face a predicted 15% productivity loss by 2027.

Going beyond suggestions to execution

The challenge is stark. Traditional GenAI models are mostly reactive and non-self-governing; they consume data, make suggestions, and humans must take the final action. Agentic AI, on the other hand, is proactive, specialized and autonomous. AI agents don’t just consume data; they generate new data, trigger workflows and execute tasks.
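To make the distinction concrete, here is a deliberately simplified Python sketch of the two patterns. The invoice scenario and every class and function name in it are hypothetical illustrations, not a reference to any particular product.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    vendor: str
    amount: float
    approved_budget: float

def copilot_review(invoice: Invoice) -> str:
    """Reactive GenAI pattern: consume data, return a suggestion; a human takes the action."""
    if invoice.amount > invoice.approved_budget:
        return f"Suggestion: flag the {invoice.vendor} invoice for review (over budget)."
    return f"Suggestion: the {invoice.vendor} invoice looks ready to pay."

def agent_review(invoice: Invoice, trigger_workflow) -> dict:
    """Agentic pattern: decide, trigger a downstream workflow, and generate new data (a record)."""
    decision = "hold" if invoice.amount > invoice.approved_budget else "pay"
    trigger_workflow(decision, invoice)                      # executes a task autonomously
    return {"vendor": invoice.vendor, "decision": decision}  # new data other systems can consume

if __name__ == "__main__":
    inv = Invoice(vendor="Acme", amount=12_000.0, approved_budget=10_000.0)
    print(copilot_review(inv))
    record = agent_review(inv, trigger_workflow=lambda d, i: print(f"Workflow triggered: {d}"))
    print(record)
```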

This level of autonomy raises the bar for data quality. AI is nothing without access to the right data, yet in many enterprises that data remains trapped in fragmented silos.

When data is spread across different applications and infrastructure, particularly in organizations that have grown through mergers and acquisitions, quality is uneven and context is nonexistent.

Additionally, the rise of black-box retrieval, or naive retrieval-augmented generation (RAG), has created a trust deficit. For an executive, an agent’s “judgment” is a liability if it cannot be traced back to high-quality data through clear data lineage and governance.

Addressing the data sovereignty imperative

These technical challenges are compounded by a global tightening of data compliance and sovereignty requirements. Recent developments in Europe show that data sovereignty is moving from a theoretical talking point to a real-world response to economic and geopolitical upheaval.

This is not just an issue for companies in highly regulated industries or those doing sensitive work for governments. Tech leaders face a steady stream of government regulations spanning data residency, privacy, and security.

How can we begin to solve this data challenge?

In this complex data landscape, moving from experimentation to production takes more than a “bolt-on” compliance tool. Enterprises need a secure foundation, also known as an agentic substrate, where agents can access governed data, coordinate across workflows, and act within defined policies.

This substrate takes a federated, model-to-data approach. Rather than relying on massive data migrations, which often make data more vulnerable and create inefficient workflows, the data stays where it lives; the models and applications go to the data.
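As a rough illustration of querying data in place rather than migrating it, here is a minimal sketch using the open-source Trino Python client (the federated query engine Starburst is built on). The endpoint, catalogs, and table names are hypothetical, and a real deployment would add authentication, access controls, and governance policies.

```python
# Minimal sketch: one federated query that reads data where it already lives,
# joining a table in an operational PostgreSQL catalog with one in a data-lake catalog.
# Requires the open-source client: pip install trino
import trino

conn = trino.dbapi.connect(
    host="query-engine.example.internal",  # hypothetical federated query endpoint
    port=8080,
    user="agent-service",
)

cur = conn.cursor()
cur.execute("""
    SELECT c.region, SUM(o.amount) AS total_spend
    FROM postgres_crm.public.customers AS c   -- lives in an operational database
    JOIN lake.sales.orders AS o               -- lives in object storage
      ON o.customer_id = c.id
    GROUP BY c.region
""")

for region, total_spend in cur.fetchall():
    print(region, total_spend)
```

The point of the pattern is that a single governed query spans the operational database and the data lake without copying either dataset into a new platform first.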

All of this must be achieved while ensuring compliance, data security, and sovereignty. The human factor is critical here: the right people need to be able to make the right decisions at the right time, on both business and compliance grounds.

The human-in-the-loop factor

While the goal is autonomy, humans remain central to the agentic workforce. Technology leaders need to create an “AgentOps” team to track and manage agents’ performance, providing an audit trail for continuous improvement.

They also need to ensure non-specialists can use natural language to query complex datasets. This lowers the barrier to entry and adoption, allowing humans to make the key calls on compliance and sovereignty issues while the agents handle the “heavy lifting” of data wrangling.
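One way to picture this combination of AgentOps auditing and human-in-the-loop control is the hedged Python sketch below: an agent proposes an action, every step is written to an audit trail, and anything touching sensitive or sovereignty-constrained data waits for a human decision. All names and classifications here are hypothetical.

```python
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # in practice, an append-only, governed store the AgentOps team can query

def record_audit(event: dict) -> None:
    """Append an audit entry so every agent action can be traced and reviewed."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    AUDIT_LOG.append(event)

def run_agent_action(action: str, data_classification: str, approver=None) -> str:
    """Execute autonomously for low-risk data; escalate to a human for sensitive data."""
    proposal = {"action": action, "classification": data_classification, "status": "proposed"}
    record_audit(dict(proposal))

    if data_classification in {"restricted", "sovereign"}:
        # Human-in-the-loop: a person makes the compliance and sovereignty call.
        approved = approver(proposal) if approver else False
        proposal["status"] = "executed" if approved else "rejected"
    else:
        proposal["status"] = "executed"

    record_audit(dict(proposal))
    return proposal["status"]

if __name__ == "__main__":
    print(run_agent_action("summarize EU customer churn", "sovereign",
                           approver=lambda p: True))          # human approves this one
    print(run_agent_action("refresh public product catalog", "internal"))
    print(json.dumps(AUDIT_LOG, indent=2))                    # the audit trail itself
```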

This results in an AI and data strategy with sovereignty and compliance built in from the outset. It also means your systems are both secure and agile enough to accommodate further waves of AI innovation.

Conclusion

The forces behind the rush to agentic AI may appear overwhelming, but success depends on a foundation of trust. By building a data architecture that focuses on federated access and governed data products, enterprises will be able to scale their AI capabilities.

In an agentic age, sovereignty, compliance, and security cannot be compromised. It is time to stop moving the data and start bringing the models to it.
