    AI Agents

    What makes it hard to integrate AI agents into enterprise systems?

    Many companies are exploring AI agents right now. The idea sounds compelling – but in practice there is a significant gap between an impressive demo agent and a stable system in everyday business. This post explains where projects really fail.

    2026-05-16 · 8 min read

    The gap between demo and production use

    Many companies are currently exploring AI agents. The idea sounds powerful: a system that doesn't just provide answers, but independently completes tasks, brings together information from different systems, prepares decisions, and automates processes.

    In practice, however, it quickly becomes clear: there is a significant gap between an impressive demo agent and a stable agent in everyday business operations.

    An AI agent deployed internally needs access to data, applications, roles, permissions, process logic, and often personal information. It must work alongside existing systems, be reliable enough, remain controllable, and fit into users' working routines.

    This is where many projects fail – not because the AI models are too weak, but because the environment is not yet ready. The MIT/NANDA Report describes this gap very clearly: the problem lies not primarily in model quality, but in companies' ability to meaningfully integrate AI into processes, systems, and ways of working. Only a small fraction of integrated GenAI pilots generate measurable economic impact. Source: MIT / NANDA Report 2025

    What are AI agents, exactly?

    A classic chatbot answers questions. An AI agent goes a step further.

    It can pursue a goal, plan intermediate steps, use tools, retrieve data, trigger actions, and review results. An agent could, for example, analyse incoming emails, retrieve relevant customer data, draft a reply, check calendar availability, and create a follow-up task in the CRM.
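The loop behind such an agent can be sketched in a few lines. This is a minimal illustration, not a production framework: `call_llm` is a deterministic stub standing in for a real model call, and the two tools are invented placeholders for a CRM lookup and a task system.

```python
# Minimal agent loop: decide on a step, act with a tool, observe the result,
# repeat. Everything here is illustrative: call_llm is a stub standing in for
# a real model call, and the tools are placeholders, not a real CRM.

def fetch_customer(customer_id: str) -> dict:
    # Placeholder for a CRM lookup.
    return {"id": customer_id, "name": "Example GmbH"}

def create_task(title: str) -> str:
    # Placeholder for creating a follow-up task in a task system.
    return f"task created: {title}"

TOOLS = {"fetch_customer": fetch_customer, "create_task": create_task}

def call_llm(history: list) -> dict:
    # Stub: a real agent would ask a model which tool to call next.
    step = sum(1 for h in history if h.startswith("OBSERVATION"))
    if step == 0:
        return {"tool": "fetch_customer", "args": {"customer_id": "C-42"}}
    if step == 1:
        return {"tool": "create_task",
                "args": {"title": "Follow up with Example GmbH"}}
    return {"done": True}

def run_agent(goal: str, max_steps: int = 5) -> list:
    """Drive the loop: plan a step, execute the tool, feed the result back."""
    history = [f"GOAL: {goal}"]
    for _ in range(max_steps):
        decision = call_llm(history)
        if decision.get("done"):
            break
        result = TOOLS[decision["tool"]](**decision["args"])
        history.append(f"OBSERVATION: {result}")
    return history
```

The crucial point is visible even in this toy version: every step after the first is a side effect on some system, which is exactly why the integration questions in the rest of this post arise.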

    That sounds like automation with superpowers. But these very capabilities are what make integration demanding. Because the moment an agent doesn't just read but actually acts, new requirements arise around security, data quality, permissions, traceability, and process design.

    An AI agent is therefore not simply "ChatGPT with access to company data". It is more like a new layer between people, data, and operational systems.

    1. Existing systems are often not built for agents

    Many enterprise systems were built for human users. People click through screens, review intermediate results, interpret edge cases, and know from experience when a process isn't running cleanly.

    AI agents, by contrast, need clear interfaces. They require APIs, clean data models, stable permissions, and unambiguous process logic. In many companies, exactly that is not consistently available.

    In mid-market companies especially, the reality often looks like this: Microsoft 365, an ERP system, a CRM, network drives, SharePoint structures, specialist applications, Excel spreadsheets, and historically grown workarounds. Much of it works day-to-day, but not necessarily in a way that an agent can reliably build on.

    A US enterprise survey by Tray.ai illustrates how pressing this issue is even in larger organisations: 86 percent of enterprise technology professionals surveyed assumed their tech stack would need to be modernised to deploy AI agents. Source: Tray.ai Survey

    This is a critical point: AI agents rarely fail at the first prompt. They often fail at the question of whether they can cleanly access the real systems.

    2. Data quality is the underestimated bottleneck

    Agents are only as good as the information they work with.

    If documents are outdated, SharePoint structures are poorly maintained, processes exist only in the heads of individual employees, or important information is scattered across emails, an AI agent cannot build reliable automation from that.

    McKinsey describes data as the central barrier to agentic AI: many companies are experimenting with agents, but only a small fraction scale them to the point where measurable value is created. At the same time, eight in ten companies cite data limitations as a barrier to scaling AI agents. Source: McKinsey: Building the foundations for agentic AI at scale

    This matches practical experience. With knowledge bots, email assistants, or process agents, the challenge is rarely just the model. It's about questions like: Which source is authoritative? Who maintains the data? What happens with conflicting information? How does the agent recognise outdated documents? Which information can it use for which user group?
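Some of these questions can be encoded directly as retrieval rules. The sketch below is a hypothetical filter, assuming documents carry metadata about source, last update, and audience; the field names, sources, and freshness threshold are all illustrative.

```python
from datetime import date, timedelta

# Illustrative document metadata; the schema is an assumption, not a standard.
DOCS = [
    {"id": "d1", "source": "sharepoint/policies",
     "updated": date(2025, 11, 1), "groups": {"all"}},
    {"id": "d2", "source": "email-archive",
     "updated": date(2021, 3, 4), "groups": {"all"}},
    {"id": "d3", "source": "sharepoint/hr",
     "updated": date(2025, 9, 10), "groups": {"hr"}},
]

# Agreed single sources of truth and a maximum document age.
AUTHORITATIVE = {"sharepoint/policies", "sharepoint/hr"}
MAX_AGE = timedelta(days=365)

def usable_docs(user_groups: set, today: date) -> list:
    """Keep only documents the agent may cite for this user:
    authoritative source, fresh enough, and visible to the user's groups."""
    return [
        d for d in DOCS
        if d["source"] in AUTHORITATIVE
        and today - d["updated"] <= MAX_AGE
        and (d["groups"] & user_groups or "all" in d["groups"])
    ]
```

The point of such a filter is not sophistication but explicitness: who decided which sources are authoritative, and how old is too old, are organisational decisions that the code merely records.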

    Without these foundations, a trust problem emerges quickly. The agent may give a good answer ten times in a row. But if the eleventh answer is based on an outdated document, users start to doubt the entire system.

    3. Business processes are rarely agent-ready

    Many processes are not cleanly documented. They work because experienced employees know what is meant.

    An example: a request comes in. Officially there is a process. In reality, someone first looks through old emails, briefly asks a colleague, checks an Excel spreadsheet, makes an exception for certain customers, and then manually enters something into the system.

    For people, that is everyday life. For an agent, it is chaos.

    AI agents do not necessarily need perfect processes. But they do need a clear first use case. The mistake in many projects is trying to automate too large a process from the start.

    It is better to choose a narrow slice, define clear inputs, set a measurable outcome, and decide deliberately where a human still reviews.

    Not: "The agent should automate our customer service." But rather: "The agent should identify recurring technical requests from emails, suggest relevant internal knowledge sources, and draft a reply that is reviewed before sending."
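Such a narrow slice can be expressed as a small, reviewable pipeline. The sketch below is illustrative: the topic keywords and knowledge-base paths are invented, and every draft is flagged for human review rather than sent automatically.

```python
# A narrowly scoped first agent: classify a request, attach knowledge sources,
# draft a reply, and always route the result through human review before
# anything is sent. All topic names and paths here are illustrative.

KNOWN_TOPICS = {
    "password reset": ["kb/it/password-reset.md"],
    "vpn": ["kb/it/vpn-setup.md"],
}

def triage(email_text: str) -> dict:
    topic = next((t for t in KNOWN_TOPICS if t in email_text.lower()), None)
    if topic is None:
        # Out of scope: hand off to a human instead of guessing.
        return {"status": "escalate", "draft": None, "sources": []}
    return {
        "status": "draft_ready",
        "sources": KNOWN_TOPICS[topic],
        "draft": f"Suggested reply about '{topic}' "
                 f"(see {KNOWN_TOPICS[topic][0]})",
        "needs_review": True,  # a person approves before sending
    }
```

Note the deliberate escape hatch: anything outside the defined slice is escalated, not improvised. That is what keeps the first version controllable and measurable.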

    4. Security and permissions become more complex

    A person has a user account. They are allowed to see certain files, open certain applications, and perform certain actions.

    An AI agent also needs access. But this is where it gets more complicated. Because the agent acts on behalf of a person, a team, or a process. Companies therefore need to clarify: What identity does the agent have? Does it access systems with user permissions or its own? Can it only read, or also write? Is it allowed to send emails, close tickets, or modify master data? How is what the agent has done logged? What happens if the agent performs the wrong action?
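One way to make these questions concrete is an explicit, declared scope per agent. The sketch below assumes a simple string-based scope scheme; the scope names are illustrative, not taken from any specific identity platform.

```python
# Illustrative per-agent permission scopes. The scheme is an assumption:
# each agent identity gets an explicit set of allowed actions, and every
# tool call is checked against it before anything executes.

AGENT_SCOPES = {
    "email-triage-agent": {"mail:read", "kb:read", "crm:read"},  # read-only
}

def authorize(agent: str, action: str) -> None:
    """Refuse any tool call outside the agent's granted scopes."""
    if action not in AGENT_SCOPES.get(agent, set()):
        raise PermissionError(f"{agent} may not perform {action}")
```

Whether this check lives in code, in an API gateway, or in the identity platform matters less than that it exists at all and that the answer to "what may this agent do?" is written down somewhere auditable.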

    Gartner expects a significant share of agentic AI projects to be discontinued by end of 2027, partly due to unclear business value, rising costs, and insufficient risk controls. This figure is a forecast, not an observed outcome – but it illustrates how important a controlled and economically sound starting point is. Source: Gartner: Agentic AI projects prediction

    With agents especially, security is not just an IT topic. It is part of the product design. An agent should not have more permissions than it truly needs. Particularly critical actions should initially require human approval.

    5. Prompt injection and tool access change the risk profile

    A normal chatbot can give wrong answers. An agent can take wrong actions.

    That is a significant difference.

    When an agent reads external content – such as emails, web pages, PDFs, or tickets – it can be manipulated through so-called prompt injection attacks. These involve instructions embedded in a document or message designed to cause the agent to deviate from its actual task.

    The OWASP Top 10 for LLM Applications shows that risks such as prompt injection, insecure output handling, and sensitive information disclosure are not merely theoretical. With agents that have tool access, these risks become especially critical – because a wrong answer can turn into a wrong action. Source: OWASP Top 10 for LLM Applications

    For agents with tool access, this becomes a practical problem. A manipulated document could attempt to cause the agent to expose confidential information, summarise data incorrectly, or trigger unintended actions.

    This is why it is not enough to "prompt the agent well". Companies need technical safeguards: permission restrictions, approval steps, logging, monitoring, secure tool interfaces, and clear limits on autonomous actions.
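In practice these safeguards tend to converge on a single tool gateway: one choke point that enforces an allowlist, writes an audit log, and holds critical actions for human approval. The sketch below is a minimal illustration with invented tool names.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

# Illustrative gateway: every tool call passes through one choke point.
# Tool names and categories here are assumptions for the sketch.
READ_ONLY = {"search_kb", "fetch_ticket"}
NEEDS_APPROVAL = {"send_email", "close_ticket"}

pending_approvals = []  # a human releases these out of band

def gateway(tool: str, args: dict, tools: dict) -> dict:
    log.info("tool call: %s %s", tool, args)  # audit trail for every call
    if tool in NEEDS_APPROVAL:
        pending_approvals.append((tool, args))
        return {"status": "held_for_approval"}
    if tool not in READ_ONLY:
        raise PermissionError(f"tool not on allowlist: {tool}")
    return {"status": "ok", "result": tools[tool](**args)}
```

The design choice worth noting: even if a prompt injection convinces the model to attempt `send_email`, the gateway, not the model, decides whether the action happens.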

    6. Regulation makes implementation more demanding

    In Europe, the EU AI Act plays an additional important role. It entered into force on 1 August 2024 and is being phased in gradually. Initial obligations around AI literacy and bans on certain AI practices have applied since February 2025. Requirements for general-purpose AI models and governance rules came into effect in August 2025. Many central provisions become relevant from August 2026, while individual transition periods and the full roll-out extend into 2027. Source: European Commission: AI Act | EU AI Office: Timeline

    For companies, this means: AI agents must not only work technically. They also need to be properly classified under the law.

    This becomes particularly relevant when agents process personal data, prepare decisions, work with employee or customer data, or are deployed in regulated sectors.

    This is especially relevant for German companies: according to Bitkom, legal hurdles and uncertainty as well as a lack of technical expertise each rank among the biggest barriers to AI adoption, cited by 53 percent of respondents. A shortage of personnel follows at 51 percent. Source: Bitkom Research: Künstliche Intelligenz 2025

    This does not mean companies should wait until everything is perfect. But neither should they plan AI projects in a way that bypasses data protection, IT security, and compliance entirely. These topics belong at the table from the very beginning.

    7. Employees need to trust the agent

    An agent can work flawlessly from a technical standpoint and still fail.

    Why? Because nobody uses it.

    This often happens when employees don't understand what the agent can do, where its limits lie, and why it should take work off their plate. Or when they feel a system is being introduced over their heads.

    With AI agents especially, trust is critical. Users need to know: What does the agent do automatically? What is only suggested? What data does it use? Where can I correct it? Who is responsible if something goes wrong?

    When these questions remain unanswered, resistance builds quickly. Sometimes openly, sometimes quietly. The agent gets formally rolled out but is bypassed in daily practice.

    This is why AI adoption is not a purely technical task. It is also a leadership task, a communication task, and a process task.

    8. Many projects start too big

    A typical pattern: the company sees an impressive demo and immediately thinks in very large terms.

    The agent should improve customer service, sales, procurement, internal knowledge search, and reporting – all at once. Ideally with access to all systems. And of course secure, GDPR-compliant, explainable, and productive from day one.

    That is understandable, but dangerous.

    The larger the scope, the more data sources, permissions, edge cases, process variants, and risks are involved. This not only increases development effort. It also increases the likelihood that the project gets bogged down in alignment discussions, approvals, and debates about first principles.

    It is better to define a tightly scoped MVP. A good first agent should solve a real task, but not immediately rebuild the entire organisation. It should be tested early with real users, but with limited risk. And it should be measurable: Does it save time? Does it reduce follow-up questions? Does it improve answer quality? Does it speed up a specific process?

    9. Build or buy is not a matter of principle

    Many companies are asking themselves: should we build in-house or buy a ready-made solution?

    The honest answer is: it depends.

    Standardised requirements can often be addressed well with existing tools – for example Microsoft 365, Copilot extensions, Power Platform, specialised SaaS solutions, or established automation platforms.

    Individual processes, however, frequently require custom integration. Especially when internal data sources, special role models, proprietary workflows, or specific compliance requirements are involved.

    The key is not to build everything yourself out of principle. Equally, don't blindly buy a tool that ultimately doesn't fit your existing system landscape.

    The best approach is often hybrid: leverage existing infrastructure, use standard components, and only build custom where differentiation or process proximity is truly decisive. For many mid-market companies, Microsoft 365 is a solid starting point. Teams, Outlook, and SharePoint are often already in place – and that is precisely where much of the information and workflows reside that can be meaningfully supported with AI.

    What successful companies do differently

    Companies that successfully integrate AI agents usually don't start with the question: "What agent can we build?"

    They start with: "Which concrete process is important enough, clear enough, and data-mature enough that automation is worthwhile?"

    Then they scope the use case small enough for an MVP. They clarify data access, permissions, and responsibilities early. They build in control mechanisms. They test with real users. And they expand the agent only once the first version is generating stable value.

    That sounds less exciting than the hype. But that is exactly where the difference lies.

    AI agents are not a plug-and-play miracle. They are a new automation layer within the organisation. For that layer to work, data, processes, systems, security, and people need to fit together.

    Conclusion

    AI agents can be enormously helpful for companies. But only when they are not treated as an isolated AI toy.

    The real work is not in the sleek chat window. It lies beneath: in the interfaces, data sources, permissions, processes, security rules, and in user trust.

    Those who take these foundations seriously have a good chance of making an agent into more than a demo. Those who skip them quickly end up in the next pilot project – technically interesting, but with little real-world impact.

    Key takeaways

    • AI agents rarely fail because the AI is not intelligent enough – the biggest obstacles lie in integration, data quality, permissions, governance, and adoption
    • Existing enterprise systems are often not built for autonomous agents – APIs, data structures, and process logic need to be clean enough to support them
    • Data quality is critical: outdated documents, scattered knowledge, and unclear responsibilities quickly lead to poor results and loss of trust
    • Security must be built in from the start – clear permissions, audit logging, approval steps, and protection against risks like prompt injection
    • Regulation is not an afterthought – EU AI Act, GDPR, and internal compliance should be considered from the very first MVP
    • The best entry point is a tightly scoped MVP: one concrete process, one measurable goal, and a controlled rollout with real users

    Want to move your AI initiative from pilot to production?

    Book a free introductory call