1. A vision that is too technology-driven and not focused enough on real use cases
In many organizations, everything starts with the technology. A tool, platform, or chatbot is selected first, and only afterward do teams begin thinking about customer journeys, friction points, or business priorities. The order is simply reversed.
The project is then treated as a technical topic, almost like a traditional IT deployment. A solution is installed, configured, and launched. But conversational AI is not just software. It represents a change in how companies respond to customers.
Very quickly, a gap can appear. The selected use cases are not always the right ones, the scenarios do not necessarily reflect the most common requests, and the tool may seem complex or of limited value. On the team side, adoption becomes challenging.
The project is also often managed in silos. It may be led by IT or innovation teams, sometimes with digital departments involved, but without real co-creation with customer service, marketing, or operational teams. Each group moves forward with its own perspective, and in the end the AI agent does not always fully reflect real conversations or business priorities.
In these conditions, even a technically solid solution can miss its initial objective. Teams may start to perceive it as just another tool to integrate into their daily work, while customers encounter journeys that remain unfinished and do not fully address their needs.
A high-performing AI agent always starts from the field. It builds on what customers actually experience, the everyday friction points, and the work carried out by advisors. Technology then supports real needs rather than defining them.
For this to truly work, the project must first be approached as a business initiative, not simply as a technical one. It starts with listening to the teams who interact with customers, analyzing the most frequent requests, and identifying the moments when an AI agent can genuinely simplify the journey.
It is also essential to bring together all relevant stakeholders from the start. Customer service, marketing, digital, IT, and operational teams should co-design the use cases. This approach avoids siloed decisions and helps create an agent that addresses real needs.
A strong project then progresses step by step. Teams test on a defined scope, observe how the solution is used, adjust it, and gradually expand. This approach helps secure the deployment, involve teams, and continuously improve the experience.
Finally, success often depends on clearly explaining what the AI agent is for, what it changes in practice, and how it supports teams in their daily work. When the AI agent is perceived as support rather than a constraint, adoption becomes much more natural.
2. The “big bang” deployment trap
Trying to automate everything at once is a common temptation. On paper, it creates the impression of moving quickly and rapidly transforming customer service. In practice, this type of deployment often leads to costly adjustments and slows adoption rather than accelerating it.
When everything is launched at the same time, it becomes difficult to identify what is actually working and what needs to be adjusted. Teams are faced with a tool they have not yet had time to fully adopt, while users encounter journeys that are still being refined. As a result, trust may take longer to develop.
The projects that perform best take a different path. They move forward progressively, starting with a clear and useful scope, such as handling simple requests or frequently asked questions. This allows teams to test in real conditions, observe how the solution is used, and identify areas for improvement.
The AI agent is then refined over time based on feedback. Features evolve, scenarios become more precise, and the experience becomes smoother before being extended to additional use cases. This approach reduces risks, secures the deployment, and encourages stronger team adoption.
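To make this progressive approach concrete, it can be implemented as simple intent-scoped routing: the AI agent only handles intents that have been explicitly enabled, and everything else falls back to a human advisor. The scope is widened one intent at a time, once feedback confirms the agent handles it reliably. This is a minimal illustrative sketch, not a real product API; all names (`route`, `expand_scope`, the intent labels) are hypothetical.

```python
# Minimal sketch of an intent-scoped rollout. The AI agent starts with a
# small, well-defined scope; every other request goes to a human advisor.
# All identifiers here are illustrative assumptions, not a vendor API.

ENABLED_INTENTS = {"order_status", "opening_hours"}  # initial scope


def route(intent: str) -> str:
    """Return which channel should handle a request with this intent."""
    if intent in ENABLED_INTENTS:
        return "ai_agent"
    return "human_advisor"


def expand_scope(intent: str) -> None:
    """Enable a new intent once observation shows it is handled reliably."""
    ENABLED_INTENTS.add(intent)
```

Starting with two or three high-volume, low-risk intents keeps the first deployment observable; each call to `expand_scope` is a deliberate, feedback-driven decision rather than a big-bang launch.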
3. The absence of a structured project brief
In many projects, things move quickly. A solution is selected, discussions begin with vendors, and potential use cases are imagined, but without clearly documenting what the company actually needs. This is often where difficulties begin.
Without a clear framework, decisions are made along the way. Features accumulate, priorities shift, and the budget can quickly exceed the initial estimates. As a result, the AI agent may not fully meet expectations simply because the needs were not clearly defined from the start.
In the market, many companies offer AI agent solutions with fairly similar capabilities in terms of features, performance, and customization. Without clear reference points, it becomes more difficult to compare them and identify the most suitable partner.
A well-structured project brief helps align stakeholders around the same objectives and avoid misunderstandings between business teams, IT, and management. It also makes it easier to evaluate solutions using concrete and comparable criteria, beyond marketing promises.
This stage is also when certain constraints must be anticipated, such as integration with existing information systems, data management, regulatory compliance, and data security in line with GDPR and the AI Act. The earlier these aspects are addressed, the smoother the project can progress.