
Why do some customer service AI agent projects succeed where others fall short?

Updated on 12/03/2026

Over the past few years, artificial intelligence has been profoundly transforming customer service. Chatbots, voicebots, AI agents, automated responses, and conversation analytics are now widely deployed as companies invest heavily to improve user experience and optimize operational costs. Yet a large number of AI agent projects in customer service still struggle to deliver the expected results: many fail to produce visible impact or clearly improve customer satisfaction. In most cases, the issue is not the technology itself. These projects require time, continuous adjustment, and strong coordination between teams. So how can companies improve the effectiveness of deploying AI agents in customer service?

Why are companies launching AI agent projects in customer service today?

Before understanding why some projects fail to reach their objectives, it is important to recall what is currently driving organizations to explore them.

Customer service must now be immediate, seamless, and always accessible, as customers expect fast, personalized, and consistent responses regardless of the channel used (web, phone, messaging, or apps). At the same time, teams must handle growing volumes of requests while maintaining a high level of service quality.

This is why many companies have turned to AI to help simplify customer interactions and save time for their teams on a daily basis.

AI agents can respond instantly to simple requests and reduce waiting times. They direct users more quickly and automate information retrieval for common questions, complaint tracking, appointment scheduling, or contract management. They also allow customer advisors to focus on more complex situations while ensuring continuity between digital and human channels.

The objective is not to replace human advisors, but to make access to information smoother and better organized. AI therefore fits into an omnichannel approach, supporting contact centers and the tools already deployed within the organization.

When properly designed, AI agent integration projects can lead to higher customer satisfaction, reduced handling times, lower request volumes managed by teams, and improved reachability. The return on investment (ROI) is based on optimizing key KPIs such as first contact resolution, operational cost savings, and stronger customer retention. According to the DialOnce & Kiamo Barometer (2023–2025), the customer satisfaction rate when interacting with an AI agent increased from 29% to 31% between 2023 and 2025, showing a gradual improvement in user perception.

Deploying an AI agent also helps companies prepare for the future. Technologies evolve quickly, as do customer expectations. These tools can progressively improve, learn from interactions, and support digital transformation over the long term.

This is why many companies are launching such initiatives. It also explains why success depends less on the technology itself than on how the project is designed from the very beginning.

Why do some customer service AI agent projects succeed where others fall short?

1. A vision that is too technology-driven and not focused enough on real use cases

In many organizations, everything starts with the technology. A tool, platform, or chatbot is selected first, and only afterward do teams begin thinking about customer journeys, friction points, or business priorities. The order is simply reversed.

The project is then treated as a technical topic, almost like a traditional IT deployment. A solution is installed, configured, and launched. But conversational AI is not just software. It represents a change in how companies respond to customers.

Very quickly, a gap can appear. The selected use cases are not always the right ones, the scenarios do not necessarily reflect the most common requests, and the tool may seem complex or of limited value. On the team side, adoption becomes challenging.

The project is also often managed in silos. It may be led by IT or innovation teams, sometimes with digital departments involved, but without real co-creation with customer service, marketing, or operational teams. Each group moves forward with its own perspective, and in the end the AI agent does not always fully reflect real conversations or business priorities.

In these conditions, even a technically solid solution can miss its initial objective. Teams may start to perceive it as just another tool to integrate into their daily work. Meanwhile, customers may encounter journeys that still need improvement and that do not always fully address their needs.

A high-performing AI agent always starts from the field. It builds on what customers actually experience, the everyday friction points, and the work carried out by advisors. Technology then supports real needs rather than defining them.

For this to truly work, the project must first be approached as a business initiative, not simply as a technical one. It starts with listening to the teams who interact with customers, analyzing the most frequent requests, and identifying the moments when an AI agent can genuinely simplify the journey.

It is also essential to bring together all relevant stakeholders from the start. Customer service, marketing, digital, IT, and operational teams should co-design the use cases. This approach avoids siloed decisions and helps create an agent that addresses real needs.

A strong project then progresses step by step. Teams test on a defined scope, observe how the solution is used, adjust it, and gradually expand. This approach helps secure the deployment, involve teams, and continuously improve the experience.

Finally, success often depends on clearly explaining what the AI agent is for, what it changes in practice, and how it supports teams in their daily work. When the AI agent is perceived as support rather than a constraint, adoption becomes much more natural.


2. The “big bang” deployment trap

Trying to automate everything at once is a common temptation. On paper, it creates the impression of moving quickly and rapidly transforming customer service. In practice, this type of deployment can lead to costly adjustments and make adoption more gradual than expected.

When everything is launched at the same time, it becomes difficult to identify what is actually working and what needs to be adjusted. Teams are faced with a tool they have not yet had time to fully adopt, while users encounter journeys that are still being refined. As a result, trust may take longer to develop.

The projects that perform best take a different path. They move forward progressively, starting with a clear and useful scope, such as handling simple requests or frequently asked questions. This allows teams to test in real conditions, observe how the solution is used, and identify areas for improvement.

The AI agent is then refined over time based on feedback. Features evolve, scenarios become more precise, and the experience becomes smoother before being extended to additional use cases. This approach reduces risks, secures the deployment, and encourages stronger team adoption.


3. The absence of a structured project brief

In many projects, things move quickly. A solution is selected, discussions begin with vendors, and potential use cases are imagined, but without clearly documenting what the company actually needs. This is often where difficulties begin.

Without a clear framework, decisions are made along the way. Features accumulate, priorities shift, and the budget can quickly exceed the initial estimates. As a result, the AI agent may not fully meet expectations simply because the needs were not clearly defined from the start.

In the market, many companies offer AI agent solutions with fairly similar capabilities in terms of features, performance, and customization. Without clear reference points, it becomes more difficult to compare them and identify the most suitable partner.

A well-structured project brief helps align stakeholders around the same objectives and avoid misunderstandings between business teams, IT, and management. It also makes it easier to evaluate solutions using concrete and comparable criteria, beyond marketing promises.

This stage is also when certain constraints must be anticipated, such as integration with existing information systems, data management, regulatory compliance, and data security in line with GDPR and the AI Act. The earlier these aspects are addressed, the smoother the project can progress.

A project brief is not just an administrative document. It is a steering tool that helps keep the project on track, supports consistent decision-making, and prevents the initiative from losing focus along the way.

This is why we created a guide to support companies step by step in drafting a project brief tailored to an AI agent. It provides a simple, customizable tool that can be applied to all types of projects to maximize the chances of success.

This white paper is based on insights, feedback, and interviews conducted by the teams at Sia, DialOnce, and their partners.

DialOnce x Sia white paper on how to draft an RFP for a customer service AI agent project

4. Poor anticipation of costs and ROI

The financial dimension is often addressed too late in AI projects. The focus tends to be on launching the initiative, selecting the solution to deploy, and defining use cases, while underestimating what the project will actually cost over time.

The initial development is only part of the expenses. Once the agent is in production, recurring costs take over. Maintenance, updates, supervision, data management, and usage-based pricing for AI models can quickly increase spending if they are not anticipated.

Without a clear vision from the start, it becomes harder to control the budget and fully demonstrate the results of the project. Not because the project fails to create value, but because that value was not clearly defined and measured from the outset.

Many organizations launch an agent without precisely defining what they expect in return. Performance indicators are introduced later, often too late to effectively steer the project. Yet a few simple metrics can help assess the real impact. The ability to resolve a request at the first contact, the share of interactions that still require a human advisor, and the evolution of customer satisfaction quickly provide a clear view of performance.
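To make this concrete, here is a minimal sketch of how those three indicators could be computed from an interaction log. The log format and field names are illustrative assumptions, not a DialOnce data model:

    from dataclasses import dataclass

    @dataclass
    class Interaction:
        """One customer request handled by the AI agent (hypothetical log format)."""
        resolved_by_ai: bool       # fully resolved without a human advisor
        escalated_to_human: bool   # a human advisor had to take over
        csat_score: int | None     # post-interaction rating (1-5), if the user gave one

    def first_contact_resolution(log: list[Interaction]) -> float:
        """Share of requests resolved autonomously at the first contact."""
        return sum(i.resolved_by_ai for i in log) / len(log)

    def human_escalation_rate(log: list[Interaction]) -> float:
        """Share of interactions that still required a human advisor."""
        return sum(i.escalated_to_human for i in log) / len(log)

    def satisfaction_rate(log: list[Interaction]) -> float:
        """Share of rated interactions scored 4 or 5 out of 5."""
        rated = [i for i in log if i.csat_score is not None]
        return sum(i.csat_score >= 4 for i in rated) / len(rated)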

When these elements are anticipated from the beginning, the project becomes more robust. It becomes measurable, easier to manage, and easier to justify internally. AI is no longer perceived as an additional cost, but as an investment whose impact can be clearly tracked.

DialOnce’s AI agent natively integrates a measurement tool that allows teams to monitor performance over time with precision. For the clients we support, these indicators make it possible to quickly put objective figures on results and anchor the project in a continuous improvement approach, with a clear view of the value created and the optimization priorities to focus on. In particular, we have observed that, on average, deploying an AI agent makes it possible to achieve:

  • Customer satisfaction: +25% (users satisfied or very satisfied with the automated journeys)
  • Contacts avoided: 30% (rate of contacts avoided through automation)
  • Average handle time: -20% (thanks to automated responses and fast routing of requests)
  • Improved reachability: +30 pts (rate of calls handled, via phone or digital, out of total incoming calls)

Concrete results and satisfied customers

"To facilitate the work of our advisors, we chose to offer DialOnce's conversational agent supported by artificial intelligence. This agent was particularly in demand during the Paris 2024 Olympic Games. It sorts incoming requests, and responds to a maximum number of travelers in French and English, reducing contacts by up to 30%."

Gaëtan Bultez

Customer Service Director, RATP

5. A break in the omnichannel experience

An AI agent cannot operate as an isolated tool. When it does not communicate with the rest of the ecosystem such as the CRM, ERP, or request management tools, it quickly creates breaks in the customer journey. Users have to repeat their issue, switch channels without continuity, and almost start from scratch. The experience then becomes frustrating.

In practice, this type of disruption can quickly hinder adoption. Customers do not distinguish between the AI, the website, the contact center, or the application; they simply expect continuity between channels and the ability to move their request forward regardless of the entry point.

A truly effective project considers omnichannel integration from the outset. The AI agent can then integrate with existing tools, retrieve the context of previous interactions, and transfer it when a human advisor takes over. The transition becomes natural, without loss of information or unnecessary repetition.

When this continuity is well designed, AI does not replace the human relationship. It extends it. It filters, prepares, and guides interactions, then hands over at the right moment. The advisor arrives with a clear view of the situation, and the customer experiences a smooth journey from start to finish.
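One way to picture this continuity is the context payload transferred at handover. The schema below is a sketch with assumed field names, not DialOnce's actual format; the point is that the detected intent, the transcript, and the data already collected travel with the request, so the customer never starts from scratch:

    from dataclasses import dataclass

    @dataclass
    class HandoffContext:
        """Context transferred to the advisor at handover (illustrative schema only)."""
        customer_id: str
        channel: str                    # entry point: "web", "phone", "messaging", ...
        intent: str                     # request category detected by the agent
        transcript: list[str]           # conversation so far, so nothing is repeated
        collected_data: dict[str, str]  # fields already gathered (contract number, ...)
        handoff_reason: str             # why the agent escalated to a human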


6. Insufficient or poorly structured data

Data quality is essential for the success of an artificial intelligence project. An AI agent may be technically powerful yet still deliver average responses simply because it relies on information that is unreliable or outdated.

In many organizations, knowledge bases remain incomplete, content evolves regularly without always being updated in a structured way, and conversation histories are still underused. User intentions can remain unclear and use cases poorly documented. As a result, the agent learns from inaccurate or incomplete data.

The impact quickly becomes visible for customers. Responses may become generic, sometimes imprecise, and consistency across interactions can decrease. Users may feel that the AI does not truly understand their request or that it repeats information that is not particularly helpful.

To avoid this, data management must be treated as a core topic rather than a secondary one. It begins with reviewing existing content, identifying what is genuinely useful for teams, and clarifying the information the AI needs to respond accurately.

Projects also become more effective when data is maintained over time. Content evolves, products change, and customer journeys are updated. Without regular updates, even the best AI will eventually fall behind. Conversely, a well-maintained knowledge base gradually improves the relevance of responses and strengthens trust among both teams and users.
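Even a simple freshness check can support this maintenance discipline. The sketch below, under the assumption that each knowledge base article carries a last-review date, flags content that has gone too long without review:

    from datetime import date, timedelta

    def stale_articles(last_reviewed: dict[str, date],
                       max_age_days: int = 180) -> list[str]:
        """Flag knowledge base articles whose last review is older than the window.
        The mapping of article titles to review dates is an assumed structure."""
        cutoff = date.today() - timedelta(days=max_age_days)
        return [title for title, reviewed in last_reviewed.items() if reviewed < cutoff]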

AI performance rarely depends solely on the technology itself. It depends above all on the quality of the data it learns from and on how the organization maintains and manages that knowledge over time.

This is also why the concept of trusted AI is increasingly discussed. Human supervision remains present, decisions become explainable, and performance is monitored over time using concrete indicators. The AI no longer operates alone; it is observed, evaluated, and continuously improved.

This approach is notably supported by structured evaluation mechanisms such as the LLM-as-a-judge pattern, in which an AI system is used to analyze the quality of generated responses, assess their relevance, detect inconsistencies, and identify situations that require improvement. This systematic evaluation helps maintain a consistent level of quality without relying solely on occasional analyses or field feedback.
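As a rough sketch of the pattern, the snippet below asks a separate model to grade each generated answer; call_llm() is a placeholder rather than a named API, and the prompt and scoring scheme are invented for illustration:

    import json

    JUDGE_PROMPT = (
        "You are evaluating a customer service AI response.\n"
        "Question: {question}\n"
        "Knowledge base excerpt: {source}\n"
        "Response given: {answer}\n"
        "Rate relevance and factual consistency from 1 to 5, flag any issue, and "
        'reply as JSON: {{"relevance": 0, "consistency": 0, "issue": null}}'
    )

    def call_llm(prompt: str) -> str:
        """Placeholder for whichever model client the platform actually uses."""
        raise NotImplementedError

    def judge_response(question: str, source: str, answer: str) -> dict:
        """Have a separate model grade a generated answer (LLM-as-a-judge)."""
        verdict = call_llm(JUDGE_PROMPT.format(
            question=question, source=source, answer=answer))
        return json.loads(verdict)

    # Answers whose scores fall below a threshold can then be queued for human
    # review rather than waiting for occasional spot checks or field feedback.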

Teams therefore retain control over the system. They can understand how the AI responds, intervene when necessary, refine scenarios, and prioritize improvements. Performance no longer depends only on the project launch, but on its ability to continuously evolve over time.

How can you increase the chances of success for a customer service AI project?

While failed AI projects are often in the spotlight, real-world experience also shows the opposite: when a project is properly supported, structured, and managed over time, the results follow and the initiative reaches its objectives.


Long-term support

Beyond the best practices mentioned earlier, one of the key success factors is the quality of ongoing support. Organizations that succeed do not deploy their AI agents alone. They rely on a partner that plays an active role over time, with operational follow-up, regular checkpoints, and a co-creation approach. This is the case with DialOnce, which designs sovereign chatbot and AI agent solutions and builds on more than ten years of expertise in customer service. Work is carried out hand in hand with teams, following the rhythm of real usage and business priorities.

Deploying an AI agent does not end with the production launch. A large part of the value is created after deployment, when teams begin observing real usage. In the most mature projects, this support relies on a dedicated Customer Success Management (CSM) approach. The Customer Success Manager acts as a long-term contact who follows the project over time. Their role is to help teams understand how the agent is used, analyze available data, and identify optimization opportunities. Their objective is to support organizations throughout the entire lifecycle of the project in order to facilitate adoption and maximize the value created.

This support is notably reflected through regular steering committees, often referred to as “steerco” meetings. These are follow-up sessions organized between business teams, technical teams, and project leaders.

During these steering committees, several topics are typically discussed:

  • analysis of the AI agent’s real usage
  • monitoring of performance indicators
  • identification of friction points within customer journeys
  • prioritization of improvements to be implemented

These discussions help maintain a clear view of the project’s performance and prevent the AI agent from remaining static after its launch. Instead, the system gradually evolves based on usage, team feedback, and business needs.


Tools to manage and improve the AI agent

To improve an AI agent over time, teams also need tools that allow them to understand what is actually happening across customer journeys.

A management console generally plays this role as a supervision tool. It allows teams to monitor scenarios, analyze interactions, and adjust customer journeys based on observed usage.

Teams can, for example, modify a scenario, test an update in a secure environment, analyze usage data, or publish adjustments into production.

This approach makes it possible to evolve journeys in a progressive and controlled way, without disrupting the customer experience.

Some solutions also include a variable manager. This feature allows the experience to be personalized based on contextual data such as language, brand, channel used, or other journey-specific information. It enables the AI agent’s responses to adapt in real time depending on the user’s situation. The experience therefore becomes more consistent, smoother, and more relevant across all touchpoints.
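A minimal sketch of the idea, with invented brands and values: the journey itself stays the same, while a lookup resolves the variables that shape the wording for each context:

    # Purely illustrative variable sets: the same journey adapts its wording to
    # the user's language, brand, and channel through resolved variables.
    CONTEXT_VARIABLES = {
        ("fr", "brand_a", "web"): {"greeting": "Bonjour, comment puis-je vous aider ?"},
        ("en", "brand_a", "web"): {"greeting": "Hello, how can I help you?"},
        ("en", "brand_b", "phone"): {"greeting": "Welcome to Brand B support."},
    }
    DEFAULT_VARIABLES = {"greeting": "Hello, how can I help you?"}

    def resolve_variables(language: str, brand: str, channel: str) -> dict:
        """Return the variable set matching the user's context, with a safe fallback."""
        return CONTEXT_VARIABLES.get((language, brand, channel), DEFAULT_VARIABLES)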

The console also provides access to detailed reports that help track the performance of customer journeys. Teams can analyze the share of requests resolved autonomously, the situations that still require advisor intervention, the most frequently used journeys, and the friction points within the customer experience.

These insights make it possible to quickly identify priority optimizations and progressively improve the overall performance of the system.


Monitoring AI with dedicated indicators

Another key success factor lies in the ability to monitor how the AI operates.

Some platforms therefore provide trusted AI indicators. These indicators make it possible to track the quality of the responses produced by the AI agent while ensuring transparency and security in interactions.

They offer visibility across several important dimensions of the project:

  • the autonomous resolution rate of requests
  • the evolution of customer satisfaction
  • situations that require human intervention
  • potential discrepancies in the responses produced

Some platforms also integrate system self-feedback mechanisms. The AI can analyze its own responses and flag situations where the quality of interactions could be improved.

These insights allow teams to better understand how the AI behaves in real-life situations and to quickly identify friction points within customer journeys.

Quality control features can also be implemented to ensure that AI responses remain aligned with user expectations, business rules, and the company’s compliance requirements.
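As an illustration of what such controls can look like, the sketch below applies two invented rule-based checks, a compliance term list and a channel length limit, before a response is released; real rules would come from the company's own business and compliance requirements:

    def check_response(answer: str, forbidden_terms: set[str],
                       max_length: int = 800) -> list[str]:
        """Run rule-based checks on a generated answer before it is released.
        Returns the violated rules; an empty list means the answer passes."""
        issues = []
        lowered = answer.lower()
        for term in forbidden_terms:
            if term in lowered:
                issues.append(f"forbidden term: {term}")
        if len(answer) > max_length:
            issues.append("answer exceeds the channel's length limit")
        return issues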

Teams therefore have access to detailed reports that provide full visibility into how the agent operates: journey performance, response quality, and situations that require adjustments.

The AI agent is therefore not left to operate on its own. It is observed, measured, and continuously adjusted, which gradually improves its relevance, secures interactions, and maintains a consistent level of quality in the customer experience.

AI agent at 1001 Vies Habitat orchestrating contact journeys to improve tenant relations.

Customer story - 1001 Vies Habitat

1001 Vies Habitat relied on DialOnce’s support to structure its contact journeys through an AI agent specifically designed for social housing providers, built to go beyond a simple chatbot. The project was developed in collaboration with the teams to identify the most frequent requests and design journeys tailored to tenants. Today, the AI agent helps pre-qualify requests, facilitate access to information, and direct tenants to the most appropriate solution. Advisors therefore receive requests that are already structured, improving both processing efficiency and service quality. As with many successful projects, the difference does not rely solely on the technology used, but also on the support provided and the way the project is managed over time.

If many customer service AI projects struggle to reach their objectives, it is not because of the technology itself. Most of the time, the challenge lies in how it is designed, integrated into the organization, and managed on a daily basis.

The projects that succeed rely on a broader vision. From the outset, they take into account customer experience, data quality, business challenges, governance, and operational performance, and they often rely on the support of experts such as DialOnce to structure and manage the approach over time.

Ultimately, AI is not just a technical project. It is a transformation project for customer service, one that affects teams, tools, and the way organizations work on a daily basis.

See our AI agents in action
Book a demo