AI Chatbot vs Human Support: Cost and Customer Satisfaction Data From 50 UK Companies — Softomate Solutions blog

AI AUTOMATION

AI Chatbot vs Human Support: Cost and Customer Satisfaction Data From 50 UK Companies

8 May 2026 · 6 min read · By Deen Dayal Yadav (DD)

AI chatbots resolve straightforward customer queries faster and at 60% to 80% lower cost per interaction than human agents. Human agents outperform AI on complex queries, complaints, and emotionally charged interactions by a significant margin. The decision is not which to choose but how to split the work between them. This guide presents the cost and satisfaction data from UK businesses and gives you the framework for making that split correctly for your operation.

The Cost Comparison: What the UK Data Shows

The average fully loaded cost of a UK customer service agent handling a support query is Β£8 to Β£14 per interaction, depending on the channel (phone costs more than email, which costs more than chat), the seniority of the agent, and the complexity of the query. This figure includes salary, employer NI, benefits, workspace, management overhead, and training cost, divided by the number of queries handled per working day. (Deloitte UK Contact Centre Research, 2025.)

The average cost of an AI chatbot resolving a query varies by platform and volume. For a well-implemented chatbot handling 1,000 queries per month: Β£0.80 to Β£2.50 per resolved interaction, including platform licence, LLM API usage, and amortised build cost over 24 months. At 2,000 queries per month, the per-interaction cost falls further as fixed costs spread across higher volume.
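The per-interaction figure above can be reproduced with simple arithmetic. The sketch below is illustrative only: the licence, API, and build figures are example inputs, not benchmarks from the article.

```python
# Illustrative sketch of the per-interaction chatbot cost model:
# (licence + amortised build + API usage) divided by monthly volume.
# All input figures are example assumptions, not benchmarks.

def chatbot_cost_per_interaction(
    monthly_licence: float,    # platform licence, GBP per month
    monthly_api_usage: float,  # LLM API spend, GBP per month
    build_cost: float,         # one-off build cost, GBP
    amortisation_months: int,  # period the build cost is spread over
    monthly_queries: int,      # resolved queries per month
) -> float:
    """Fully loaded cost per resolved query, in GBP."""
    monthly_fixed = monthly_licence + build_cost / amortisation_months
    return (monthly_fixed + monthly_api_usage) / monthly_queries

# Example: Β£400 licence, Β£300 API spend, Β£24,000 build over 24 months.
cost_1k = chatbot_cost_per_interaction(400, 300, 24_000, 24, 1_000)
# Doubling volume doubles API usage but spreads the fixed costs.
cost_2k = chatbot_cost_per_interaction(400, 600, 24_000, 24, 2_000)

print(f"Β£{cost_1k:.2f} per query at 1,000/month")  # Β£1.70
print(f"Β£{cost_2k:.2f} per query at 2,000/month")  # Β£1.00
```

Note how the per-interaction cost falls at higher volume even though API usage scales linearly, because the licence and amortised build cost are fixed.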

Cost per interaction by channel, UK 2025 benchmarks:

  • Human agent, phone: Β£11 to Β£18
  • Human agent, email: Β£8 to Β£13
  • Human agent, live chat: Β£7 to Β£11
  • AI chatbot, straightforward query: Β£0.80 to Β£2.50
  • AI chatbot escalating to human: Β£4 to Β£8 (combined cost of both interactions)

The Customer Satisfaction Comparison

CSAT data from 50 UK businesses that deployed AI chatbots alongside human support teams between 2023 and 2025 shows the following pattern. (Compiled from client data, industry reports, and Zendesk UK Benchmark Report 2025.)

For straightforward informational queries (order status, product information, policy questions, booking confirmations): AI chatbot CSAT averaged 81%. Human agent CSAT for the same query types averaged 84%. The difference is 3 percentage points. This is within normal variance and indicates that AI performs comparably to humans for these query types.

For complex queries (multi-step problems, account disputes, technical troubleshooting): AI chatbot CSAT averaged 61%. Human agent CSAT for the same types averaged 88%. The difference is 27 percentage points. This is a significant gap and explains why escalation architecture matters as much as the chatbot itself.

For complaint handling: AI chatbot CSAT averaged 44%. Human agent CSAT averaged 86%. The 42-point gap reflects the fundamental unsuitability of AI for interactions where emotional acknowledgement, empathy, and discretionary resolution decisions are the core requirements.

What the Data Means for Your Support Strategy

The cost data and satisfaction data together point to the same conclusion: AI chatbots should handle the high-volume, straightforward query categories where they perform comparably to humans at a fraction of the cost. Human agents should handle the complex, emotional, and high-stakes interactions where the satisfaction gap between AI and human is too large to accept.

The optimal split for most UK consumer businesses is 60% to 70% AI automation for informational and transactional queries, 30% to 40% human handling for complex, complaint, and high-value interactions. B2B businesses with fewer, higher-value customer relationships should apply a more conservative split: 40% to 55% AI for straightforward queries, 45% to 60% human for relationship-critical interactions.

The Hidden Cost That Skews Most ROI Calculations

Most AI chatbot ROI calculations compare the cost of the chatbot against the cost of human agents for the queries the chatbot handles. They miss three costs that affect the real number.

First: the cost of incorrect AI resolutions. When an AI chatbot gives a wrong answer, the customer returns to the support channel, often angrier than before. The second interaction costs more than the first. For every 100 AI-resolved queries at 95% accuracy, five lead to escalations that cost more than the original human interaction would have. Factor a correction cost of 1.5 times the standard query cost for your estimated error rate.

Second: the knowledge base maintenance cost. An AI chatbot trained on documentation from six months ago will produce incorrect answers for anything that has changed since then. Maintaining the knowledge base is ongoing, non-trivial work. Budget eight to twelve hours per month for a business with a moderately complex product or service offering.

Third: the escalation handling cost. Human agents handling escalations from the AI chatbot need more time per interaction than agents handling first-contact queries, because they must review the AI conversation history, understand what the customer was told, and manage the customer's frustration at having to repeat themselves. Escalation handling time runs 20% to 35% longer than equivalent first-contact handling time.
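The three hidden costs above can be folded into a single adjusted per-query figure. This is a minimal sketch assuming illustrative inputs (a Β£10 standard human cost, a Β£35 loaded hourly rate for knowledge base work, a 25% escalation rate); the function and its parameters are hypothetical, but the structure follows the three corrections described above.

```python
# Hedged sketch of the "real" AI cost per query once the three hidden
# costs are included. All numeric inputs are illustrative assumptions.

def adjusted_ai_cost(
    base_ai_cost: float,        # headline AI cost per resolved query, GBP
    human_cost: float,          # standard human cost per query, GBP
    error_rate: float,          # fraction of AI answers that are wrong
    correction_factor: float,   # a correction costs ~1.5x a standard query
    kb_hours_per_month: float,  # knowledge base maintenance, hours/month
    hourly_rate: float,         # loaded hourly rate for KB work, GBP
    monthly_queries: int,
    escalation_rate: float,     # fraction of queries escalated to a human
    escalation_overhead: float, # escalations run ~20-35% longer
) -> float:
    # First hidden cost: wrong answers that trigger a costlier second contact.
    correction = error_rate * correction_factor * human_cost
    # Second hidden cost: ongoing knowledge base upkeep, spread over volume.
    kb_maintenance = kb_hours_per_month * hourly_rate / monthly_queries
    # Third hidden cost: escalated queries take longer than first contact.
    escalation = escalation_rate * human_cost * (1 + escalation_overhead)
    return base_ai_cost + correction + kb_maintenance + escalation

cost = adjusted_ai_cost(
    base_ai_cost=1.70, human_cost=10.0, error_rate=0.05,
    correction_factor=1.5, kb_hours_per_month=10, hourly_rate=35.0,
    monthly_queries=1_000, escalation_rate=0.25, escalation_overhead=0.30,
)
print(f"Adjusted cost per query: Β£{cost:.2f}")
```

Under these example inputs the adjusted figure lands well above the headline Β£0.80 to Β£2.50 range, which is the point: a naive ROI calculation that ignores corrections, maintenance, and escalation overhead understates the true cost.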

The Verdict: How to Split Your Support Operation

Run this exercise before deciding on your AI automation scope. Pull 200 recent support queries. Categorise each as: straightforward informational (clear answer exists, consistent response appropriate), transactional action (lookup required, standard process applies), complex investigation (multiple systems, non-standard resolution), or complaint or emotional (empathy and discretionary authority required). The proportion in the first two categories is your safe AI automation scope. The proportion in the last two stays with human agents.
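The triage exercise above reduces to a simple tally. This sketch assumes each sampled query has already been labelled with one of the four categories; the category names and the sample counts are illustrative.

```python
# Minimal sketch of the 200-query triage exercise: tally labelled queries
# and report the share that falls in the AI-safe categories.
from collections import Counter

# First two categories are the safe AI automation scope; the last two
# stay with human agents.
AI_SAFE = {"informational", "transactional"}
HUMAN_ONLY = {"complex", "complaint"}

def automation_scope(labels: list[str]) -> float:
    """Fraction of sampled queries in the AI-safe categories."""
    counts = Counter(labels)
    return sum(counts[c] for c in AI_SAFE) / len(labels)

# Example sample: 130 of 200 queries fall in the first two categories.
sample = (["informational"] * 90 + ["transactional"] * 40
          + ["complex"] * 45 + ["complaint"] * 25)
print(f"Safe AI automation scope: {automation_scope(sample):.0%}")  # 65%
```

In this example the safe automation scope is 65%, which sits inside the 60% to 70% split suggested above for consumer businesses.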

Frequently Asked Questions

Do customers prefer AI chatbots or human agents?

Customers prefer whichever resolves their query correctly and quickly. For simple, informational queries, customers are increasingly indifferent to whether the responder is human or AI when the response is accurate and fast. For complex issues and complaints, customers consistently prefer human interaction. The preference is not for a channel type but for a successful outcome.

What is the average automation rate for UK business chatbots?

Across UK businesses in 2025, production AI chatbots achieving an automation rate of 60% to 75% are considered well-performing for a mixed-query support operation. Chatbots handling a narrow, well-defined query scope (for example, only order status queries for an e-commerce business) achieve 80% to 90%. Chatbots handling a broad, complex query mix rarely exceed 65% without significant knowledge base investment.

How do I measure whether my AI chatbot is performing well?

Track four metrics monthly: automation rate (queries resolved without escalation), CSAT for AI-handled queries (separately from human-handled), escalation rate by query category (identifies knowledge base gaps), and re-contact rate (percentage of customers who contact support again within 48 hours of an AI interaction, indicating the AI resolution was insufficient). A rising re-contact rate is an early warning sign that accuracy is declining.
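The four metrics can be computed from a month of interaction records. This is a sketch under assumed record fields (`handler`, `escalated`, `csat`, `recontact_48h`); it computes the overall escalation rate rather than the per-category breakdown the text recommends, which would simply group the same tally by a query-category field.

```python
# Hedged sketch of the four monthly chatbot metrics, computed from a
# list of interaction records. The record fields are assumptions.

def support_metrics(interactions: list[dict]) -> dict:
    ai = [i for i in interactions if i["handler"] == "ai"]
    resolved = [i for i in ai if not i["escalated"]]
    return {
        # Queries the AI resolved without handing off to a human.
        "automation_rate": len(resolved) / len(ai),
        # CSAT for AI-handled queries, scored separately from human-handled.
        "ai_csat": sum(i["csat"] for i in ai) / len(ai),
        # Escalations as a share of AI-handled queries.
        "escalation_rate": sum(i["escalated"] for i in ai) / len(ai),
        # Customers back in touch within 48 hours of an AI interaction.
        "recontact_rate": sum(i["recontact_48h"] for i in ai) / len(ai),
    }

sample = [
    {"handler": "ai", "escalated": False, "csat": 0.85, "recontact_48h": False},
    {"handler": "ai", "escalated": True,  "csat": 0.55, "recontact_48h": True},
    {"handler": "ai", "escalated": False, "csat": 0.90, "recontact_48h": False},
    {"handler": "human", "escalated": False, "csat": 0.88, "recontact_48h": False},
]
metrics = support_metrics(sample)
print(f"Automation rate: {metrics['automation_rate']:.0%}")
```

Tracked month over month, a rising re-contact rate in this output is the early warning sign described above that AI resolution accuracy is declining.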

To see how we design customer support automation systems that balance AI efficiency with human satisfaction performance, visit our Customer Support Automation service.

Let us help

Need help applying this in your business?

Talk to our London-based team about how we can build the AI software, automation, or bespoke development tailored to your needs.

Deen Dayal Yadav, founder of Softomate Solutions
