
AI AUTOMATION

What AI Still Cannot Do: An Honest Assessment From a London Team That Has Built 50+ Integrations

8 May 2026 · 10 min read · By Softomate Solutions

After building and deploying more than 50 AI integrations for London businesses across professional services, financial services, legal, e-commerce, and healthcare, we have a clear and honest picture of where AI consistently delivers and where it consistently falls short. This assessment is not what most AI vendors will tell you. Vendors have an incentive to sell capability. We have an incentive to build systems that work in production and continue working six months after the client stops checking in on them. Those are different incentives, and they produce different accounts of what AI can do.

The businesses that extract the most value from AI are the ones that understand its real limitations before investing. They scope their AI projects around what AI genuinely handles well, design human oversight for the areas where it does not, and avoid the expensive mistake of deploying AI where it was never going to be reliable.

AI Cannot Generate Genuine Novelty

The most important limitation to understand is this: AI does not create new ideas. It recombines existing ones. Every output an LLM produces is a statistically sophisticated remix of patterns in its training data. For most business tasks, this is sufficient and genuinely useful. Drafting an email, summarising a document, answering a question about your product, writing code to a specification: all of these are recombination tasks that AI handles well precisely because the patterns required to do them are well-represented in training data.

Where this limitation becomes a genuine business constraint: creative strategy, genuinely novel product ideas, breakthrough research, and any task where the value is specifically in producing something that does not resemble what has been done before. An AI can generate 50 names for a new product. None of them will have the originality of a name from someone who understands the brand, the market, and the cultural moment well enough to produce something genuinely new. An AI can write a marketing strategy document that follows the conventions of marketing strategy documents very well. It cannot tell you something genuinely surprising about your market that no one else has noticed.

We have seen this constraint surface most clearly in creative agency work. Agencies that have used AI to produce first drafts of campaign concepts consistently report that the AI produces competent, derivative work. It is useful as a starting point and for accelerating the drafting phase. It is not useful as a substitute for the genuine creative insight that makes a campaign memorable rather than merely adequate.

AI Cannot Sustain Genuine Empathy

AI can simulate empathy effectively enough that users often cannot distinguish it from genuine empathy in short interactions. It can recognise emotional signals in text, respond with appropriately warm language, and adjust its tone based on the sentiment it detects. In a short customer support interaction, this simulation is often adequate for user satisfaction.

The limitation becomes visible in sustained, high-stakes interactions: a complaint call with an angry customer who needs to feel heard over multiple exchanges, a difficult sales conversation where the prospect is making an emotional as well as rational decision, a client relationship where trust has been damaged and needs rebuilding over months. In these situations, the absence of genuine emotional investment becomes apparent. The AI produces appropriately empathetic language without any of the actual care that language implies. Users sense this, even when they cannot articulate what feels off.

In our deployment experience, customer-facing AI performs well when the interaction is short, the resolution is clear, and the customer's emotional state is neutral. It performs significantly worse when the interaction is extended, the resolution requires negotiation, and the customer is frustrated or distressed. Businesses that automate the former and preserve human handling for the latter get good outcomes. Businesses that automate both consistently damage customer relationships in the high-stakes interactions.
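As a rough illustration of that routing rule, here is a minimal sketch in Python. The field names, sentiment labels, and message-count threshold are illustrative assumptions rather than a production triage policy; the point is simply that short, neutral, clearly resolvable interactions go to the AI and everything else escalates to a person.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    message_count: int       # exchanges so far in this conversation
    sentiment: str           # "neutral", "frustrated", "distressed" (from a sentiment model)
    needs_negotiation: bool  # e.g. discretionary refunds or compensation

def route(interaction: Interaction) -> str:
    """Send short, neutral, clearly resolvable interactions to the AI;
    escalate extended, emotional, or negotiated ones to a human."""
    if interaction.sentiment in ("frustrated", "distressed"):
        return "human"
    if interaction.needs_negotiation:
        return "human"
    if interaction.message_count > 3:  # illustrative threshold for "extended"
        return "human"
    return "ai"

# A fourth exchange with an upset customer is escalated, not automated.
print(route(Interaction(message_count=4, sentiment="frustrated", needs_negotiation=False)))  # "human"
```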

AI Cannot Make Complex Ethical Judgements

AI systems can apply ethical rules that are specified explicitly in their system prompts. They can be told what to refuse, what to flag for human review, and what categories of decision require human sign-off. What they cannot do is reason through genuinely novel ethical situations that require weighing competing principles, understanding cultural and contextual nuance, and taking responsibility for the judgement made.

This matters most in regulated sectors. A financial services AI that flags a transaction as potentially suspicious is applying a rule. A compliance officer who reviews the flag and decides whether to file a Suspicious Activity Report is making a judgement: weighing the evidence, considering the client relationship, assessing the regulatory risk, and accepting professional accountability for the decision. The AI can assist the process. It cannot own the decision. UK regulated firms that deploy AI in compliance-adjacent workflows must maintain clear human decision points for any determination that carries professional or regulatory accountability.
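To make that division of labour concrete, here is a hedged sketch in Python. The rule, the threshold, and the field names are hypothetical; the structure is what matters: the AI applies an explicit rule and surfaces evidence, while the decision to file a Suspicious Activity Report, and the accountability for it, stays with the compliance officer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Transaction:
    client_id: str
    amount_gbp: float
    counterparty_risk_score: float  # assumed output of an upstream screening model

def ai_flag(tx: Transaction) -> Optional[str]:
    """Rule application only: the system can say why a transaction looks
    unusual, not whether a Suspicious Activity Report should be filed."""
    if tx.amount_gbp > 10_000 and tx.counterparty_risk_score > 0.8:
        return "high-value transaction with a high-risk counterparty"
    return None

def handle(tx: Transaction) -> None:
    reason = ai_flag(tx)
    if reason is not None:
        # The SAR decision, and the professional accountability for it,
        # stays with the compliance officer; the system only queues the
        # evidence for their review.
        queue_for_compliance_review(tx, reason)

def queue_for_compliance_review(tx: Transaction, reason: str) -> None:
    print(f"Review needed for client {tx.client_id}: {reason}")
```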

We have consistently advised against AI deployment in any client workflow where the ethical complexity of edge cases could not be resolved by a clear rule, and where the cost of a wrong decision was significant. This includes credit decisioning with unusual applicant profiles, complaints resolution involving discretionary compensation, and any client interaction where the right answer requires weighing factors that a rule cannot capture.

AI Cannot Manage Long-Horizon Autonomous Plans Reliably

AI agents are powerful for multi-step workflows when the number of steps is bounded and the intermediate states are verifiable. The reliability of agentic systems degrades as the number of sequential steps increases, because errors compound. An agent that is 96% accurate at each step is 96% accurate after step one, 92% after step two, 88% after step three, and 78% after step six. A 12-step autonomous workflow with 96% per-step accuracy has only about a 61% probability of completing correctly from start to finish.
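The arithmetic behind that compounding is straightforward: the probability of a fully correct run is the per-step accuracy raised to the power of the number of steps. A few lines of Python make the drop-off visible.

```python
# End-to-end reliability of a sequential workflow is the product of the
# per-step accuracies: for a uniform per-step accuracy p over n steps,
# P(fully correct run) = p ** n.
per_step_accuracy = 0.96

for steps in (1, 2, 3, 6, 12):
    p_success = per_step_accuracy ** steps
    print(f"{steps:>2} steps: {p_success:.0%} chance of a fully correct run")

# Output: 96%, 92%, 88%, 78%, 61% respectively.
```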

This is why every agentic deployment we have built for production use has human checkpoints at defined intervals: not because the AI cannot take actions, but because the cumulative error rate across many unsupervised steps creates outcomes that are technically complete but contextually wrong. A maintenance coordinator whose AI agent books a contractor for the wrong property, because a data mismatch in step three of a ten-step workflow propagated through the remaining steps, does not care that the AI completed the workflow. They care that the outcome was wrong.

The practical implication: design agentic workflows with natural breakpoints where a human reviews the intermediate state before the agent continues. This is not a limitation that will disappear as models improve in the near term. It is a structural characteristic of sequential probabilistic systems that businesses must design around.
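One way to picture those breakpoints, offered as a sketch rather than a prescription: represent the workflow as a sequence of steps and pause for a human review of the intermediate state at a fixed interval. The function names and the three-step interval below are illustrative assumptions.

```python
from typing import Any, Callable

Step = Callable[[Any], Any]

def run_with_checkpoints(steps: list[Step], state: Any, review_every: int = 3) -> Any:
    """Run an agentic workflow, pausing for a human review of the
    intermediate state at a fixed interval so errors cannot silently
    propagate through the remaining steps."""
    for i, step in enumerate(steps, start=1):
        state = step(state)
        if i % review_every == 0 and i < len(steps):
            state = human_review(state, completed_steps=i)
    return state

def human_review(state: Any, completed_steps: int) -> Any:
    # Placeholder: a real deployment would surface the intermediate state
    # (e.g. which property and contractor the agent has matched) in a
    # review queue and return the approved or corrected version.
    print(f"Human checkpoint after step {completed_steps}: {state}")
    return state
```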

AI Cannot Build or Maintain Genuine Relationships

Relationships between businesses and their clients, partners, and stakeholders are built on shared history, mutual accountability, trust developed through consistent behaviour over time, and the understanding that comes from genuinely knowing someone. AI can access a CRM record. It cannot know a client.

We have been asked several times whether AI can replace business development executives in B2B sales. The honest answer is that AI can handle the administrative work of business development: research, outreach drafting, follow-up scheduling, meeting preparation. It cannot replace the relationship work: building trust through consistent delivery, navigating a complex internal stakeholder environment at a prospect, managing a difficult negotiation where the relationship matters as much as the terms, or maintaining a long-term client relationship through the inevitable difficult periods.

Businesses that understand this use AI to give their relationship people more time for relationship work by removing the administrative burden. Businesses that mistake the administrative work for the actual value of business development deploy AI into the wrong layer and wonder why pipeline and retention do not improve.

AI Cannot Learn From a Single Example the Way a Human Can

A human expert shown one unusual example can immediately generalise from it, understand what it implies, and apply that understanding to future situations. An AI model requires many examples before it can generalise reliably. This is the data dependency that makes AI projects expensive when data is sparse and cheap when data is abundant.

For businesses with rich, structured historical data, this limitation is rarely a constraint. For businesses in specialist domains with small case volumes, genuinely unusual transaction types, or highly contextual decision requirements, the data dependency is a real constraint on what AI can reliably do. A hospital trust seeing 20 cases per year of a specific complex condition does not have enough data to train a reliable AI model for that condition. The 20 cases are enormously valuable to the clinicians who have managed them. To an AI training process, they are insufficient.

What This Means for How UK Businesses Should Approach AI

The businesses that extract the most value from AI use it where it is strong: high-volume, structured, data-rich tasks with clear success criteria and meaningful error tolerance. They design human oversight for the tasks where AI is weak: novel situations, emotional complexity, ethical judgement, long autonomous chains, and relationship management. They treat AI as a highly capable tool with specific limitations rather than as a general intelligence that can be applied to any problem.

This is not a pessimistic view of AI. It is an accurate one. The realistic version of AI capability is already transformative for most businesses. Overstating the capability leads to poor scoping, failed projects, and damaged confidence in AI investment that was otherwise sound.

Frequently Asked Questions About AI Limitations

Will AI limitations disappear as models improve?

Some will narrow significantly. The gap between AI and human performance on structured reasoning tasks is closing. The gap on genuine creativity, sustained empathy, and ethical judgement in novel situations is narrowing more slowly, because these capabilities require something beyond pattern recognition that current model architectures do not fully address. Plan your AI investments around what AI can do reliably today, with the expectation that the boundary will shift over the next three to five years, not that it will disappear.

How do you identify the right tasks for AI in a specific business?

Apply this filter to every candidate task: is the input reasonably consistent, do we have sufficient historical data, is there a clear definition of a correct output, and is the error tolerance high enough that a 2% to 5% failure rate is manageable with appropriate monitoring? Tasks that pass all four criteria are strong AI candidates. Tasks that fail on error tolerance or data availability require human handling or human-in-the-loop design until data and accuracy improve.
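Expressed as a simple checklist, the filter looks something like the sketch below. The structure and attribute names are illustrative; the 2% figure comes from the lower end of the error-tolerance band above.

```python
from dataclasses import dataclass

@dataclass
class CandidateTask:
    consistent_input: bool          # is the input reasonably consistent?
    sufficient_history: bool        # do we have sufficient historical data?
    clear_success_criteria: bool    # is there a clear definition of a correct output?
    acceptable_failure_rate: float  # highest failure rate the business can absorb, with monitoring

def is_strong_ai_candidate(task: CandidateTask) -> bool:
    """A task is a strong AI candidate only if it passes all four criteria."""
    return (
        task.consistent_input
        and task.sufficient_history
        and task.clear_success_criteria
        and task.acceptable_failure_rate >= 0.02  # the 2% to 5% band from the answer above
    )
```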

Does building AI capability in-house reduce the risk of hitting these limitations?

Building in-house gives you more control over how AI is deployed and allows you to design tightly around limitations. It does not change the underlying capabilities and limitations of the models. An in-house AI team still works with the same LLMs and ML frameworks that an external partner would use. The advantage of in-house capability is operational control and faster iteration, not expanded model capability.

To discuss how to scope an AI project around what AI reliably delivers for your specific business context, see our AI and Machine Learning Solutions service or our AI Projects page.

