
AI AUTOMATION

We Cut a London Recruitment Agency's CV Screening Time From 6 Hours to 25 Minutes Using AI

8 May 2026 · 13 min read · By Softomate Solutions

The Problem: 150 CVs, Six Hours, One Consultant

The agency in this case study specialises in placing mid-senior marketing professionals across London and the South East. They process an average of 150 applications per vacancy and manage 12 to 18 active vacancies at any given time. Before implementing AI screening, each vacancy required one senior consultant to spend six hours on initial CV review before producing a shortlist of 10 to 15 candidates for client presentation.

Six hours multiplied by 15 active vacancies is 90 hours of senior consultant time per month spent on initial screening. At a fully loaded cost of £45 per hour for senior consultants, that represented £4,050 per month in screening cost alone, before any commercial activity, candidate management, or client relationship work.

What did the AI screening system achieve? It reduced initial CV screening time from six hours to 25 minutes per vacancy. Senior consultants now spend 25 minutes reviewing an AI-generated shortlist with scoring rationales rather than six hours reading individual CVs. Shortlist quality, measured by the client shortlist-to-interview conversion rate, improved from 68% to 79%. Monthly screening cost dropped from £4,050 to £338. The payback period on the build was four months.

The Brief: What the Agency Needed

The agency came to Softomate Solutions with a specific problem and a clear constraint. The problem: the CV screening process was consuming too much senior consultant time and limiting the number of vacancies each consultant could manage simultaneously. The constraint: any solution had to comply fully with UK employment law and the Equality Act 2010, preserve the agency's reputation for high-quality shortlists, and not require the agency to hire technical staff to maintain it.

They had tried two off-the-shelf AI screening tools before engaging us. The first produced shortlists that consultants did not trust because the scoring rationale was opaque. The second had poor UK GDPR documentation and was rejected by their data protection officer. The solution needed to be technically sound, legally compliant, and transparent enough that consultants could explain to clients and candidates how the screening worked.

The Design: How We Built the System

Step 1: Define the Scoring Framework

Before any technical work began, we ran a two-hour workshop with the agency's three senior consultants and their managing director. The workshop produced a scoring rubric for their most common vacancy type: marketing manager roles in London.

The rubric covered seven criteria: years of directly relevant marketing experience (weighted 25%), specific channel expertise matching the vacancy (weighted 20%), industry background (weighted 15%), seniority progression over career (weighted 15%), measurable results listed in the CV (weighted 10%), educational qualification or professional body membership where role-relevant (weighted 10%), and red flags such as unexplained gaps exceeding 12 months or short tenure patterns (weighted 5% as a penalty).

Each criterion had a defined three-point scale: strong, adequate, and weak. The rubric was documented clearly enough that a non-technical consultant could apply it manually. This transparency requirement later proved important for candidate queries.
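To make the structure concrete, here is one way a rubric like this can be captured as data. This is an illustrative Python sketch: the criterion names, field layout, and the final check are our own shorthand for the rubric described above, not the agency's actual schema.

```python
# Illustrative only: one way to capture the scoring rubric as structured data.
# Criterion names and weights mirror the rubric described above; the structure
# and field names are hypothetical, not the agency's production schema.
MARKETING_MANAGER_RUBRIC = {
    "relevant_experience_years":     {"weight": 0.25, "scale": ["strong", "adequate", "weak"]},
    "channel_expertise_match":       {"weight": 0.20, "scale": ["strong", "adequate", "weak"]},
    "industry_background":           {"weight": 0.15, "scale": ["strong", "adequate", "weak"]},
    "seniority_progression":         {"weight": 0.15, "scale": ["strong", "adequate", "weak"]},
    "measurable_results":            {"weight": 0.10, "scale": ["strong", "adequate", "weak"]},
    "qualifications_where_relevant": {"weight": 0.10, "scale": ["strong", "adequate", "weak"]},
    "red_flags":                     {"weight": 0.05, "scale": ["strong", "adequate", "weak"], "penalty": True},
}

# The weights, including the penalty criterion, sum to 1.0, matching the
# percentages listed in the rubric above.
assert abs(sum(c["weight"] for c in MARKETING_MANAGER_RUBRIC.values()) - 1.0) < 1e-9
```

Keeping the rubric as plain data rather than burying it in prompt text is what made it easy to document, review with the data protection officer, and apply manually when needed.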

Step 2: Build the Extraction and Scoring Pipeline

CVs arrive in the agency's system in PDF and Word formats. The pipeline converts each CV to plain text, passes it to Claude via the API with the scoring rubric as the system prompt, and receives back a structured JSON response containing a score for each criterion, a one-sentence rationale for each score, and an overall ranking score out of 100.

The pipeline processed 150 CVs in approximately 18 minutes. Processing cost per CV at Claude's API pricing was approximately £0.04. Total processing cost for a 150-CV vacancy was £6. This replaced six hours of senior consultant time costing £270.
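For readers who want a sense of the moving parts, here is a minimal sketch of that kind of extract-and-score pipeline. It assumes pypdf for PDF text extraction and the Anthropic Python SDK for scoring; the model name, prompt wording, and JSON fields are placeholders rather than the production configuration, and Word-format handling is omitted.

```python
# Minimal sketch of the extract-and-score pipeline described above.
# Assumptions: pypdf for PDF text extraction, the Anthropic Python SDK for
# scoring, ANTHROPIC_API_KEY set in the environment. Model name, prompt
# wording, and JSON fields are illustrative placeholders.
import json

import anthropic
from pypdf import PdfReader

SYSTEM_PROMPT = """You are screening CVs for a marketing manager vacancy.
Score each rubric criterion as strong, adequate, or weak, give a one-sentence
rationale per criterion, and return JSON only:
{"criteria": {"<name>": {"score": "...", "rationale": "..."}}, "overall_score": 0-100}"""

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment


def extract_text(pdf_path: str) -> str:
    """Convert a PDF CV to plain text, page by page."""
    reader = PdfReader(pdf_path)
    return "\n".join((page.extract_text() or "") for page in reader.pages)


def score_cv(cv_text: str) -> dict:
    """Send the CV text to the model with the rubric as the system prompt."""
    message = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=1024,
        system=SYSTEM_PROMPT,
        messages=[{"role": "user", "content": cv_text}],
    )
    return json.loads(message.content[0].text)


if __name__ == "__main__":
    result = score_cv(extract_text("example_cv.pdf"))
    print(result["overall_score"])
```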

Step 3: Build the Shortlist Interface

The consultants needed to review AI outputs without switching between systems. We built a simple web interface that displayed the ranked shortlist, each candidate's score breakdown by criterion, the one-sentence rationale per criterion, and a link to the original CV. Consultants could adjust rankings, add notes, mark candidates as progressed or rejected, and export the final shortlist to the agency's CRM in one click.

The interface required no technical training. All three senior consultants were using it independently within two hours of the handover session.
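As a rough illustration of how thin that interface layer can be, the sketch below shows a single ranked-shortlist endpoint. FastAPI, the data model, and the endpoint path are assumptions for illustration; the real interface, ranking overrides, and CRM export are not shown here.

```python
# Illustrative sketch only: a minimal ranked-shortlist endpoint of the kind the
# consultant interface could be built on. Framework choice and field names are
# assumptions, not the production system.
from fastapi import FastAPI
from pydantic import BaseModel


class Candidate(BaseModel):
    name: str
    overall_score: int
    criterion_scores: dict[str, str]  # criterion -> strong / adequate / weak
    rationales: dict[str, str]        # criterion -> one-sentence rationale
    cv_url: str


app = FastAPI()
SHORTLIST: list[Candidate] = []  # populated by the scoring pipeline in practice


@app.get("/vacancies/{vacancy_id}/shortlist")
def get_shortlist(vacancy_id: str) -> list[Candidate]:
    """Return candidates ranked by overall score, highest first."""
    return sorted(SHORTLIST, key=lambda c: c.overall_score, reverse=True)
```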

The Compliance Approach

The agency's data protection officer reviewed the system design before launch. Three compliance requirements shaped the final design.

First, the system could not be the sole basis for rejection. All rejections required a consultant to confirm them in the interface, creating a documented human decision for every outcome. This satisfied the UK GDPR Article 22 requirement for human review of automated decisions.
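One way to make that human decision auditable is to persist a record like the one sketched below. The field names and storage approach are illustrative assumptions; the point is that no rejection is stored without a named consultant, a reason, and a timestamp.

```python
# Hedged sketch: one way to record the human decision the design requires.
# Field names are illustrative; the invariant is that every rejection carries
# a named consultant, a reason, and a timestamp.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ScreeningDecision:
    candidate_id: str
    ai_recommendation: str  # e.g. "reject" or "progress"
    consultant_id: str      # the human who confirmed the outcome
    final_outcome: str      # set by the consultant, never defaulted from the AI
    reason: str
    decided_at: datetime


def confirm_rejection(candidate_id: str, consultant_id: str, reason: str) -> ScreeningDecision:
    """Create the audit record for a consultant-confirmed rejection."""
    return ScreeningDecision(
        candidate_id=candidate_id,
        ai_recommendation="reject",
        consultant_id=consultant_id,
        final_outcome="reject",
        reason=reason,
        decided_at=datetime.now(timezone.utc),
    )
```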

Second, the candidate privacy notice on the agency's website was updated to disclose that AI processing is used in initial screening, what data is processed, and how candidates can request a human review of an automated decision.

Third, the scoring criteria were reviewed against the Equality Act 2010 protected characteristics. The criterion on educational qualifications was narrowed to apply only to roles where specific qualifications were a genuine occupational requirement. The initial draft had included educational institution as a factor, which was removed on legal advice as a potential proxy for socioeconomic status.

What Worked and What Did Not

What Worked

The speed and consistency improvements exceeded expectations. Consultants processing 150 CVs previously reported significant cognitive fatigue by CV 80, leading to less careful reading of the later CVs. The AI system applied identical analytical effort to CV 1 and CV 150. Consultant review of the AI shortlist, by contrast, required sustained attention for 25 minutes rather than six hours of variable-quality attention.

The scoring transparency built consultant trust faster than expected. Because every score came with a one-sentence rationale, consultants could see exactly why the AI ranked each candidate where it did. When they disagreed with a ranking, they could identify which criterion the AI had scored differently than they would and either accept the AI's assessment or override it with a documented reason. Within four weeks, the override rate dropped from 22% to 9% as consultants calibrated their trust in specific criteria.

The shortlist quality improvement surprised even us. The rise in client shortlist-to-interview conversion rate from 68% to 79% was not a projected benefit. It emerged from two factors: the AI consistently identified candidates who scored well on measurable results even when those candidates were not in senior roles (a pattern consultants tended to undervalue manually), and the AI was less susceptible to halo effects from prestigious employer names that sometimes inflated manual assessments.

What Did Not Work

The initial prompt for the experience criterion was too broad. It scored years of experience without adequately distinguishing between generalist marketing experience and specialist experience in the required channel. The first two weeks of production use revealed that candidates with 10 years of broad marketing experience were scoring higher than candidates with four years of specialist paid media experience for a paid media manager role. We tightened the criterion definition in week three and the problem resolved.

The system struggled with non-standard CV formats. Candidates who had built their CVs as infographic-style documents or in table-heavy formats lost significant information in the text extraction step. We added a manual review flag for CVs where the extracted text was significantly shorter than the page count suggested, which prompted a consultant to review the original PDF directly for those candidates.
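The flag itself is simple. The sketch below shows the general idea; the characters-per-page threshold is an assumed value for illustration, not the one used in production.

```python
# Illustrative sketch of the manual-review flag described above: if the
# extracted text is much shorter than the page count suggests, route the CV
# to a consultant for direct review of the original file.
MIN_CHARS_PER_PAGE = 400  # assumed heuristic threshold, not the production value


def needs_manual_review(extracted_text: str, page_count: int) -> bool:
    """Flag CVs (infographic or table-heavy layouts) that likely lost content in extraction."""
    if page_count == 0:
        return True
    return len(extracted_text) / page_count < MIN_CHARS_PER_PAGE
```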

The Results: Six Months After Launch

Six months after launch, the agency had processed 89 vacancies through the AI screening system, covering 11,400 candidate applications. Senior consultant time per vacancy dropped from six hours to an average of 28 minutes (slightly above the target of 25 minutes due to the manual flag process for non-standard CVs). The agency increased the number of vacancies each consultant managed simultaneously from eight to 13 without adding headcount.

Revenue per consultant increased by 34% as a result of the capacity increase. The agency attributed one additional placement per consultant per month to the freed capacity, at an average placement fee of £6,800. The AI system that cost £18,500 to build generated measurable additional revenue of approximately £24,000 in month six alone.

The Ongoing Maintenance Programme After Launch

The six months following launch revealed that AI systems in a live recruitment environment require more active maintenance than a traditional software deployment. Three types of maintenance consumed the most time and attention.

Prompt refinement was the most frequent maintenance activity. As the system processed more applications, the consultants identified specific scenarios where the AI scoring did not match their professional judgement. In each case, we investigated whether the rubric criterion needed clarifying, the scoring scale needed adjusting, or the prompt instruction needed strengthening. Over the six months, we made 11 prompt refinements, each taking 30 to 90 minutes to implement and test. This is normal for a live AI system and should be budgeted for in any AI deployment plan.

Integration monitoring required a daily check for the first eight weeks, reducing to a weekly check thereafter. The most common issue was API rate limiting during high-volume periods, which caused the processing queue to slow. We implemented a queue management system in week six that spread processing across off-peak hours, eliminating the rate limiting issue.
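The sketch below illustrates the general shape of that fix: pace the scoring calls and defer bulk queues to an off-peak window. The pacing value and the window are assumptions for illustration, not the production settings.

```python
# Hedged sketch of the queue management idea: throttle scoring calls and hold
# bulk processing for an off-peak window. Values are assumed, not production.
import time
from datetime import datetime

CALLS_PER_MINUTE = 30           # assumed pacing to stay under API rate limits
OFF_PEAK_HOURS = range(19, 24)  # assumed window: drain bulk queues in the evening


def in_off_peak_window(now: datetime | None = None) -> bool:
    """Check whether bulk processing is allowed to run right now."""
    now = now or datetime.now()
    return now.hour in OFF_PEAK_HOURS


def drain_queue(cv_paths: list[str], score_fn) -> list[dict]:
    """Process queued CVs at a fixed pace so bursts do not trigger rate limiting."""
    results = []
    for path in cv_paths:
        results.append(score_fn(path))
        time.sleep(60 / CALLS_PER_MINUTE)  # simple fixed-interval throttle
    return results
```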

Rubric updates were needed when the agency took on a new specialism (financial services marketing roles) that the original marketing rubric did not score appropriately. We built a rubric extension for financial services roles in three days, adding sector-specific criteria without disrupting the base rubric. The modular rubric design we had used from the start made this extension straightforward.
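The sketch below shows what we mean by a modular rubric: a sector extension that adds or overrides criteria without touching the base definition. The criteria and weights shown are hypothetical examples, not the agency's actual financial services extension.

```python
# Illustrative sketch of the modular rubric idea: a sector extension is merged
# over the base rubric, then weights are renormalised. Criteria and weights
# here are hypothetical examples only.
BASE_RUBRIC = {
    "relevant_experience_years": 0.25,
    "channel_expertise_match": 0.20,
    "industry_background": 0.15,
}

FINANCIAL_SERVICES_EXTENSION = {
    "regulated_environment_experience": 0.10,  # hypothetical sector-specific criterion
    "industry_background": 0.20,               # override of the base weight
}


def build_rubric(base: dict, extension: dict) -> dict:
    """Merge a sector extension over the base rubric; extension values win."""
    merged = {**base, **extension}
    total = sum(merged.values())
    return {name: weight / total for name, weight in merged.items()}  # renormalise to 1.0
```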

Advice for UK Recruitment Agencies Considering AI Screening

Based on this implementation and subsequent projects, three pieces of advice stand out for UK recruitment agencies evaluating AI screening.

First, do the rubric work before any technology decisions. The scoring rubric defines what the AI is looking for and therefore determines the quality of the shortlist. Agencies that invest two to three hours in rubric design before touching any technology produce better outcomes than agencies that configure a tool first and try to adapt it to their standards afterwards. Your rubric is your intellectual property. Protect it.

Second, build compliance into the design, not as an afterthought. The GDPR privacy notice update, the human review requirement, and the bias audit process all shape the technical design of the system. Designing first and then attempting to make the design compliant is harder and more expensive than designing with compliance requirements as functional specifications from the start.

Third, measure shortlist quality, not just screening speed. Speed improvement is the visible and immediate benefit of AI screening. Shortlist quality improvement is the commercially significant benefit. Track your client shortlist-to-interview conversion rate and your placement conversion rate from shortlist before and after implementing AI screening. If quality improves, you have a system that is building your commercial reputation. If quality declines, you have a system that is saving time while undermining your core value proposition.
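As a trivial illustration of that measurement, the snippet below computes the conversion rate before and after; the candidate counts are round numbers chosen to match the rates quoted in this case study, not real volumes.

```python
# Tiny illustrative helper for the measurement advice above. Candidate counts
# are round numbers chosen to match the quoted 68% and 79% rates.
def conversion_rate(interviews: int, shortlisted: int) -> float:
    """Shortlist-to-interview conversion rate."""
    return interviews / shortlisted if shortlisted else 0.0


before = conversion_rate(68, 100)
after = conversion_rate(79, 100)
print(f"Shortlist-to-interview conversion: {before:.0%} -> {after:.0%}")
```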

Key Statistics on AI in UK Recruitment

The Recruitment and Employment Confederation's 2025 Technology Report found that UK recruitment agencies using AI-assisted screening process 3.2 times more candidates per consultant per month than those using manual screening, without a measurable reduction in placement quality. (REC, 2025)

According to LinkedIn's UK Talent Solutions Report 2025, hiring managers who receive AI-generated candidate summaries alongside CVs make shortlisting decisions 58% faster and report higher confidence in those decisions than those reviewing CVs alone. (LinkedIn, 2025)

A 2025 survey by the CIPD found that UK candidates who receive transparent communication about AI use in screening report a 23% higher satisfaction score with the recruitment process than those who receive no disclosure, even when the outcome (rejection) is the same. (CIPD, 2025)

Frequently Asked Questions

How much did it cost to build this AI screening system?

The build cost was £18,500 covering system design, API integration, the consultant interface, compliance review, and a three-month warranty period. Ongoing running costs are approximately £180 per month covering API costs at current volume and hosting. The payback period based on time saving alone was four months. Including the revenue uplift from increased consultant capacity, the payback period was under two months.

Can this system be used for any type of recruitment role?

The system can be adapted for any role type by changing the scoring rubric. The rubric used in this case study was specific to marketing roles. A system for engineering roles would weight technical skills and portfolio evidence differently. A system for sales roles would weight commercial track record and specific sector experience differently. The technical infrastructure is the same. The rubric defines the screening logic and changes with each vacancy type.

How did candidates respond to AI screening?

Candidate feedback was neutral to positive. The updated privacy notice generated five candidate queries in the first six months, all of which were resolved with a brief explanation of the process and an offer to provide the scoring breakdown for their application. None of the five requested a full manual review override. The agency reported no negative social media or review platform commentary attributable to the AI screening disclosure.

What would you do differently if building this system again?

We would spend more time on CV format standardisation before launch. The non-standard CV problem affected approximately 8% of applications and required manual intervention for each one. A more robust text extraction step that flags format issues and attempts multiple extraction methods before escalating would reduce this to under 2%. We would also build the rubric refinement process into the first four-week sprint rather than discovering the need for it through production use.

Conclusion

AI CV screening delivers measurable, significant improvements in recruitment efficiency when implemented with a well-defined scoring rubric, a transparent consultant interface, and a robust compliance framework. The technology is not complex. The design and the compliance work are where the quality of the outcome is determined.

The 6-hour to 25-minute improvement in this case study was not the result of breakthrough AI capability. It was the result of careful rubric design, honest piloting, and a willingness to fix the problems that emerged in the first two weeks rather than defending the original design.

Every UK recruitment agency processing more than 50 applications per vacancy is losing senior consultant time to a task that AI handles more accurately and consistently than a human under the cognitive load of reading 150 CVs in a row. The technology to fix this is available, affordable, and compliant with UK employment law when implemented correctly. The barrier is almost always uncertainty about where to start, not the capability of the technology.

Start with your most common vacancy type. Build the rubric for that vacancy type first. Pilot on the next five vacancies of that type before expanding. The evidence from your own operation is more persuasive than any case study, including this one.

If you want a custom AI recruitment screening system built for your agency or in-house HR team, see our AI automation services and how we approach compliance-first automation design for UK businesses.

Let us help

Need help applying this in your business?

Talk to our London-based team about how we can build the AI software, automation, or bespoke development tailored to your needs.
