PeopleAMP

Case study · HR tech · Live

SpecMatch: the AI that gets you shortlisted for UK public-sector roles

An AI-driven platform that scores a candidate against every essential and desirable criterion on a UK public-sector person specification — and writes STAR-structured supporting statements in the exact voice, English, and character budget those forms demand.

Sector
HR tech · UK public sector
Shape
Freemium SaaS, web
Status
Live in production
Domains
NHS · Civil Service · LA · Edu
01

The 3-hour problem

Anyone who’s applied for a role with the NHS, the UK Civil Service, a local authority, a police force or a university knows the pain. Read the person specification. Map your experience against every essential and desirable criterion. Rewrite your supporting statement from scratch. Paste it into a form that truncates at a character count nobody warned you about.

Three to four hours per application. And if you miss a single essential criterion — even by accident — the application is binned at sift, usually without feedback. You never find out why.

The consumer AI tools get this hopelessly wrong. They write in American English (“organize,” “realize” — instant red flag on a UK form). They ignore character limits. They don’t know what an NHS Band 6, a Civil Service HEO, or a police-officer core competency actually requires. And they have no concept of the UK public sector’s STAR behaviour-example format, which drives shortlisting decisions.

SpecMatch was built to close that gap — and it’s deeply, obsessively domain-specific in a way a general-purpose tool simply cannot replicate.

02

Our approach

Step one wasn’t code. It was calibration.

Before writing a feature spec, we spent hundreds of hours inside real person specifications across NHS, Civil Service, local authority, police, fire, armed forces, and higher education. The patterns are there if you look hard enough — essential criteria always phrased a particular way, desirable criteria always in a particular place, behaviours scored against named competency frameworks (Civil Service behaviours, NHS Leadership Model, NHS 6Cs, local-authority core competencies), character budgets that vary by employer.

We encoded those primitives into the platform as first-class citizens — not afterthoughts. The STAR structure isn’t a prompt trick; it’s how the generator reasons. The character budgeting isn’t a post-hoc trim; it’s a constraint the model is asked to satisfy exactly. UK English, ATS-safe formatting, and employer-specific framing are non-negotiable.
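To make the "constraint, not trim" idea concrete, here is a minimal sketch of a hard character-budget check — illustrative names only, not SpecMatch's actual code. Generated text is validated against the employer's limit before it is shown, and a generator loop would regenerate or tighten until the check passes:

```python
# Illustrative sketch only: hypothetical names, not SpecMatch's production code.
# The character budget is treated as a hard constraint on generated text,
# not a post-hoc truncation of it.

def within_budget(text: str, limit: int, counts_spaces: bool = True) -> bool:
    """Return True if `text` fits the employer's character limit.

    Some public-sector forms count every character including spaces;
    others don't -- this sketch models both cases with a flag.
    """
    n = len(text) if counts_spaces else len(text.replace(" ", ""))
    return n <= limit

# A 250-word behaviour statement is roughly 1,500 characters, so a
# generation loop keeps revising until this check passes.
statement = "Situation: I led a cross-team data migration under a fixed deadline."
assert within_budget(statement, limit=1500)
```

The point of the sketch is the placement: the check sits between generation and display, so an over-budget draft is never the user's problem to fix.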

03

What we built

Application tools

  • A gap-analysis engine that scores the applicant’s profile against every essential and desirable criterion on the person specification, with a clear readout of what’s covered, partially covered, or missing.
  • An AI supporting-statement generator that addresses every criterion and writes STAR-structured behaviour examples at the exact character counts each employer demands (Civil Service behaviours, for instance, are typically 250–500 words; the platform hits them on the nose, every time).
  • NHS 6Cs alignment scoring and Civil-Service-grade calibration so outputs are framed in the employer’s own competency language — not translated from a generic CV.
  • A role-specific CV generator, a keyword analyser that cross-references the person specification, and an employment-gap explainer for applicants with non-linear career histories.
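As a rough illustration of the covered / partial / missing readout — this is a naive keyword-overlap stand-in, not SpecMatch's scoring model — the shape of a gap analysis looks like this:

```python
# Illustrative sketch, not SpecMatch's implementation: a naive keyword-overlap
# scorer that labels each person-specification criterion as covered,
# partially covered, or missing against the applicant's profile text.

def coverage(criterion: str, profile: str) -> str:
    # Keep substantive words (longer than 3 chars), normalised to lowercase.
    terms = {t.lower().strip(".,") for t in criterion.split() if len(t) > 3}
    hits = sum(1 for t in terms if t in profile.lower())
    ratio = hits / len(terms) if terms else 0.0
    if ratio >= 0.6:
        return "covered"
    if ratio > 0.0:
        return "partial"
    return "missing"

spec = [
    "Experience of stakeholder management in a public-sector setting",
    "Knowledge of safeguarding legislation",
]
profile = "Five years of stakeholder management experience in an NHS trust."
report = {criterion: coverage(criterion, profile) for criterion in spec}
```

A real engine would use semantic matching rather than keyword overlap, but the output contract is the same: one explicit label per criterion, so a missed essential is visible before submission rather than at sift.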

Interview preparation

  • An interview predictor that reads the person specification and surfaces the questions most likely to come up — broken down by behaviour, technical, and values-based categories.
  • A STAR builder for structured evidence development and a practice-scoring engine that rates answers against the role’s own success indicators.

Outputs & accessibility

  • ATS-safe DOCX and PDF export with the formatting applicant tracking systems expect — no tables, no text boxes, no invisible fonts breaking the parse.
  • UK English throughout, enforced end-to-end — from prompts to output to document export.
  • A free tier for acquisition (gap analysis and a supporting statement, no credit card) so applicants can prove the value on their own application before they pay.

04

Under the hood

  • Domain-calibrated AI generation tuned per public-sector employer profile (NHS bands, Civil Service grades, local-authority competencies) rather than a one-size-fits-all prompt.
  • A criterion-parsing layer that turns a raw person specification into structured essential/desirable slots the generator can score against and fill.
  • A character-budgeting constraint system so STAR answers and supporting statements always sit inside the exact word and character limits each employer enforces.
  • An ATS-safe document-export engine producing DOCX and PDF outputs that round-trip cleanly through public-sector applicant tracking systems.
  • A freemium account tier so the platform acquires qualified users without paywalling the moment of truth.
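To show what the criterion-parsing layer produces — again a simplified sketch with made-up names, not the production parser — the job is to turn a raw person specification into structured essential/desirable slots the generator can score against and fill:

```python
# Illustrative sketch, not the production parser: split a raw person
# specification into essential/desirable slots for downstream scoring.
import re

def parse_spec(raw: str) -> dict[str, list[str]]:
    slots: dict[str, list[str]] = {"essential": [], "desirable": []}
    current = None
    for line in raw.splitlines():
        line = line.strip()
        if re.match(r"(?i)^essential", line):
            current = "essential"          # entering the essential section
        elif re.match(r"(?i)^desirable", line):
            current = "desirable"          # entering the desirable section
        elif line.startswith("-") and current:
            slots[current].append(line.lstrip("- ").strip())
    return slots

raw = """Essential criteria
- Degree or equivalent experience
- Experience of managing budgets
Desirable criteria
- Knowledge of NHS governance
"""
slots = parse_spec(raw)
# slots["essential"] holds two criteria; slots["desirable"] holds one.
```

Real specifications arrive as PDFs and tables with far messier headings, but the output shape is the point: once criteria are discrete slots, each one can be scored, flagged, and addressed individually.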

05

Why this one mattered

SpecMatch is a case study in why general-purpose AI loses on category-specific problems. Every detail that a lazy build would skip — UK English, exact character counts, NHS 6Cs, behaviour frameworks — is exactly the detail that decides whether an applicant gets shortlisted or binned. Skip any of them and the product has no right to exist.

We ship products like this because we actually care about the last 10%. The edge cases. The domain quirks. The employer conventions nobody documents anywhere. It’s slower to build well. It’s also the entire reason people pay for a specialist tool instead of firing up a chatbot.

If your product lives or dies on domain depth — whether that’s regulatory, clinical, financial, educational, or sector-specific — this is the shape of partner you want.

If this is the level you need

Your next product could live here.

Book a 30-min call. We’ll scope what shipping something this serious would look like for you — timeline, price, pay-on-results — before you commit to anything.