Automated hiring tools promised to remove human bias from recruitment. In practice, a growing body of evidence suggests they may be replacing one form of discrimination with another — one that is harder to see, harder to challenge, and increasingly difficult to defend under UK law. For neurodivergent candidates, the rise of AI-powered CV screening and asynchronous video interview tools has introduced a new set of invisible barriers: rigid language pattern matching, tonality scoring, and keyword filtering that systematically disadvantage people whose communication styles fall outside a narrow statistical norm.
This is no longer a fringe concern. The Equality and Human Rights Commission (EHRC) has begun scrutinising whether automated hiring systems breach the Equality Act 2010 — specifically the duty to make reasonable adjustments and the prohibition on indirect discrimination. For any organisation using or procuring these tools, the compliance exposure is real and growing. For agencies advising clients on their hiring technology stack, the liability question is becoming unavoidable.
How AI Screening Tools Create Indirect Discrimination
Most AI screening tools are trained on historical hiring data — which means they learn to replicate the patterns of candidates who were previously hired. If those candidates were predominantly neurotypical, the model encodes neurotypical communication as the benchmark for 'good'. CV screening tools that rank candidates by linguistic coherence, sentence structure, or vocabulary range will systematically give lower scores to candidates with dyslexia, ADHD, or autism — not because those candidates lack the skills for the role, but because their written self-presentation diverges from the training distribution. Asynchronous video interview platforms that assess 'confidence' or 'engagement' through facial expression and speech cadence apply similar logic with similarly discriminatory outcomes.
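The mechanism is simple enough to sketch. Consider a hypothetical scorer that ranks candidates by how closely their writing matches the statistical profile of previous hires. If the historical pool was predominantly neurotypical, any systematic divergence in style is penalised, and job-relevant skill is never measured at all. Everything below — the feature choices, the numbers, the function names — is invented for illustration and not drawn from any real product:

```python
# Illustrative sketch: a screener trained on historical hires encodes the
# majority communication style as the benchmark. All numbers are invented.
import statistics

# Hypothetical feature vectors [mean sentence length, vocabulary ratio]
# extracted from the CVs of past hires. If those hires were predominantly
# neurotypical, this distribution becomes the de facto "norm".
historical_hires = [
    [18.2, 0.61], [17.5, 0.58], [19.1, 0.63], [18.8, 0.60], [17.9, 0.59],
]

means = [statistics.mean(col) for col in zip(*historical_hires)]
stdevs = [statistics.stdev(col) for col in zip(*historical_hires)]

def conformity_score(features):
    """Higher = closer to the historical average. Note what is NOT here:
    no measure of skill, only distance from the training distribution."""
    z = [abs(f - m) / s for f, m, s in zip(features, means, stdevs)]
    return -sum(z)  # any deviation from the norm lowers the score

typical_candidate = [18.0, 0.60]   # writes like past hires
divergent_candidate = [9.5, 0.48]  # e.g. shorter sentences, plainer vocabulary

# The divergent candidate is ranked far lower, regardless of actual ability.
print(conformity_score(typical_candidate))
print(conformity_score(divergent_candidate))
```

The point of the sketch is that nothing in the scoring function is intentionally discriminatory; the disadvantage falls out of the training data alone, which is precisely what makes it an indirect-discrimination risk.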
Under the Equality Act 2010, indirect discrimination occurs when a provision, criterion, or practice applies equally to all candidates but puts a group sharing a protected characteristic at a particular disadvantage — and cannot be justified as a proportionate means of achieving a legitimate aim. Neurodivergence falls under the protected characteristic of disability for many individuals. An AI tool that scores candidates against communication patterns correlated with neurodivergent conditions almost certainly qualifies as such a practice. The fact that the discrimination is algorithmic rather than intentional provides no legal shield. What makes this especially problematic is that many employers deploying these tools have no visibility into how the scoring models actually work — they are, in effect, outsourcing a legally consequential decision to a black box.
The Reasonable Adjustments Problem
The Equality Act's reasonable adjustments duty adds another layer of complexity. Employers are legally required to take reasonable steps to remove substantial disadvantages faced by disabled candidates. In a human-led process, that might mean offering extra time, accepting handwritten rather than typed responses, or accommodating atypical interview formats. In an automated pipeline, the adjustment mechanism often either does not exist or is not triggered until after the AI has already made its filtering decision — meaning candidates who need adjustments are screened out before a human ever reviews their application.
Some platforms offer a nominal 'accessibility flag' at the start of an application, but this typically routes the candidate to a human reviewer only if they self-identify — placing the burden of disclosure on the candidate at the earliest and most vulnerable stage of the process. Many neurodivergent candidates, particularly those who have faced stigma in previous roles, choose not to disclose. The result is a system where the duty to adjust is technically acknowledged but structurally circumvented. Employers who believe they have met their obligations by offering this option may find, under scrutiny, that their process fails the proportionality test entirely.
What the EHRC Scrutiny Means in Practice
The EHRC's interest in this area reflects a broader international pattern. In the United States, the Equal Employment Opportunity Commission has already issued guidance on AI and disability discrimination, and litigation is beginning to establish precedents. The UK is following a similar trajectory. The EHRC has the power to conduct formal inquiries, issue compliance notices, and support individual claimants — and it has signalled clearly that automated decision-making in employment is within its remit. The Information Commissioner's Office adds a further dimension: under UK GDPR, individuals have rights in relation to solely automated decisions that produce significant effects, and a rejection at screening may well qualify.
For senior decision-makers, this creates a specific and urgent compliance question: do you know, in sufficient detail, how the AI tools in your hiring pipeline make their decisions? Can you demonstrate that those decisions do not disproportionately disadvantage candidates with protected characteristics? Do you have an audit trail that would satisfy an EHRC inquiry or an employment tribunal? Most organisations currently cannot answer yes to all three. The vendors selling these tools are rarely forthcoming about model architecture, training data, or documented bias testing — and 'we relied on the vendor's assurances' is not a legal defence.
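One concrete way to close the audit-trail gap is to log every automated screening decision with enough context to reconstruct it later. The record shape below is purely illustrative — the field names are hypothetical, not a prescribed schema — but it captures the kinds of details an inquiry or tribunal would ask about: which tool and version produced the score, what threshold applied, whether a human reviewed the outcome, and what adjustments were offered:

```python
# Illustrative audit record for an automated screening decision.
# Field names are hypothetical; the point is capturing enough context
# to answer a regulator's or tribunal's questions after the fact.
import json
from datetime import datetime, timezone

def log_screening_decision(candidate_id, tool, tool_version, score,
                           threshold, outcome, reviewed_by_human,
                           adjustments_offered):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,    # pseudonymised, not raw personal data
        "tool": tool,
        "tool_version": tool_version,    # models change; pin the version used
        "score": score,
        "threshold": threshold,
        "outcome": outcome,              # e.g. "advance" / "human_review" / "reject"
        "reviewed_by_human": reviewed_by_human,
        "adjustments_offered": adjustments_offered,
    }
    return json.dumps(record)

entry = log_screening_decision(
    candidate_id="c-1042", tool="cv-screener", tool_version="2.3.1",
    score=0.41, threshold=0.55, outcome="human_review",
    reviewed_by_human=True, adjustments_offered=["extra_time", "alt_format"],
)
print(entry)
```

A log like this does not make a biased tool lawful, but its absence makes even a defensible process impossible to evidence.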
Building a Defensible, Inclusive Hiring Stack
The answer is not to abandon automation in recruitment — AI tools offer genuine value in reducing recruiter workload and, when properly designed, can reduce certain forms of conscious bias. The answer is to procure and deploy them with the same rigour you would apply to any high-risk business system. That means requiring vendors to provide documented evidence of bias testing across neurodivergent populations specifically, not just headline diversity metrics. It means building human review checkpoints before any automated tool produces a rejection decision. It means separating skills-based assessments from communication-style proxies, and being explicit about what each element of your process is actually measuring.
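The checkpoint principle above can be expressed very simply: the automated score may fast-track strong candidates, but it is never permitted to produce a rejection on its own, and any requested adjustment is handled before a score counts. The routing logic below is a minimal sketch of that design; the stage names and threshold are assumptions, not a reference implementation:

```python
# Sketch of a "no automated rejection" pipeline: the model may advance
# candidates, but only a human can reject. Threshold is illustrative.
ADVANCE_THRESHOLD = 0.75  # hypothetical cut-off for auto-advance

def route_candidate(ai_score, adjustments_requested):
    """Return the next pipeline stage for a candidate.

    Note the absence of any automated 'reject' outcome: every candidate
    the model does not advance is routed to a human reviewer instead.
    """
    if adjustments_requested:
        # Adjustments are applied BEFORE any automated score is relied on.
        return "human_review_with_adjustments"
    if ai_score >= ADVANCE_THRESHOLD:
        return "advance_to_interview"
    return "human_review"  # never "auto_reject"

print(route_candidate(0.92, adjustments_requested=False))
print(route_candidate(0.31, adjustments_requested=False))
print(route_candidate(0.92, adjustments_requested=True))
```

The design choice to encode is that a low score changes who looks at the application next, not whether anyone looks at it at all.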
Organisations should also conduct a structured audit of their existing hiring pipeline against the Equality Act's indirect discrimination and reasonable adjustments frameworks — ideally with legal counsel who understands both employment law and automated systems. This is not a theoretical exercise. Employment tribunal claims citing AI-mediated discrimination are becoming more common, and the disclosure requirements in such cases can be extensive and reputationally damaging. Proactive compliance work now is materially cheaper than reactive legal defence later.
For technology leaders and HR directors working together on hiring infrastructure, the key shift is recognising that algorithmic tools are not neutral infrastructure — they are policy decisions encoded in software. The organisations that will navigate this period well are those that treat AI in hiring with the same governance rigour they apply to financial controls or data protection: clear ownership, documented rationale, regular review, and genuine accountability when something goes wrong.
The neurodiversity compliance gap in AI hiring is not a distant risk. It is present in the tools many UK employers are using today, and the regulatory and legal frameworks to challenge it are already in place. The question for senior decision-makers is not whether this matters — it clearly does — but whether your organisation is in a position to demonstrate it has taken the issue seriously.
At iCentric, we work with organisations to build hiring and HR technology that is both effective and defensible — where automation serves the process without replacing the judgement that employment law requires. If your organisation is reviewing its hiring stack or advising clients on automated recruitment systems, we would welcome the conversation. The time to act is before a complaint lands, not after.
Get in touch today
Book a call at a time that suits you, fill out our enquiry form, or get in touch using the contact details below.