So, following our recent post on proposed new regulations, it is not just the EU which is looking askance at the potential risks of artificial intelligence in recruitment. From across the pond comes news that the US Department of Justice has warned employers to take steps to ensure that the use of AI in recruitment does not disadvantage disabled job candidates, or else face the pain of breaching the Americans with Disabilities Act. The ADA already requires US employers to make the equivalent of the UK’s reasonable adjustments to allow disabled candidates to take part fairly in the recruitment process. However, both the ADA and the Equality Act were conceived well before the widespread use of AI in recruitment. Consequently, there is concern that automated decision-making originally designed to reduce the scope for subjectivity and bias may actually create new disadvantage for candidates with disabilities, usually by screening out individuals who, by reason of their medical conditions, do not match the “ideals” which the algorithm is looking for.
By way of example only, a candidate whose disability limits his manual dexterity or visual acuity may have difficulties in completing a screen/keyboard-based test or application, or in handling any required interactions with a chatbot, especially under time pressure. He would therefore be put at a disadvantage by the AI even though the role applied for might not include material keyboard use, or alternative technologies could be provided if he got the job which would get him round the issue. An algorithm looking for suspicious gaps in CVs might successfully weed out those who have spent material time at Her Majesty’s Pleasure, but possibly also those who have had to take past career breaks on medical grounds but are otherwise perfectly suited to the role being filled. Similarly, it will be unlawful under the ADA, and potentially also the Equality Act, for the AI to screen out someone who would have scored highly enough to be taken further forward in the process had reasonable adjustments been made. That is the case even if that candidate would not ultimately have got the job on other grounds. Video interviewing software that analyses candidates’ speech patterns, facial expressions or eye contact could easily also have a disproportionate impact on candidates with certain disabilities.
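For those building or buying these tools, the mechanics can be as blunt as those examples suggest. Purely by way of illustration, here is a minimal Python sketch of a deliberately naive screening rule of that kind; the candidate fields, thresholds and `naive_screen` function are all invented for this sketch and do not reflect any particular vendor’s system.

```python
# Illustrative only: a hypothetical, deliberately naive screening rule of the
# kind described above. All names and thresholds are invented for this sketch.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    cv_gap_months: int       # longest gap between roles on the CV
    timed_test_score: float  # score on a keyboard-based timed test, 0-100

MAX_GAP_MONTHS = 12   # "suspicious gap" cut-off
PASS_MARK = 70.0      # timed-test threshold

def naive_screen(c: Candidate) -> bool:
    """Advances candidates on CV gaps and raw timed-test scores alone.

    A candidate whose gap reflects a medical career break, or whose test
    score is depressed by limited dexterity rather than lack of ability,
    is screened out for reasons unrelated to their suitability.
    """
    return c.cv_gap_months <= MAX_GAP_MONTHS and c.timed_test_score >= PASS_MARK

applicants = [
    Candidate("A", cv_gap_months=3, timed_test_score=82.0),
    # 18-month gap for medical treatment; otherwise well suited to the role
    Candidate("B", cv_gap_months=18, timed_test_score=88.0),
    # Limited manual dexterity slows performance on a timed keyboard test
    Candidate("C", cv_gap_months=0, timed_test_score=64.0),
]

for c in applicants:
    print(c.name, "advances" if naive_screen(c) else "screened out")
```

As the output shows, candidates B and C are rejected by proxies (the gap, the raw timed score) rather than by anything genuinely relevant to the role, which is precisely the disadvantage the DOJ is warning about.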
These are exactly the sort of concerns which underpin the EU’s thinking around the risks of the unfettered use of AI in recruitment, and hence the need for the proposed controls referred to in our blog. It is therefore to be expected that an employer taking its AI system off to the proposed national approval body for its airworthiness certificate will wish to demonstrate, so far as it can, that it has addressed this problem. That will mean showing either that its algorithm has been trained not to pick and choose on the basis of prospectively unlawful factors of this sort, or as a minimum that there are parallel safeguards in place to prevent any adverse impact. That might mean, for example, a separate recruitment process for individuals reasonably concerned that their medical condition may lead them to under-perform against the system’s expectations, perhaps involving oral or in-person interviews instead, extended time limits for the completion of tests, or the discounting or modification of disability-affected scores against criteria of only peripheral relevance to the job being hired for.
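To make that last safeguard concrete, here is one possible shape the discounting of disability-affected scores against peripheral criteria could take, again purely as an illustrative sketch: the criteria, weights and pass mark are all invented for this example. Note that re-normalising the remaining weights judges the candidate only on what matters to the role; it does not lower the pass standard itself.

```python
# Illustrative only: one possible "parallel safeguard" of the kind described
# above. Criterion names, weights and the adjustment logic are all invented.

# Each criterion carries a weight reflecting its relevance to the role.
CRITERIA = {
    "domain_knowledge": 0.5,   # core to the job
    "problem_solving": 0.3,    # core to the job
    "typing_speed": 0.2,       # of only peripheral relevance here
}
PERIPHERAL = {"typing_speed"}
PASS_MARK = 70.0

def weighted_score(scores: dict, discount_peripheral: bool = False) -> float:
    """Weighted average of criterion scores (each 0-100).

    With discount_peripheral=True, criteria of peripheral relevance are
    excluded and the remaining weights re-normalised, so the pass mark
    stays the same; the candidate is simply assessed on what matters.
    """
    items = {k: w for k, w in CRITERIA.items()
             if not (discount_peripheral and k in PERIPHERAL)}
    total_weight = sum(items.values())
    return sum(scores[k] * w for k, w in items.items()) / total_weight

# Strong on the core criteria; a dexterity-limiting disability depresses
# the typing score.
scores = {"domain_knowledge": 85, "problem_solving": 78, "typing_speed": 10}
print(weighted_score(scores))                             # 67.9 -> screened out
print(weighted_score(scores, discount_peripheral=True))   # 82.375 -> advances
```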
However, the guidance issued by the US Equal Employment Opportunity Commission earlier this month is clear that the making of a reasonable accommodation will not include “lowering production or performance standards” or abandoning necessary parts of a role simply to make it more accessible. From that it would seem to follow that there is no obligation on the employer to make its AI screening process less fussy across the board, not least because that would undermine most of the point of having it in the first place. Instead the focus will increasingly need to be on taking steps to minimise the resulting risk of disadvantage, whether by tweaking the algorithm’s programming or, as above, through a separate recruitment channel which does not permit lower standards but does allow disabled candidates the best chance of showing that they meet them.
Employer buyers of AI recruitment tools are therefore encouraged to seek early contractual reassurance from manufacturers and sellers that their systems have been designed to sidestep these potential problems. In the end, however, it is unlikely that the employer will be able to pass any liability for discrimination back to the seller. That is because it is not strictly the operation of an AI system of this sort which is unlawful, but how the employer then uses the output from it, and what arrangements it makes to prevent any in-built bias in the AI’s programming from causing actual disadvantage to disabled candidates.