Recruitment AI and Hiring Bias

One of the biggest opportunities offered by recruitment AI is the potential to reduce or even eliminate human bias.

It’s an exciting and worthy goal, and one that has been eagerly seized upon by HR managers who are otherwise daunted by the prospect of reducing hiring bias through anti-bias training programs.

But experience has shown that, unless managed very carefully, recruitment AI will absorb and propagate human biases. The problem – as you would expect – lies in the data on which the algorithm has been trained.

Bias in, bias out

While human talent acquisition professionals can weigh many factors when making a hiring decision, an AI or Machine Learning (ML) model can only learn from the data it has been fed. If that data contains instances of human bias, or if bias is unintentionally introduced, the resulting model will be flawed.
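To make the “bias in, bias out” idea concrete, here is a minimal sketch in Python. The data is synthetic and the column names are hypothetical, not drawn from any real recruitment system; the point is only to show how a model trained on biased hiring history reproduces that bias.

```python
# Illustrative only: a toy "bias in, bias out" simulation on synthetic data.
# Feature names and proportions are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two candidate features: a genuine skill score and a gender flag (1 = male).
skill = rng.normal(0, 1, n)
is_male = rng.integers(0, 2, n)

# Historical "hired" labels reflect past human bias: equally skilled women
# were hired less often than men.
p_hire = 1 / (1 + np.exp(-(1.5 * skill + 1.0 * is_male - 0.5)))
hired = rng.random(n) < p_hire

# A model trained on this history simply learns the same pattern.
X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired)
print("learned weights [skill, is_male]:", model.coef_[0])
```

Nothing in the code tells the model to discriminate; the positive weight it learns on the gender flag comes entirely from the historical labels it was given.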

Two examples of biased hiring AI illustrate this problem.

1. Amazon’s AI recruiting tool

The most famous example of biased AI to date comes from Amazon, which discontinued an AI recruiting tool after it became clear the system “did not like women”. The problem lay in the 10 years of recruitment data on which the algorithm had been trained.

This data set reflected an uncomfortable truth about tech companies: men have historically been hired over women (even today, women hold only 25% of IT roles in the U.S.). Having no other context to draw from, the AI was inadvertently taught by past data to prefer male over female candidates.

2. Bavarian Broadcasting study

This fascinating study tested the bias of face-scanning recruitment AI. In theory, the AI tool scores video applicants on personality traits such as openness, extraversion, and agreeableness. In practice, it proved easily swayed by appearances.

Bavarian Broadcasting hired an actor to record the same answer to the same question several times, keeping her tone of voice, facial expressions, and body language nearly identical. However, she changed her appearance for each recording: wearing glasses, wearing a headscarf, or sitting in front of a bookshelf.

The results skewed oddly depending on appearance. The AI found the actor less conscientious with glasses, more open and agreeable with a headscarf, and considerably less neurotic with a bookshelf behind her.

This example shows the importance of feeding the ML model only data it makes sense to learn from. In this case, the AI drew odd conclusions such as “glasses = less conscientious”. In all likelihood, the model worked in development but fell apart when exposed to a wider data set.
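One simple way teams can catch this kind of spurious correlation is a permutation check: shuffle a feature that shouldn’t matter and see whether the model’s scores move. This is a generic sanity check I’m sketching here, not anything the study or the vendor actually ran, and the feature names are hypothetical.

```python
# Generic permutation check: shuffle one column and measure how much the
# model's predicted scores change. Model and features are hypothetical.
import numpy as np

def permutation_sensitivity(model, X, feature_index, n_rounds=20, seed=0):
    """Mean absolute shift in predicted scores when one feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = model.predict_proba(X)[:, 1]
    shifts = []
    for _ in range(n_rounds):
        X_perm = X.copy()
        rng.shuffle(X_perm[:, feature_index])  # break any real link to the label
        shifts.append(np.abs(model.predict_proba(X_perm)[:, 1] - baseline).mean())
    return float(np.mean(shifts))

# A large value for something like a "wears_glasses" column would be a red flag
# that the model has learned an appearance-based shortcut rather than a real signal.
```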

Face-scanning AI has a long way to go

The rush to adopt face-scanning recruitment AI is premature. In November 2019, a rights group filed an official complaint with the Federal Trade Commission about HireVue’s use of “unproven” AI systems that scan people’s faces and voices to assess traits such as “willingness to learn” and “personal stability”. Experts have labeled the tech “profoundly disturbing”, “opaque”, “pseudoscience”, and “a license to discriminate”.

In my view, face scanning in recruitment is a mistake because it abandons what should be AI’s essential advantage: its blindness to appearance, race, age, gender, and so on.

Removing and policing biased data

Unbiased AI is certainly possible with the careful management of the data it learns from and ongoing policing of data sets, even after the product is released.

McKinsey recommends tackling bias in AI through better data collection involving more cognizant sampling, using third parties to audit data and ML models, and improving transparency about processes and metrics to boost understandability and avoid a “black-box AI” situation.
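As a flavor of what such an audit can look for, here is one widely used check: the “four-fifths” disparate impact ratio of selection rates between groups. This is a generic fairness metric with made-up example numbers, not a method taken from the McKinsey recommendations themselves.

```python
# Generic audit check: ratio of selection rates between demographic groups.
# A ratio below roughly 0.8 (the "four-fifths rule") is a common warning sign.
import numpy as np

def disparate_impact_ratio(selected, group):
    """Lowest group selection rate divided by the highest group selection rate."""
    selected = np.asarray(selected, dtype=bool)
    group = np.asarray(group)
    rates = [selected[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# Hypothetical example: 200 shortlisting decisions across two groups,
# where group B is shortlisted noticeably less often than group A.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], size=200)
selected = rng.random(200) < np.where(group == "A", 0.5, 0.3)
print(round(disparate_impact_ratio(selected, group), 2))
```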

The key to reducing AI bias in hiring tools is to flip our usual assumption that more data = better results. When it comes to hiring bias, the quality of an AI tool is determined by what is removed from the data set, not what is put in.
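In practice, that often starts with deliberately stripping protected attributes and obvious proxies out of the training data before the model ever sees them. The sketch below uses hypothetical column names; a real review would also hunt for subtler proxies (postcodes, school names, career gaps) with domain experts rather than relying on a simple drop list.

```python
# Minimal sketch of "quality by removal": drop protected attributes and known
# proxy columns before training. Column names are hypothetical examples.
import pandas as pd

PROTECTED_OR_PROXY = ["gender", "date_of_birth", "full_name", "photo_url", "marital_status"]

def strip_sensitive_columns(candidates: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the candidate table without protected or proxy columns."""
    present = [c for c in PROTECTED_OR_PROXY if c in candidates.columns]
    return candidates.drop(columns=present)

# Usage (hypothetical): features = strip_sensitive_columns(raw_candidate_data)
```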