What are the pros and cons of AI in recruitment? How can it be used to promote diversity and inclusion, and to improve the candidate experience? Nicola Thomas, lecturer in Work Psychology at Sheffield University, explores these questions.
After a lecture I gave last week, a student came up to ask me if I thought it was ethically ambiguous for a company to use AI in their recruitment process. “It just seems to me”, they said, “that a human should be making decisions that could change my whole future”.
With ChatGPT reaching 100 million users in just two months, faster than any platform before it, this student’s question is one that is taking centre stage in the recruitment space: what is the role of AI in recruitment? Is it good or bad? Should we be worried or relieved?
As with almost everything in life, the reality is nuanced and muddled. A company can make efficiency savings while at the same time creating a worse candidate experience, even putting candidates off applying in the first place.
In this article I am going to explore the two sides to AI in the recruitment process, the yin and yang of efficiency, candidate experience, transparency and fairness, cheating, and the experience of recruiters.
Efficiency

The good: One of the biggest benefits of implementing AI in processes in general is the huge potential for time saving. Algorithms are particularly beneficial for ‘grunt work’, that is, tasks that take a lot of time yet require little skill.
In graduate recruitment, companies can receive thousands of applications, making candidate sifting time consuming. Let’s take Unilever as an example. Unilever uses HireVue, an AI-powered video interview platform, to conduct initial candidate video interviews. This has saved time by replacing person-to-person telephone interviews.
The bad: Often, AI-powered video interview platforms use algorithms such as facial expression analysis and voice tone analysis to assess candidates throughout their interview. Yet this technology is not yet backed by evidence, and there are several concerns about its effectiveness and reliability.
One leading concern is that of bias and discrimination. Facial expression analysis algorithms may be trained on data that is not representative of the population, leading to biases and inaccuracies in the results.
Moreover, facial expressions may be influenced by factors such as cultural background, gender, and personal characteristics, which can result in inaccurate or discriminatory results.
A lack of clarity on the data algorithms were trained on can be a big problem, and it also raises questions about reliability. Do people who smile more perform better in graduate schemes? There is little data to support this. Some algorithms also analyse a candidate’s heart rate through their video data. Yet this leads to other questions: are candidates with faster or slower heart rates better in their roles? What does this mean for candidates who experience anxiety?
The unclear: With increased efficiency on the one hand, yet the potential for increased bias and discrimination on the other, careful thought must be applied in practice.
While facial expression analysis in video interviews is an intriguing technology, it is not yet backed by evidence and raises concerns about bias, discrimination, and privacy. It should be used with caution and should not be relied upon as the sole means of evaluating job candidates. Instead, it should be used in conjunction with other evaluation methods, such as skills assessments and structured interviews.
Candidate experience

The good: The good news for candidates is that increased personalisation in the recruitment process, such as chatbots updating candidates on where they are in the process, has been shown to give candidates a more positive experience.
This can be particularly important for candidates who are applying for their first graduate role, where clear and regular communication can help them in dealing with potential rejections. You can read more about creating a positive candidate experience remotely.
The bad: On the other hand, if a recruitment process is fully automated with AI, candidates may feel that the process is impersonal and lacks human interaction. This can lead to a negative candidate experience, particularly if candidates have questions or concerns that cannot be addressed by a machine.
The unclear: While some evidence highlights the potential of AI to create better candidate experiences, it also suggests this depends on the AI-powered tools being as human-like as possible. This means feedback to job seekers that is communicated in a human way, with the option for candidates to speak to human recruiters when needed. You can read about the impact a lack of personal contact can have on the candidate experience.
It is not yet clear how candidates will feel if they do not know how AI-tools are being used in recruitment, or how they will feel applying for multiple positions and being rejected by multiple algorithms.
Perceptions of fairness
The good: Recently, Unilever implemented an AI-powered recruitment tool, Pymetrics, to help improve the diversity of their candidate pool and reduce bias in the recruitment process.
Unilever found that the use of Pymetrics led to a significant increase in the diversity of its candidate pool. For example, the percentage of female applicants increased by 16% compared to the previous year, and the percentage of applicants from underrepresented ethnic groups increased by 4%.
The tool did not take into account factors such as a candidate’s name, educational background, or work experience, which can all be sources of bias, thereby helping the company reduce bias in the recruitment process.
The bad: In 2018, a global online retailer implemented an AI-powered recruitment tool to help screen and evaluate job candidates. The tool was designed to scan resumes and identify the most qualified candidates based on patterns in their work experience, education, and other factors. However, the tool was soon found to have a significant bias against female candidates, as it had been trained on historical hiring data that was predominantly male. This highlights how using AI in recruitment can reduce fairness if the tool is not free from bias.
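To make this concrete, here is a minimal sketch in Python, using invented numbers rather than the retailer’s actual data, of how a screening model trained on skewed historical hires can learn a word that acts as a proxy for gender:

```python
# Toy sketch (hypothetical data): how a screening model trained on
# skewed historical hiring records can reproduce that skew.

# Historical records as (cv_keyword, hired) pairs. Suppose CVs from one
# group more often contained "executed", and that group was hired more
# often, while CVs mentioning "women's" (e.g. a women's chess club)
# came mostly from candidates who were not hired.
history = [
    ("executed", True), ("executed", True), ("executed", True),
    ("executed", False),
    ("women's", False), ("women's", False), ("women's", True),
    ("women's", False),
]

def hire_rate(keyword):
    """Estimate P(hired | keyword appears) from the biased history."""
    outcomes = [hired for kw, hired in history if kw == keyword]
    return sum(outcomes) / len(outcomes)

# A naive screener scoring new CVs by these learned rates will
# systematically down-rank CVs containing "women's", even though the
# word says nothing about a candidate's ability.
print(hire_rate("executed"))  # 0.75
print(hire_rate("women's"))   # 0.25
```

The point of the sketch is that the model never sees gender directly: the bias arrives through correlations in the historical data, which is why simply removing a gender field does not by itself make a screening tool fair.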
A recent study (Human versus artificial intelligence in personnel selection: The impact on perceived fairness and transparency in the International Journal of Selection and Assessment) also found that, overall, candidates perceive AI to be less fair and transparent than human decision-makers in the recruitment process. Candidates who had been evaluated by AI felt less informed about the selection criteria and less confident in the fairness of the process than those evaluated by human assessors.
The unclear: Considering the case studies above, there is conflicting evidence about bias and fairness when using AI in the recruitment process. Can a well-thought-out recruitment process reduce bias and help candidates perceive the process as fair? Potentially. Yet this is not a given.
You may be interested to read about how Rare and Herbert Smith Freehills are tackling bias in recruitment.
Cheating and candidate honesty
The good: Algorithms have recently been used to pick up various forms of candidate cheating. HireVue, for example, has detected candidates wearing a hidden earpiece to receive information from another person during their video interview.
AI-based tools can also effectively detect fake resumes and other forms of deception like falsified work experience or education with a high degree of accuracy.
The bad: It recently made the news that a recruitment team unknowingly recommended ChatGPT for a job interview, after a candidate used the AI to complete a task in the recruitment process. It’s not just companies that have access to AI, after all.
With increasingly sophisticated and mainstream AI tools like ChatGPT, candidates can use these tools to complete tasks with greater ease.
The unclear: It has always been possible for candidates to cheat in recruitment, whether by lying on a CV or by having a friend complete ability tests for them. Yet employers are increasingly able to detect fraud, and candidates are increasingly able to commit it.
On balance, there are obvious benefits and drawbacks to using AI in the recruitment process. The key for companies is to be aware of the potential drawbacks when deciding whether to use AI tools.
I would encourage employers to keep these questions in mind when implementing AI in their recruitment processes: Is there evidence linking this tool to better job performance (for example, does facial expression in video interviews predict performance on the job)? How will these tools impact our candidates? And how can we use these tools to improve the candidate experience?