What recruiters need to know about ChatGPT for assessment and selection

Jul 27, 2023

ChatGPT is disrupting assessment and selection processes. Robert Newry from Arctic Shores explains what recruiters need to know.

In the last few months, recruiters’ view of ChatGPT has shifted from a novelty that could help hiring teams with productivity to a major concern about how it will undermine assessment and selection.

There are daily stories of application forms, online tests and even video assessments being aced by candidates using AI, distorting the sifting process.

The key question then is what should talent acquisition managers make of all this hype around ChatGPT?

From a productivity perspective there will be some great enhancements, and these will naturally filter through the tech. The big moral question arises in assessment: is the use of generative AI to be encouraged or outlawed?

Three fundamental points recruiters need to know

1. ChatGPT and generative AI work at lightning speed, predict outcomes highly accurately and are self-learning. Any text-based exercise, whether it’s writing an application form or working out the answer to a reasoning-based question, can be done in real time and as well as, if not better than, a human. It may not produce the perfect response the first time, but it can be ‘trained’, as prompts refine ChatGPT’s output to deliver more accurate responses.

2. New plug-ins arrive all the time to address its weaknesses. For a while ChatGPT was hopeless at numerical reasoning; the Wolfram Alpha plug-in makes it superhuman.

3. ChatGPT 4, which is 100 times more powerful than ChatGPT 3, sits behind a paywall. This means that candidates from more privileged and wealthy backgrounds will have an advantage, potentially throwing diversity and inclusion efforts into disarray.

Should we be fearful of generative AI?

Armed with this knowledge, many recruiters’ response is to be fearful of generative AI and protect current processes by deterring and detecting candidates using AI tools.

This is what the education sector did initially. Yet the more educators tried to ‘deter and detect’, the more students found ways around the detection. Those who didn’t complained that the detection tools weren’t accurate.

Furthermore, when your organisation (and even your own recruiting team) is internally embracing AI as a huge productivity tool, how can you tell candidates that they are ‘cheating’ if they use such tools to help with their application?

Embracing AI

Recruiters, like educators, will quickly realise that AI needs to be embraced, not treated as criminal activity.

Companies and recruiters that recognise generative AI is changing the way we work, and therefore what skills they need to measure and hire for, will outshine those that ignore its impact or try to hold on to processes that prevent its use.

The question recruiters should be asking themselves and their assessment vendors is: how do we adapt our processes and future-proof them so that generative AI is seen as a positive tool?

If work tools can process information and reason more effectively, then candidates in new roles will need to think even more critically and creatively. Along with the need to learn quickly and be more adaptable, we may also place more focus on interpersonal skills such as empathy.

You can read more about how to make AI for recruitment a strength, not a weakness.

Equally, recruiters and talent acquisition teams must ask critical questions of any assessment vendor making claims about AI and its impact on recruitment.

What questions should be asked?

Recruitment specialists are not AI experts, so knowing what questions to ask can be a challenge. Here are some essential ones, taken from the recent Open Standard for Responsible AI and the standards set by the British Psychological Society:

– What training data have you used, and can you demonstrate against a representative general population that there is no adverse impact on specific groups? The questions used in this testing are critical.

– What published, public sources of material support the claims and underpin the model being used?

– Can the scored results be analysed and justified (i.e. no black-box use of correlations)?

– Which risk category does it fall into under the EU AI Act, and how will it comply with the requirements?

We have an opportunity to re-assess what we measure and how we measure it, which means we should be leveraging generative AI, not smothering it.

Robert was on ISE’s panel of experts for the ‘ChatGPT and the impact on early careers recruitment’ webinar.

