The upside to data limitations in student recruitment

Feb 13, 2020 | Selection & assessment

Content provided by: Amberjack

In the era of AI and deep learning, Amberjack explores the upside to data limitations in student recruitment.

Whilst we live in an era of technological possibility, from a future talent perspective, organisations are still struggling with their key data foundations.

Until our data lakes are full of the right data, and that data is properly scrubbed and organised, the application of deep learning and AI is just as likely to take us round in circles as to take us forward.

Arguably this is a good thing for now: the pace of change in the industry means that even the largest, most capable and best-resourced teams are struggling to keep up, and it buys us all a little time to think about the implications of the brave new world it seems inevitable we will enter.

On the one hand, it seems strange that we would worry more about imperfections in a robot-run assessment process than we currently worry about the imperfections in human-run processes: our inherent human biases mean that, at best, today's processes can aspire to be as un-unfair as possible.

On the other hand, it is perhaps only natural that a conscious bias is less tolerable than an unconscious bias and, whilst machines can now learn, the foundations for that machine learning need to be consciously set.

There is also a realistic limit on what can be done to address human biases, juxtaposed against a hope that the same may not be true of machine bias: if we can optimise the set-up of algorithms and continually improve them, maybe we can achieve a utopian, bias-free selection process? It's a high-stakes situation: AI for assessment, applied well, could be a true force for good; applied badly, it could be disastrous at a societal level.

So, what are the moral and ethical boundaries in the application of AI for assessment? Who should set them and how do we police them? How do we ensure that we continually improve from the best possible starting point?

Whilst even the most progressive organisations still only give themselves a 4 out of 10 for data readiness (Amberjack Future Focus, 14 September 2019), we should use the temporary reprieve to set standards that ensure the Fifth Industrial Revolution results in the future talent processes of dreams, not nightmares.

Future talent specialists clearly aren’t the only stakeholders in the debate about the application of AI for assessment. They are, however, arguably at the forefront of that debate. The nature of future talent programmes (high volume, fewer hiring variables, strategic sponsorship) means that they are usually the most obvious place to start the implementation of new assessment technologies aimed at driving efficiency and effectiveness.

As a result, as future talent specialists, Amberjack has a deeply vested interest in helping to set augmented assessment up for success. Therefore, whilst our clients wrestle with their data and work to lay the best possible data foundations, we will be wrestling with the principles of fairness, reliability, transparency, privacy/security and accountability as they relate to the application of AI to assessment.

Along with many of the pioneers in the application of AI for assessment, as well as leading thinkers from more broadly across the AI/Data Science community, we will be forming an Advisory and Ethics board to offer support, best practice advice and guidance for employers. Whilst it is difficult to confidently define governance boundaries whilst AI technology is still evolving, we will work to create and evolve consensus on societal principles, values and best practices to maximise the chances of AI adding as much value in assessment for selection as it already adds in accessibility.

To find out more call Amberjack on 01635 584130 or email hello@weareamberjack.com
