Is it judgement day for situational judgement tests?

Aug 14, 2024

Despite recent criticism, situational judgement tests have the power to build a diverse and inclusive workforce, explain Krithika Barcham and Emily Goldsack at Talogy.

As talent strategies and technology advance, talent acquisition teams and assessment leads are continuously exploring whether to take an ‘out with the old, in with the new’ approach to assessment.

While change and adaptation are certainly prudent for ensuring effective measurement of the criteria that matter to your organisation, doing away with tried and tested methodology purely for the sake of change is counterproductive.

Many have a bone to pick with situational judgement tests (SJTs). Although they have gained prominence as an employment tool over nearly a century (McDaniel et al., 2001), they have been getting a lot of stick recently.

There are concerns that SJTs don’t cater to neurodiversity, due to the hypothetical nature of typical question formats, or that they can be circumvented through AI misuse. However, the evidence behind these claims needs examining: there is currently little indication in the literature that SJTs lead to adverse impact when designed correctly.

SJTs can (and should) be engaging, informative, and insightful, giving candidates an inkling of what to expect once in role. The efficacy and fairness of these assessments hinge significantly on design and implementation. We must objectively evaluate SJTs’ rigour and quality, and use that as the basis for judging their suitability for our respective assessment processes.

Well-designed SJTs with inclusive input

A well-designed SJT starts with the involvement of diverse subject matter experts (SMEs) from the outset.

This ensures that the assessment as a whole reflects a diverse range of perspectives and experiences and accurately mirrors the real-life situations that candidates might encounter, which enhances both the relevance and the fairness of the assessment.

Inclusive input adds true value, reflecting the experiences of diverse people who perceive situations differently. Additionally, having different perspectives inform the design helps to ensure that the response options, and the scoring that underpins the tool, reflect diverse experiences.

Finally, and importantly, involving diverse SMEs in the design gives you insight into people’s lived experiences and their preferences around what they would like and need from an assessment experience.

This allows for comprehensive instructions that speak to a variety of needs, and for the creation of alternative assessment formats that can be offered to participants who disclose reasonable adjustments.

Being transparent about the options available to participants who disclose adjustments allows each individual to choose the format in which they can perform at their best.

Accessibility at the forefront

Accessibility is another key factor when designing SJTs. Achieving full compliance with the Web Content Accessibility Guidelines (WCAG) is ideal; however, even SJTs that do not fully meet the WCAG standards still offer important advantages over other forms of assessment.

Specifically, SJTs are untimed, which means they do not need to be taken under the time pressure of traditional tests, where speed and accuracy are treated as important factors. This is especially crucial for anxious candidates or those with reasonable adjustments that may affect their response time (Rabe et al., 2024).

Removing time constraints gives candidates a chance to demonstrate their abilities without undue stress, allowing for a fairer assessment experience while also providing clarity on the nature of the role.

Popularity and candidate perceptions

Recruiters tend to gravitate towards SJTs because they provide two-way information: the company learns about the candidate and how they would respond to a range of relevant scenarios, and the candidate learns about the nature of the company they are applying to and the skills they would need to leverage.

It provides a realistic job preview, and applicant reactions, particularly to multimedia SJTs, are generally positive (Bardach et al., 2021). In early talent recruitment especially, SJTs can act as a differentiator, providing immersion and making the experience, and the company as a whole, feel memorable.

Other reasons for the continued popularity of SJTs are that they address job-related competencies that otherwise cannot be as easily measured through traditional multiple-choice tests. They have useful levels of criterion-related and construct validity, have incremental validity over cognitive ability measures, and can be presented in a variety of media formats.

Additionally, SJTs have demonstrated lower reported subgroup differences in comparison to other test formats (Whetzel, Sullivan & McCloy, 2020).

That said, in recent years there have been growing concerns about whether SJTs may unfairly disadvantage neurodiverse groups specifically.

We conducted our own analysis on one of Talogy’s bespoke SJTs, drawing on a large data set of over 450,000 candidates who had completed the assessment.

We looked at different categories of neurodiversity and found no evidence of meaningful differences in performance compared with candidates who were not neurodiverse. While steps were taken in the design of this test to ensure fairness, the finding provides strong evidence that well-designed SJTs need not disadvantage neurodiverse groups (Shalfrooshan et al., 2023).
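For illustration only, a minimal sketch of how a subgroup-difference check of this kind might look in Python, comparing mean scores between two candidate groups using a standardised effect size (Cohen's d). The group labels, simulated scores, and the 0.2 threshold are assumptions for the example, not Talogy's actual data or methodology.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Standardised mean difference between two score distributions."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    # Pooled standard deviation (assumes roughly comparable variances).
    pooled_sd = np.sqrt(
        ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
        / (len(a) + len(b) - 2)
    )
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical SJT scores for two candidate groups (simulated for the example).
neurodiverse_scores = np.random.default_rng(1).normal(50, 10, 5_000)
comparison_scores = np.random.default_rng(2).normal(50, 10, 5_000)

d = cohens_d(neurodiverse_scores, comparison_scores)
# A common rule of thumb treats |d| < 0.2 as a negligible group difference.
verdict = "negligible" if abs(d) < 0.2 else "worth investigating"
print(f"Cohen's d = {d:.3f} -> {verdict}")
```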

Nonetheless, taking steps to make all candidates feel less apprehensive about SJTs is critical to helping them perform optimally and to minimising subgroup differences.

Addressing concerns beforehand by clearly communicating the purpose and benefits of SJTs, providing practice opportunities, and adding immersive elements that allow candidates to connect with the company can help candidates from all groups feel more at ease. This approach can also reduce the negative impact of test anxiety on performance.

The age of AI

To enhance resistance to artificial intelligence, complex design and scoring methodology are key. SJTs mimic the nuance of a role: while there is a right and a wrong answer, things are rarely that straightforward once in role.

When designing SJTs, Talogy employs a range of response options, presentation strategies, and back-end scoring methodologies to help make them less susceptible to AI.

Research shows that the rate format (which Talogy most often employs) tends to outperform other formats such as rank-order or most/least formats, when it comes to internal consistency reliability, test–retest reliability, incremental validity over cognitive ability, group differences, respondent reactions, and examinee completion time (Ployhart & Ehrhart, 2003).
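As an illustration, here is a minimal Python sketch of one common way rate-format SJT items are scored in the literature: candidates rate each response option for effectiveness, and their ratings are compared against a subject-matter-expert consensus key. The item options and keying values below are invented for the example and do not represent Talogy's actual scoring methodology.

```python
from typing import Dict

# Hypothetical SME consensus key: mean effectiveness rating (1-5) per response option.
SME_KEY: Dict[str, float] = {
    "escalate_to_manager": 4.2,
    "ignore_the_issue": 1.3,
    "raise_it_with_the_colleague_directly": 4.6,
}

def score_rate_item(candidate_ratings: Dict[str, int], key: Dict[str, float]) -> float:
    """Score as the mean absolute distance from the SME consensus (lower is better)."""
    distances = [abs(candidate_ratings[option] - key[option]) for option in key]
    return sum(distances) / len(distances)

# Example candidate: rates every option on the same 1-5 scale.
candidate = {
    "escalate_to_manager": 4,
    "ignore_the_issue": 2,
    "raise_it_with_the_colleague_directly": 5,
}
print(f"Mean distance from consensus: {score_rate_item(candidate, SME_KEY):.2f}")
```

Because the score depends on proximity to a consensus profile across all options rather than on picking a single "correct" answer, this style of keying is harder to game than a simple multiple-choice item.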

Additionally, while AI may offer extra information to a candidate, it does not necessarily help them achieve a pass mark.

While there are certainly some concerns about the use of AI in recruitment processes, particularly in application forms and personal statements, assessments such as SJTs are not being deployed at those stages. Using SJTs and other robust psychometrics in place of these elements would allow for more manipulation-resistant assessment at the initial stages of the recruitment journey.

We will always advocate for the integration of strong psychometrics to supplement traditional application materials, to deliver a comprehensive and fair evaluation of candidates.

Fairness

Whilst some tools claim to be resistant to AI manipulation, these claims often overlook the importance of accessibility. A well-designed SJT can be more accessible and fairer than assessments that rely heavily on speed of response or other potentially exclusionary criteria.

Research shows that although SJTs exhibit group differences, they are lower in magnitude than those exhibited by cognitive ability measures, making SJTs an important predictor of performance (Whetzel, Sullivan & McCloy, 2020). Additionally, multimedia SJTs can show even lower subgroup differences and could be a further way to enhance fairness (Bardach et al., 2021).

Commitment to quality and ethical standards

At the heart of our approach is a commitment to quality and ethical standards. Talogy’s focus is on taking a research-driven, data-led approach: considering all the facts and innovating prudently to deliver optimised outcomes for reliability, prediction, and DEI.

We prioritise the wellbeing and success of our candidates, taking pride in being a safe pair of hands, and a trusted and transparent partner to our clients.

This vision and ambition guide us in developing assessments that meet the needs of our client organisations and candidates, with a steadfast focus on integrity and practicality.

We absolutely do not see the current landscape and the emergence of AI as the death knell for SJTs. If anything, it provides us with the ability to keep innovating and upgrading our tools.

This, combined with our dedication to creating inclusive, accessible, and scientifically grounded assessments such as our suite of SJTs, reflects our commitment to both our clients and candidates in the age of change and AI to come.
