Updated: Jul 26, 2021
Validity, ethics and trust are all factors in taking on AI-powered tools within HR. This mini-series looks at three areas that HR practitioners who are selecting, implementing and applying artificial intelligence tools should have at least a basic understanding of.
In Part One, we looked at the prevalence of AI in the workplace and the ROI challenges facing organisations. We also introduced some basic descriptions of the different types of AI with relevant HR examples.
In Part Two, we now turn to tips on how to start assessing the validity and reliability within AI HR tools.
Are you responsible for selecting or implementing AI-powered technology for your HR function? If so, this article may help you form questions to ask your technology vendors when assessing the validity and reliability of an HR framework that has been adapted into algorithmic AI form. It is intended as a basic introductory guide to several scientific measures in this context; we'll also look at forming your 'dream team' to support you when reviewing a vendor's technology.
But firstly, as HR, why should we care?
As Tomas Chamorro-Premuzic (2016) of University College London puts it:
"The datification of talent is upon us, and the prospect of new technologies is exciting. But the digital revolution is just beginning to appear in practice, and the research lags our understanding of these technologies."
It appears that every week there is a new HR AI-powered kid on the block, as seen in the $5 billion of VC investment in HR tech during 2020. But the pace of adoption and the rigour of the research are at times out of alignment. Here are some key points of concern shared by Chamorro-Premuzic and a more recent 2020 literature review in the European Journal of Work and Organizational Psychology, which examined recent developments and advances in AI digital selection procedures (DSPs):
There is a lack of validation meta-analysis when compared to the more traditional or offline delivery methods research base.
More validation studies are required to demonstrate how the transfer of measurement from conventional to digital techniques might affect validity.
A large portion of real-world HR practitioners are not sufficiently equipped to evaluate such tools' accuracy. Some organisations may be blindly using them without understanding their validity.
The 2020 review concluded that there are significant gaps in the Occupational Psychology evidence base showing that DSPs are effective. There are calls for a re-alignment between research and practice.
What can we as HR do to help bridge this gap?
One way is to fully understand the theoretical origins of your HR processes, knowledgeably examine what underpins your new or existing technology, and then use that to fully comprehend how HR decisions are made within your organisation. Within this context you can begin to frame the TA/HR challenges being experienced, leveraging this knowledge in discussions with your vendors to get the most out of the tool from both a talent and a business ROI perspective.
So let's dive into some knowledge sharing:
From Organisational Psychology Theory to AI Adaptations
As the saying goes,
'If you don't know where you've come from, you don't know where you're going' (Maya Angelou).
The diagram below shows a quick view of the pathway from Organizational Psychology / HR Theory to AI Adaptations.
What do we mean by Reliability and Validity?
In simple terms, reliability tells us how precisely a measurement has been made. Is it repeatable and consistent? For example, if I use a tape measure to see how tall I am and repeat the action over and over, it will keep giving me the same answer, making it very reliable. Note, however, that you must use the same methods under the same circumstances. If I stood on a box whilst measuring my height with the same tape measure and repeated the measurement, I would still get the same result each time; this is a reliable but invalid measure, as it's not my actual height.
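The tape-measure intuition can be made concrete. Test–retest reliability, for instance, is typically quantified as the correlation between two administrations of the same measure. Below is a minimal sketch; the candidate scores are invented for illustration.

```python
# Test-retest reliability as the Pearson correlation between
# two administrations of the same assessment (illustrative data).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Scores for five candidates on the same assessment, two weeks apart
time_1 = [72, 85, 60, 90, 78]
time_2 = [70, 88, 62, 91, 75]

print(round(pearson_r(time_1, time_2), 2))  # → 0.98, a highly reliable measure
```

A coefficient close to 1 indicates the tool gives consistent results on repetition; as with the box-and-tape-measure example, high reliability alone says nothing about validity.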
Below are some types of reliability to look out for:
Validity is concerned with questions such as;
Does an experiment, or in our case, a tool, test what it's meant to test?
Is it really measuring what we're trying to measure?
Can we trust the results?
Let's take a look at some types of validity to look out for:
What else can we do?
Assemble a Dream Team to be part of the RFP review process. In an ideal scenario, you could draw on the following expertise from within your organisation: an organisational psychologist, a data scientist with an understanding of psychology, and a machine learning engineer. Alternatively, partner with an HR tech consulting firm, like Talent Tech Solutions, which runs this process as part of every vendor RFP and tech stack analysis. This can be a short engagement during the RFP review stages, when the initial data modelling is set up pre-implementation, or a full analysis of your existing tech stack.
Ask the Right Questions
The Society for Industrial and Organizational Psychology (SIOP) suggests the following questions to ask a vendor when evaluating an AI assessment tool:
How do you model the task-based and team-based requirements of the job?
How do you define the human skills and traits that are relevant for selecting job applicants and predicting performance?
How have you validated the tool in a way that complies with legal and professional standards?
What specific AI methods are employed in your product (e.g., regression or classification)?
How is data gathered and prepared when developing the model(s)?
What empirical evidence can you share that supports the reliability and validity of your AI tools (e.g., test–retest reliability, validity coefficients, and group mean differences)?
Have you compared your results with traditional alternatives or other AI tools?
How confident can I be that your results will apply in my organisational setting?
At what point are humans involved in the final deliverable or outcome?
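One of the empirical checks named in the questions above, group mean differences, can be screened for with a simple adverse-impact calculation based on the four-fifths (80%) rule: each group's selection rate should be at least 80% of the highest group's rate. The sketch below uses invented selection counts for illustration.

```python
# Adverse-impact screen using the four-fifths (80%) rule:
# compare each group's selection rate to the highest group's rate.
# The counts below are invented for illustration.
selected = {"group_a": 40, "group_b": 24}
applicants = {"group_a": 100, "group_b": 100}

rates = {g: selected[g] / applicants[g] for g in applicants}
top = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / top  # impact ratio relative to the highest-selected group
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} impact_ratio={ratio:.2f} {flag}")
```

Here group_b's impact ratio is 0.60, below the 0.80 threshold, which would prompt a closer look at the tool's group mean differences. A screen like this is a starting point for conversation with a vendor, not a substitute for a proper validation study.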
What if your organisation's new tool is rolled out globally?
Cross-cultural validation is an important question to raise with your vendors if you work in a global organisation. When a tool is adapted or implemented in a culturally or linguistically different region, it is essential to establish that its scales retain their internal and construct validity outside the culture or context in which they were established. How well does the vendor's technology maintain its validity and reliability across these settings?
Keen to learn more?
The Society for Industrial and Organizational Psychology (SIOP) has an excellent white paper series on the role of AI in the future of work.
Google Scholar is a great place to start a search for scholarly based studies and articles.
MIT Sloan Management Review often has excellent white papers with the latest in research applied to the workplace.
Join us for the final part of this series as we look at Ethics and Trust, asking the question ‘How well do you know your HR technology stack?’
Chamorro-Premuzic, T., Winsborough, D., Sherman, R., & Hogan, R. (2016). New Talent Signals: Shiny New Objects or a Brave New World? Industrial and Organizational Psychology, 9(3), 621–640.
Navarro, D.J., Foxcroft, D.R., & Faulkenberry, T.J. (2019). Learning Statistics with JASP: A Tutorial for Psychology Students and Other Beginners.
Woods, S.A., Ahmed, S., Nikolaou, I., Costa, A.C., & Anderson, N.R. (2020). Personnel selection in the digital age: A review of validity and applicant reactions, and future research challenges. European Journal of Work and Organizational Psychology, 29(1), 64–77.