It’s not you, it’s your AI
Here’s a riddle: what do you get when you hand strategic business decisions over to a machine that is riddled with outdated and harmful information?
Turns out, you get a lawsuit.
In my view, this story isn’t really about a single company or a single recruitment tool. This is a story about unchecked systems, as well as a reminder that AI isn’t yet the panacea it was promised to be.
The story first caught my attention when a class action lawsuit was brought against the HR platform Workday. While the main complaint was that the platform used AI to discriminate against candidates based on their age, by 2024 there were allegations of race and disability discrimination as well. According to Workday, it had done nothing wrong - after all, HR systems don’t hire people! Any discriminatory decision was made by a person!
This got me thinking: if we can’t deny the permanence of AI in many areas of our work, how does that interfere with our desire to be more inclusive and equitable in how we work - and, in the case of recruitment, in who we decide is “one of us”?
Of course, bias in recruitment isn’t unique to AI. For years, Applicant Tracking Systems (ATSs) have screened candidates against job requirements and allowed for bulk rejections, to help recruiters manage high volumes of applicants. I’ve heard of many creative ways job seekers have “tricked” ATSs into keeping their application in the pool, including pasting all the job description keywords into the document in white text - invisible to a human reader, but not to the software.
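To see why the white-text trick works, here is a toy sketch of how a naive keyword-based ATS screen might operate. Everything here - the function names, keywords, and threshold - is hypothetical and for illustration only; real ATS products are more sophisticated, but the principle of plain-text keyword matching is the same.

```python
# Toy sketch of a naive keyword-based ATS screen (hypothetical, illustrative).
# A plain-text matcher cannot tell visible text from white-on-white text,
# which is exactly why keyword stuffing gets past it.

def keyword_score(resume_text: str, required_keywords: list[str]) -> float:
    """Fraction of required keywords that appear anywhere in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in required_keywords if kw.lower() in text)
    return hits / len(required_keywords)

def passes_screen(resume_text: str, required_keywords: list[str],
                  threshold: float = 0.5) -> bool:
    """Keep the application in the pool if enough keywords match."""
    return keyword_score(resume_text, required_keywords) >= threshold

keywords = ["python", "kubernetes", "agile"]

# A genuinely strong candidate who happens not to use the exact keywords:
honest = "Seasoned engineer who builds reliable backend services."

# A weaker application with the keywords stuffed in (e.g. in white text):
stuffed = "Keen gardener. python kubernetes agile"

print(passes_screen(honest, keywords))   # rejected: no keyword matches
print(passes_screen(stuffed, keywords))  # kept: all keywords "found"
```

The point of the sketch is that the screen measures keyword presence, not suitability - so it can be gamed by applicants and can silently filter out qualified people who describe their experience in different words.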
Beyond ATS tools, we know there are many ways people are discriminated against. One leader told me that he has a bias against people with tattoos, because it’s not a “clean aesthetic”. Research by Gunn Elisabeth Birkelund, Professor of Sociology at the University of Oslo, showed that your name affects your chances of being called to an interview. If you’re Per Olav, or Knut, you’re far more likely to get one. And it’s not just a question of whether your name sounds Norwegian: among foreign names there’s a hierarchy too, and unfortunately, Mohammad is near the bottom. That’s before we even get to the interview, where discrimination goes further still, depending on your accent, which schools you went to, or what hobbies you keep (skiing is far more respectable than basketball, didn’t you know?!).
So if bias is inherent in recruitment, why does it matter that AI does it too?
Simply put: we’re on the AI bus, and we’ve fallen asleep at the wheel.
So, let’s look at AI bias.
AI bias can come from a few different places, such as data bias or algorithmic bias. Data bias comes from the information your AI was trained on in order to give you answers. Where did that information come from? Who gave it that information? What is their world view, and what did it determine was important for the AI to know? As we start to answer those questions, and consider how tech companies seemingly struggle with diversity, you can imagine what biases might come out. Ruha Benjamin, Professor of African American Studies at Princeton University, wrote about the dangers of unchecked bias in technology in her book “Race After Technology” and continues to write about bias in AI.
AI bias can also be algorithmic. Imagine you lead a tech scale-up in the Nordics and you want to become more diverse. You’ve started using an AI tool, and the supplier has been transparent about its training data. You decide to begin by feeding the AI your historic hiring data from across the company, to learn what you’re actually looking for in your next hires. Unfortunately, you’re most likely just going to replicate existing bias. That’s one of the ways algorithmic bias works - it gives us more of the same, even in subtle ways like preferring “rock stars” or “hustlers” or “leaders”, language that we know is heavily biased toward white cishet men.
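The feedback loop in that scale-up scenario can be made concrete with a deliberately minimal sketch. The data, group labels, and scoring approach below are all hypothetical; the stripped-down "model" simply learns the historic hire rate per group, which is enough to show how ranking new candidates by patterns in past decisions reproduces the past.

```python
# Minimal sketch of algorithmic bias via historic hiring data.
# All data here is hypothetical and exaggerated for illustration.

from collections import defaultdict

# Hypothetical historic decisions: (candidate_group, was_hired).
# Group A was historically favoured over group B.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def fit_hire_rates(records):
    """Learn the historic hire rate per group - a stand-in for a model
    that picks up group membership as a predictive signal."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

rates = fit_hire_rates(history)
print(rates)  # {'A': 0.75, 'B': 0.25}

# Two equally qualified new candidates, ranked by the learned score:
candidates = ["B", "A"]
ranked = sorted(candidates, key=lambda g: rates[g], reverse=True)
print(ranked)  # ['A', 'B'] - the historic gap is simply reproduced
```

A real model would use many more features, but the mechanism is the same: if group membership (or a proxy for it, like a name, a postcode, or hobby) correlates with past hiring decisions, optimising against that history bakes the old preference into every new ranking.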
Okay, let’s recap. Bias exists in recruitment. And AI is also biased. So where does that leave us? What options do we have?
I think it comes down to taking back control. While AI can do many things in recruitment (checking for biased language, for example), any recruitment system without verifiable checks and balances is highly likely to perpetuate the same biases that already exist. AI adds fuel to that fire, and it’s time we reconsider whether we need that as an accelerant.
Tumi is an inclusion strategist, facilitator and recruiter. She works to connect ambitious companies with the specific actions they need to make inclusion make sense. She works in Oslo and has South African roots.