Despite the backlash, says Kabrina Chang, robot-led job interviews are here to stay because of their efficiency
To cinematically (and comedically) depict the soullessness of corporations four decades ago, The Survivors had Robin Williams' character fired from his job – by a parrot. Does the 21st-century version involve being interviewed – and maybe rejected for a job – by a robot?
Nine out of ten companies now use technology in hiring – increasingly, interviews conducted through artificial intelligence (AI) programs. The process requires candidates to record themselves answering timed questions and submit the video for algorithmic analysis of their words, tone, and even facial features. Some applicants have been told via email that they didn't get the job without ever having dealt with a human interviewer.
This dehumanizing experience, and the biases of human programmers embedded in the algorithms, have sparked a backlash. Amazon scrapped a resume-screening computer program in 2018 after it was found to discriminate against women.
On the other hand, Unilever has said its AI technology helped it hire its most diverse class of employees ever, notes Kabrina Chang (CAS'92), associate dean for diversity, equity, and inclusion at the Questrom School of Business and clinical associate professor of business law and ethics. Chang's labor law course includes sessions on technology in hiring, and she says more than half of her students have experienced such interviews. BU Today asked her whether we have inevitably entered a brave new world of automated interviews, and what impact that might have.
With Kabrina Chang
BU Today: A robot hardly seems like the best introduction to a workplace. Why do companies do this?
Kabrina Chang: A number of reasons, most notably that it's an easier and cheaper way to sift through thousands of resumes. With the companies I've looked at – and my students' experience is the same – it's usually your first pass. A company may have a GPA requirement; that will eliminate a lot of people. Then you get 10,000 resumes for a job at Unilever, and Unilever doesn't have the manpower to go through them. They use an AI program as a first screen.
You do the interview on your phone, watching yourself on the screen. Then the AI does what it does to weed out a bunch of people. At the companies I've looked at, applicants who pass that first screen eventually talk to a person. [But] you could be out without ever speaking to a human.
BU Today: What do you teach about this in your class? Are you for or against this approach, or somewhere in between?
Kabrina Chang: I'm in between. We watch some videos from companies that offer it. We look at it as an information-gathering opportunity, just as we look at drug testing and job-content testing, but always through that bias-and-discrimination lens. Knowing what we know about bias in AI, is this a good management practice or a risky one? We know there is bias in AI, based on who is teaching the machine.
In the first semester after the lockdown – classes were hybrid – we had a student in the room who helped run the Zoom. In my labor law class, that facilitator was a Vietnamese student. When we first started talking about AI, he said, "Where I'm from, when we talk to an authority figure, we're taught [to] be still and have a flat affect. If I did that with an AI, it would weed me out immediately," because it could read that as a lack of animation, energy, and motivation.
So it wasn't just the built-in bias based on skin color; it was bias based on who you are and how you carry yourself. That led to a conversation about, aren't you just going to game [the AI program]? Won't you perform what you know you're supposed to do for this platform? And how is that different from an [in-person] interview?
BU Today: Do your other students [who have had AI interviews] have the same concerns?
Kabrina Chang: None of them loved it. Most of them said, "This is so strange. I'm having a conversation with myself on my phone." They found the interaction very awkward. There were some concerns about bias, but the most common theme was: this is very strange.
BU Today: Isn't there a downside for companies – might they be missing out on talent because of the technology's biases?
Kabrina Chang: I looked at Unilever, one of the first companies to use this. After using it for a while, they said they had hired their most diverse class yet – gender diversity, racial diversity, ethnic diversity. They had success stories. I wonder whether [other] companies use it and then don't keep track of what's happening. It's more efficient, but they may not collect the data on who actually comes to work for them as a result of using this program.
You get a few takes [in an AI interview]. When I record my interview, I get maybe two or three tries. That's better than a face-to-face interview, where you don't get a do-over. But it's also weird, my students tell me, because it pushes them to become more mechanical, giving [the AI program] what they think it wants. The re-recording is great, but it also reinforces that weird, dehumanizing component.
BU Today: If you were a head of corporate recruiting, would you conclude, we need to do this, but we need to vet the technology, as Unilever did, and keep tracking it?
Kabrina Chang: Yes. As with so much technology, you can't just rely on the technology, especially when it comes to something as subjective as attitude, because of the built-in bias, but also because of the things the technology doesn't pick up on. If I were a big company, I could see the benefits, but you need to do more than just buy the product. You need to test, test, test it to make sure the bias is as low as possible, and keep a close eye on what's happening.
You need to be ready for that, knowing it will turn off a certain percentage of applicants. If I don't advance [in the process] without ever speaking to a human, rejection takes on a new meaning. I don't know how you would avoid that outcome when using AI. That's part of the price, and it might not be such a big price: not advancing [in the interview process] will sting no matter who tells you that you're not advancing.
I don't see it going away. With the advent of Zoom, people are much more comfortable speaking into a camera on their monitor. I think that broke down some of the barriers. But it's still weird.