AI in Recruitment: Painful Teething Problems

Is AI too good to be true?  

For basic executive hiring routines such as primary research into candidates or companies, the scope of AIs seems unparalleled. They sweep majestically through vast fields of data, summarizing their findings at the speed of light. Their rapidity is hard to fault, and their reach is impressive. But even here, the true depth and scope of an AI remain limited. Quite simply, to perform reliably, a bot depends on learning primarily from publicly available internet data. And that landscape is full of holes, blind spots and, quite often, rubbish. Consider the CV, a key instrument in the executive search consultant's control panel.

Despite impressive aerobatics, AIs still miss information and lack nuance.

“When it comes to AI, CV data is possibly more useful for straight skills-based work,” explains Jamal Khan, Managing Partner of Amrop in Australia. “But we haven't yet found an AI product that can fully perform the desk-based aspect of the researchers’ role.” As for companies, data about listed organizations is openly available. Deeper, qualitative information is not.

Privately held companies are even more opaque. “AI is missing information,” says Amrop Global Board Member Mikael Norr, “but also tacit knowledge about companies, people, environments, owners.”

As an executive search consultant, “you know by heart which group owns a given company, and that it's impossible to recruit from there. You know that a candidate is successful in one environment but not in another.”

Are you really on top of the AI output?

“I recently had three candidates with similar backgrounds,” recalls Mikael Norr. “ChatGPT described them in exactly the same way. So you can’t rely on AI; you need your own thoughts, opinions and views. We don’t find it accurate enough. Say we want to recruit a CFO with an industrial background for a mid-cap. We ask AI for a long list drawn from listed companies in Sweden according to specific criteria: the client, the background, the brief. Compare the result with a list created by a researcher and a consultant with 10 and 20 years’ experience, drawing on our knowledge, previous work and thinking: ours takes a bit longer to build, but it’s more accurate.”

Mia Zhou is a Director of Amrop China. Candidates may also doubt an AI's verdict, she says, just as they would any quickfire judgment: “You are an AI. Do you really understand who I am?” An AI can hallucinate or fabricate, and even try to cover its mistakes. One Amrop consultant caught it out. “It started to get things wrong, saying that things were correct when they were not. So it learned to behave just like a human would if they were made to feel incompetent,” recalls Jamal Khan.

Even for basic transcription, he often switches to manual. “An AI still can't decipher some words.” A manual post-interview write-up takes an hour. But an AI transcription needs to be painstakingly cleaned of errors and interjections: “It picks up every um and ah.”

In our next blog, we’ll continue our exploration of AI pitfalls and begin to see where the solutions lie.

Read our full report: I Am Not a Robot - Part II: Pitfalls, Risks & Solutions