The public is generally familiar with the fact that Amazon.com, Google, and other technology companies use computer programs and artificial intelligence to predict consumer behavior – think of the pop-up ads that seem targeted just to your interests. However, artificial intelligence (AI) recruiting tools may not be as reliable in the hiring process as one might expect. Amazon recently abandoned its effort to build an AI recruiting system. The company's goal was to design computer programs that could instantly pick top talent from a pool of resumes, but the technology fell flat because it appeared, unintentionally, to favor male applicants.
The recruiting tool’s suspected gender bias stemmed from Amazon’s previous ten years of hiring practices. The computer program was trained on successful resumes from a historically male-dominated industry. As a result, it unwittingly penalized resumes containing terms that rarely appeared in past hires’ resumes, such as “women’s,” and downgraded graduates of all-women’s colleges. While the company used the recruiting engine to search for new hires, it never relied solely on its rankings. Ultimately, after losing hope for a gender- and race-neutral solution, executives scrapped the project.
When a tech giant like Amazon fails to produce an unbiased AI recruiting system, employers should be on guard.
New hiring technologies include:
- Video interview programs that analyze candidates’ facial expressions and tone of voice.
- AI tools that sift through public social media data to gauge how well a candidate will get along with current employees.
- Virtual reality systems that help recruiters visualize employee collaboration and identify the specific skills and personality traits needed for team projects.
- Candidate sourcing and resume screening applications, some of which target underrepresented groups such as women, ethnic minorities, and veterans.
The world’s largest recruiting network is Microsoft’s LinkedIn, which algorithmically ranks candidates based on their fit for job postings. But even the VP of LinkedIn Talent Solutions, John Jersin, warns that the service is not a replacement for traditional recruitment methods. “I certainly would not trust any AI system today to make a hiring decision on its own,” he said. “The technology is just not ready yet.”
Even if an employer does not intentionally use technology to discriminate on the basis of a protected classification, it could still be liable for employment discrimination if its practices have a disparate impact on protected classes of job applicants.
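One common screening heuristic for disparate impact is the EEOC’s “four-fifths rule”: if a protected group’s selection rate is less than 80% of the most-favored group’s rate, the practice warrants closer scrutiny. The sketch below is a minimal, illustrative audit of that kind; the column names (`gender`, `selected`) and the sample data are hypothetical, and a real audit would involve validated statistical analysis and advice of counsel.

```python
# Minimal, illustrative adverse-impact check (EEOC "four-fifths rule").
# Assumes hypothetical screening results with a protected-class field
# ("gender") and a boolean outcome field ("selected").

from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Return {group: fraction of applicants selected} for each group."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        hits[r[group_key]] += 1 if r[outcome_key] else 0
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose selection rate is below 80% of the highest group's rate."""
    best = max(rates.values())
    return {g: (rate / best, rate / best < threshold) for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical output of an AI resume-screening tool.
    applicants = [
        {"gender": "female", "selected": True},
        {"gender": "female", "selected": False},
        {"gender": "female", "selected": False},
        {"gender": "female", "selected": False},
        {"gender": "male", "selected": True},
        {"gender": "male", "selected": True},
        {"gender": "male", "selected": False},
        {"gender": "male", "selected": False},
    ]
    rates = selection_rates(applicants, "gender", "selected")
    for group, (ratio, flagged) in four_fifths_check(rates).items():
        print(f"{group}: selection rate {rates[group]:.0%}, "
              f"impact ratio {ratio:.2f}, flagged={flagged}")
```

A check like this is only a first pass: flagged results call for human review of the underlying tool and data, not an automatic conclusion of discrimination.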
Employers on the cutting edge of recruiting should be cautious when implementing AI in their hiring practices. Two takeaways for employers are: 1) there is no substitute for human interaction when recruiting talented employees who fit a company’s needs; and 2) if you rely solely on AI recruitment technology to screen and source candidates, you should incorporate human oversight and auditing into the process to ensure that your hiring practices do not result in unintended bias or discrimination.
*Special thanks to Thomas Raine, who assisted in the drafting of this post. Thomas is a third-year Juris Doctor candidate at the University of Miami School of Law.