Watchdog finds AI tools can be used unlawfully to filter candidates by race, gender

The UK’s data protection watchdog has found that AI recruitment technologies can filter candidates by protected characteristics, including race, gender, and sexual orientation.

The Information Commissioner’s Office also said that, in an attempt to guard against unfair bias in recruitment, some AI tools inferred such characteristics from information in candidates’ applications. Those inferences were not accurate enough to monitor bias effectively, and they were often made without a lawful basis and without the candidate’s knowledge, the ICO said.

The findings are part of an audit [PDF] of organizations that develop or provide AI-powered recruitment tools, carried out between August 2023 and May 2024.

In a prepared statement, Ian Hulme, ICO director of assurance, said: "AI can bring real benefits to the hiring process, but it also introduces new risks that may cause harm to jobseekers if it is not used lawfully and fairly. Our intervention has led to positive changes by the providers of these AI tools to ensure they are respecting people’s information rights.

"Our report signals our expectations for the use of AI in recruitment, and we’re calling on other developers and providers to also action our recommendations as a priority. That’s so they can innovate responsibly while building trust in their tools from both recruiters and jobseekers."

While the research found that many providers of AI recruitment tools monitored accuracy and bias, not all did. The ICO also identified "features in some tools [which] could lead to discrimination by having a search functionality that allowed recruiters to filter out candidates with certain protected characteristics."

Under the UK’s Equality Act 2010, "protected characteristics" include age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.

The audit found that tools "estimated or inferred" people’s gender, ethnicity, and other characteristics from their job application or even just their name, rather than asking candidates directly. "This inferred information is not accurate enough to monitor bias effectively. It was often processed without a lawful basis and without the candidate’s knowledge," the report said.

The report also revealed that tools sometimes collected far more personal information than was needed. "In some cases, personal information was scraped and combined with other information from millions of people’s profiles on job networking sites and social media. This was then used to build databases that recruiters could use to market their vacancies to potential candidates. Recruiters and candidates were rarely aware that information was being repurposed in this way," the report said.

Following its research, the ICO produced a range of recommendations for developers and providers of AI-powered recruitment tools. These include obligations already set out in data protection law, such as processing personal information fairly, explaining the processing clearly, keeping the personal information collected to a minimum, and not repurposing or processing personal information unlawfully. It also recommended that providers and developers carry out risk assessments to understand the impact of their tools on people’s privacy.

Worldwide, AI-assisted recruitment tools have drawn attention in legal cases, policies, and new laws.

In April, the US Equal Employment Opportunity Commission (EEOC) weighed in on a discrimination lawsuit against Workday, arguing the HR and finance software vendor may qualify as an employment agency because of the way its AI tool screens applicants. The plaintiff in the case said he was turned down for every one of the more than 100 jobs he applied for through the Workday platform, and alleges illegal discrimination on the basis of race, age, and disability. Workday argues the case is without merit.

In 2022, the Biden administration and the Department of Justice warned employers using AI software for recruitment that they must take extra steps to support disabled job applicants or risk violating the Americans with Disabilities Act (ADA).

Earlier this year, legal experts warned that using AI for recruitment is deemed a “high-risk” activity under the EU’s new AI Act, creating a number of obligations for developers. ®
