Artificial intelligence is especially useful when it comes to lightening the workload, since it can take on the most routine tasks. One example is its application in recruitment, where an algorithm carries out the first screening of the CVs submitted to job offers published on the main employment portals. But is it sensible to leave this decision to this type of system?

According to the Society for Human Resource Management (SHRM), an average company in the U.S. spends about $4,000 and 24 days on each new hire. It is a costly and arduous process, and a bad hire can be even more expensive, so there is a clear interest in optimizing it. Data and artificial intelligence seemed to be the solution that would also eliminate human bias, but algorithms have been shown to perpetuate inequalities by inheriting the biases of those who program them, among other common mistakes.

One of the most infamous cases was Amazon's recruitment tool, revealed in 2018: it had learned to discriminate against CVs in which terms associated with women appeared.

The reason was that the system had been trained to evaluate résumés against the patterns in those the company had received over the previous 10 years, which came mostly from men, a clear reflection of male predominance in the technology industry.
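To make the mechanism concrete, here is a minimal sketch of how a screener trained on biased historical decisions ends up penalizing gendered vocabulary. The CVs and labels below are invented for illustration; this is not Amazon's actual system or data.

```python
# Hypothetical illustration: a CV screener trained on biased historical
# hiring decisions. All data below is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past decisions skewed male, so CVs mentioning "women's" were mostly
# rejected -- the labels themselves encode the bias.
cvs = [
    "software engineer python linux",
    "captain women's chess club software engineer",
    "java developer cloud infrastructure",
    "women's coding bootcamp mentor java developer",
    "python developer machine learning",
    "women's hackathon winner python developer",
]
hired = [1, 0, 1, 0, 1, 0]  # biased historical outcomes, not skill

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The model learns a negative weight for the token "women", even though
# it says nothing about a candidate's actual ability.
idx = vec.vocabulary_["women"]
print("learned weight for 'women':", model.coef_[0][idx])
```

The model is only doing what it was asked: reproducing past decisions. If those decisions were biased, the bias is learned along with everything else.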

But the problem is not limited to this example. Job offers for engineering positions written in a biased tone can attract more male candidates than female ones, regardless of skills and qualifications. And the more employers click on male profiles, the more male profiles the platform surfaces to other employers and the fewer female profiles remain visible to companies offering highly skilled positions, which discourages women from applying and leads to them receiving this type of offer less often.
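This rich-get-richer dynamic is easy to reproduce. Below is a toy simulation, not any real platform's ranking code: equally qualified profiles, a ranker that boosts past clicks, and a small initial click bias are enough for male profiles to dominate the top of the results.

```python
# Toy feedback-loop simulation; the click rates and ranking rule are
# invented assumptions, not a real employment platform's algorithm.
import random

random.seed(0)
# 50 male and 50 female profiles, all equally qualified.
profiles = [{"gender": g, "clicks": 0} for g in "MF" * 50]

def top_k(pool, k=10):
    # Rank purely by accumulated clicks; ties broken randomly.
    return sorted(pool, key=lambda p: (p["clicks"], random.random()),
                  reverse=True)[:k]

for _ in range(1000):  # simulated employer search sessions
    for p in top_k(profiles):
        # Small initial bias: employers click male profiles a bit more.
        if random.random() < (0.32 if p["gender"] == "M" else 0.28):
            p["clicks"] += 1

print("male profiles in the top 10:",
      sum(p["gender"] == "M" for p in top_k(profiles)))
```

Profiles that get early clicks stay on top and keep accumulating them, so even a small initial bias compounds over time.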

Something similar happens when a woman seeks a job with more responsibility and better pay: if other women in a similar employment situation have shown interest in lower-paid jobs, the algorithm may send her only offers with worse conditions, based on that evaluation of her profile.
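A sketch of this effect, assuming a simple nearest-neighbour recommender; the feature vectors and job titles are invented:

```python
# Hypothetical collaborative-filtering sketch: jobs are recommended
# based on what "similar" users applied to, so the candidate is steered
# toward lower-paid roles regardless of her own goals. Invented data.
import numpy as np

# Feature vectors: (years of experience, number of relevant skills).
similar_users = np.array([[6.0, 9.0], [5.5, 8.0], [6.0, 8.5]])
their_choices = ["junior analyst", "junior analyst", "office assistant"]

# The new candidate is fully qualified for a senior role...
candidate = np.array([6.0, 9.0])

# ...but the recommender only looks at her nearest neighbours.
dists = np.linalg.norm(similar_users - candidate, axis=1)
for i in dists.argsort()[:2]:
    print("recommended:", their_choices[i])
# The output is the lower-paid roles her peers historically chose,
# not the senior position her profile actually supports.
```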

In 2021, the New York City Council passed bill 2021/144, which regulates employers' and employment agencies' use of "automated employment decision tools" in making employment decisions. The measure, which comes into force in 2023, bans the use of such tools unless they have passed a third-party "bias audit" no more than one year prior to their use: an impartial evaluation by an independent auditor that tests whether the system discriminates against people based on their origin or gender, among other characteristics.
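What such an audit checks can be sketched in a few lines. The example below computes selection rates and impact ratios by group on invented data; NYC's rules define their own required metrics, and the 0.80 threshold used here comes from the EEOC's four-fifths rule, cited only as a common benchmark.

```python
# Minimal bias-audit sketch: selection rates and impact ratios per
# group. The decision data is invented for illustration.
from collections import Counter

# (group, was_selected) pairs produced by a screening tool.
decisions = ([("men", True)] * 40 + [("men", False)] * 60
             + [("women", True)] * 25 + [("women", False)] * 75)

totals, selected = Counter(), Counter()
for group, ok in decisions:
    totals[group] += 1
    selected[group] += ok

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())
for g, r in sorted(rates.items()):
    # Impact ratio: a group's rate divided by the highest group's rate.
    print(f"{g}: selection rate {r:.2f}, impact ratio {r / best:.2f}")
# men:   selection rate 0.40, impact ratio 1.00
# women: selection rate 0.25, impact ratio 0.62 -> below the 0.80
# benchmark, a common red flag for adverse impact.
```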

It is this issue that has led the UK's data protection regulator, the Information Commissioner's Office (ICO), to launch an investigation into the use of AI systems in recruitment processes. It is especially concerned that the data used to train these algorithms could have a negative social impact on vulnerable groups by replicating historical biases. Its plans also include a guide for AI developers on how to ensure that algorithms treat people and their information fairly.

Indeed, the negative impacts of algorithms in the work environment can occur not only in the hiring process but also in more advanced phases of employment. This happened at the Russian company Xsolla in 2021: the firm, dedicated to video game payment services, fired 30% of its staff because an AI system said they were unproductive. And there are qualities these systems simply do not take into account. That was the case of Sarah Wysocki, a fifth-grade teacher at MacFarland Middle School in Washington. Despite being highly valued at her school for her creativity and motivational skills, Wysocki was fired after an algorithm concluded that her students' marks had not grown as planned; her strengths did not fall within the parameters the tool measured.

For its part, the Spanish Ministry of Labor has already published the 'Practical guide and tool on the business obligation to provide information on the use of algorithms in the workplace'. This is a pioneering step towards transparency and algorithmic accountability in this field, as it is the first specific legislative action that requires these principles from companies. With it, the Spanish government established employers' obligation to inform employees of the use of algorithms, as well as to carry out an audit and impact assessment of them.

A poorly designed algorithm, or one that has not gone through an assessment to eliminate possible biases before its implementation, can mean, as we have seen, a loss of human talent and therefore of value for the company, in addition to the reputational risk. One of the problems in this situation is that these systems are often contracted without knowing how they have been designed or what lies behind their code.

Having responsible and non-discriminatory AI systems is a competitive advantage, and the regulatory examples mentioned above point in that direction. This is expected to spread to more countries as the major regulations currently being drafted, such as the NYC bill or the European AI Act, come into force.