Algorithms in recruiting – how to prevent gender bias.


In 2018, Amazon's AI-based resume screening tool became a prime example of gender bias in AI systems. Something similar can be observed with the planned algorithm of the Austrian labor market service AMS, which allegedly discriminates against certain groups of people. Even though such systems are only intended to be supportive, they raise questions and criticism, especially with regard to gender bias.

Distorted perception in sourcing.

Gender bias, or gender-related distortion, describes how knowledge and perception are skewed by gender and thereby influence research results or data. It can occur in many areas and covers both research that focuses primarily on problems affecting men while ignoring social or biological gender, and the different assessment of the same behaviors or characteristics in women and men.

This type of bias plays a major role in HR in particular. In our last post, we talked about applicant tracking systems (ATS), among other things. There, too, algorithmic support is increasingly used, and the absence of gender bias cannot be guaranteed. We asked ourselves a few questions in this context:

Should algorithms be allowed to make decisions about career opportunities for different population groups? Can neutrality in this decision-making be guaranteed so that gender bias is eliminated? And what steps can be taken to achieve this?

We explore these and other questions about algorithms in personnel selection in this blog post.

When talking about artificial intelligence (AI), the term machine learning is often used synonymously. In fact, machine learning is a sub-area of artificial intelligence. It describes the ability of algorithms to identify correlations in training data and to make decisions based on them.

What could this training data look like? It has to be topic-related and relevant. In Amazon's case, it was a collection of resumes used to automate their review and pre-selection. The lack of diversity in the training data mainly affected female applicants, as female terms and names were not considered relevant. The data was therefore topic-specific and relevant, but not sufficient to create an inclusive training profile.

In addition, every algorithm is ultimately written by humans and therefore cannot be infallible. Is that true?

Cognitive distortion: stereotyping.

In fact, the human brain is prone to errors – especially since many decisions and processes happen unconsciously and automatically. HR Today speaks of so-called cognitive biases. One of these is stereotyping: we often have to decide very quickly whether a person we have just met seems trustworthy or not. Unconsciously, this can also occur in algorithms – Amazon is a good example here: when it became known that male applicants were given preferential treatment over female applicants, gender was removed as a feature from the algorithm. Women were nevertheless excluded more frequently, because the algorithm still reacted to "typically female" leisure activities or all-girls schools and filtered out these resumes accordingly.
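To make this proxy effect tangible, here is a minimal, purely hypothetical sketch in Python: even when the gender column is removed from the training data, a model can reconstruct the same bias from a correlated feature such as the type of school attended. All data, feature names, and numbers below are invented for illustration and have nothing to do with Amazon's actual system.

```python
# Hypothetical illustration of a proxy feature: gender has been removed,
# but a correlated column lets the model reproduce the historical bias anyway.
from sklearn.linear_model import LogisticRegression

# Toy historical hiring data (entirely made up).
# Columns: [years_of_experience, attended_all_girls_school]
X = [
    [5, 0], [6, 0], [4, 0], [7, 0],   # resumes without the proxy feature
    [5, 1], [6, 1], [4, 1], [7, 1],   # equally qualified resumes with the proxy feature
]
# Biased historical decisions: candidates with the proxy feature were rarely hired.
y = [1, 1, 1, 1, 0, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only in the proxy column:
print(model.predict_proba([[6, 0]])[0][1])  # high "hire" probability
print(model.predict_proba([[6, 1]])[0][1])  # clearly lower, despite equal qualification
```

The point is not the specific model, but that simply removing the gender field does not remove the pattern from biased historical data.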

On salary and self-perception.

If we look at the female applicant side, we see the following:

Women sometimes state lower salary expectations than men.

If an AI were trained on this data, women could tend to be suggested lower-paying jobs, even though they would perform just as well.

So does this mean that women are fundamentally less confident? A scientific study of job ads on social networks found that women are less likely to click on certain ads than men. This may have a number of reasons, says Persoblogger. In some cases, the ad searches for "programmers" (in German, the word used refers solely to male programmers, which is not the case in English), and women sometimes do not even feel addressed. Other criteria, such as inflexible working hours, can indirectly exclude mothers.

Fair decisions without machine learning.

Our algorithm does not use machine learning for further development, as we are aware that this can lead to unintentional gender bias. Together with our algorithm expert and co-founder Dr. Stefan Frehse, who also works as a consultant for the German Research Center for Artificial Intelligence, we explain below in more detail how exactly we counteract this.

The matched.io algorithm compares the entries made by developers with the job entries of the companies. Within 3 seconds, the algorithm creates a pre-selection of suitable proposals for companies and candidates. Each profile is divided into different areas, which are weighted differently.
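As a rough illustration of such weighted matching, here is a short sketch. The profile areas, weights, and field names are our own assumptions for this example, not the actual configuration of the matched.io algorithm.

```python
# Hypothetical sketch of weighted profile matching.
# The areas and weights below are assumptions, not the real matched.io configuration.

WEIGHTS = {"tech_skills": 0.5, "salary": 0.3, "work_mode": 0.2}

def area_scores(developer: dict, job: dict) -> dict:
    """Score each profile area between 0 and 1 (illustrative rules only)."""
    skill_overlap = len(set(developer["skills"]) & set(job["skills"]))
    skills = skill_overlap / max(len(job["skills"]), 1)
    salary = 1.0 if job["salary_max"] >= developer["salary_min"] else 0.0
    work_mode = 1.0 if developer["work_mode"] == job["work_mode"] else 0.0
    return {"tech_skills": skills, "salary": salary, "work_mode": work_mode}

def match_score(developer: dict, job: dict) -> float:
    """Weighted sum over all profile areas."""
    scores = area_scores(developer, job)
    return sum(WEIGHTS[area] * scores[area] for area in WEIGHTS)

dev = {"skills": ["python", "django"], "salary_min": 60000, "work_mode": "remote"}
job = {"skills": ["python", "django", "aws"], "salary_max": 70000, "work_mode": "remote"}
print(round(match_score(dev, job), 2))  # e.g. 0.83
```

In this sketch the weights are fixed and transparent rather than learned from historical data, so the scoring behaves the same way for every profile.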

Since our founders have many years of experience in software development and human resources consulting, we can incorporate this knowledge directly.

We look for the right solution for each individual.

We look at each user as an individual. Suggestions are delivered in random order in order to rule out bias. After rejecting a proposal, each party can give feedback: were the salary expectations too high, or the offer too low? Does the corporate culture not fit after all? Or are there different ideas about the use of skills and technologies? Our algorithm takes this feedback into account and can thus get to know and understand developers and companies better, so the suggestions become more and more suitable. Important: the feedback is not training data, but individually considered answers that make the algorithm's calculations even more accurate. Unlike training data, the feedback is not applied to all users, but only to the person who gave it.
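To make the difference from training data concrete, here is a minimal hypothetical sketch: feedback only adjusts the stored preferences of the person who gave it, so no one else's matching is affected. The field names and adjustment rules are assumptions for illustration.

```python
# Hypothetical sketch: feedback adjusts only the preferences of the feedback giver,
# instead of being fed into a shared model as training data.
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    min_salary: int
    excluded_skills: set = field(default_factory=set)

    def apply_feedback(self, reason: str, detail=None):
        """Adjust this user's preferences only; no other profile is affected."""
        if reason == "salary_too_low" and detail is not None:
            self.min_salary = max(self.min_salary, detail)
        elif reason == "wrong_tech_stack" and detail is not None:
            self.excluded_skills.add(detail)

# Two developers with the same starting preferences.
alice = UserPreferences(min_salary=55000)
bob = UserPreferences(min_salary=55000)

# Alice rejects a proposal because the offer was too low.
alice.apply_feedback("salary_too_low", 65000)

print(alice.min_salary)  # 65000 – future proposals for Alice are filtered accordingly
print(bob.min_salary)    # 55000 – Bob's matching is unchanged
```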

We live a culture without gender bias and work every day to ensure this continues in our product.

Technical recruiting that is inclusive.

When algorithms are used in employee selection, it is therefore important that decisions made by the computer are continuously questioned. Inequalities must be detected and removed immediately. This is the basis of matched.io, which is why we are constantly working on the further development of our algorithm. Stefan has already told us something about the next big change: in the future, the matched.io algorithm will be able to provide suggestions for market-driven salaries, in-demand tech skills, and personal development based on the skills specified. In this way, we will bring developers and companies together even more intelligently.

We see this as another step towards improving the world of technical recruiting and making it more inclusive.

lina

Any questions?

Feel free to check out the other tutorials or our FAQ!

You can also leave feedback at feedback@matched.io and let us know how helpful you find the tutorials!

Your matched.io Team
