In a gendered society, individuals’ traits and merits are often judged against the specific standards associated with gendered expectations, which is what we commonly label ‘gender bias’. More concerning, however, is that given the gap in representation between men and women in societal roles and in historical data, gender bias often amounts to an ‘androcentric bias’. That is, social standards are benchmarked with men as the societal model.
The making of homemakers and breadwinners
Gender prejudices are often linked to how each gender is represented in social roles. Given the overrepresentation of men in the labour force, men are often associated with agentic traits that signal ‘prominence’, such as ‘outstanding’ or ‘unique’. Conversely, through their association with the role of homemaker, women are often linked to communal traits, such as ‘warm’ and ‘collaborative’.
These preconceptions also hold in the context of recruitment. Throughout history, women’s position in society has been associated with homemaking. Although these gender-role norms have shifted in the present day, the lingering prejudice still elicits subconscious gender bias in the modern recruitment arena. Women are presumed to be less competent and less productive than their male counterparts. In fact, given identical professional portfolios, hirers often favour male candidates over female ones while describing the men as ‘competitive’, ‘experienced’ and ‘ambitious’. The question becomes: are these the sorts of associations we want as social standards?
Hiring algorithms: minimising or maximising gender bias?
Persistent gender bias has fuelled the push to take humans out of decision-making, and the upsurge of automated decision-making appears to be a clear solution. The adoption of hiring algorithms may have looked like a silver bullet: first, it minimises human intervention in decision-making, which should in turn minimise bias; and second, it improves efficiency, evaluating a high volume of applications at minimum cost and maximum corporate benefit. In short, hiring algorithms should be able to assemble an ideal pool of first-rate candidates on impartial grounds. But is that actually true?
In 2014, the tech company Amazon built hiring algorithms to assess the suitability of applicants. The algorithms were trained on the company’s internal hiring data from the previous ten years. The result was damning: Amazon’s hiring algorithms discriminated against resumés containing keywords that identified candidates as women, such as ‘all-women’s college’ or ‘female’. This was, of course, not intentional, nor did the algorithms introduce the bias themselves. The Amazon case shows that hiring algorithms can capture and imitate the biases present in their training datasets.
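How a model inherits this kind of bias can be shown with a toy sketch. The records below are entirely invented for illustration (they are not Amazon’s data): each resumé is reduced to whether it contains a gendered keyword and whether the candidate was historically hired. Any classifier fit to these labels would learn the same skewed association the raw rates reveal.

```python
# Hypothetical historical records, invented for illustration:
# (contains_gendered_keyword, was_hired)
historical_resumes = [
    (False, True), (False, True), (False, True), (False, False),
    (True, False), (True, False), (True, True), (False, True),
]

def hire_rate(resumes, keyword_present):
    """Historical hire rate among resumés with/without the keyword."""
    subset = [hired for kw, hired in resumes if kw == keyword_present]
    return sum(subset) / len(subset)

rate_with = hire_rate(historical_resumes, True)      # 1/3
rate_without = hire_rate(historical_resumes, False)  # 4/5

# A classifier trained on these labels reproduces the gap: resumés with
# the keyword score lower, not because the keyword matters, but because
# the training labels encode past bias.
print(f"hire rate with keyword:    {rate_with:.2f}")
print(f"hire rate without keyword: {rate_without:.2f}")
```

The point of the sketch is that no one has to program the discrimination in; the skewed labels are enough.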
Decision-making algorithms such as these are designed to mimic how a human would decide, in this case, how a recruiter would assess a potential worker. Ultimately, just as our brains work by association, so do algorithms. Without careful mitigation plans, algorithms are not immune to gender bias; in some cases, they may instead exacerbate it.
The question remains: why and how did this happen?
There are at least three ways in which hiring algorithms can introduce bias.
- Bias in the training data. Without a balanced representation of protected groups in the training data, algorithms may end up treating protected attributes as predictors of ‘success’.
- Bias in the system. Algorithms are prone to mistaking correlation between one variable and another for causation; yet correlation is not causation. For instance, if most lawyers play golf, it does not follow that a law graduate’s chances of success can be predicted by whether or not they play golf.
- Bias in past human decisions. Because algorithms are programmed by humans and trained on datasets that record human decisions, they tend to mirror patterns of human behaviour. This makes them prone to repeating human biases in their predictions.
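The golf example above can be made concrete with another invented dataset. In the hypothetical records below, playing golf makes no difference to hiring within either group; it merely tracks group membership. A naive model would still latch onto golf as a ‘predictor’, because across the whole pool golfers were hired more often — a proxy effect, not a cause.

```python
# Hypothetical records, invented for illustration: (group, plays_golf, hired).
# Within each group, golf has zero effect; it only correlates with group.
records = (
    [("A", True, True)] * 3 + [("A", True, False)] * 1 +
    [("A", False, True)] * 3 + [("A", False, False)] * 1 +
    [("B", False, True)] * 2 + [("B", False, False)] * 6
)

def rate(rows, pred):
    """Hire rate among the rows matching a predicate."""
    hired_flags = [hired for g, golf, hired in rows if pred(g, golf)]
    return sum(hired_flags) / len(hired_flags)

# Pooled over everyone, golf *looks* predictive...
print(rate(records, lambda g, golf: golf))        # golfers: 0.75
print(rate(records, lambda g, golf: not golf))    # non-golfers: ~0.42

# ...but within group A, golfers and non-golfers are hired at the same rate.
print(rate(records, lambda g, golf: g == "A" and golf))      # 0.75
print(rate(records, lambda g, golf: g == "A" and not golf))  # 0.75
```

This is exactly the correlational error described in the second bullet: the variable carries no causal information, only a trace of who was in the historical data.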
Fairness? Discrimination? Public policy?
Hiring algorithms take various forms and function in many ways, which means the levels and sources of bias are equally diverse. It behoves us to pinpoint where bias enters, and it is indeed critical that we question its implications.
This highlights the need to reconceptualise the meaning of ‘fairness’ and ‘discrimination’ in the age of digitisation. With algorithms taking a significant role in decision-making, discrimination is evidently no longer limited to the ‘direct’ and ‘indirect’ forms. Levels of accountability and transparency need to be enforced. Amid the varying theories on algorithmic bias and gender discrimination, there needs to be formalised action through which digital policy can be systematically applied to algorithmic decisions, in both process and output.
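One existing operationalisation of ‘discrimination’ that such a policy could build on is the ‘four-fifths rule’ from US employment-selection guidance: if the selection rate for a protected group falls below 80% of the rate for the highest-selected group, the outcome is conventionally flagged as possible adverse impact. The sketch below uses hypothetical selection counts to show how mechanically such an audit can be run on an algorithm’s outputs.

```python
# A minimal audit sketch of the 'four-fifths rule'. The applicant and
# selection counts are hypothetical, invented for illustration.

def selection_rate(selected, applicants):
    return selected / applicants

def adverse_impact_ratio(rate_protected, rate_reference):
    """Ratio of selection rates; below 0.8 is conventionally flagged."""
    return rate_protected / rate_reference

women_rate = selection_rate(30, 100)  # 0.30
men_rate = selection_rate(50, 100)    # 0.50

ratio = adverse_impact_ratio(women_rate, men_rate)  # 0.6 -> flagged
print(f"impact ratio: {ratio:.2f}, flagged: {ratio < 0.8}")
```

Such a check audits only outcomes, not process, which is precisely why the text argues that accountability and transparency requirements must cover both.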
In light of this, research in this area does not aim to formulate new theories or act as a direct response to the issue; rather, it draws our attention to what all this means for gender equality and fairness. Given the lack of rigorous study in this domain, there is a call for in-depth research to ‘explain’ the algorithmic ‘black box’ in recruitment. We may also want to strike a balance between the efficiency and impartiality of algorithmic decisions and the socio-political consciousness, explicability and accountability of human-led decisions. This underscores the importance of consciousness and intention in pursuing fairness and gender equality in the age of automation.