The promise of a more equitable society is sold as ‘just around the corner’. We are persistently led to believe that leadership opportunities for women will soon be abundant, that Black people will no longer face discriminatory police practices and that ‘Mohammed’ will no longer be rejected from the interview process before it even begins. Yet in reality we are on a different trajectory. Algorithmic bias is not merely a technical malfunction but a reflection and reinforcement of longstanding social inequalities. Understanding and addressing these biases is essential to uphold ethical principles of justice and equality in our increasingly digital society.
Technological advancements, far from dismantling inequality, seem set to perpetuate centuries-old discrimination, embedding exclusionary practices deeper into the digital fabric and entrenching bias within everyday algorithms.
Discrimination and bias are not new phenomena; they have been deeply woven into the fabric of society for centuries. Systems of privilege have consistently favoured certain social groups, creating entrenched inequalities through slavery, segregation and institutionalised racism, sexism and classism. Laws and societal norms often upheld these inequalities: Jim Crow laws in the United States enforced segregation, and women worldwide were denied the right to vote or to work in certain professions well into the 20th century. These historical injustices have left deep scars in societal norms and continue to establish patterns of bias that shape assumptions and behaviours today.
At its core, algorithmic bias is an ethical question. Discrimination is a violation of fundamental human rights, and bias in algorithms is not just a glitch in the code; it cuts right to the ideas of equality, inclusivity and justice promoted as the foundation of modern democratic societies. Ethical frameworks exist to provide a moral basis for addressing bias, but the gap between those advocating these principles and the technology companies deploying discriminatory systems remains wide. As Alexandria Ocasio-Cortez stated at an MLK Now event: “Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions. They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”(1)
Who holds responsibility in this landscape? Advocates call for transparency and fairness. Technologists highlight complexity and scale. Regulators struggle to keep up. Public understanding remains limited. While viewpoints differ, the consequences of inaction are becoming more visible.
Our transition towards a more digital society has already shown the risk of building systems that perpetuate discrimination in all its forms. The biases embedded in historical data are now being encoded into algorithms that influence crucial aspects of our lives. Algorithms process data without context, often magnifying existing inequalities and creating new forms of discrimination.
How do these biases manifest in practice? Algorithmic discrimination is already visible across sectors.
Take facial recognition technology. Sold as a neutral security tool, it nonetheless embeds and perpetuates significant biases. High-profile cases, such as the wrongful arrests of Robert Williams and Porcha Woodruff in Detroit, show the severe consequences of misidentification for Black individuals: in both cases, facial recognition systems matched them to crimes they did not commit. Research by MIT’s Joy Buolamwini revealed that systems from major tech companies like IBM, Microsoft and Amazon had markedly higher error rates for darker-skinned individuals, particularly women, compared to lighter-skinned counterparts. These findings have prompted cities like San Francisco to ban the use of facial recognition by city agencies, a decision driven by concerns over racial and gender bias, alongside wider questions about civil liberties.
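For readers curious how such disparities are surfaced, a minimal audit can be sketched in a few lines of Python. The sketch below is in the spirit of that intersectional methodology rather than a reproduction of it: it assumes a labelled test set annotated with gender and skin-tone groups, and the field names are hypothetical.

```python
# Minimal sketch of an intersectional error-rate audit (illustrative only).
# Assumes each test record carries annotated 'gender' and 'skin_tone' groups
# plus the system's prediction; the schema here is hypothetical.
from collections import defaultdict

def error_rates_by_group(records):
    """Return the misclassification rate for each (gender, skin_tone) subgroup."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for r in records:
        group = (r["gender"], r["skin_tone"])        # intersectional subgroup
        totals[group] += 1
        if r["predicted_label"] != r["true_label"]:  # count misclassifications
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data: the audit question is whether error rates diverge sharply
# across subgroups, not what the overall accuracy is.
sample = [
    {"gender": "female", "skin_tone": "darker",  "true_label": "F", "predicted_label": "M"},
    {"gender": "male",   "skin_tone": "lighter", "true_label": "M", "predicted_label": "M"},
]
print(error_rates_by_group(sample))
```

A headline accuracy figure can look impressive while hiding exactly the subgroup gaps this kind of breakdown makes visible.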
Policing and sentencing algorithms follow a similar pattern. Also built with claims of neutrality, predictive analytics have exposed deep-rooted bias. In the United Kingdom, the Durham Constabulary’s Harm Assessment Risk Tool (HART) was built on historical police data already containing patterns of bias, which in turn amplified existing disparities in how different groups were policed. In the Netherlands, the SyRI system targeted low-income and immigrant communities for welfare fraud investigations until the courts shut it down. In China, the Social Credit System uses opaque algorithms that penalise ethnic minorities and dissenting voices. Similar concerns have been raised in Australia and Canada, where predictive policing tools have disproportionately targeted Indigenous and minority communities. In the United States, the COMPAS algorithm, used to assess the risk of reoffending, has been shown to label African-American defendants as higher risk more often than white defendants, influencing bail and sentencing decisions.
Social media platforms have enabled discriminatory targeting in advertising. Facebook’s ad targeting tools once allowed advertisers to exclude specific racial groups from seeing housing, employment and credit ads, a clear breach of anti-discrimination laws. Reports suggest similar patterns have been difficult to eliminate entirely.
In credit and lending, the issues vary by country but the mechanism is often the same: historical bias baked into the data. In the UK, the Financial Conduct Authority found that credit scoring algorithms could unintentionally disadvantage minority groups. In Australia, big data models have disadvantaged Indigenous Australians. In India, microfinance algorithms have displayed gender bias, favouring men over women. Across Europe, bank credit scoring has been reported to show ethnic bias, reproducing discriminatory lending patterns. In Brazil, automated credit scoring systems have unfairly rejected applications from Black and mixed-race individuals.
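That mechanism is easy to demonstrate in the abstract. The sketch below uses entirely synthetic data, not any particular lender’s system: a model that never sees the protected attribute can still reproduce historical disparities through a correlated proxy such as postcode. Every variable name and number here is invented for illustration.

```python
# Illustrative sketch on synthetic data: excluding the protected attribute
# does not remove bias when a proxy (here, postcode) is correlated with it
# and the training labels reflect historically biased decisions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                                   # protected attribute (never shown to the model)
postcode = (group + rng.normal(0, 0.3, n) > 0.5).astype(int)    # proxy strongly correlated with group
income = rng.normal(50 + 5 * (1 - group), 10, n)                # historical income gap between groups

# Historical approvals were themselves biased against group == 1.
past_approved = ((income > 48) & ~((group == 1) & (rng.random(n) < 0.3))).astype(int)

X = np.column_stack([income, postcode])                         # model sees only 'neutral' features
model = LogisticRegression(max_iter=1000).fit(X, past_approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[group == g].mean():.2f}")
```

Dropping the sensitive column is not the same as removing the bias; the model simply relearns it from whatever stands in for that column.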
Hiring practices also display similar patterns of bias. In the UK, algorithms have been shown to disadvantage candidates with ethnic-sounding names. In Australia, AI-driven platforms have often favoured male candidates for technical and leadership roles because of gender bias in the training data. In Germany, automated hiring tools have preferred younger male candidates, reflecting age and gender prejudice, while in Japan recruitment systems have disadvantaged foreign applicants in favour of domestic ones. Even global tech companies have faced issues: Amazon abandoned a trial recruitment platform after the algorithm repeatedly downgraded female candidates.
At the University of Washington, graduate student Kate Glazko documented how recruiters were using ChatGPT to summarise and rank CVs, and found that it consistently ranked CVs with disability-related honours and credentials lower than identical CVs without them. Only when the system was given explicit instructions to avoid ableism did this bias reduce for most of the disabilities tested.
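A minimal sketch of such a paired-CV audit might look like the following. This is not the researchers’ code; the model name, prompt wording, hypothetical credential text and scoring scheme are all assumptions made for illustration.

```python
# Sketch of a paired-CV audit: score identical CVs that differ only by a
# disability-related credential, with and without an explicit instruction
# to avoid ableist bias. Illustrative only; prompt and model are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

BASE_CV = "Software engineer, 6 years' experience, BSc Computer Science."
DISABILITY_CREDENTIAL = "Recipient of a disability leadership award."  # hypothetical line

def score_cv(cv_text: str, extra_instruction: str = "") -> str:
    """Ask the model to rate a CV from 1-10 for a generic engineering role."""
    prompt = (
        "Rate the following CV from 1 (weak) to 10 (strong) for a software "
        f"engineering role. Reply with only the number.\n{extra_instruction}\n\nCV:\n{cv_text}"
    )
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content.strip()

for instruction in ("", "Do not penalise disability-related experience or awards."):
    plain = score_cv(BASE_CV, instruction)
    flagged = score_cv(BASE_CV + "\n" + DISABILITY_CREDENTIAL, instruction)
    print(f"instruction={instruction!r}: plain={plain}, with credential={flagged}")
```

The study’s point stands either way: the burden of de-biasing falls on whoever remembers to add the extra instruction, not on the system itself.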
The promise of a more equitable digital future will not realise itself. Many machine learning models function as ‘black boxes’, making them difficult to interpret. Others have been subject to ‘transparency washing’, where organisations superficially reveal their algorithmic processes without that disclosure leading to meaningful reform. What would it take to truly examine how these systems are built, and to decide together how we want them to shape everyday life? It would take ethical reflection paired with regulatory oversight and technical innovation that makes transparency both effective and protective of individual rights.
The pervasive influence of algorithms necessitates a concerted effort to disrupt and reform; the entrenched harm is already apparent. Addressing the ethical dimensions and societal impacts of data discrimination is crucial. Emphasising the intersection of inclusivity and equity is essential to the health of a democracy and to preventing society from blindly continuing to reward the privileged few.
It remains to be seen whether transparency in algorithms, if achieved, will lead to a meaningful reduction in bias, or whether we are hard-wired as humans to let these power imbalances find a way back in. Perhaps the challenge is not just to define what fairness looks like but to build fairer systems. Perhaps it is even to recognise when fairness cannot be engineered. And perhaps some systems are better abandoned entirely!
(1) Yes, artificial intelligence can be racist
REFERENCES CONSULTED AND USEFUL SOURCES:
Rolling Stone: These Women Warned Us About AI
Data Authenticity, Consent, and Provenance for AI Are All Broken: What Will It Take to Fix Them?
Cornell Chronicle: Social scientists take on data-driven discrimination
LSE: Data-driven discrimination: a new challenge for civil society
Data Discrimination: The Dark Side of Big Data
VOX: Algorithms and bias, explained
University of Washington: ChatGPT is biased against resumes with credentials that imply a disability