
For Minorities, Biased AI Algorithms Can Damage Almost Every Part Of Life

Published 2023/08/29, 17:40
Updated 2023/08/29, 18:00
by Arshin Adib-Moghaddam, SOAS, University of London

Bad data does not only produce bad outcomes. It can also help to suppress sections of society, for instance vulnerable women and minorities. This is the argument of my new book on the relationship between various forms of racism and sexism and artificial intelligence (AI). The problem is acute. Algorithms generally need to be exposed to data – often taken from the internet – in order to improve at whatever they do, such as screening job applications, or underwriting mortgages.

But the training data often contains many of the biases that exist in the real world. For example, algorithms could learn that most people in a particular job role are male and therefore favour men in job applications. Our data is polluted by a set of myths from the age of “enlightenment”, including biases that lead to discrimination based on gender and sexual identity.

Judging from history in societies where racism has played a role in establishing the social and political order, extending privileges to white males (in Europe, North America and Australia, for instance), it is reasonable to assume that residues of racist discrimination feed into our technology.

In my research for the book, I have documented some prominent examples. Face recognition software more commonly misidentified black and Asian minorities, leading to false arrests in the US and elsewhere.

Software used in the criminal justice system has predicted that black offenders would have higher recidivism rates than they did. There have been false healthcare decisions. A study found that of the black and white patients assigned the same health risk score by an algorithm used in US health management, the black patients were often sicker than their white counterparts.


This reduced the number of black patients identified for extra care by more than half. Because less money was spent on black patients who had the same level of need as white ones, the algorithm falsely concluded that black patients were healthier than equally sick white patients. Biased data sets also facilitate the denial of mortgages to minority populations. The list goes on.

Machines don’t lie?

Such oppressive algorithms intrude on almost every area of our lives. AI is making matters worse, as it is sold to us as essentially unbiased. We are told that machines don’t lie. Therefore, the logic goes, no one is to blame.

This pseudo-objectivity is central to the AI hype created by the Silicon Valley tech giants. It is easily discernible in the speeches of Elon Musk, Mark Zuckerberg and Bill Gates, even if now and then they warn us about the projects that they themselves are responsible for.

There are various unaddressed legal and ethical issues at stake. Who is accountable for the mistakes? Could someone claim compensation for an algorithm denying them parole based on their ethnic background in the same way that one might for a toaster that exploded in a kitchen?

The opaque nature of AI technology poses serious challenges to legal systems which have been built around individual or human accountability. On a more fundamental level, basic human rights are threatened, as legal accountability is blurred by the maze of technology placed between perpetrators and the various forms of discrimination that can be conveniently blamed on the machine.


Racism has always been a systematic strategy to order society. It builds, legitimises and enforces hierarchies between the haves and have nots.

Ethical and legal vacuum

In such a world, where it’s difficult to disentangle truth and reality from untruth, our privacy needs to be legally protected. The right to privacy and the concomitant ownership of our virtual and real-life data needs to be codified as a human right, not least in order to harvest the real opportunities that good AI harbours for human security.

But as it stands, the innovators are far ahead of us. Technology has outpaced legislation. The ethical and legal vacuum thus created is readily exploited by criminals, as this brave new AI world is largely anarchic.

Blindfolded by the mistakes of the past, we have entered a wild west without any sheriffs to police the violence of the digital world that’s enveloping our everyday lives. The tragedies are already happening on a daily basis.

It is time to counter the ethical, political and social costs with a concerted social movement in support of legislation. The first step is to educate ourselves about what is happening right now, as our lives will never be the same. It is our responsibility to plan the course of action for this new AI future. Only in this way can a good use of AI be codified in local, national and global institutions.

Arshin Adib-Moghaddam, Professor in Global Thought and Comparative Philosophies, SOAS, University of London


This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post For Minorities, Biased AI Algorithms Can Damage Almost Every Part Of Life appeared first on TechFinancials - Reliable Tech News.


