
Using AI against racist robots

4 min
17-01-2024
Text: Lise Wouters

It’s been making headlines in recent months, and for good reason: artificial intelligence (AI) has evolved enormously in a short amount of time and offers endless opportunities. Although it makes our lives easier in many ways, it can also be dangerous when discriminatory patterns emerge. This is what researchers at ACRAI (Antwerp Center on Responsible AI) are trying to prevent by using Explainable AI.


From Siri to Tesla


Everyone’s talking about it, but how can we actually define AI? ‘In general, artificial intelligence is considered to be the capacity of a computer program or machine to think and learn,’ Toon Calders (Faculty of Science) explains. ‘In that sense, it’s the opposite of natural intelligence in animals and people.’


AI is also a field that tries to make computers self-learning, so they can function without human intervention. Examples include your mobile phone’s voice assistant and self-driving cars.


Predictive models, a type of AI


ACRAI, part of UAntwerp, focuses on making predictive models understandable and transparent. These models are created automatically from large amounts of data, which makes them a type of artificial intelligence. ‘They can ultimately serve as a replacement for human intelligence,’ Calders explains.


It all sounds very appealing, but predictive models can be unethical or even discriminatory.

Daphne Lenders

Such predictive models can be used, for instance, to detect cancers earlier in medicine, but also in logistics or finance. It would make a banker’s life a lot easier if, instead of having to do all the calculations themselves, AI could simply predict how much someone can borrow.
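
To make this concrete, here is a minimal sketch of what such a predictive model could look like, using scikit-learn and entirely made-up applicant data. The features, numbers and model choice are illustrative assumptions, not ACRAI’s or any bank’s actual system:

```python
# Minimal sketch: train a model that predicts how much someone could borrow,
# based on hypothetical historical data (illustrative only).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical applicant features: income, age, existing debt.
income = rng.normal(40_000, 12_000, n).clip(15_000, 120_000)
age = rng.integers(21, 70, n)
debt = rng.normal(5_000, 4_000, n).clip(0, None)

X = np.column_stack([income, age, debt])
# "Historical" loan amounts the model will learn to reproduce.
y = 3 * income - 1.5 * debt + rng.normal(0, 5_000, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)

print("Mean absolute error:", mean_absolute_error(y_test, model.predict(X_test)))
print("Predicted loan for one applicant:", model.predict([[35_000, 30, 2_000]])[0])
```

The key point is that the model learns its rules from past data rather than from explicit instructions, which is exactly why hidden patterns in that data carry over into its decisions.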


How AI can be discriminatory


‘It all sounds very appealing,’ says Daphne Lenders (Faculty of Science), ‘but predictive models can be unethical or even discriminatory. Not because their designers consciously made them that way, but because those stereotypes and biases are already in the data. After all, data comes from people, and people often discriminate without realising it themselves.’


Take the commotion surrounding the Apple Card a few years ago. Behind this credit card, there’s an algorithm that decides how much money someone can spend. But when reports started coming in claiming that the algorithm allocated less credit to women, Apple could neither confirm nor deny this. The system was self-learning, and Apple wasn’t sure exactly what its algorithm had learnt. Another example is the COMPAS instrument, which was used to predict the likelihood of recidivism among defendants. The system gave much higher risk scores to black people, reducing their chances of being released on bail while awaiting trial.


Explainable AI can offer more transparency, thereby detecting and eliminating undesirable patterns, aka discrimination. It makes AI better and society fairer.

David Martens

The solution: Explainable AI


So it’s important to understand an AI system’s decision process: why and how is a decision taken? This is why ACRAI has developed methods that explain predictive models. After all, this often cannot be deduced from the model itself, due to the large amount of data and the model’s complexity. To make sure the analyses are correct, ACRAI therefore uses… more AI.


‘Explainable AI reveals the patterns of complex models,’ explains David Martens (Faculty of Business and Economics). ‘So in the Apple Card example, Explainable AI might demonstrate that the predictive AI is taking clients’ genders into account… or that it isn’t.’ Once Explainable AI reveals any discrimination, specialists can intervene by improving – or simply no longer using – the software.
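
As an illustration of how such an analysis might work, here is a minimal sketch using permutation importance from scikit-learn on a hypothetical credit model trained on made-up data. This is one generic Explainable AI technique, not necessarily the method ACRAI or Apple would use, and all feature names and numbers are assumptions:

```python
# Minimal sketch (illustrative only): check how much a trained credit model
# relies on each input feature, including 'gender'.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical applicant data: income, debt, and a gender flag (0/1).
income = rng.normal(40_000, 12_000, n)
debt = rng.normal(5_000, 4_000, n)
gender = rng.integers(0, 2, n)

X = np.column_stack([income, debt, gender])
feature_names = ["income", "debt", "gender"]

# Simulated historical decisions that are (deliberately) partly gender-biased.
approved = ((income - debt + 8_000 * gender + rng.normal(0, 10_000, n)) > 30_000).astype(int)

model = RandomForestClassifier(random_state=1).fit(X, approved)

# Permutation importance: shuffle one feature at a time and measure how much
# the model's accuracy drops. A large drop for 'gender' is a warning sign.
result = permutation_importance(model, X, approved, n_repeats=10, random_state=1)
for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```

If ‘gender’ turns out to matter to the model’s predictions, that is the kind of pattern specialists would then want to investigate and correct.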


It’s over for racist robots


A similar pitfall exists for facial recognition software. Take Facebook, which automatically recognises faces in pictures and suggests tagging friends. The AI that does the facial recognition relies on datasets that contain examples of faces. ‘The problem?’ Lenders ponders. ‘Such datasets often contain a disproportionate number of pictures of white people, resulting in a model that only considers one specific portion of society.’
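
One simple way to see how this imbalance creeps in is to audit a dataset’s composition before training on it. Here is a minimal sketch using made-up metadata; the labels and counts are hypothetical, not Facebook’s actual data:

```python
# Minimal sketch (hypothetical metadata): check whether a face dataset
# over-represents one skin-tone group before training on it.
from collections import Counter

# Hypothetical per-image labels; a real audit would read these from the
# dataset's metadata files.
skin_tone_labels = ["light"] * 8_500 + ["medium"] * 1_000 + ["dark"] * 500

counts = Counter(skin_tone_labels)
total = sum(counts.values())
for group, count in counts.items():
    print(f"{group}: {count} images ({count / total:.0%})")
# A model trained on this data sees far fewer darker-skinned faces
# and will typically recognise them less reliably.
```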


If it takes a bit longer to tag someone with a non-white skin colour on Facebook, that might not be such an issue. But if the police were to use discriminatory facial recognition software, that’s an entirely different matter. ‘Explainable AI can offer more transparency, thereby detecting and eliminating undesirable patterns, aka discrimination,’ Martens says. ‘It makes AI better and society fairer.’


The larger the database, the more efficient the decisions by the AI systems and the smaller the chances that minorities remain invisible.

Toon Calders

Can you sue a robot?


When it comes to human errors, people can be held accountable. But what about errors by robots? ‘Robots don’t have a legal personality, which means they can’t be sued for being discriminatory or for other reasons,’ says Martens. ‘But the owner or user of a robot can be held accountable.’


And how about the collection and management of data? ‘That always requires permission,’ says Calders. ‘But the larger the database, the more efficient the decisions by the AI systems and the smaller the chances that minorities remain invisible. This decreases the risk of discrimination.’
