Driving inclusion with explainable artificial intelligence

Concepts and challenges towards responsible AI
May 07, 2021

Machine learning (ML) permeates every sphere of our lives.

How many decisions have been made about you today, this week, or this year by artificial intelligence?

Algorithms are being used all the time to make decisions about who we are and what we want. But AI isn’t just being used to make decisions about what products we want to buy or which show we want to watch next.

Complex social and political challenges are being recast as mathematical problems and automated. AI is being used to help decide how much you pay for your car insurance, how good your credit score is, and whether you are a potential suspect.

But these decisions are all being filtered through the system’s assumptions about our identity: our race, our gender, and our age. How does that happen? The more socially complex a problem is, the more difficult it is for machine learning systems to make accurate predictions, and the harder it is for us to understand their decisions.

The current situation is that machine learning systems can make predictions, but there is little transparency or interpretability behind their behaviors, which leaves users with little understanding of how these models reach their decisions. And since humans are the ones who train, deploy, and often act on the predictions of machine learning models in the real world, it is important that we are able to trust them.

What about explainable AI (XAI)?

The good news is that we have made great advances in some areas of explainable AI.

Explainable AI systems are intended to explain the reasoning behind their decisions and predictions. An explanation is usually understood as an interpretable model that approximates the behavior of the underlying black box and allows users to understand why a certain conclusion or recommendation was made. Research in this area has also shown how machine learning algorithms automate and perpetuate historical, discriminatory patterns.
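To make that idea concrete, here is a minimal, hypothetical sketch (not a method from this article) of a global surrogate explanation: a shallow decision tree is trained to mimic a black-box classifier on synthetic data, so its rules can be read as an approximation of the black box’s behavior.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy data standing in for a real decision-making dataset.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)

# The opaque "black box" we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's predictions, not the true labels,
# so its rules approximate the black box's behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")

# Human-readable rules that approximate the black box.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```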

The bad news is that creating explainable AI is not as easy as it might seem.

Despite the interest in interpretability, there is no agreement on what explainable machine learning is or how it should be evaluated.

Most explainability methods focus on deep neural networks. Consequently, these methods have concentrated on generating visualizations and gradient maps that may not be interpretable to non-expert users or create real trust.
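To illustrate what a gradient map is, the sketch below (using an untrained stand-in model, purely to show the mechanics) computes the gradient of the top class score with respect to the input pixels; a heat map of these values is the kind of saliency explanation that non-expert users may struggle to interpret.

```python
import torch
import torch.nn as nn

# Stand-in for a trained image classifier (untrained here, for illustration only).
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()

# Stand-in input "image"; gradients with respect to it are what gets visualized.
x = torch.rand(1, 1, 28, 28, requires_grad=True)

scores = model(x)
top_class = scores.argmax(dim=1).item()

# Gradient of the top class score with respect to the input pixels.
scores[0, top_class].backward()

# The absolute gradient is the saliency map: larger values = more influential pixels.
saliency = x.grad.abs().squeeze()
print(saliency.shape)  # torch.Size([28, 28])
```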

Looking at the wider picture

Beyond that, most current research does not examine the wider picture: unquestioned assumptions in the field, historical injustices, and how AI shifts power. As Abeba Birhane says, the fields of computing and data science are dominated by privileged groups, which means that most of the knowledge being produced reflects the perspectives, interests, and concerns of those dominant groups.

Meredith Broussard says that we suffer from ‘technochauvinism’: the belief that the technological solution is always the right one. Personally, as a researcher and software engineer, I had to move away from the idea of technology as our savior, understand its limitations, and recognize the relevance of social and ethical values in evaluating explainable machine learning methods. I have studied decolonial and sociological theories and their importance for understanding the impact of facial recognition surveillance on Brazil’s Black population.

I believe that if we really want to use technology as a tool to mitigate bias and create trustworthy AI systems, we need to keep in mind that individuals and communities at the margins of society are disproportionately impacted by these systems.

This means that simply creating a fairness metric for an existing system may not be enough; rather, we have to question what the system is doing and understand its consequences. Why are we creating this in the first place? What is the goal?
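For context, here is a toy sketch of one such fairness metric, demographic parity difference: the gap in positive-decision rates between two groups. The numbers are hypothetical; the point above is precisely that a single number like this does not, by itself, answer those deeper questions.

```python
import numpy as np

# Toy, hypothetical decisions from a model for two demographic groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # 1 = favorable decision
group = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

rate_a = y_pred[group == "a"].mean()
rate_b = y_pred[group == "b"].mean()

print(f"Positive rate, group a: {rate_a:.2f}")  # 0.60
print(f"Positive rate, group b: {rate_b:.2f}")  # 0.40
print(f"Demographic parity difference: {abs(rate_a - rate_b):.2f}")  # 0.20
```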

The gap between what we think computers can do and what they can actually do is huge. I’m very optimistic about technological progress, but it’s time to be more critical and realistic about what the computer science field can and can’t do.

We need people who can solve problems; we need people who face different challenges; we need people who can tell us which issues really need fixing and help us find ways that technology can actually fix them.

‘Your future hasn’t been written yet. No one’s has. Your future is whatever you make it. So make it a good one.’ – Doc Brown, Back to the Future

This is our chance to remake the world into a much more equal place. But to do that, we need to build it the right way from the get-go. We need people of different genders, races, and backgrounds. Only then can we hope to reimagine the use of technology to explore and substantiate a political vision that centers the interests of marginalized communities.

We need to think very carefully about what we teach machines, what data we give them, and what they can really achieve so they don’t just repeat our own past mistakes.

References

Birhane, Abeba. Algorithmic injustice: a relational ethics approach. Patterns, 2021. https://doi.org/10.1016/j.patter.2021.100205

Broussard, Meredith. Artificial Unintelligence: How Computers Misunderstand the World. MIT Press, 2018.