
If we aren’t careful, dangerous biases can manifest in software applications. Here are some ways to be vigilant to fight bias in tech.

In 2017, a video titled “Is this soap dispenser racist?” took the internet by storm.

In the video, a black person in a restroom is seen struggling to get soap from an automatic soap dispenser. For those unaware of why and how this happens: automatic soap dispensers are activated when the light they emit is reflected off a hand placed beneath the sensor. Because darker skin absorbs more of that light (and reflects less of it), these dispensers often fail to detect the presence of a hand.
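To make the failure mode concrete, here is a minimal sketch of the kind of fixed-threshold logic such a sensor might use. The threshold and signal values are invented for illustration and aren't taken from any real dispenser.

```python
# Hypothetical sketch of the detection logic inside an IR-based dispenser.
REFLECTANCE_THRESHOLD = 0.40  # invented value, not from any real device

def hand_detected(reflected_signal: float) -> bool:
    """Return True when enough of the emitted light bounces back to the sensor."""
    return reflected_signal >= REFLECTANCE_THRESHOLD

# Darker skin reflects less of the emitted light, so the same hand can
# produce a reading below the fixed threshold and never trigger the pump.
print(hand_detected(0.65))  # stronger reflection -> True
print(hand_detected(0.30))  # weaker reflection  -> False
```

If the threshold is tuned only on lighter-skinned hands, the "weaker reflection" case simply never dispenses soap.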

The video in question went viral, and the internet had a good laugh – but it’s not a laughing matter. This video is a small snapshot of the effects of biased tech. And six years later, we are no closer to removing this bias from the industry.

Systems are still biased today

Automatic soap dispensers are just the tip of the iceberg. More and more, life-saving products are being developed without diversity in mind.

Pulse oximeters are an example of this. These devices clip onto a person’s finger and measure the percentage of oxygen in the blood; they were key screening tools during the height of COVID-19, though they were in regular use before and after. Loosely put, the device shines two wavelengths of light into the finger; deoxygenated blood absorbs more of one wavelength, while oxygenated blood absorbs more of the other.
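For the curious, here is a minimal sketch of the textbook “ratio of ratios” calculation behind these devices. The linear calibration line and the sample readings are illustrative only; real devices fit their calibration curve to clinical study data, and that calibration data is one place a lack of diversity can creep in.

```python
def estimate_spo2(ac_red: float, dc_red: float, ac_ir: float, dc_ir: float) -> float:
    """Estimate blood-oxygen saturation from red and infrared light readings.

    Uses the textbook "ratio of ratios": the pulsatile (AC) part of each signal
    is normalized by its steady (DC) part, and the red/infrared ratio is mapped
    to SpO2 through an empirically calibrated line.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return 110 - 25 * r  # commonly quoted linear approximation of the calibration curve

# Illustrative readings only; real devices calibrate against clinical data.
print(round(estimate_spo2(ac_red=0.02, dc_red=1.0, ac_ir=0.04, dc_ir=1.0)))  # ~98
```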

Yet despite the importance of this medical device, oxygen readings on people with darker skin were inaccurate 12% of the time. As a result, millions of people received false negative results.

But it’s not just pulse oximeters. Many heart-rate-sensing smartwatches also don’t work on darker skin. In a community that is more prone to heart disease, this is tech that could help save lives. And it isn’t only skin with more melanin that these devices struggle with: if you have tattoos on your wrist, those smartwatch heart sensors might not work either.

While not being able to get soap from a dispenser might be inconvenient, what happens if it’s a self-driving car that doesn’t recognize you as human because you have darker skin? Or because you’re in a wheelchair or using another assistive device? Or pregnant, or pushing a baby stroller?

Wider implications of bias in tech 

This is not just a problem for people of color; it’s a problem of diversity, and of having broad representation in the room when products are designed and developed.

In 2010, Time magazine printed an article entitled “Are Face-Detection Cameras Racist?” It explored how some digital cameras’ face-detection features would often ask “Did someone blink?” when photographing Asian faces.

Additionally, while accessibility is now routinely taken into account when building products, it took the industry a long time to get there. The same goes for localization and the need for products to be available in multiple languages.

Now, many products will not ship if they do not meet accessibility and localization requirements. Thinking about differences in gender and skin color should become just as much of a consideration.

How to eliminate bias in tech 

First of all, we know that we have a pipeline problem in tech. There aren’t enough technologists from diverse backgrounds or women in the field to make sure they are represented in every room. So speak up! Ask questions about every single product and feature:

  • “Have we thought about people with darker skin tones?” 
  • “Have we thought about women?” 
  • “Have we thought about people with very high or low voices?”
  • “Have we thought about people who are very short or very tall?”
  • “Have we thought about people with tattoos?”
  • “Have we thought about people with heterochromia iridum (different colored eyes)?”

Machine learning (ML) models only know what they have been trained on.

If your self-driving car was taught to identify people using only photos of white people, then it has little chance of recognizing anyone else as human.

Only 13.4% of the US population is black, but that doesn’t mean an ML model should see black faces only 13.4% of the time. Black people are human 100% of the time. And this has wider implications for cars operating in countries with larger black populations.
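As a rough sketch of what that means in practice, you can reweight or oversample training data so the model sees every group it must recognize equally often, rather than letting the data mirror population proportions. The group names and counts below are invented for illustration.

```python
# Hypothetical per-group image counts in a pedestrian-detection training set.
images_by_group = {
    "lighter_skin": 90_000,
    "darker_skin": 12_000,
    "wheelchair_user": 3_000,
    "stroller": 2_500,
}

def oversampling_weights(counts: dict) -> dict:
    """Weight each group so the model sees every group equally often,
    regardless of its share of the raw data or of the population."""
    target = max(counts.values())
    return {group: target / n for group, n in counts.items()}

print(oversampling_weights(images_by_group))
# darker_skin images would be sampled 7.5x as often as their raw share suggests.
```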

If your machine learning application is for voice recognition, then make sure you are training its model with voices of all different inflections, articulations, and pronunciations. Think about people with speech impediments or accents.
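One way to keep yourself honest here is to report accuracy per speaker group instead of a single average. Below is a minimal sketch; the group labels, record format, and numbers are hypothetical.

```python
from collections import defaultdict

# Hypothetical evaluation records: (speaker_group, word_errors, total_words).
results = [
    ("us_english", 3, 100),
    ("nigerian_english", 14, 100),
    ("speech_impediment", 22, 100),
    ("us_english", 5, 120),
]

def wer_by_group(records):
    """Word error rate per speaker group, so gaps aren't hidden in one average."""
    errors, words = defaultdict(int), defaultdict(int)
    for group, err, total in records:
        errors[group] += err
        words[group] += total
    return {group: errors[group] / words[group] for group in errors}

print(wer_by_group(results))
# A single blended WER can look fine while some groups fare far worse.
```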

As you go into alpha and beta testing, make sure the pool of testers is diverse. Oftentimes, companies will go into alpha and beta with their employees, which, in the tech space, sadly means that there probably aren’t many underrepresented individuals testing the products. 

Try this exercise: gather ten people and have each one put together a list of what they think a truly diverse pool of users looks like. Then compare notes. I bet everyone has someone that they’ve missed on that list. The fact that I’m a black woman doesn’t mean that I am the best suited to make that list – I have my own biases. But when I am one of the inputs into that thought exercise, I can bring my unique perspective to combine with other people’s unique perspectives. 

Use that list when you’re training your models. Use that list when you are briefing alpha and beta testers.
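A tiny sketch of how that list can be put to work when briefing testers: treat it as a checklist and flag the dimensions your current pool doesn’t cover. The dimension names below simply echo the questions above, plus a couple of invented examples.

```python
# Dimensions the team brainstormed (echoing the questions above, plus invented ones).
brainstormed_dimensions = {
    "darker skin tones", "women", "very high or low voices",
    "very short or very tall", "tattoos", "heterochromia",
    "wheelchair users", "non-native speakers",
}

# Hypothetical coverage of the current beta pool.
beta_pool_covers = {"women", "tattoos", "non-native speakers"}

missing = brainstormed_dimensions - beta_pool_covers
print("Recruit testers covering:", sorted(missing))
```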

No company sets out to create a product that’s going to be called “racist” – but it happens over and over again. You can be the voice that keeps that from ever happening at your company.