
From apps to algorithms, emerging technologies can reinforce white supremacy and amplify social inequities. Lately, automated decision-making systems have been marketed as revolutionary machine learning systems that improve and produce better results with more data. The key misnomer in that pitch is the claim that the machines are learning.

Machine-powered algorithms are not going through the painful, rigorous intellectual discomfort that real humans undergo when actual learning takes place. 

When Humans Learn

When humans learn, there are moments of pleasure as well as pain. We’re all familiar with the complaint:

Learners have to be corrected over and over again. Yet every time a learner makes a mistake, they learn something new.

This is what’s missing when companies bypass an algorithmic audit. An algorithm can’t complain when the learning is difficult. 

During our collective awakening, if white friends need their Black friends to educate them about racism, what does that mean for our algorithms that need to learn about race?

Learning is about opening yourself up to people and experiences and seeking to understand. And when you’re talking about race, it involves making horrible mistakes and getting your feelings hurt and not closing the door when that happens.

B.L. Wilson

When Machine-Powered Algorithms Learn Without an Audit

Machine-powered algorithms are only as good as the data they have been trained on. And when companies claim that their algorithms have been audited for bias, what that typically means is that they’ve only looked at how well the algorithm learned from its training data.

Companies can, and often do, produce accurate results in lab settings but not in real life. We’ve seen this with facial recognition and Covid-19 testing. Real life is much more complicated than the controlled settings of a lab, virtual machine, or sandbox.
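To make that gap concrete, here is a minimal sketch in Python; the groups, labels, and numbers are entirely invented for illustration. A model can report a respectable overall accuracy on its evaluation data while erring far more often for one group, and only a disaggregated, group-by-group check exposes the difference.

```python
# Minimal sketch (hypothetical data): why aggregate accuracy is not a bias audit.
# A model can look accurate overall while failing badly for one group.

from collections import defaultdict

# Hypothetical field results: (group, true_label, predicted_label)
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

# The headline number companies tend to report
correct = sum(1 for _, y, p in results if y == p)
print(f"Overall accuracy: {correct / len(results):.0%}")

# Disaggregate by group: the picture changes
by_group = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, y, p in results:
    by_group[group][0] += int(y == p)
    by_group[group][1] += 1

for group, (ok, total) in by_group.items():
    print(f"{group}: {ok / total:.0%} accuracy")
```

In this toy run the overall accuracy is 75%, yet one group sees 100% accuracy and the other only 50%. A training-data-only “audit” would report the first number and stop.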

When algorithms aren’t interrogated and properly vetted, they further magnify a dysfunctional, imbalanced system that is already designed to benefit one group and harm another.

Need examples? 

Some claim that we don’t know enough about how machine-powered algorithms really decide, that even their developers aren’t fully aware of the underlying assumptions, and that it isn’t possible to determine which algorithms are biased and which are not. But let’s not feign ignorance and abdicate our role and power in correcting systemic bias.

We applaud IBM’s CEO for taking a monumental step in ending the company’s facial recognition software business. However, there are thousands of other companies using algorithms that can commit to doing better, with measurable goals.

From our experience, we encourage the use of algorithms; just make sure your algorithms are audited for racism, sexism, ageism, and all other forms of accidental discrimination. A build-it-now, fix-it-later mindset cannot take precedence when automatically generated decisions have cascading consequences for human lives.


The Throughline to Systemic Racism 

If you need a refresher on the systemic racism driving the Black Lives Matter movement, consider this: enslaved people were once synonymous with the ideology of ‘human capital’, terminology that human resources has conveniently adapted and adopted into corporate culture. Slave patrols later became modern police departments. And the insurance industry has its own complicated past in the slave insurance business.

The throughline to systemic racism is clear, and that bias is reproduced in machine-powered algorithms that should not be making life-altering decisions, because these algorithms have the intellect of an infant.

Believing that algorithms make discerning, adult-level decisions, or that algorithms are neutral, is a serious delusion we need to wake up from.

“I can’t get the job I wanted,” she said, “because they suggested that I was a criminal.” @FrankPasquale: "These types of tools can be used to inform human judgment, but they should never be replacing human beings.” #government #unemployment #fraud https://t.co/znv7rR92YN

— Eticas Research & Consulting (@EticasConsult) June 11, 2020

Looking for K-Pop Superstars

We can even heed a simple lesson from the Dallas police. They asked the public to submit photo, video, or text tips about possible illegal activity by protesters. Instead, the public flooded their app with K-pop videos.

To take this example one step further, if the Dallas police had fed those unverified photos, videos, and text tips into their systems, their biometric algorithms really would have started looking for K-pop stars in Dallas, TX.
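Here is a minimal sketch of that failure mode; the file names and the verification rule are placeholders invented for illustration. Whatever floods an unvetted pipeline becomes the training set, and a model trained on it will dutifully learn fancams instead of evidence.

```python
# Minimal sketch (hypothetical pipeline): ingesting public tips into a training
# set with and without verification. Names and data are invented.

def ingest(submissions, verify=None):
    """Build a training set from public submissions; optionally verify first."""
    if verify is None:
        return list(submissions)  # build-it-now: take everything at face value
    return [s for s in submissions if verify(s)]

# The public floods the tip line with fancams instead of evidence
submissions = ["kpop_fancam.mp4"] * 95 + ["genuine_tip.jpg"] * 5

unvetted = ingest(submissions)
vetted = ingest(submissions, verify=lambda s: s == "genuine_tip.jpg")  # placeholder check

print(f"Unvetted training set: {len(unvetted)} items, "
      f"{unvetted.count('kpop_fancam.mp4')} of them fancams")
print(f"Vetted training set: {len(vetted)} items")
```

The verification rule here is deliberately simplistic; the point is only that some check has to sit between public submissions and the data an algorithm learns from.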

Algorithms Also Need to Adopt a Continuously Learning Mindset

If we adopt a continuously learning mindset, we won’t have to run away from the algorithms. We can scrutinize ourselves as well as our data and, in the process, change both.

An algorithmic audit can teach us what we don’t know and surface the biases, conscious and unconscious, that we aren’t even aware of. Effectively, an audit creates space to correct what we got wrong. That’s how we can leverage algorithms to serve all humans, not just select groups.
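As one hypothetical slice of what such an audit can look like in code (the group names, outcomes, and threshold below are all invented), this sketch compares false-positive rates across two groups and flags the disparity for human review, exactly the kind of check an aggregate accuracy number never surfaces.

```python
# Minimal sketch (hypothetical metrics): one slice of an algorithmic audit.
# Compare false-positive rates across groups and flag any gap above a threshold.

def false_positive_rate(records):
    """records: list of (true_label, predicted_label) pairs, with 1 = flagged."""
    negatives = [(y, p) for y, p in records if y == 0]
    if not negatives:
        return 0.0
    return sum(1 for y, p in negatives if p == 1) / len(negatives)

# Invented predictions for two demographic groups
outcomes = {
    "group_a": [(0, 0), (0, 0), (0, 1), (1, 1), (0, 0)],
    "group_b": [(0, 1), (0, 1), (0, 0), (1, 1), (0, 1)],
}

rates = {group: false_positive_rate(records) for group, records in outcomes.items()}
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.10:  # the threshold is a policy choice, not a technical constant
    print(f"Flag for human review: false-positive gap of {gap:.0%} between groups")
```

False-positive rate is only one lens; a fuller audit would also look at false negatives, the cost of each kind of error, and how the training data was collected in the first place.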

Everyone has the capacity to change. Yes, learning is painful. There will be stretch marks and bruises. However, we can’t harden ourselves to ignorance. 

Otherwise, we might just end up with a bunch of K-pop.