You’re in the C-suite and you’ve been approached by many AI vendors promising that their algorithms can streamline, optimize and revolutionize your business. But your sage self asks: How do you know you’re buying the right algorithm? How do you discern which ones are fads and which ones will hurt your brand?

Sure, you could go back to school for a data science and machine learning degree, but what you really need is a summary of the key points. Here are a few considerations to weigh before you commission a full algorithm audit of the AI system you’re buying.

What is the intended use?

It goes without saying that you know what you want from an AI-powered solution. The more useful question is what the vendor built it to do: understand the AI’s intended use and you’ll understand its limitations. This prevents a scenario where you think you’re buying a car, but you’re getting a bicycle.

What is the percentage of false positives?

AI systems automate decision-making. For any given problem, a vendor can train multiple models and measure each one’s accuracy. Of course, we all prefer the most accurate model.
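
Before accepting a vendor’s headline accuracy number, ask for the error breakdown behind it. As a minimal sketch, here is how the false positive rate in this section’s title is computed from a model’s decisions; the labels and predictions below are invented for illustration.

```python
# Minimal sketch: the false positive rate a vendor should be able to
# report for any classifier. Labels and predictions are made up.

y_true = [0, 0, 0, 0, 1, 1, 1, 0, 1, 0]  # ground truth (1 = positive)
y_pred = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0]  # model output

false_pos = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
true_neg = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

# Share of true negatives that the model wrongly flagged as positive.
fpr = false_pos / (false_pos + true_neg)
print(f"False positive rate: {fpr:.0%}")  # 33% in this toy example
```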

However, as we saw with COVID-19, even a set of carefully built models can disagree so widely that the range itself becomes the problem.

During a Rachel Maddow interview, New York Governor Andrew Cuomo expressed his frustration: “We only know what the projection models say and the models say different things. When do you hit the top of the curve? Some say 7 days, some say 21 days. The range in these models are maddening. When you’re trying to plan, it’s very hard.”

AI-powered technologies are often based on data drawn from past behavior and aren’t prepared to deal with massive shifts in behavior because of a horrible pandemic or financial crisis. So when the present suddenly stops looking like the recent past, algorithms end up performing much worse than expected.

Another example comes from Ocado, a popular UK online grocery store, whose website traffic spiked 4x because of COVID-19. The company’s cybersecurity software concluded that the spike could only be a denial-of-service attack and blocked the new transactions. Luckily, humans intervened.
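
A simple guardrail can catch this failure mode. The sketch below is a hedged illustration, not Ocado’s actual system: it flags when current traffic no longer resembles the history a model was trained on, and routes the call to a person instead of letting the algorithm act alone. All numbers and thresholds are assumptions.

```python
import statistics

# Pre-crisis hourly order counts (illustrative numbers).
historical_hourly_orders = [900, 1100, 1000, 950, 1050, 980, 1020]
current_hourly_orders = 4100  # roughly 4x normal

mean = statistics.mean(historical_hourly_orders)
stdev = statistics.stdev(historical_hourly_orders)
z_score = (current_hourly_orders - mean) / stdev

if z_score > 3:
    # The present no longer looks like the past: escalate rather
    # than let the model auto-block legitimate customers.
    print(f"Traffic is {z_score:.1f} standard deviations above history.")
    print("Escalating to human review instead of auto-blocking.")
```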

Finally, in circumstances without historical data, understanding context is even more crucial. Companies often report accuracy measured in lab settings, not in real life. We’ve seen this with facial recognition and COVID-19 testing: real life is far more complicated than the controlled settings of a lab, virtual machine, or sandbox.

How are protected groups defined in the algorithm?

More often than not, AI-powered algorithms unintentionally harm protected groups: people defined by attributes such as race, gender, age, religion, and disability. Keep in mind that the harm isn’t limited to legally protected categories. An unvetted algorithm can also deselect a demographic unique to your market, leaving you in a predicament where you leave money on the table.

And because the algorithm isn’t selecting the right group, you end up targeting the wrong one.
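
One check an algorithm audit typically runs is a comparison of selection rates across groups, often against the “four-fifths rule” used in disparate-impact analysis. A minimal sketch, with hypothetical group labels and decision counts:

```python
# group -> (applicants approved, total applicants); hypothetical data.
approvals = {
    "group_a": (80, 100),
    "group_b": (50, 100),
}

rates = {g: approved / total for g, (approved, total) in approvals.items()}
highest = max(rates.values())

for group, rate in rates.items():
    # Four-fifths rule: a selection rate below 80% of the highest
    # group's rate is a common signal of potential disparate impact.
    ratio = rate / highest
    flag = "POTENTIAL DISPARATE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

If any group’s rate falls below that threshold, it’s a signal to investigate before deployment. The cases below show what happens when no one looks.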

Between 2013 and 2015, approximately 40,000 Michigan residents were victims of a faulty computer system, MiDAS, that wrongly accused them of defrauding the Unemployment Insurance Agency.

In 2012, Idaho implemented an automated system that resulted in enormous cuts to disability services available to Medicaid recipients. In 2016, a federal court found that the system was unlawfully arbitrary, unfair, and lacked due process. The court’s decision “orders specific remedies, requiring Idaho Department of Health and Welfare (IDHW) to develop a plan to ensure that all of the participants have people to help them in getting all assistance they are entitled to, mandating that IDHW identify the standards it uses to make assistance determinations and ordering testing to ensure the reliability and accuracy of IDHW’s automated systems.”

In Houston, algorithms were used to evaluate teacher performance, and teachers were able to overturn the system on due process grounds. They successfully argued that because the vendor considered the evaluation system a trade secret, they were denied the right to use the data to understand or improve their performance.

Last year, when Apple launched its own branded MasterCard, the algorithm determining cardholder approval favored men over women. In one case, a millionaire with a higher credit score than her husband was given a credit limit 20x lower than his. Husbands took to Twitter to denounce the sexist algorithm, while Apple’s customer service reps insisted they weren’t discriminating; it was “just the algorithm.”

The same thing happened to us. I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets. Hard to get to a human for a correction though. It’s big tech in 2019.

— Steve Wozniak (@stevewoz) November 10, 2019

How is the technology tracked over time for fairness, bias, and efficacy?

As automated systems continue to be implemented, continuous monitoring for accuracy is essential. The data a model was trained on is frozen at a point in time, while the humans it judges keep evolving.
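
In practice, monitoring can be as simple as recomputing key metrics on every new batch of decisions and alerting when they drift from the levels measured at audit time. A hedged sketch, in which the metric, groups, and tolerance are illustrative assumptions rather than a standard:

```python
def monitor_batch(decisions, baseline_rates, tolerance=0.05):
    """decisions: list of (group, approved) pairs from the latest period."""
    alerts = []
    for group, baseline in baseline_rates.items():
        outcomes = [approved for g, approved in decisions if g == group]
        if not outcomes:
            continue  # no decisions for this group in this batch
        rate = sum(outcomes) / len(outcomes)
        # Alert when the group's approval rate drifts past tolerance.
        if abs(rate - baseline) > tolerance:
            alerts.append(f"{group}: rate {rate:.0%} vs baseline {baseline:.0%}")
    return alerts

# Example: last month's decisions vs the rates recorded at audit time.
baseline = {"group_a": 0.78, "group_b": 0.74}
latest = [("group_a", True), ("group_a", True),
          ("group_b", False), ("group_b", False)]
print(monitor_batch(latest, baseline) or "Within tolerance")
```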

Meanwhile, the District of Columbia continues to use an algorithmic tool to assess the risk of youth violence in its juvenile justice system, even though the tool was found to be racially discriminatory. And in Allegheny County, Pennsylvania, officials plan to assign each child and family a “risk score” at birth via a “family screening” predictive tool intended to prevent child abuse.

Another relevant consideration is whether humans will be involved in algorithmic decisions. As a rule, human involvement lowers risk, but it is more complex and costly for the organization: reviewers need training, because human involvement can introduce its own form of bias.

If you are considering an AI system purchase or integrating an algorithm into your organization, please get in touch through info@eticasconsulting.com to find out how our team can help your business. Strategic execution and expert input in the early phases can save time, protect your budget, and prevent fines. And if you want to get to know us better, you can watch this short video or download our Algorithm Audit Guide (only available in Spanish for now).