Dangerous AI algorithms and how to recognize them

When discussing the threats of artificial intelligence, the first things that come to mind are images of Skynet, The Matrix, and the robot apocalypse. The runner-up is technological unemployment, the vision of a foreseeable future in which AI algorithms take over all jobs and push humans into a struggle for meaningless survival in a world where their labor is no longer needed.

Whether either or both of those threats are real is hotly debated among scientists and thought leaders. But AI algorithms also pose more imminent threats that exist today, in ways that are less conspicuous and poorly understood.

In her book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, mathematician Cathy O’Neil explores how blindly trusting algorithms to make sensitive decisions can harm many people who are on the receiving end of those decisions.

The dangers of AI algorithms can manifest themselves in algorithmic bias and dangerous feedback loops, and they can extend to all sectors of daily life, from the economy to social interactions to the criminal justice system.

While the use of mathematics and algorithms in decision-making is nothing new, recent advances in deep learning and the proliferation of black-box AI systems amplify their effects, both good and bad. And if we do not understand the present threats of AI, we will not be able to benefit from its advantages.


The characteristics of dangerous AI algorithms

We use algorithmic models to understand and process many things. “A model, after all, is nothing more than an abstract representation of some process, be it a baseball game, an oil company’s supply chain, a foreign government’s actions, or a movie theater’s attendance,” O’Neil writes in Weapons of Math Destruction. “Whether it’s running in a computer program or in our head, the model takes what we know and uses it to predict responses in various situations.”

But more and more of those models are being transferred from our heads to computers, thanks to advances in deep learning and the increased digitization of every aspect of our lives. Thanks to broadband internet, cloud computing, mobile devices, the internet of things (IoT), wearables, and a slew of other emerging technologies, we can collect and process more and more data about anything and everything.

This increased access to data and computing power has helped create AI algorithms that can automate an increasing number of tasks. Deep neural networks, which had previously been limited to research laboratories, have found their way into many areas that were previously challenging for computers, such as computer vision, machine translation, speech recognition, and facial recognition.

So far, so good. What can go wrong?

In Weapons of Math Destruction, O’Neil specifies three factors that make AI models dangerous: opacity, scale, and damage.

Algorithmic vs corporate opacity

There are two aspects to the opacity of AI systems: technical and corporate. Technical opacity, also referred to as the black-box problem of artificial intelligence, has received much attention in the past few years.

In a nutshell, the question is, how do we know an AI algorithm is making the right decision? This question is becoming more critical as AI finds its way into loan application processing, credit scoring, teacher rating, recidivism prediction, and many other sensitive fields.

Many media outlets have published articles that depict AI algorithms as mysterious machines whose behavior is unknown even to their developers. But contrary to what the media portrays, not all AI algorithms are opaque.

Traditional software, often referred to as symbolic artificial intelligence in AI jargon, is known for its interpretable and transparent nature. It is composed of hand-coded rules, meticulously put together by software developers and domain experts. It can be probed and audited, and an error can be traced to the line of code where it occurred.
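As a rough illustration, a hand-coded, rule-based loan check might look like the sketch below. The thresholds, field names, and function are hypothetical, not drawn from any real system; the point is that every rejection traces back to an explicit rule in the code.

```python
# A minimal, hypothetical rule-based loan check. Every decision can be
# traced back to the exact rule (line of code) that produced it.
def review_loan(application: dict) -> tuple[bool, str]:
    if application["credit_score"] < 620:            # rule 1: minimum credit score
        return False, "credit score below 620"
    if application["debt_to_income"] > 0.43:         # rule 2: debt-to-income cap
        return False, "debt-to-income ratio above 43%"
    if application["income"] < 2 * application["monthly_payment"] * 12:  # rule 3: income vs payment
        return False, "income too low relative to payment"
    return True, "approved"

print(review_loan({"credit_score": 700, "debt_to_income": 0.30,
                   "income": 60_000, "monthly_payment": 1_200}))
```

Auditing such a system amounts to reading the rules and checking which one fired for a given applicant.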

In contrast, machine learning algorithms, which have become increasingly popular in recent years, develop their behavior by analyzing many training examples and creating statistical inference models. This means that the developers don’t necessarily have the final say on how the AI algorithms behave.

But again, not all machine learning models are opaque. For instance, decision trees and linear regression models, two popular machine learning algorithms, give clear explanations of the factors that determine their decisions. If you train a decision tree algorithm to process loan applications, it can provide you with a tree-like breakdown (thus the name) of how it decides which loan applications to approve and which to reject. This gives developers a chance to discover potentially problematic factors and correct the model.
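For instance, here is a minimal sketch of that idea using scikit-learn. The toy loan records and feature names below are made up for illustration; `export_text` prints the learned tree so the factors behind each decision can be inspected.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy, made-up loan data: [credit_score, annual_income, debt_to_income]
X = [
    [720,  85_000, 0.20],
    [580,  40_000, 0.55],
    [690,  60_000, 0.30],
    [610,  35_000, 0.50],
    [750, 120_000, 0.15],
    [600,  30_000, 0.60],
]
y = [1, 0, 1, 0, 1, 0]  # 1 = approved, 0 = rejected

# Fit a shallow tree so the resulting rules stay easy to read
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Print a human-readable breakdown of the learned decision rules
print(export_text(model, feature_names=["credit_score", "income", "debt_to_income"]))
```

If a factor turns out to dominate the tree in a problematic way, developers can spot it in this printout and adjust the features or the training data accordingly.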

[Image: loan application decision tree]