
Turtles all the way down: Why AI’s cult of objectivity is dangerous, and how we can be better



This article was contributed by Slater Victoroff, founder and CTO of Indico Data.

There is a belief, formed from a healthy fear of science fiction and a misplaced faith in mathematics, that AI is an infallible judge of objective truth. We tell ourselves that AI algorithms distill divine truth from data, that there is no truth purer than the honest residue of a regression test. For others, the picture is simpler: logic is objective, math is logic, AI is math; thus AI is objective.

This is not a benign belief.

And, in fact, nothing could be further from the truth. More than anything, AI is a mirror: something built in the image of a human, created to imitate a human, and so it inherits our flaws. AI models are computer programs written in data. They reflect all the ugliness in that human data and, through the hundred cluttered imperfections on the mirror's surface, add some hidden ugliness of their own.

Joy Buolamwini has shown us that, despite open acknowledgment of these challenges in academia, these technologies are being actively adopted and deployed under a false picture of what today's AI really is. People's lives are already being upended, and it is important that we recognize and embrace a more realistic view of this world-changing technology.

Where does this belief in AI's objectivity come from, and why does it persist?

Why do so many experts believe that AI is inherently objective?

There is a grand lie within the field of AI: "There are two types of machine learning: supervised and unsupervised." For supervised methods, a human must tell the machine what the "correct" answer is: whether a tweet is positive or negative, say. Unsupervised methods need no such thing. You simply present the unsupervised method with a huge raft of tweets and set it to work.
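To make the distinction concrete, here is a minimal sketch using scikit-learn; the tweets, labels, and model choices are invented for illustration, not drawn from any real system.

```python
# A minimal sketch of the supervised/unsupervised split, using scikit-learn.
# The tweets, labels, and model choices are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

tweets = ["I love this phone", "great battery life",
          "worst purchase ever", "screen cracked in a day"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative; supplied by a human

X = TfidfVectorizer().fit_transform(tweets)

# Supervised: a human tells the machine the "correct" answer for each tweet.
clf = LogisticRegression().fit(X, labels)

# "Unsupervised": no labels are passed in; the algorithm just groups tweets.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # cluster assignments carry no notion of "positive"
```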

Many newcomers believe that, because no human notion of "correctness" has tainted the unsupervised model, it must be a machine of cold, objective reasoning. And when that cold, objective logic fails to align with reality, the assumption is that objectivity is only one step away: one more regularization term, one more momentum tweak, one more architecture change. It is simply a matter of finding the right mathematics, and the human element will fall away, like some dimensionless constant.

Let me be clear: this is not only wrong, but dangerously wrong. So, why is this dangerous idea so widespread?

Researchers, in their own estimation, are first and foremost makers of algorithms. They are musicians playing upon God's stringed equations. Model bias and problems of objectivity, meanwhile, are data problems, and no self-respecting researcher would ever muddy their hands by touching a dirty dataset. That is for the data people. Researchers build their models not for the real world, but for some messianic dataset that will one day save us all from bias.

It is understandable, of course. Like everyone else involved in the development of AI, researchers want to shift responsibility for the often horrific behavior of their creations. We see this even in the field's terminology, where names like "self-supervised" learning reinforce the notion that researchers play no part in these outcomes.

The AI taught itself this behavior, I swear! Pay no attention to the man behind the keyboard…

The myth of objectivity is dangerous

"Unsupervised" learning, or "self-supervised" learning, as described in the section above and as understood by most of the world, does not exist. In practice, a technique we call "unsupervised" may paradoxically contain orders of magnitude more supervision than a traditional supervised method.

An "unsupervised" technique for Twitter sentiment analysis, for example, might be trained on a billion documents, tens of thousands of meticulously parsed sentences, half a dozen sentiment analysis datasets, and a complete dictionary tagging the human-estimated sentiment of every word in the English language: a cumulative effort measured in person-centuries. A Twitter sentiment analysis dataset will still be needed for evaluation. Yet so long as the model was never trained directly on that Twitter sentiment analysis dataset, it can still be branded "unsupervised," and thus "objective."

In practice, it would be more accurate to call self-supervision "opaque supervision." The effect is to pile up enough layers of indirection that the instructions given to the machine are no longer transparent. When bad behavior is learned from bad data, the data can be corrected. When bad behavior comes from, say, someone deciding that three is a better value for k, no one will ever know, and no corrective action will ever be taken.
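As a small illustration of how such a choice hides itself, consider clustering the same points under two different values of k; the data below is invented, and the point is only that neither result is more "objective" than the other.

```python
# The same six points, clustered "without supervision" under two human
# choices of k. The data is invented; the point is that the choice of k
# is a supervisory decision that leaves no trace in the final model.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0],
                   [1.1, 0.9], [2.0, 2.1], [2.1, 1.9]])

for k in (2, 3):
    assignments = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(points)
    print(f"k={k}: {assignments}")
# k=2 merges two of the natural groups; k=3 keeps them apart. Neither
# outcome is more "objective" than the other.
```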

The problem is that when researchers abdicate responsibility, there is no one left to pick it up.

In most of these cases, we do not even have the data necessary to properly evaluate the bias of our models. I suspect one of the reasons Joy Buolamwini has focused on facial recognition is that it lends itself to measures of fairness that would be difficult to establish for other tasks. We can vary the skin tone of a face and check whether facial recognition performs similarly across those skin tones. For something like a modern question-answering model, it is far harder even to say what the "fair" answer to a controversial question would be.
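That kind of check can be sketched in a few lines: evaluate the same model separately on each group and compare. The predictions, labels, and group names below are invented for illustration.

```python
# Disaggregated evaluation in the spirit of Buolamwini's facial
# recognition audits: score the same model separately per group.
# Predictions, labels, and group tags are invented for illustration.
import numpy as np
from sklearn.metrics import accuracy_score

y_true = np.array([1, 1, 0, 1, 0, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 1])
group = np.array(["darker", "darker", "darker", "darker",
                  "lighter", "lighter", "lighter", "lighter"])

for g in np.unique(group):
    mask = group == g
    print(g, accuracy_score(y_true[mask], y_pred[mask]))
# Diverging numbers mean the model does not work "similarly across skin
# tones". No equally crisp check exists for a question-answering model.
```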

There is no replacement for supervision. There is no approach in which a human is not forced to decide what is right and what is wrong. Any belief that rigorous testing and problem definition can be avoided is dangerous. These approaches do not avoid or reduce bias; they are no more objective than the Redditors they imitate. They merely let us push that bias into the subtle, poorly understood cracks of the system.

How should we think about AI and model bias?

AI is technology. Like computers and steel and steam engines, it can be a tool of empowerment, or it can bind us in digital chains. Modern AI can mimic human language, vision, and comprehension to an unprecedented degree, and in doing so it offers a unique opportunity to understand our own shortcomings. We can take our biases and boil them down to bits and bytes. We can give names and numbers to billions of human experiences.

This generation of AI has repeatedly, and shamefully, laid our failings bare. We now have two options: we can measure and test, and push and fight, until we do better. Or we can immortalize our ignorance and prejudice in model weights, hiding beneath a false cloak of objectivity.

When I started Indico Data with Diana and Madison, we placed transparency and accountability at the core of our corporate values. We push our customers to do the same: to have the difficult conversations, and to define a coherent ground truth they can be proud of. From there, the key to overcoming bias lies in testing. Test your results against your objective for any shortfall before going to production, then keep testing so you catch failures once you are in production.
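As one concrete way to operationalize "test before production, then keep testing," here is a minimal sketch of a bias gate that could run in CI and be rerun against live traffic; the 0.05 threshold and the group names are hypothetical, not a recommendation.

```python
# A pre-production bias gate, written as a test that could run in CI and
# be rerun against live traffic. The 0.05 threshold and the group names
# are hypothetical, not a recommendation.
from collections import defaultdict

def accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two groups."""
    right, total = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        right[g] += int(t == p)
    accs = [right[g] / total[g] for g in total]
    return max(accs) - min(accs)

def test_bias_gate():
    # In a real pipeline, y_pred would come from the model under test,
    # evaluated on a held-out set before release and on sampled live
    # traffic after release.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    assert accuracy_gap(y_true, y_pred, groups) <= 0.05
```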

The way forward

It is important to note that obscurity is no substitute for responsibility. Hiding human biases inside model biases does not eliminate them, nor does it magically make them objective.

AI researchers have made amazing progress. Problems that were considered unsolvable a few years ago have been transformed into "Hello World" tutorials.

Today’s AI is an incredible, unprecedented imitation of human behavior. Now the question is whether humans can set a good example to follow.

Can we?

Slater Victoroff is the founder and CTO of Indico Data.

