
Evolution makes us treat AI as if it were human: we must kick the habit

Artificial intelligence does not have the same comprehension powers as a human (Image: Getty)

Artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned from Google, warning of the dangers of the technology ‘becoming smarter than us’.

His fear is that AI will one day be able to ‘manipulate people into doing what it wants’.

There are good reasons to be concerned about AI. But we often treat and talk about AIs as if they were human. Stopping this, and recognizing what they really are, could help us maintain a fruitful relationship with the technology.

In a recent essay, the American psychologist Gary Marcus advised us to stop treating AI models like people. By AI models, he means large language models (LLMs) such as ChatGPT and Bard, which are now used by millions of people daily.

He cites egregious examples of people “over-attributing” human-like cognitive abilities to AI that have had a variety of consequences. The funniest was the US Senator who claimed that ChatGPT “taught itself chemistry”. The most heartbreaking was the report of a young Belgian man who was said to have taken his own life after lengthy conversations with an AI chatbot.

Marcus is right that we need to stop treating AI like people: conscious moral agents with interests, hopes, and desires. However, many will find this difficult or nearly impossible. This is because LLMs are designed, by people, to interact with us as if they were humans, and we are designed, by biological evolution, to interact with them in the same way.

Good imitators

The reason why LLMs can mimic human conversation so convincingly stems from an insight by computing pioneer Alan Turing, who realized that a computer doesn’t have to understand an algorithm in order to run it. This means that while ChatGPT can produce paragraphs full of emotive language, it doesn’t understand the meaning of any word in any sentence it generates.

LLM designers successfully turned the problem of semantics (arranging words to create meaning) into a problem of statistics, matching words based on the frequency of their prior use. Turing’s idea echoes Darwin’s theory of evolution, which explains how species adapt to their environment, becoming increasingly complex, without needing to understand anything about their environment or themselves.
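To see what turning semantics into statistics looks like in practice, here is a minimal, purely illustrative sketch in Python: a toy program that ‘writes’ by choosing each next word based only on how often it followed the previous word in its training text. The function names are invented for this example, and real LLMs use neural networks over vast corpora rather than raw word counts, but the principle of prediction from prior frequency is the same.

```python
import random
from collections import defaultdict

# Toy 'language model': count which word followed which in the
# training text, then generate by sampling from those counts.
# No meaning is represented anywhere; only frequencies.

def train_bigrams(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev):
    followers = counts.get(prev)
    if not followers:
        return None
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

model = train_bigrams("the cat sat on the mat and the dog sat on the rug")
print(next_word(model, "the"))  # e.g. 'cat', 'mat', 'dog' or 'rug'
```

The output can look fluent, but the program has no notion of cats, dogs or rugs; it only knows which words tended to follow which.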

Artificial intelligence can produce comprehensible text, but it doesn’t understand the words themselves (Image: Getty/iStockphoto)

Cognitive scientist and philosopher Daniel Dennett coined the phrase “competence without understanding,” which perfectly captures the ideas of Darwin and Turing.

Another important contribution from Dennett is his “intentional stance.” This essentially states that to fully explain the behavior of an object (human or non-human), we must treat it as a rational agent. This manifests most often in our tendency to anthropomorphize non-human species and other non-living entities.

But it is useful. For example, if we want to beat a computer at chess, the best strategy is to treat it as a rational agent that ‘wants’ to beat us. We can explain that the reason the computer castled, for example, was because it ‘wanted to protect its king from our attack’, without any contradiction in terms.

Alan Turing and Charles Darwin (Images: Getty)

We can speak of a tree in a forest as ‘wanting to grow’ towards the light. But neither the tree nor the chess computer represents those ‘wants’ or reasons to itself; it is just that the best way to explain their behavior is to treat them as if they did.

Intentions and agency

Our evolutionary history has provided us with mechanisms that predispose us to find intention and agency everywhere.

In prehistoric times, these mechanisms helped our ancestors avoid predators and develop altruism towards their closest relatives. These are the same mechanisms that make us see faces in clouds and anthropomorphize inanimate objects. Mistaking a tree for a bear costs us little, but mistaking a bear for a tree could cost us our life.

Evolutionary psychology shows how we are primed to interpret any object that might plausibly be human as human. We unconsciously adopt the intentional stance and attribute all our cognitive abilities and emotions to such objects.

Given the potential disruption that LLMs can cause, we need to recognize that they are simply probabilistic machines with no intentions and no concern for humans. We must be very attentive to our use of language when describing the human-like feats of LLMs and AI in general. Here are two examples.

The first was a recent study that found ChatGPT’s answers to patients’ questions to be more empathetic and of “higher quality” than those given by doctors. Using emotive words like ‘empathy’ for an AI predisposes us to grant it the ability to think, reflect and genuinely care for others, all of which it lacks.

The second was when GPT-4 (the latest version of the technology behind ChatGPT) was released in March and greater abilities in creativity and reasoning were attributed to it. In reality, we are simply seeing increased ‘competence’, but still no ‘understanding’ (in Dennett’s sense) and definitely no intentions; just pattern matching.

There are concerns about AI ‘going rogue’, but the bigger problem is ‘bad actors’ using it to manipulate people (Image: Getty/iStockphoto)

Safe and secure

In his recent comments, Hinton raised a near-term threat of “bad actors” using AI for subversion. We could easily imagine a rogue regime or multinational deploying an AI, trained on fake news and falsehoods, to flood public discourse with misinformation and deep-fakes. Scammers could also use AI to take advantage of vulnerable people in financial scams.

Last month, Gary Marcus and others, including Elon Musk, signed an open letter calling for an immediate pause on further LLM development. Marcus has also called for an international agency to promote safe, secure and peaceful AI technologies, describing it as a ‘CERN for AI’.

Many have also suggested that anything generated by AI should carry a watermark, so there is no doubt about whether we are interacting with a human or a chatbot.
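To make the watermarking idea concrete, here is a toy sketch of how one published family of schemes works: the generator, at each step, prefers words that a secret hash of the previous word marks as ‘green’, and a detector then checks whether a text contains suspiciously many green words. This is a simplified illustration only; the function names are invented, and real schemes operate on model tokens and probabilities rather than whole words.

```python
import hashlib

# Toy detector for a 'green list' watermark: a hash of each word pair
# deterministically marks roughly half of all continuations as green.
# A watermarking generator would favor green continuations, so its
# output scores well above the ~0.5 expected from unwatermarked text.

def is_green(prev_word, word):
    digest = hashlib.sha256((prev_word + "|" + word).encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text):
    words = text.split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    return sum(is_green(p, w) for p, w in pairs) / len(pairs)

# Expected around 0.5 for unwatermarked text; longer samples give
# tighter estimates, which is why detection needs enough words.
print(green_fraction("the cat sat on the mat"))
```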

Regulation of AI lags behind innovation, as it often does in other areas of life. There are more problems than solutions, and the gap is likely to widen before it narrows. But in the meantime, repeating Dennett’s phrase “competence without understanding” might be the best antidote to our innate compulsion to treat AI like humans.

Neil Saunders, Senior Lecturer in Mathematics, University of Greenwich

This article is republished from The Conversation under a Creative Commons license. Read the original article.



