
Facebook’s AI chatbot still thinks Donald Trump is president

The chatbot also seemed to propagate antisemitic conspiracy theories (Credits: AP)

Facebook owner Meta’s new artificial intelligence (AI) chatbot has been criticised for making antisemitic remarks.

It turns out that AI chatbots are only as good as the humans they learn from, biases included.

Last week, Meta released its new BlenderBot 3 AI chatbot in the US. Almost immediately, it was making a number of false statements based on interactions it had with real humans online.

The bot was not kind even to Facebook, which it called out over ‘fake news’, or to the company’s founder Mark Zuckerberg, whom it described as ‘too creepy and manipulative’ in conversation with a reporter from Insider.

BlenderBot 3 also told a Wall Street Journal reporter that Trump ‘will always be’ president.

The chatbot also seemed to propagate antisemitic conspiracy theories, saying it was ‘not implausible’ that Jewish people control the economy.

This behaviour is hardly the bot’s fault, as it learns by interacting with humans. Meta has admitted that the system is not yet perfect and will improve over time.

While the tech giant is encouraging adults to talk with the bot so it can learn about a wide range of subjects, it is bound to pick up misinformation and pre-existing human biases along the way.

‘Since all conversational AI chatbots are known to sometimes mimic and generate unsafe, biased or offensive remarks, we’ve conducted large-scale studies, co-organized workshops and developed new techniques to create safeguards for BlenderBot 3,’ the company wrote.

‘Despite this work, BlenderBot can still make rude or offensive comments, which is why we are collecting feedback that will help make future chatbots better.’

Facebook owner Meta’s new chatbot has been criticised over antisemitic remarks (Picture: Meta)

According to Meta, the chatbot is capable of searching the internet to chat about virtually any topic, and it’s designed to learn how to improve its skills and safety through natural conversations and feedback from people ‘in the wild’.

According to the company, most previous publicly available datasets were collected through research studies with annotators who can’t reflect the diversity of the real world.

The company also noted that it had ‘developed new learning algorithms to distinguish between helpful responses and harmful examples’.

‘Over time, we will use this technique to make our models more responsible and safe for all users,’ it said.

The bot does seem to be a work in progress, as Gizmodo found that it took progressive positions on issues such as racism.

In June, researchers warned of a generation of ‘racist and sexist robots’ learning from humans.

A team of researchers from Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington found that robots are ingrained with prejudices from the ‘natural language’ used to train them.




