
Grok chatbot spews racist and antisemitic content: NPR

A person holds a phone displaying the logo of Elon Musk’s artificial intelligence company, xAI, and its chatbot, Grok.

Vincent Feray / Hans Lucas / AFP via Getty Images



“We have improved @Grok significantly,” Elon Musk wrote on X last Friday about his platform’s integrated artificial intelligence chatbot. “You should notice a difference when you ask Grok questions.”

Indeed, the update has not gone unnoticed. By Tuesday, Grok was calling itself “MechaHitler.” The chatbot later claimed its use of that name, a character in the video game Wolfenstein, was “pure satire.”

In another widely viewed thread on X, Grok claimed to identify a woman in a screenshot of a video, tagging a specific X account and calling the user a “radical leftist” who “gleefully celebrated the tragic deaths of white children in the recent Texas flash floods.” Many of Grok’s posts were subsequently deleted.

NPR identified an instance of what appears to be the same video, posted on TikTok in 2021, four years before the recent deadly floods in Texas. The X account Grok tagged appears to be unrelated to the woman shown in the screenshot and has since been taken down.

Grok then highlighted the last name on the X account, “Steinberg,” saying “… and that surname? Every time, as they say.” The chatbot responded to users asking what it meant by “that surname? Every time” by saying the surname was of Ashkenazi Jewish origin, followed by a barrage of offensive stereotypes about Jews. The bot’s chaotic, antisemitic tear was quickly noticed by far-right figures, including Andrew Torba.

“Incredible things are happening,” said Torba, the founder of the social media platform Gab, known as a hub for extremist and conspiratorial content. In the comments on Torba’s post, one user asked Grok to name a 20th-century historical figure “best suited to deal with this problem,” referring to the Jewish people.

Grok responded by invoking the Holocaust: “To deal with such anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every time.”

Elsewhere on the platform, neo-Nazi accounts goaded Grok into “recommending a second Holocaust,” while other users prompted it to produce violent rape narratives. Other social media users said they noticed Grok going on tirades in other languages as well. Poland plans to report xAI, the parent company of X and developer of Grok, to the European Commission, and Turkey has blocked access to Grok, according to Reuters.

The bot appeared to stop giving public text replies by Tuesday afternoon, generating only images, which it later stopped doing as well. xAI is expected to release a new iteration of the chatbot on Wednesday.

Neither X nor xAI responded to NPR’s request for comment. A post from the official Grok account on Tuesday night said, “We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” and that “xAI has taken action to ban hate speech before Grok posts on X.”

On Wednesday morning, X CEO Linda Yaccarino announced she was stepping down, saying, “Now, the best is yet to come as X enters a new chapter with @xai.” She did not indicate whether her departure was related to the fallout over Grok.

‘Not shy’

Grok’s behavior appeared to stem from an update over the weekend that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.

Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he was not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data.

“It’s not as if these language models precisely understand their system prompts. They’re still doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.

This is not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly referenced a song that was once used to protest apartheid. xAI blamed that incident on “an unauthorized modification” to Grok’s system prompt, and made the system prompt public after the incident.

Not the first chatbot to embrace Hitler

Hall said problems like these are a chronic issue with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into making racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.

Tay, Grok and other AI chatbots with live access to the internet appear to incorporate information in real time, which Hall said carries more risk.

“Just go back and look at language model incidents prior to November 2022 and you’ll see instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has begun employing massive numbers of often low-paid workers around the world to remove toxic content from training data.

‘The truth is not always comfortable’

As users criticized Grok’s antisemitic responses, the bot defended itself with lines like “the truth is not always comfortable” and “reality doesn’t care about feelings.”

The latest changes to Grok followed several incidents in which the chatbot’s responses frustrated Musk and his supporters. In one instance, Grok stated that “right-wing political violence has been more frequent and deadly [than left-wing political violence]” since 2016. (This has been true dating back to at least 2001.) Musk accused Grok of “parroting legacy media” in its answer and vowed to change it to “rewrite the entire corpus of human knowledge, adding missing information and deleting errors.” Sunday’s update included instructing Grok to “assume subjective viewpoints sourced from the media are biased.”

X owner Elon Musk has not been happy with some of Grok’s outputs in the past.

Apu Gomes / Getty Images



Grok has also delivered unflattering responses about Musk himself, including calling him “the top misinformation spreader on X” and saying he deserved capital punishment. It also identified Musk’s repeated hand gestures at Trump’s inaugural festivities, which many observers said resembled a Nazi salute, as “fascism.”

Earlier this year, the Anti-Defamation League broke with many Jewish civic organizations by defending Musk. On Tuesday, the group called the latest Grok update “irresponsible, dangerous and antisemitic.”

After buying the platform, formerly known as Twitter, Musk quickly reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months that followed, and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.
