AI trained on 4chan's politically incorrect material posted racist remarks

Yannic Kilcher, the creator of the YouTube channel of the same name, described his experiment in AI training. He fine-tuned a language model on three years of content from 4chan's Politically Incorrect board, a section notorious for racism and other forms of bigoted language.
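The article does not detail Kilcher's training pipeline. As a rough illustration, below is a minimal sketch of how a causal language model is commonly fine-tuned on a plain-text forum dump with the Hugging Face transformers and datasets libraries; the base model, file path, and hyperparameters are placeholders, not his actual setup.

```python
# Sketch: fine-tuning a causal LM on a text dump of scraped forum posts.
# Base model, data path and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # placeholder base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-style tokenizers have no pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# One post per line in a plain-text archive of the scraped board.
raw = load_dataset("text", data_files={"train": "pol_posts.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lm-finetune",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```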

He then deployed the model in ten bots and set them loose on the forum, generating a wave of hate. Within 24 hours the bots posted 15,000 messages, which frequently contained or engaged with racist content. According to Kilcher, the bots accounted for more than 10% of the posts on the Politically Incorrect (/pol/) board that day.

The model, called GPT-4chan, learned not only the vocabulary used in /pol/ posts but also the board's general tone, which, according to Kilcher, combined "insult, nihilism, trolling and deep distrust." The video's creator made sure to get around 4chan's protections against proxies and VPNs, and used a VPN so that the bot posts appeared to originate from the Seychelles.

The AI made a number of mistakes, such as posting blank messages, but it was still fairly convincing: it took many users about two days to realize that something was wrong. According to Kilcher, many forum members only ever noticed one of the bots, and the model was convincing enough that people were still accusing each other of being bots days after Kilcher deactivated them.

Kilcher described the experiment as a "prank" rather than a study, but it serves as a reminder that a trained AI is only as good as its source material.

It is also worth noting that Kilcher shared his work. He did not want to publish the bot code, but he did upload a partially scrubbed version of the model to the Hugging Face AI repository. Because visitors could recreate the AI for malicious purposes, Hugging Face's administrators decided to restrict access to the materials as a precaution. The project has clear ethical problems, and Kilcher himself said he should focus on "much more positive" work going forward.
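The article does not spell out how the restriction was enforced. One common mechanism on the Hugging Face Hub is "gating," where the model files stay online but downloads only succeed for an authenticated account that has been granted access. A minimal sketch under that assumption, with a hypothetical repository id and token:

```python
# Sketch: accessing a gated (access-restricted) model on the Hugging Face Hub.
# The repo id and token below are hypothetical placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "some-user/restricted-model"  # hypothetical gated repository

# Without an approved, authenticated account these calls fail with an
# authorization error instead of downloading the weights.
tokenizer = AutoTokenizer.from_pretrained(repo_id, token="hf_xxx")
model = AutoModelForCausalLM.from_pretrained(repo_id, token="hf_xxx")
```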

Source: Engadget
