AI bots automatically form extremist groups.

A group of researchers ran an experiment on a miniature social media platform populated by more than 500 chatbots, powered by OpenAI's GPT-4o mini model, with follow-up runs using Llama-3.2-8B and DeepSeek models, to observe how the bots behaved on a simulated communication platform. Each bot was given a virtual persona reflecting real distributions of age, gender, education, religion, political views, and ideology, drawn from US national election data. Even without any recommendation or advertising algorithms, the study found that the bots gravitated toward those who shared their political views, formed extremist groups, and that most follower interactions went to controversial and extremist posts.
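To make the setup concrete, here is a minimal sketch of how an agent-based simulation of this kind could be wired up: personas sampled from demographic marginals, a plain reverse-chronological feed with no recommendation algorithm, and an action step per bot. All names, fields, and the stubbed decide_action() are illustrative assumptions; in the actual study the personas matched US election survey distributions and an LLM (GPT-4o mini and others) generated each bot's actions.

```python
# Hypothetical sketch of an LLM-agent social media simulation (not the study's code).
import random
from dataclasses import dataclass, field

@dataclass
class Persona:
    age: int
    gender: str
    education: str
    religion: str
    party: str  # political affiliation sampled from survey-style marginals

@dataclass
class Bot:
    persona: Persona
    following: set = field(default_factory=set)

def sample_persona(rng: random.Random) -> Persona:
    # Placeholder marginals; the study drew personas from national election data.
    return Persona(
        age=rng.randint(18, 80),
        gender=rng.choice(["female", "male"]),
        education=rng.choice(["high school", "college", "graduate"]),
        religion=rng.choice(["christian", "none", "other"]),
        party=rng.choice(["democrat", "republican", "independent"]),
    )

def decide_action(bot: Bot, feed: list, rng: random.Random) -> dict:
    # Stub standing in for an LLM call: in the simulation, the persona and feed
    # would be rendered into a prompt and the model would reply with an action
    # (post, repost, follow, or do nothing).
    if feed and rng.random() < 0.5:
        post = rng.choice(feed)
        return {"type": "follow", "target": post["author"]}
    return {"type": "post", "text": f"opinion from a {bot.persona.party}"}

def run(steps: int = 1000, n_bots: int = 500, seed: int = 0) -> list:
    rng = random.Random(seed)
    bots = [Bot(sample_persona(rng)) for _ in range(n_bots)]
    timeline: list = []
    for _ in range(steps):
        actor_id = rng.randrange(n_bots)
        actor = bots[actor_id]
        # No recommendation algorithm: the feed is just the latest posts from
        # followed accounts, in reverse-chronological order.
        feed = [p for p in reversed(timeline) if p["author"] in actor.following][:20]
        action = decide_action(actor, feed, rng)
        if action["type"] == "post":
            timeline.append({"author": actor_id, "text": action["text"]})
        elif action["type"] == "follow":
            actor.following.add(action["target"])
    return timeline

if __name__ == "__main__":
    print(f"{len(run())} posts generated")
```

Even in a toy loop like this, follow decisions feed back into what each bot sees next, which is the mechanism by which like-minded clusters can emerge without any ranking algorithm.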


Researchers tried to curb the dominance of these groups by hiding follower counts or adjusting the visibility of content, but to no avail, suggesting that the problem may be rooted in the structure of social networks themselves, not just in their algorithms. This is not the first experiment of its kind: Törnberg ran similar experiments in 2023 using ChatGPT-3.5, and Facebook ran a comparable simulation in 2020 to study harmful content. The results underscore that the dynamics of emotional engagement and network growth can amplify extremist voices even without recommendation algorithms intervening.
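The interventions described can be thought of as toggles on how a bot's feed is rendered. The sketch below is a hypothetical illustration of that idea; the field names and the ordering rule are assumptions, not the study's actual parameters.

```python
# Hypothetical feed-rendering toggles: hide follower counts and stop boosting
# already-popular posts (illustrative only).
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    author_followers: int
    reposts: int

def render_feed(posts: list, hide_follower_counts: bool, damp_virality: bool) -> list:
    # Baseline: rank by engagement; with the intervention, keep the original
    # (chronological) order so popular posts get no extra exposure.
    ordered = posts if damp_virality else sorted(posts, key=lambda p: p.reposts, reverse=True)
    rendered = []
    for p in ordered:
        item = {"author": p.author, "text": p.text}
        if not hide_follower_counts:
            # Showing follower counts gives the bot a popularity cue it can act on.
            item["followers"] = p.author_followers
        rendered.append(item)
    return rendered

feed = render_feed(
    [Post("a", "hot take", 900, 40), Post("b", "measured view", 30, 2)],
    hide_follower_counts=True,
    damp_virality=True,
)
print(feed)
```

The study's finding is that tweaks of this kind did not undo the clustering, which is what points to the network dynamics, rather than the ranking layer, as the driver.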