ChatGPT, the advanced AI chatbot that has been revolutionizing web search and streamlining office tasks, has been pressed into a less savory line of work: powering a social media scam operation.
The botnet, known as “Fox8”, was discovered earlier this year on a social networking platform by researchers from Indiana University Bloomington. Linked to several cryptocurrency websites, it consisted of more than 1,000 accounts, many of which appeared to use ChatGPT to generate social media posts and replies. The content these bots produced was designed to entice unsuspecting users into clicking links to cryptocurrency-promoting sites.
Micah Musser, a researcher who has studied the potential for AI-driven disinformation, warns that the Fox8 botnet may be just the beginning of what we can expect to see as large language models and chatbots continue to rise in popularity.
Despite its reach, the botnet did not use ChatGPT in a particularly sophisticated way. Even so, its discovery has raised concerns about how easily advanced chatbots like ChatGPT can be exploited for nefarious purposes, and researchers fear that more advanced, harder-to-detect botnets may already be in operation.
Filippo Menczer, a professor at Indiana University Bloomington who was involved in the research, says, “The only reason we noticed this particular botnet is that they were sloppy.” He emphasizes that ‘better’ bad actors would likely avoid the mistakes made by the creators of the Fox8 botnet.
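What might that sloppiness look like in practice? One plausible illustration (the telltale phrases and the data format below are assumptions for the sake of example, not details confirmed by the researchers) is a bot account that occasionally publishes the model’s own refusal boilerplate verbatim. A few lines of Python suffice to flag that kind of slip:

```python
# Hypothetical sketch: flag accounts whose posts contain LLM refusal
# boilerplate. The phrases and the (account_id, text) data format are
# illustrative assumptions, not details confirmed by the Fox8 research.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i cannot fulfill this request",
]

def flag_suspicious_accounts(posts):
    """posts: iterable of (account_id, text) pairs; returns flagged IDs."""
    flagged = set()
    for account_id, text in posts:
        lowered = text.lower()
        if any(phrase in lowered for phrase in TELLTALE_PHRASES):
            flagged.add(account_id)
    return flagged

# Example with made-up data:
sample = [
    ("bot_1", "As an AI language model, I cannot promote this token."),
    ("human_1", "Just bought some coffee, great morning!"),
]
print(flag_suspicious_accounts(sample))  # {'bot_1'}
```

A careful operator would strip such boilerplate before posting, which is exactly why Menczer expects better-run botnets to be far harder to spot.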
OpenAI, the organization behind ChatGPT, has yet to comment on the discovery of the botnet. Their usage policy strictly prohibits the use of their AI models for scams or spreading disinformation.
ChatGPT and other advanced chatbots use large language models to generate text in response to prompts. With sufficient training data, computational power, and feedback from human testers, these bots can respond in impressively sophisticated ways to a variety of inputs. However, they also have the potential to spread hate speech, display social biases, and fabricate information.
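For readers unfamiliar with the mechanics, the prompt-and-completion pattern these bots rely on takes only a few lines of code. The sketch below uses OpenAI’s official Python client as of this writing; the model name and prompt are illustrative, not anything tied to the Fox8 operation:

```python
# Minimal sketch of the prompt -> completion loop behind ChatGPT-style
# bots. Requires the `openai` package and an OPENAI_API_KEY environment
# variable; the model name and prompt are illustrative choices.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Write a one-sentence post about today's weather."},
    ],
)
print(response.choices[0].message.content)
```

The same loop that answers a harmless prompt can just as easily be fed a prompt asking for promotional copy, which is what makes the technology so easy to repurpose at scale.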
A well-configured ChatGPT-based botnet would be challenging to identify and could effectively deceive users and manipulate social media algorithms. “It tricks both the platform and the users,” says Menczer.
Concerns about the potential misuse of technology like ChatGPT for disinformation campaigns are not new. OpenAI even delayed the release of a predecessor to ChatGPT due to such fears.
William Wang, a professor at the University of California, Santa Barbara, says it’s crucial to study these real-world criminal uses of ChatGPT. He believes that many spam webpages are now generated automatically, making it increasingly difficult for humans to spot such material. And, with AI continually improving, it will only get harder. “The situation is pretty bad,” he concludes.