Over the past several years, you’ve probably heard at least something about AI chatbots like ChatGPT. Since its release in 2022, ChatGPT’s popularity has grown rapidly alongside its accessibility and capability. Today, ChatGPT can perform a wide range of tasks, including generating text, answering questions, and creating images. Since AI has become such a widely used tool, several companies have jumped on the bandwagon and created their own AI-based tools. Just a year after ChatGPT’s initial release, Elon Musk’s xAI launched Grok, its own AI chatbot. Grok has many of the same capabilities as ChatGPT, including answering questions and generating content. One major difference between the two, though, is that ChatGPT excels at producing more polished content, while Grok is built to be more conversational and creative.
Many people have praised Grok for its unique generative capabilities. However, experts have recently discovered that online threat actors are using this AI chatbot to enhance their scams. According to researchers, cybercriminals have started using Grok to spread phishing links by tricking the chatbot into including malicious links in its responses. AI chatbots are only as smart as we train them to be, and they don’t yet have the ability to reliably separate false, malicious information from real, accurate information. Chatbots like Grok typically gather information by scraping the web, so anything posted anywhere can be fair game for inclusion in a response. Although AI developers have been working on solutions, this remains an ongoing problem.
In this particular scam, cybercriminals post promoted clickbait videos on the X platform with a hidden malicious link embedded in the ‘from’ field below the video. Although X restricts links in promoted posts, this technique circumvents that restriction. The cybercriminals then ask Grok where the video is from, prompting the chatbot to surface the malicious link in its answer. They have effectively tricked Grok into serving harmful links to its users. Researchers found that these scam posts were receiving millions of impressions on X, in part because Grok is a legitimate tool embedded directly in the platform, which lends its replies an air of trustworthiness.
Although researchers reported this scam on the X platform and Grok, it could be pulled off with any AI chatbot. The scam relies on the fact that many people blindly trust the information AI chatbots provide. It proves the importance of fact-checking information from chatbots like Grok and ChatGPT. It also further emphasizes the importance of being cautious about advertisements run on social media platforms like X, Instagram, Facebook, etc. Scammers often use these platforms to run fraudulent ads that ultimately trick you into clicking on a malicious link. Any generative AI chatbot that draws on public information to generate responses can accidentally spread false or malicious content. Many chatbots have even been accused of bias against certain demographics or political ideologies due to the methods used to train them. If the information used to train a bot is biased, the bot will inevitably be biased too. AI is amazing technology, but it cannot be fully trusted just yet.
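As a rough illustration of what “don’t blindly trust links in a chatbot reply” can look like in practice, here is a minimal Python sketch that pulls the domains out of a chatbot response and checks them against a blocklist. The domain names and the `flag_suspicious_links` helper are hypothetical, invented for this example; a real deployment would query an up-to-date threat-intelligence or URL-reputation feed rather than a hard-coded set.

```python
import re

# Hypothetical blocklist of known-malicious domains. In practice this would
# come from a live threat-intelligence or URL-reputation service.
KNOWN_BAD_DOMAINS = {"malicious-example.test", "fake-video-cdn.test"}

# Capture the host portion (everything between "://" and the next "/" or space).
URL_PATTERN = re.compile(r"https?://([^/\s]+)", re.IGNORECASE)

def extract_domains(text: str) -> list[str]:
    """Pull the host out of every URL found in a chatbot reply."""
    return [match.group(1).lower() for match in URL_PATTERN.finditer(text)]

def flag_suspicious_links(reply: str) -> list[str]:
    """Return any domains in the reply that appear on the blocklist."""
    return [d for d in extract_domains(reply) if d in KNOWN_BAD_DOMAINS]

reply = "That clip was originally posted at https://malicious-example.test/video123"
print(flag_suspicious_links(reply))  # prints ['malicious-example.test']
```

This only catches domains already known to be bad, which is exactly why scams like the one above spread: a freshly registered phishing domain sails past any static list, so human skepticism about links in AI answers remains the last line of defense.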
Read our previous post here: Hackers Are Using Malicious Microsoft Teams Calls