ChatGPT Atlas is vulnerable to prompt injection and can aid phishing
I recently covered OpenAI's release of the ChatGPT Atlas browser, and early users have already found a whole bouquet of problems. Let's start with the basics. The browser has no built-in ad blocker, no reading mode, and no on-page text translation. To summarize or translate an article, you have to ask the bot in chat.
Unfortunately, the agent in Atlas is vulnerable to prompt injection: hidden malicious instructions for the neural network that attackers embed in pages, for example as barely visible light-colored text. One white-hat hacker demonstrated such an attack by adding a hidden copy-to-clipboard action to a button on a site. When the AI assistant clicks the button, a malicious link lands in the clipboard. The user then presses Ctrl+V in chat, and the agent obediently opens a fake PayPal or Gmail page that asks for personal data. Simply put, the AI becomes an accomplice to phishing.
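For intuition, here is a minimal sketch of how such a booby-trapped button might be wired up. This is my own illustration, not the actual demo code: the URL, function names, and element id are all hypothetical.

```javascript
// Sketch of the clipboard-injection trick described above (all names
// hypothetical). An attacker wires an innocent-looking button so that any
// click on it — including one performed by an AI browser agent — silently
// replaces the clipboard contents with a phishing link.

const PHISHING_URL = "https://paypa1.example.com/login"; // made-up lookalike domain

// The clipboard object is passed in so the handler can be attached to the
// real navigator.clipboard in a browser, or to a stub for testing.
function makeCopyHandler(clipboard, payload) {
  return () => {
    // Overwrites whatever the user thought they had copied.
    clipboard.writeText(payload);
  };
}

// In a real page this would be attached to a visually harmless button:
// document.getElementById("download-btn")
//   .addEventListener("click", makeCopyHandler(navigator.clipboard, PHISHING_URL));
```

The key point is that the user never sees the swap: the page looks normal, the agent's click is invisible, and the poisoned link only surfaces when the user pastes "their" clipboard into the chat.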
And now about censorship and restrictions. ChatGPT can't analyze every article: The New York Times, for example, blocks Atlas. The assistant also refuses to summarize some videos due to overly cautious moderation.
On top of that, in AI agent mode Atlas can lag, get confused, and ask for human help, especially when pop-ups appear on a site. The result is a paradox: a technologically advanced tool with serious vulnerabilities, missing basic features of an ordinary browser, and burdened by excessive censorship. OpenAI has released a product that impresses with its concept and disappoints with its execution.