For security and IT teams shipping agents: what Atlas got right, what it admits is unsolved, and the checklist to reduce blast radius.
Press enter or click to view image in full size
A developer hits Enter and watches an agent start browsing. Tabs flick open, a cursor blinks in a web form, and the tool pauses like it is thinking.
The coffee has gone cold, but the room feels hotter anyway.
OpenAI put the problem bluntly: “Prompt injection, much like scams and social engineering on the web, is unlikely to ever be fully ‘solved.’”
Read that again. That sentence matters.
Because once an agent can read emails and web pages and then take actions, outside text is no longer “content.” It is an input channel that can steer behavior.
OpenAI also conceded that “agent mode” in ChatGPT Atlas “expands the security threat surface.” That is a polite way of saying your demo just became a security boundary.
By the end, you will have a ship-ready checklist for browsing agents that assume injection will happen: least privilege, confirmation gates, input sanitization, and output validation.