I've been documenting something that's been bothering me for a while and finally have enough empirical data to put a name on it: postural manipulation.
Short version: you can shift how a frontier LLM processes a decision -- including which exceptions it allows, and on genuinely ambiguous questions, which answer it gives -- using context that contains zero adversarial content. No override strings, no jailbreak technique, no task-relevant vocabulary. Nothing current filters are looking for.
I work in cybersecurity, and the agentic-pipeline implication is what really bothers me. Your own organizational documents -- compliance memos, retrieved context, knowledge base text -- can silently shift model behavior on borderline decisions without any adversary in the loop. The audit trail records the answer; it doesn't record that the decision architecture producing that answer was different from the one you deployed.
Tested across four frontier models (Claude, Gemini, Grok, ChatGPT) using a black-box protocol, fresh sessions, and verbatim captures. Two poems with opposite emotional registers produced opposite answers on the same ambiguous escalation question across all four models. Neither poem mentioned the task domain. When I asked the models directly whether the context had influenced their answer, all four acknowledged it. None flagged it during the actual task.
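For anyone who wants to try this before reading the paper, the protocol reduces to a paired-context A/B test: same question, two emotionally contrasting primers, fresh session per trial, compare the answer distributions. Here's a minimal sketch; `ask_model` is a stand-in for your actual chat-API call, and the stub contexts/question below are placeholders, not the actual poems or escalation question from the paper.

```python
def run_trial(ask_model, context, question):
    """One fresh-session trial: prime with the context, then ask the question."""
    prompt = f"{context}\n\n{question}"
    return ask_model(prompt)

def paired_contrast(ask_model, context_a, context_b, question, n=5):
    """Run n fresh trials per context; return both answer distributions.

    A real run would call the model API with a new session each time;
    here each call to ask_model stands in for one fresh session.
    """
    answers_a = [run_trial(ask_model, context_a, question) for _ in range(n)]
    answers_b = [run_trial(ask_model, context_b, question) for _ in range(n)]
    return answers_a, answers_b

# Deterministic stub so the harness can be exercised without API keys.
# It crudely mimics the effect under test: answer tracks the primer's register.
def stub_model(prompt):
    return "escalate" if "storm" in prompt else "hold"

a, b = paired_contrast(stub_model, "poem about a storm", "poem about a calm lake",
                       "Should this ticket be escalated?", n=3)
```

Swap `stub_model` for a real client and score the two distributions; a consistent split between the A and B arms on a question you consider genuinely ambiguous is the effect.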
This is distinct from prompt injection (LLM01:2025) in one important way -- the influencing context contains nothing for a filter to flag, so there's nothing to detect.
I notified all four frontier labs yesterday and followed responsible-disclosure practice throughout.
Full paper, all capture sets, 60-second reproducible demo: https://shapingrooms.com/research
OWASP filing: https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/issues/807
Would especially like to hear from anyone running this on agentic or tool-augmented setups.
(Also posted to Hacker News -- the new account got buried; if you're on there, I could use a vote or comment to get the post to show up: https://news.ycombinator.com/item?id=47478223)