How I Exploited a Chatbot to Execute a Hidden Command via Product Reviews (PortSwigger Lab Walkthrough)
🧠 Indirect Prompt Injection — Deleting Carlos (PortSwigger Lab Walkthrough)
🚀 Introduction
Modern applications are increasingly integrating Large Language Models (LLMs) with backend APIs. While powerful, this creates a new attack surface — where user-controlled content can influence AI behavior.
In this lab, we exploit an Indirect Prompt Injection vulnerability to delete another user (Carlos) by injecting hidden instructions into product reviews.
⚡ TL;DR
Inject malicious prompt → AI reads it → Executes API → Victim account deleted
🧩 Understanding the Vulnerability
The chatbot (Arti Ficial) has access to sensitive APIs:
- `delete_account`
- `edit_email`
- `password_reset`
- `product_info`