New attack on ChatGPT research agent pilfers secrets from Gmail inboxes
ShadowLeak starts where most attacks on LLMs do—with an indirect prompt injection. These prompts are tucked inside content such as documents and emails sent by untrusted people. They contain instructions to perform actions the user never asked for, and like a Jedi mind trick, they are tremendously effective in persuading the LLM to do things… Read More »