Sebastian Schrittwieser: Talk at PriSec 2023

Sebastian Schrittwieser gave a talk on prompt injection attacks on Large Language Models (e.g. ChatGPT)

Sebastian Schrittwieser gave a talk on prompt injection attacks on Large Language Models such as ChatGPT at the PriSec 2023 organized by BusinessCircle. Sebastian demonstrated how prompt injections can be used to steal confidential data from the LLM's context and even exfiltrate data from internal systems such as databases.