Hacking Chat GPT

Data:

Luogo:

Hörsaal SLF & Zoom

Organizzato da:

SLF

Relatore/relatrice:

Matthias Gerber

Lingua:

English

Tipo di evento:

Presentazioni e colloqui

Pubblico principale:

Everybody interested in this topic

ChatGPT and other large language models get integrated in our daily work more and more. They support us in writing code and emails, searching the internet and many other things.  In the future they might even operate our kitchen, producing our dinner, based on a prompt we sent them from work. To do so, they interact with application and infrastructure we are using. Hereby the separation between data and code is blurring. Prompts to LLM-Integrated-Applications can trigger actions. Great! But what if it is not the user prompting ....

Based on a paper* by researchers from the CISPA Helmholtz Center for Information Security, the Saarland University and others, possible attack vectors for LLM-Integrated-Applications will be explained.

The talk will be in English.

* ”Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection”.

Zoom-Link: https://wsl.zoom.us/j/68443014482?pwd=ck1jZ1lDUlhTd2prS1ovU29zcFZMZz09

×