Compromising LLMs: The Advent of AI Malware
Black Hat Black Hat
223K subscribers
6,020 views
0

 Published On Jan 29, 2024

We'll show that prompt injections are more than a novelty or nuisance- in fact, a whole new generation of malware and manipulation can now run entirely inside of large language models like ChatGPT. As companies race to integrate them with applications of all kinds we will highlight the need to think thoroughly about the security of these new systems. You'll find out how your personal assistant of the future might be compromised and what consequences could ensue.

By: Sahar Abdelnabi , Christoph Endres , Mario Fritz , Kai Greshake , Shailesh Mishra

Full Abstract and Presentation Materials: https://www.blackhat.com/us-23/briefi...

show more

Share/Embed