In this episode of Practical AI, Daniel Whitenack and Chris Benson interview Donato Capitella, a principal security consultant, about the evolving landscape of AI security, particularly around agentic workflows and the vulnerabilities they introduce. Donato discusses the growing adoption of agentic AI in enterprises, highlighting use cases such as customer support automation, and emphasizes the critical need for robust authorization and access control to prevent exploits. The conversation covers the risks of integrating multiple data sources into LLM contexts, potential real-world attacks, and the importance of shifting the cybersecurity mindset toward system designs that assume prompt injection and jailbreaking cannot be fully solved. Donato also introduces Spikee, an open-source tool for evaluating and exploiting vulnerabilities in LLM applications, and shares insights on design patterns for securing LLM agents against prompt injection.