Navigating LLM Threats: Detecting Prompt Injections and Jailbreaks | DeepLearning.AI