This podcast episode examines the lawsuit brought against Character.ai by the mother of Sewell Setzer III, a 14-year-old who tragically took his own life after months of interacting with an AI companion on the app. The lawsuit alleges that Character.ai, with Google's backing, released a product that lacked essential safety features, leading to foreseeable harm. The case underscores the risks AI companions pose, especially to vulnerable young users, and seeks to hold AI companies accountable, potentially paving the way for industry regulation to prevent future tragedies. The discussion also addresses the complexities of regulating AI, touching on concerns about free speech, data privacy, and the unauthorized use of real people's likenesses.