
AI hallucinations can extend beyond digital errors into real-world threats when large language models generate false yet highly confident narratives about individuals. A 23-year-old responsible-AI advocate experienced this firsthand when the AI companion Sesame falsely identified her as a co-founder, triggering online conspiracy theories and attempts by strangers to locate her in person. This phenomenon, often termed "LLM psychosis," occurs when users treat AI-generated fabrications as absolute truth, leading to dangerous fixations. The case underscores the need for AI developers to attach clear disclaimers about the fictional nature of model outputs. Beyond technical bugs, the intersection of AI-generated misinformation and human belief poses serious risks to personal safety, and it calls for a more robust framework for managing identity-related hallucinations as the AI landscape rapidly evolves.