In this episode of Efficiency Unlocked, hosts Drs. Adam Carewe and Dale Gold interview Ruben Amarasingham, CEO and co-founder of Pieces, about a paper Pieces published on a system to classify, detect, and prevent hallucinatory errors in clinical summarization. Ruben walks through Pieces' framework for identifying and mitigating hallucinations in AI-generated clinical summaries, emphasizing transparency and ongoing monitoring. He explains how Pieces developed a risk severity system and a software platform called SafeRead to measure and address hallucinations, highlighting an adversarial AI system that learns from human reviews to improve accuracy. The conversation also touches on the Texas Attorney General's interest in AI accuracy, the potential for other organizations to adopt SafeRead, and future applications of AI in patient communication and healthcare.