This Gartner ThinkCast episode features Chris Howard discussing the challenges and advancements in AI, with a particular focus on hallucinations. Howard explains how hallucinations arise: these systems predict what comes next and fill gaps with plausible but potentially incorrect information, as in the example of AI generating obituaries for living analysts. He then explores methods for reducing these inaccuracies, such as constraining training data, applying filters, and employing multi-agent systems that debate and refine outputs. Howard also introduces physics-informed neural networks (PINNs) as a more sophisticated way to constrain the decision space. He emphasizes the importance of AI-ready data and of understanding the problem space before applying AI, noting that hallucinations can sometimes offer useful new perspectives. The episode concludes with a discussion of Gartner's efforts to address these challenges through its events and research areas.
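The episode only names physics-informed neural networks as an approach; the details below are not from the discussion. As a minimal sketch of the general PINN idea, assuming PyTorch and a stand-in governing equation (dy/dx = -y with y(0) = 1), the network is penalized whenever its output violates the known physics, which constrains the space of answers it can produce:

```python
import torch
import torch.nn as nn

# Hypothetical toy model; the episode does not specify an architecture.
class TinyPINN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 32), nn.Tanh(),
            nn.Linear(32, 32), nn.Tanh(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x)

model = TinyPINN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    # Collocation points where the physics constraint is enforced.
    x = torch.rand(64, 1, requires_grad=True)
    y = model(x)

    # Residual of the assumed governing equation dy/dx + y = 0.
    dy_dx = torch.autograd.grad(
        y, x, grad_outputs=torch.ones_like(y), create_graph=True
    )[0]
    physics_loss = torch.mean((dy_dx + y) ** 2)

    # Boundary condition y(0) = 1.
    x0 = torch.zeros(1, 1)
    boundary_loss = (model(x0) - 1.0).pow(2).mean()

    # Total loss mixes data/boundary fit with the physics penalty,
    # so outputs that contradict the physical law are discouraged.
    loss = physics_loss + boundary_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same pattern generalizes: whatever domain knowledge can be expressed as an equation becomes an extra loss term, narrowing the model's decision space in the spirit Howard describes.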