The podcast explores the capabilities and limitations of large language models (LLMs) with guest Gary Marcus, a long-time AI researcher and critic of LLMs. Marcus argues that LLMs, which fundamentally predict the next item in a sequence, are glorified memorization machines prone to "hallucinations," that is, confidently fabricating information. He contends that the AI community's focus on scaling LLMs is misguided, because these models lack the abstract reasoning and understanding of the world needed for true artificial general intelligence. Marcus points out that LLMs struggle with novel situations and can undermine institutions by spreading incorrect information. He advocates for intellectual diversity in AI research, including the integration of classical symbolic AI and the development of "world models" that represent real-world knowledge and relationships.