
In this episode of the Derby Mills Series, Ajay Agrawal, joined by Rich Sutton, Sendhil Mullainathan, Niamh Gavin, and Suzanne Gildert, delves into the debate over the bitter lesson in AI, sparked by Rich's recent podcast appearance. The discussion contrasts reinforcement learning (RL) with large language models (LLMs), asking whether LLMs, which mimic human knowledge, can truly achieve understanding and scale. The panel examines the limitations of relying solely on human data and fine-tuning, suggesting that systems capable of learning from experience and adapting to unforeseen complexities may ultimately dominate. They also touch on the sociological and economic factors shaping AI research, the importance of mechanistic interpretability, and the potential for LLMs to remain valuable even if they never achieve general intelligence.