
In this interview, Dwarkesh explores the differences between the human brain and LLMs with Adam Marblestone. They discuss the role of loss functions, omnidirectional inference, and the brain's steering subsystem. Marblestone introduces Steven Byrnes' theories on how the brain encodes high-level desires and grounds them in primitive rewards. The conversation covers amortized inference, the genome's limited capacity to specify brain wiring, and the potential of multi-agent scaling. They also touch on the role of hardware, the challenges of continual learning, and the promise of probabilistic programming languages. Marblestone emphasizes the need for more neuroscience research to understand the brain's algorithms and architectures.