This episode explores the capabilities and limitations of current large language models (LLMs), particularly their difficulty making connections across disparate fields of knowledge, and what rapidly advancing artificial general intelligence (AGI) implies for career choices. The discussion opens with the host's new book, "The Scaling Era," which compiles insights from interviews with AI researchers, CEOs, and scholars across various disciplines, and aims to address fundamental questions about the nature of intelligence, the economic impact of AI, and the prospects for superintelligence.

From there, the conversation pivots to the combinatorial attention challenge in LLMs: despite possessing vast knowledge, they fail to make novel connections between seemingly unrelated pieces of information. Researchers suggest that current pre-training objectives don't incentivize this kind of creative connection-making, and that reinforcement learning (RL) methods may be needed. The discussion also draws an analogy between LLMs and individuals with exceptional memorization abilities but limited capacity for generalization or social interaction, suggesting that current models may be "idiot savants." The case of Kim Peek, who possessed an encyclopedic memory but lacked social skills, illustrates the point.

As the discussion broadens, the hosts offer career advice for a college-bound 17-year-old, emphasizing deep technical knowledge and the ability to manage AI-powered teams in the near future. For aspiring professionals, this means building fundamental knowledge and mental models while leveraging AI tools to learn efficiently and to identify where AI excels and where it falls short.