In this interview, John Schulman discusses the early days of OpenAI, including the "ragtag" nature of the team and a failed project called Universe. He also reflects on what makes an ideal research manager, how Google Brain and DeepMind served as inspirations for OpenAI, and how research environments like early OpenAI and Thinking Machines compare. Schulman explores the role of value functions in RL, continual learning, and brittle generalization. He also shares his thoughts on co-training models, his personal AI usage, and his research process. Finally, he discusses the evolving skill set needed for effective research, the rate at which consequential ideas are generated, coordination among AI labs, AGI timelines, and Thinking Machines' Tinker and future plans.