
The podcast features a talk on compositional generalization in minds and machines: how humans and AI models understand and produce novel combinations of known components. The talk has three parts: whether modern AI models exhibit systematic compositionality, how people make compositional generalizations, and a neural network designed to make compositional generalizations. The speaker introduces the SCAN challenge for testing compositional learning in AI models, highlighting how they struggle to generalize to new primitives and to longer action sequences. This is contrasted with human abilities, through experiments that reveal the inductive biases people bring to learning new concepts. A neural network approach, meta sequence-to-sequence (meta seq2seq) learning, is then introduced to improve compositional generalization by meta-learning with a structured, memory-augmented neural network. The podcast concludes with a Q&A session covering the human experiments, inductive biases, and neuro-symbolic techniques.
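For context on the benchmark discussed in the talk: SCAN pairs simple natural-language commands with action sequences, and the compositional test is whether a learner that has seen a primitive like "jump" only in isolation can execute composed commands like "jump twice". Below is a minimal sketch of a SCAN-style interpreter covering a small illustrative subset of the grammar (the word lists and structure here are an assumption for illustration, not the official SCAN dataset generator):

```python
# Minimal sketch of a SCAN-style command interpreter.
# Covers only a small illustrative subset of the grammar.
PRIMITIVES = {"walk": "WALK", "run": "RUN", "look": "LOOK", "jump": "JUMP"}

def interpret(command: str) -> list[str]:
    """Map a command like 'jump twice' to its action sequence."""
    words = command.split()
    actions = []
    i = 0
    while i < len(words):
        word = words[i]
        if word in PRIMITIVES:
            acts = [PRIMITIVES[word]]
            # A following modifier repeats the action.
            if i + 1 < len(words) and words[i + 1] in ("twice", "thrice"):
                acts *= 2 if words[i + 1] == "twice" else 3
                i += 1
            actions.extend(acts)
        elif word == "and":
            pass  # 'and' sequences the two sub-commands in order
        i += 1
    return actions

# The compositional-generalization test: a learner trained on 'jump'
# alone is asked to execute composed commands such as these.
print(interpret("jump twice"))           # ['JUMP', 'JUMP']
print(interpret("walk and jump twice"))  # ['WALK', 'JUMP', 'JUMP']
```

Humans given the meaning of a new primitive readily produce such compositions, while standard sequence-to-sequence models trained on SCAN splits of this kind often fail, which is the gap the meta seq2seq approach targets.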