The podcast explores categorical deep learning as a potential unifying framework for neural networks, addressing the lack of theoretical foundations for deep learning architectures. It highlights how category theory can bridge the gap between constraints and implementation, offering a universal-algebra perspective within a two-category of parametric maps, with geometric deep learning recovered as a special case of this broader framework. The discussion covers the use of two-morphisms to model weight tying and the application of categorical principles to recursion and non-invertible computation. Andrew Dudzik explains category theory using the analogy of algebra with colors, while Petar Veličković discusses geometric deep learning and its limitations regarding non-invertible computations. The conversation also touches on the potential of building CPUs within neural networks by exploiting geometric subtleties like the Hopf fibration.
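As a rough intuition (not taken from the episode itself), a parametric map can be thought of as a function that receives both a parameter object and an input, and weight tying then amounts to reusing the same parameter object across several such maps. The sketch below is a minimal, hypothetical illustration of that idea in plain Python; the `ParametricMap` class, `linear` function, and all other names are assumptions made for this example, not anything defined by the speakers.

```python
import numpy as np

# A "parametric map" pairs a parameter object with a function (params, x) -> y,
# loosely mirroring the informal idea of morphisms in a category of parametric maps.
class ParametricMap:
    def __init__(self, params, fn):
        self.params = params
        self.fn = fn

    def __call__(self, x):
        return self.fn(self.params, x)

    def then(self, other):
        # Sequential composition: the composite's parameter object is the
        # pair of the two underlying parameter objects.
        return ParametricMap(
            (self.params, other.params),
            lambda p, x: other.fn(p[1], self.fn(p[0], x)),
        )

# A simple linear layer expressed as a parametric map.
def linear(params, x):
    W, b = params
    return W @ x + b

rng = np.random.default_rng(0)
shared = (rng.normal(size=(4, 4)), np.zeros(4))  # one shared parameter object

# "Weight tying" in this toy setting: two layers reuse the same parameters,
# much as a convolution applies the same filter at every position.
layer_a = ParametricMap(shared, linear)
layer_b = ParametricMap(shared, linear)

x = rng.normal(size=4)
y = layer_a.then(layer_b)(x)  # apply the two tied layers in sequence
print(y.shape)  # (4,)
```

In this toy picture, a gradient update to `shared` would affect both layers at once, which is the practical content of tying weights; the categorical treatment discussed in the episode makes such sharing part of the structure rather than an implementation detail.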