In this episode of the Practical AI podcast, Daniel Whitenack and Chris Benson interview Patrick Foley, Lead AI Architect at Intel, about federated learning. Foley describes federated learning as a technique in which models are sent to decentralized data sources for training, rather than moving the data itself, which addresses both privacy constraints and the impracticality of centralizing large datasets. He walks through the training process, highlighting the roles of the aggregator and the collaborators, and emphasizes the importance of data vetting and secure communication. The discussion covers real-world applications such as brain tumor segmentation using OpenFL, the types of models suited to federated learning (including neural networks and generative AI models), and the challenges of data heterogeneity and potential data leakage. The conversation also explores the future of federated learning, focusing on interoperability, governance, and the growing importance of privacy and intellectual property protection in collaborative AI environments.
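
As a rough illustration of the aggregator/collaborator loop Foley describes, here is a minimal federated-averaging sketch in Python with NumPy. This is not OpenFL code; the names (`local_train`, `collaborators`, `global_w`) are hypothetical and the model is a toy linear regression, but the shape of the protocol is the same: the aggregator ships weights out, each collaborator trains on data that never leaves its site, and only the updated weights come back to be averaged.

```python
# Hypothetical federated-averaging (FedAvg) sketch; not the OpenFL API.
# The aggregator distributes model weights to collaborators, each trains
# on its own private shard, and only weight updates return for averaging.
import numpy as np

rng = np.random.default_rng(0)

def local_train(weights: np.ndarray, X: np.ndarray, y: np.ndarray,
                lr: float = 0.1, epochs: int = 5) -> np.ndarray:
    """One collaborator's round: gradient steps on a linear model,
    using data that never leaves the collaborator's site."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

# Each collaborator holds a private data shard; the aggregator never sees it.
true_w = np.array([3.0, -2.0])
collaborators = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    collaborators.append((X, y))

# Aggregator loop: send the global model out, average the returned weights.
global_w = np.zeros(2)
for round_num in range(10):
    local_ws = [local_train(global_w, X, y) for X, y in collaborators]
    global_w = np.mean(local_ws, axis=0)
    print(f"round {round_num}: w = {global_w.round(3)}")
```

Running the sketch shows `global_w` converging toward `true_w` even though no collaborator's raw data is ever transmitted, which is the core privacy argument made in the episode. A production system such as OpenFL additionally handles the secure communication and participant vetting that Foley emphasizes.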