In the first part of their series on generative AI for healthcare, Dong-han Yao and Shivam Vedak, both clinical informaticists and physicians at Stanford, aim to give healthcare professionals a foundational understanding of large language models (LLMs) and their applications in medicine. They discuss the difficulty of navigating the AI literature, the lack of healthcare-specific resources, and the rapid pace of AI development. The episode outlines three epochs of AI in healthcare: rules-based AI, machine learning, and generative AI, detailing their distinct inputs, outputs, and use cases. The hosts then explain how LLMs work, covering tokenization, static embeddings, and the transformer architecture with self-attention, and how these models are trained through pre-training, supervised fine-tuning, and reinforcement learning from human feedback (RLHF). The episode concludes by describing the physical form of an LLM and its potential to transform productivity and problem-solving in healthcare.
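The self-attention mechanism mentioned above can be sketched in a few lines. The following is a minimal, illustrative example (not from the episode): it represents a short sequence of tokens as embedding vectors, projects them into queries, keys, and values, and mixes each token's representation with the others according to scaled dot-product attention scores. All names and dimensions here are hypothetical.

```python
import math
import random

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def self_attention(X, Wq, Wk, Wv):
    """One head of scaled dot-product self-attention.

    X is a (seq_len x d_model) matrix of token embeddings; Wq, Wk, Wv
    project it into queries, keys, and values.
    """
    Q, K, V = matmul(X, Wq), matmul(X, Wk), matmul(X, Wv)
    d = len(K[0])
    # Score each query against every key, scaled by sqrt(d)
    scores = [[sum(q * k for q, k in zip(qrow, krow)) / math.sqrt(d)
               for krow in K] for qrow in Q]
    # Normalize each row into attention weights that sum to 1
    weights = [softmax(row) for row in scores]
    # Each output token is a weighted mix of all value vectors
    return matmul(weights, V)

# Toy example: 4 "tokens" with 8-dimensional embeddings
random.seed(0)
seq_len, d_model = 4, 8
rand_matrix = lambda r, c: [[random.gauss(0, 1) for _ in range(c)]
                            for _ in range(r)]
X = rand_matrix(seq_len, d_model)
out = self_attention(X,
                     rand_matrix(d_model, d_model),
                     rand_matrix(d_model, d_model),
                     rand_matrix(d_model, d_model))
print(len(out), len(out[0]))  # one context-mixed vector per input token
```

In a real transformer this runs over many heads and layers, with learned projection weights rather than random ones, but the core idea is the same: each token's output is a weighted combination of every token's value vector, with the weights computed from query-key similarity.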