This is the fifth iteration of CS25, a class on Transformers that invites leading researchers to speak on state-of-the-art topics. The instructors, Steven, Curran, Chelsea, and Jenny, introduce themselves and their research interests and explain the course logistics, including the new website and the Zoom link for attendees not affiliated with the university. The opening lecture covers the basics of Transformers, pre-training data strategies, post-training, and applications across language, vision, biology, and robotics. Topics include word embeddings, self-attention, positional encodings, chain-of-thought reasoning, reinforcement learning from human feedback (RLHF), and self-improving AI agents. Vision Transformers and their applications in neuroscience are also discussed. The lecture concludes with a discussion of the future of transformer models, including challenges such as computational complexity, human controllability, and the need for continual learning.
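The self-attention mechanism listed among the topics can be sketched in a few lines. The following is a minimal illustration (single head, no masking, NumPy, randomly initialized weights), not code from the lecture itself:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (seq_len, d_model)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    # Similarity of every token with every other token, scaled by sqrt(d_k)
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over the key dimension so each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output token is a weighted mixture of the value vectors
    return weights @ v

rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))  # 4 tokens, 8-dimensional embeddings
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8): one contextualized vector per input token
```

Because the attention weights depend only on pairwise dot products, the operation is permutation-invariant, which is why the positional encodings mentioned above are needed to give the model any notion of token order.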