This podcast episode explores the impact of deep learning, particularly recurrent neural networks (RNNs), on natural language processing (NLP). It discusses feature engineering versus feature learning in deep learning, as well as the distinction between sequence and non-sequence models in NLP. The episode delves into the inner workings of RNNs, highlighting their ability to process sequential data and perform language modeling, and notes the similarities between RNNs and hidden Markov models (HMMs). It then introduces sequence-to-sequence RNNs, using a crime-scene-investigation analogy to explain their role in NLP tasks. The importance of representing words as vectors, particularly word embeddings, is discussed, along with the Word2Vec model for creating them. The episode concludes with suggested resources for further learning about RNNs and deep NLP.
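The episode's central idea, that an RNN processes sequential data by carrying a hidden state from one step to the next while reusing the same weights, can be sketched in a few lines. This is a minimal illustrative example, not code from the episode; the names (`rnn_step`, `W_xh`, `W_hh`) and dimensions are assumptions chosen for clarity.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent update: new hidden state from the current input
    and the previous hidden state (vanilla tanh RNN cell)."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5

# Small random weights; in practice these would be learned.
W_xh = 0.1 * rng.normal(size=(input_dim, hidden_dim))
W_hh = 0.1 * rng.normal(size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                      # initial hidden state
sequence = rng.normal(size=(seq_len, input_dim))

for x_t in sequence:                          # same weights reused at every step
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)

print(h.shape)  # (4,)
```

Because the final `h` depends on every input in order, it acts as a running summary of the sequence, which is what makes RNNs suitable for language modeling.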