This co-hosted episode of Practical AI focuses on DeepSeek R1, a large language model (LLM) from a Chinese startup. The hosts discuss the model's surprisingly low reported training cost (around $5-6 million), its performance comparable to OpenAI's o1, and the ensuing debate about what those claims imply. They analyze the narratives surrounding DeepSeek, addressing concerns about data security and about potential biases stemming from its training data and Chinese origin. The hosts also delve into the model's technical side, including its architecture (similar to LLaMA, but with mixture-of-experts layers in place of some feed-forward blocks) and the range of versions available on Hugging Face, from small distilled variants that can run on a laptop to the full model, which requires substantial GPU resources. The discussion closes with predictions about DeepSeek's impact on the AI community, suggesting a shift toward model optionality and a sharper enterprise focus on data security and robust model deployment.
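
To make the mixture-of-experts mention concrete, here is a minimal, self-contained PyTorch sketch of a top-k routed MoE feed-forward layer. It is illustrative only: the layer sizes, expert count, and routing scheme below are assumptions chosen for clarity, and the real DeepSeek R1 architecture is considerably more elaborate.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    """Toy top-k mixture-of-experts feed-forward layer (illustrative sizes)."""

    def __init__(self, d_model=64, d_ff=256, n_experts=8, k=2):
        super().__init__()
        # Each expert is an ordinary feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        # The router scores each token against every expert.
        self.router = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):  # x: (tokens, d_model)
        scores = self.router(x)                     # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)  # each token picks k experts
        weights = F.softmax(weights, dim=-1)        # normalize over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e            # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(ToyMoELayer()(x).shape)  # torch.Size([10, 64])
```

The point of the design is that each token activates only k of the n experts, so parameter count grows without a proportional increase in compute per token.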
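
Similarly, a sketch of loading one of the smaller distilled variants locally with the Hugging Face transformers library; the model ID below reflects DeepSeek's published repository, but availability, naming, and hardware requirements should be verified, and the snippet assumes torch and transformers are installed.

```python
# Minimal sketch: run a small distilled DeepSeek R1 variant on local hardware.
# Assumes `pip install torch transformers`; model ID may change over time.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small enough for a laptop
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Briefly explain why mixture-of-experts layers reduce compute per token."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```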