In this episode of Unsupervised Learning, Simon Eskildsen, CEO of TurboPuffer, discusses the evolution of vector databases and their role in AI applications. He explains how TurboPuffer addresses the challenge of connecting large amounts of data to LLMs, emphasizing scale, cost, recall, ACLs, and performance (SCRAP). Simon details TurboPuffer's object-storage-based architecture, its trade-offs, and its suitability for search-intensive workloads. He also shares common use cases, such as code search in Cursor, Q&A features in Notion, and similarity search in Linear, and touches on the future of AI infrastructure, focusing on simplicity, reliability, and the need for a new storage architecture.