
The podcast is an educational session on agent memory and its role in improving the reliability of AI agents and RAG pipelines. The speaker discusses the limitations of relying solely on large language models (LLMs), emphasizing that context is not memory and that deliberate memory engineering is crucial for reliable, long-horizon agent behavior. The session covers the anatomy of an agent, three application modes (assistant, workflow, and deep research), context engineering techniques, and the hierarchy of memory types (short-term, long-term, and shared memory). The speaker also touches on evaluating memory systems and common pitfalls, advocating a shift toward designing systems that accumulate experience and so become more intelligent agents over time.
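The podcast does not include code, but the memory hierarchy it describes can be made concrete with a minimal Python sketch. The class and method names below (`MemoryHierarchy`, `consolidate`, `build_context`) are illustrative assumptions, not anything the speaker presents; the sketch only shows the separation of tiers and a simple context-engineering step.

```python
from collections import deque
from dataclasses import dataclass, field


@dataclass
class MemoryHierarchy:
    """Illustrative three-tier agent memory (hypothetical design, not from
    the podcast): a bounded short-term buffer for the current session, a
    persistent long-term store of distilled facts, and a shared store
    visible to other agents."""

    # Short-term: recent turns only; older turns fall out of the window.
    short_term: deque = field(default_factory=lambda: deque(maxlen=10))
    # Long-term: facts that survive across sessions, keyed by topic.
    long_term: dict = field(default_factory=dict)
    # Shared: writable by multiple cooperating agents.
    shared: dict = field(default_factory=dict)

    def observe(self, turn: str) -> None:
        """Record a conversation turn in short-term memory."""
        self.short_term.append(turn)

    def consolidate(self, key: str, fact: str) -> None:
        """Promote a distilled fact from the session into long-term memory."""
        self.long_term[key] = fact

    def build_context(self, query: str) -> str:
        """Context engineering: assemble a prompt from relevant memories
        rather than dumping everything into the context window."""
        relevant = [v for k, v in self.long_term.items() if k in query.lower()]
        recent = list(self.short_term)
        return "\n".join(relevant + recent + [query])


if __name__ == "__main__":
    mem = MemoryHierarchy()
    mem.observe("User: I prefer answers with code examples.")
    mem.consolidate("preferences", "User prefers answers with code examples.")
    print(mem.build_context("How do I set my preferences?"))
```

The key design point the sketch captures is the one the speaker stresses: context is not memory. Short-term memory is lossy by construction (a bounded buffer), so anything worth keeping must be explicitly consolidated into a durable store and selectively retrieved back into context.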