In this a16z AI podcast, the hosts revisit their 2023 discussion of who owns generative AI platforms, updating their conclusions in light of recent developments. They highlight a notable trend: open-source and closed-source large language models (LLMs) are beginning to converge in capability, driven largely by the abundance of massive public training datasets. The conversation breaks the generative AI landscape into three layers: infrastructure, currently led by NVIDIA but facing competition from Google's TPUs; models, many of which have reached commercial viability, with accessible products like ChatGPT broadening adoption; and applications, where success hinges on building unique workflows on top of existing models. The hosts also look ahead to multimodal models and the potential for innovation in areas such as music generation and education. One key takeaway is that the relationship between training data and model parameters has shifted dramatically, making earlier scaling laws less relevant.