The recap of the NeurIPS 2024 Latent Space Live mini-conference highlights this year's progress in open-source large language models (LLMs). Attendees noted a remarkable surge in both the number and capabilities of these models, fueled by community collaboration and the release of comprehensive "fully open" model recipes. Yet challenges persist: rising computational demands that limit access for researchers, shrinking availability of open training data as more websites block crawlers, and ongoing debates over the regulation and potential risks of open-source AI. Speakers emphasized the need for better incentives to sustain continued development and transparency in the field.