In this episode of Big Technology Podcast, Alex Kantrowitz interviews AI critic Gary Marcus about the diminishing returns of scaling generative AI models. Marcus argues that the AI industry is hitting the limits of scaling, as evidenced by the failure to produce a GPT-5 level model despite massive investments in data and compute. They discuss the implications for companies like OpenAI and NVIDIA, the potential for surveillance and hyper-targeted ads, and the risks associated with open-source AI. Marcus advocates for a neuro-symbolic approach to AI, combining the strengths of neural networks and classical AI, and expresses concerns about the reliability, interpretability, and potential misuse of current AI models.