This Freakonomics Radio interview focuses on the challenges artists face from AI art generation tools that use their work without permission. The episode features computer scientist Ben Zhao, who discusses his research into adversarial machine learning and the tools he has developed, Glaze and Nightshade, to mitigate the harms of AI models trained on scraped internet data. Zhao explains how these tools "poison" AI training data, subtly altering images to disrupt a model's ability to learn and reproduce artistic styles accurately. He argues that this approach raises the cost of training AI models, incentivizing companies to license artwork legitimately. The interview also touches on the broader ethical and economic implications of AI's rapid advancement and the power imbalance between large tech companies and individual artists.
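The core idea behind such image "cloaking" can be sketched in a few lines. The toy example below, which is not Zhao's actual algorithm, shows the general shape of the technique: a perturbation bounded by a small per-pixel budget (`epsilon`, a name chosen here for illustration) is added to an image, keeping the visible change small. Glaze and Nightshade instead optimize their perturbations against specific feature extractors; random noise is used here purely as a stand-in.

```python
import numpy as np

def cloak_image(pixels: np.ndarray, epsilon: float = 8.0, seed: int = 0) -> np.ndarray:
    """Add a small, bounded perturbation to an image's pixel values.

    Toy illustration only: real cloaking tools optimize the perturbation
    so the image's representation shifts in a model's feature space while
    the change stays visually subtle. Here the perturbation is random.
    """
    rng = np.random.default_rng(seed)
    # Random per-pixel perturbation bounded by +/- epsilon.
    delta = rng.uniform(-epsilon, epsilon, size=pixels.shape)
    # Clip so the result remains a valid 8-bit image.
    cloaked = np.clip(pixels.astype(float) + delta, 0, 255)
    return cloaked.astype(np.uint8)

# A flat gray 4x4 RGB "image" as a minimal input.
image = np.full((4, 4, 3), 128, dtype=np.uint8)
cloaked = cloak_image(image)
```

Because the change is capped at `epsilon` per pixel, the cloaked image looks essentially identical to a human viewer, which is what lets artists publish protected work without degrading it.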