This podcast episode examines the risks and challenges of deploying machine learning systems in welfare contexts, with a focus on predictive risk assessments. It raises concerns about the overhyping of welfare fraud as a problem, the lack of transparency and accountability in how these systems are developed, their ethical implications, and the biases that can produce discriminatory outcomes. Drawing on specific case studies, such as Rotterdam's fraud detection model, the episode illustrates the limitations of these systems and argues for a holistic approach to mitigating bias and ensuring ethical AI practice.