
In this episode of the GZERO World podcast, Ian Bremmer interviews Tristan Harris, a former Google ethicist and co-founder of the Center for Humane Technology, about the ethical implications and societal risks of artificial intelligence. Harris argues that AI is unlike any technology humanity has created, comparing its development to birthing an intelligent species, and contends that the race to reach artificial general intelligence is overshadowing ethical considerations. They discuss the differing approaches to AI in the West, where companies are focused on building superintelligence, and in China, where the priority is deploying AI for industrial and economic productivity. Harris raises concerns about the broad rollout of AI across society, which he links to problems such as AI psychosis and teen suicides, and advocates a more cautious approach centered on specific applications and on AI liability laws. They also touch on social media-style engagement incentives, the lack of testing for psychological impacts, and the need for government involvement and international agreements to manage the risks of AI.