
AI company leaders are shifting their public rhetoric, moving away from apocalyptic warnings about their products destroying the economy or humanity. This pivot follows intense pressure from Wall Street, declining public sentiment, and growing skepticism from journalists regarding the inevitability of an AI-driven job apocalypse. The initial tendency to frame AI as an existential threat originated from the "rationalist" and "X-risk" subcultures prevalent in San Francisco, where engineers obsessed over superintelligence and expected value calculations. For these leaders, adopting such dire language was less a strategic business move and more a reflection of their insular cultural environment. As these companies transition toward massive IPOs and broader market integration, they are increasingly forced to abandon this mission-driven, cultish discourse in favor of more conventional, responsible corporate communication.