Governing the AI value chain requires a multi-stakeholder approach that accounts for the distinct responsibilities of model developers, downstream modifiers, and application providers. The July 2024 CrowdStrike software update outage, estimated to have caused $5.4 billion in losses among Fortune 500 companies alone, illustrates the systemic risks inherent in complex technology supply chains. While current regulatory frameworks such as the EU AI Act attempt to distribute obligations across this chain, open legal questions remain about liability allocation and the enforcement of civil rights protections. Effective governance must move beyond existential "doomsday" scenarios to address concrete, present-day harms, such as algorithmic discrimination in hiring and healthcare. Furthermore, policymakers must balance the need for transparency and safety with the realities of open-source development, ensuring that regulatory burdens do not stifle innovation or fall disproportionately on small and medium-sized enterprises.