This episode explores the evolving landscape of AI risk management and governance, particularly in the context of the insurance industry and new global regulations. Against the backdrop of the EU AI Act's risk-based approach, the conversation examines the contrasting regulatory styles of the EU and the US, highlighting the EU's proactive legislation versus the US's reliance on litigation to set precedents. The discussion then turns to the practical implications for businesses operating internationally, emphasizing the need to adapt to diverse jurisdictional requirements. For instance, a case involving a Canadian airline whose ChatGPT-like chatbot provided inaccurate information illustrates the potential liabilities associated with AI hallucinations and the importance of defining risk tolerance. Pivoting to the changing nature of AI failures, the speakers note the heightened risk that accompanies the widespread adoption of generative AI and foundation models, including the potential for systematic discrimination. In contrast to the earlier trend of adopting large, general-purpose models, the conversation concludes by advocating for smaller, specialized models fine-tuned for specific tasks to mitigate risk and improve model stability. This shift reflects an emerging industry move toward a more nuanced understanding of AI's potential and its inherent risks.