In this episode of the AI Education Podcast, we explore the latest developments and research surrounding AI in education. Notable highlights include the Australian government's proposed safety standards for AI, which address the risks associated with AI detectors and grading algorithms. We also discuss the launch of STORM, an open-source AI research tool that turned out to be less effective than expected, and a troubling incident in Victoria, Australia, where a child protection worker accidentally revealed sensitive information while using ChatGPT.

The research covered in this episode delves into AI's role in lesson planning, emphasizing the importance of prompt engineering. We examine how AI's capabilities stack up against human intelligence: while it excels in certain areas, it falls short in others. We also address biases present in large language models and how students engage with AI for both intentional and incidental learning. The episode further highlights the diverse attitudes and levels of AI adoption among university staff in Australia, categorizing them as apostles, agnostics, or atheists.

Throughout the discussion, we stress the necessity for transparency and human oversight in high-risk AI applications, as well as the importance of understanding both the potential benefits and challenges that AI brings to education.