
The prestigious NeurIPS machine learning conference accepted over 50 research papers containing more than 100 hallucinated AI-generated citations, highlighting a growing integrity crisis in academic publishing. Alex Cui, CTO of GPTZero, explains that researchers face intense pressure to publish, since success can lead to $100 million in funding or elite job offers, and many are using AI to draft sections of their papers. The result is fabricated references to publications that do not exist and "zombie" authors such as "FirstName LastName." These errors often go undetected because the sheer volume of material reviewers must assess makes thorough human verification nearly impossible. Beyond simple inaccuracy, the hallucinations exhibit systematic biases, such as generating nonsensical strings of Chinese initials when fabricating non-Anglophone references. While conference organizers maintain that these errors do not necessarily invalidate the core research, the trend raises fundamental questions about the reliability of human oversight in an increasingly automated scientific landscape.