Still Putting Its Pants On—Catching and Catching Up with AI Confabulation in Health Science
Presenter: James Lawler, MD, MPH
Synopsis:
I recently asked ChatGPT to search the literature for epidemiology-related papers about physicians under-prescribing COVID-19 tests for children. In a fraction of a second, ChatGPT returned seven highly relevant citations, complete with authors, titles, journals, months, and years. There was only one problem: none of those citations was real. Winston Churchill supposedly said, “A lie gets halfway around the world before the truth has a chance to get its pants on.” The example above reflects an emerging but widely observed problem: large language models’ propensity to present incorrect or confabulated data cloaked in believable prose, compounding the already daunting problem of human-generated and propagated misinformation and disinformation. In this talk, I will discuss how accidental AI-generated misinformation may affect health sciences practice, education, and research.