Artificial intelligence (AI) has practically limitless applications in healthcare, ranging from auto-drafting patient messages in MyChart to optimizing organ transplantation and improving tumor removal accuracy. Despite their potential benefits to doctors and patients alike, these tools have been met with skepticism because of concerns about patient privacy, the possibility of bias, and device accuracy. In response to the rapidly evolving use and approval of AI medical devices in healthcare, a multi-institutional team of researchers from the UNC School of Medicine, Duke University, Ally Bank, Oxford University, Columbia University, and the University of Miami has been on a mission to build public trust and evaluate how exactly AI and algorithmic technologies are being approved for use in patient care.

Together, Sammy Chouffani El Fassi, an MD candidate at the UNC School of Medicine and research scholar at the Duke Heart Center, and Gail E. Henderson, PhD, professor in the UNC Department of Social Medicine, led a thorough analysis of clinical validation data for more than 500 medical AI devices, revealing that approximately half of the tools authorized by the U.S. Food and Drug Administration (FDA) lacked reported clinical validation data. Their findings were published in Nature Medicine. Although AI device manufacturers tout FDA authorization as proof of their technology's credibility, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data.