When human radiologists examine scans, they peer through the lens of decades of training. The road that ends with a physician interpreting, say, an X-ray runs from college through medical school to residency, and it includes thousands upon thousands of hours of academic and practical education, from studying for licensing exams to years spent as a resident. At present, the training pathway for artificial intelligence (AI) to interpret medical images is far more straightforward: show the AI medical images labeled with features of interest, such as cancerous lesions, in large enough quantities for the system to identify patterns that let it "see" those features in unlabeled images.

Despite more than 14,000 academic papers having been published on AI and radiology in the last decade, the results are middling at best. In 2018, researchers at Stanford discovered that an AI they trained to identify skin lesions erroneously flagged images that contained rulers, because most of the images of malignant lesions in its training data also contained rulers. "Neural networks easily overfit on spurious correlations," says Mark Yatskar, Assistant Professor in Computer and Information Science (CIS), referring to the AI architecture that emulates biological neurons and powers tools as varied as ChatGPT and image-recognition software.
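The ruler failure is a textbook case of shortcut learning, and it is easy to reproduce in miniature. The sketch below is a hypothetical illustration, not the Stanford model: it trains a simple perceptron on synthetic data in which a "ruler present" feature perfectly tracks the malignant label during training but is random at test time. The classifier latches onto the shortcut, so its accuracy collapses once the spurious correlation disappears.

```python
import random

random.seed(0)

def make_data(n, ruler_correlates):
    # Each example: features (lesion_texture, ruler_present) -> label (1 = malignant).
    # Texture is a weak, noisy signal; the ruler is the potential shortcut.
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        texture = label + random.gauss(0, 1.5)  # noisy real signal
        # In training data, rulers appear only alongside malignant lesions;
        # at test time they appear at random.
        ruler = float(label) if ruler_correlates else float(random.randint(0, 1))
        data.append(((texture, ruler), label))
    return data

def train_perceptron(data, epochs=20, lr=0.1):
    # Classic perceptron updates: adjust weights only on misclassified examples.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y) in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(data, w, b):
    correct = sum(
        1 for (x, y) in data
        if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y
    )
    return correct / len(data)

train = make_data(2000, ruler_correlates=True)
test = make_data(2000, ruler_correlates=False)
w, b = train_perceptron(train)
print("train accuracy:", accuracy(train, w, b))  # near-perfect via the shortcut
print("test accuracy:", accuracy(test, w, b))    # drops once rulers are random
```

Because the ruler feature alone separates the training set, the perceptron has no incentive to rely on the harder, noisier texture signal, which is exactly the shortcut behavior Yatskar describes.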

"Instead of how a human makes the decisions, it will take shortcuts." In a new paper, to be presented as a spotlight at NeurIPS 2024, Yatskar, together with Chris Callison-Burch, …