As artificial intelligence advances, its capabilities in real-world applications continue to reach new heights that may even surpass human expertise. In the field of radiology, where a correct diagnosis is crucial to ensure proper patient care, large language models such as ChatGPT could improve accuracy or at least offer a useful second opinion. To test this potential, a team led by graduate student Yasuhito Mitsuyama and Associate Professor Daiju Ueda at Osaka Metropolitan University's Graduate School of Medicine compared the diagnostic performance of GPT-4 based ChatGPT with that of radiologists on 150 preoperative brain tumor MRI reports.

Based on these daily clinical notes written in Japanese, ChatGPT, two board-certified neuroradiologists, and three general radiologists were asked to provide differential diagnoses and a final diagnosis. Their accuracy was then calculated against the actual diagnosis of the tumor after its removal. The results stood at 73% for ChatGPT, an average of 72% for the neuroradiologists, and an average of 68% for the general radiologists.
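As a rough illustration of how such an accuracy figure is derived (this is not the study's actual code, and the diagnoses below are hypothetical), a minimal Python sketch might compare each reader's final diagnosis against the post-surgical pathological diagnosis:

```python
# Minimal sketch: final-diagnosis accuracy as the fraction of cases where the
# predicted diagnosis matches the pathological diagnosis after tumor removal.
# All names and data here are illustrative, not taken from the study.

def final_diagnosis_accuracy(predicted, actual):
    """Return the fraction of cases where the final diagnosis matches pathology."""
    assert len(predicted) == len(actual), "one prediction per case"
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical toy data; the study used 150 preoperative MRI reports.
predicted = ["glioblastoma", "meningioma", "metastasis", "meningioma"]
actual    = ["glioblastoma", "meningioma", "metastasis", "schwannoma"]

print(f"Accuracy: {final_diagnosis_accuracy(predicted, actual):.0%}")  # 75%
```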

Additionally, ChatGPT's final diagnosis accuracy varied depending on whether the clinical report was written by a neuroradiologist or a general radiologist. The accuracy with neuroradiologist reports was 80%, compared to 60% when using general radiologist reports. "These results suggest that ChatGPT can be useful for preoperative MRI diagnosis of brain tumors," stated graduate student Mitsuyama.
