AI-powered apps offering medical diagnoses at the click of a button are often limited by biased data and a lack of regulation, leading to inaccurate and unsafe health advice, a new study found. McGill University researchers presented symptom data from known medical cases to two popular, representative apps to see how well they diagnosed the conditions. While the apps sometimes gave correct diagnoses, they often failed to detect serious conditions, according to findings published in the Journal of Medical Internet Research.
These missed diagnoses could delay treatment. The researchers identified two main issues with the health apps they studied: biased data and a lack of regulation.

Bias and the 'black box' phenomenon

The bias issue is known as the "garbage in, garbage out" problem.
"These apps often learn from skewed datasets that don't accurately reflect diverse populations," said Ma'n H. Zawati, lead author and Associate Professor in McGill's Department of Medicine. Because the apps rely on data from smartphone users, they tend to exclude lower-income individuals.
Race and ethnicity are also underrepresented in the data, said the authors. This creates a cycle: an app's assessments are based on a narrower group of users, which produces more biased results and potentially inaccurate medical advice. While apps often include disclaimers stating that they do not provide medical advice, Zawati argues that users, if they read these disclaimers at all, do not always interpret them as intended.