A recent study published in Radiology, the journal of the Radiological Society of North America (RSNA), has raised alarms: even experienced radiologists find it difficult to distinguish genuine X-rays from those created by artificial intelligence. The findings show just how realistic AI-generated medical images have become, and because these images are designed to look completely authentic, they raise serious concerns about the reliability of medical imaging.

Study Setup
The study involved 17 radiologists from 12 medical institutions in six countries, including the UK, US, and Germany. Their level of experience varied, ranging from early-career doctors to specialists with many years of practice.
They were presented with 264 X-ray images, half real and half AI-generated. Some of the fakes were created with tools such as ChatGPT, while others came from RoentGen, an open-source AI model developed by Stanford Medicine. Participants evaluated two separate, non-overlapping sets of images to keep the test fair and unbiased.
Accuracy Results
The results show that detecting fake X-rays is not a simple task. When radiologists were unaware that some images were AI-generated, only 41 percent correctly identified the fake ones. However, when they were told which images were fake, their accuracy rose to about 75 percent.
The performance varied widely between individuals, with some doing much better than others. Interestingly, radiologists specializing in musculoskeletal imaging tended to perform better than those in other areas.
AI systems were also tested, including models such as GPT-4o, GPT-5, Gemini 2.5 Pro, and Llama 4 Maverick. Their success was mixed, with accuracy ranging from just over 50 percent to nearly 90 percent. Even the AI models that had created some of the fake images could not reliably detect them.
Visual Signs
The research team points out that certain patterns may indicate an AI-generated image. These include unusually perfect visuals, such as overly smooth bones, highly symmetrical lungs, and unnaturally neat fractures. However, these signs are often very subtle and not always easy to spot, even for experienced professionals.
Potential Risks
The study highlights serious potential dangers if this technology is misused. Fake X-rays could be used in legal cases to support false claims or even inserted into hospital systems by hackers, leading to incorrect diagnoses and disrupted care. This poses challenges both in medicine and cybersecurity.
Future Steps
To tackle these risks, researchers suggest implementing stronger safeguards, such as invisible digital watermarks and secure verification systems connected to imaging equipment. They also emphasize the need for better training for healthcare professionals to recognize deepfakes.
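To make the watermarking idea concrete, here is a minimal sketch of one possible approach: hiding a keyed authentication tag in the least significant bits of an image's pixel data, so a hospital system could later check that a scan came from a trusted device. This is an illustrative toy, not the researchers' proposal; the key, metadata format, and function names are all hypothetical, and a real deployment would use robust watermarking or standardized digital signatures rather than fragile LSB embedding.

```python
import hmac
import hashlib

def embed_watermark(pixels: bytearray, tag: bytes) -> bytearray:
    """Write each bit of `tag` into the least significant bit of one pixel."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in tag for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small to hold the tag")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # clear LSB, then set it to the tag bit
    return out

def extract_watermark(pixels: bytes, tag_len: int) -> bytes:
    """Read the LSBs back out and reassemble the tag bytes."""
    bits = [p & 1 for p in pixels[: tag_len * 8]]
    return bytes(
        sum(bits[b * 8 + i] << i for i in range(8)) for b in range(tag_len)
    )

# Hypothetical verification scheme: the tag is an HMAC over study metadata,
# keyed with a secret held only by the imaging device.
key = b"device-secret"
meta = b"study:chest-xray|date:2025-01-01"
tag = hmac.new(key, meta, hashlib.sha256).digest()[:8]

image = bytearray(range(256)) * 4            # stand-in for raw 8-bit pixel data
marked = embed_watermark(image, tag)
recovered = extract_watermark(marked, 8)
print(hmac.compare_digest(recovered, tag))   # True: the image verifies
```

A tampered or wholly AI-generated image would lack a valid tag for the expected key, so the final check would fail. The trade-off is that LSB marks are destroyed by compression or re-rendering, which is one reason the researchers pair watermarking with verification systems tied to the imaging equipment itself.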
The research team has shared a dedicated dataset along with interactive tools to help with education. As AI technology keeps advancing, experts warn that more sophisticated fake images, including 3D scans like CT and MRI, may soon become a reality, making early preparation crucial.

