Abstract: Concern over the possible dangers of creating realistic-looking but artificial images, known as "deepfakes", has grown as deep learning techniques, especially Generative Adversarial Networks (GANs), continue to advance rapidly. Deepfake medical images pose a major threat to patient safety and the integrity of healthcare, as reliable and accurate imaging data is essential for diagnosis and treatment.

The goal of this project is to investigate a novel deepfake image detection system tailored to the medical field. We present a method to differentiate real medical images from artificial ones by utilising GANs, which are widely used for image synthesis. The proposed system combines convolutional neural networks (CNNs) with anomaly detection techniques to efficiently recognise and flag possibly altered medical images.
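The abstract does not specify the detector's architecture. As a rough illustration only, the sketch below shows one common way such a pipeline can be assembled: a small CNN classifier scores an image as real or synthetic, while a convolutional autoencoder trained on authentic scans supplies a reconstruction-error anomaly signal. All class names (FakeImageCNN, ConvAutoencoder, flag_image), layer sizes, and thresholds are hypothetical choices, not the authors' implementation.

```python
import torch
import torch.nn as nn


class FakeImageCNN(nn.Module):
    """Small CNN producing a logit for 'image is synthetic' (hypothetical)."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global average pooling
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


class ConvAutoencoder(nn.Module):
    """Autoencoder meant to be trained on authentic scans only; a high
    reconstruction error is treated as an anomaly cue for altered images."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, in_channels, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))


def flag_image(x, cnn, ae, p_thresh=0.5, err_thresh=0.02):
    """Flag an image if the CNN calls it fake OR the reconstruction error
    exceeds the anomaly threshold (both thresholds are illustrative)."""
    with torch.no_grad():
        p_fake = torch.sigmoid(cnn(x)).item()
        recon_err = torch.mean((ae(x) - x) ** 2).item()
    return p_fake > p_thresh or recon_err > err_thresh


if __name__ == "__main__":
    cnn, ae = FakeImageCNN(), ConvAutoencoder()
    scan = torch.rand(1, 1, 256, 256)   # placeholder grayscale scan
    print("flagged:", flag_image(scan, cnn, ae))
```

In this kind of design the two signals are complementary: the classifier learns GAN-specific artefacts from labelled real/synthetic examples, while the anomaly detector can react to manipulations it was never explicitly trained on.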

The findings of this study have important implications for preserving the accuracy of diagnostic processes, protecting patient safety, and upholding the integrity of medical imaging datasets. By addressing the unique challenges posed by deepfake medical images, our method advances safe and reliable practice in the medical domain.

Keywords: Deep learning, Machine learning, Generative Adversarial Networks (GANs), Medical Images


DOI: 10.17148/IARJSET.2024.11595
