The Ethical and Security Risks of Using Generative Artificial Intelligence in Medical Diagnostics: A Systematic Review

Ali Nourmohammadi ℗, Omid Torabi, Leila Ghanbari-Afra *


Code: G-1949



Tag: Health Policy, Law & Management in AI


Abstract

Background & Aims: The integration of generative artificial intelligence (GAI) into medical diagnostics promises greater efficiency and accuracy, but its deployment raises substantial ethical and security concerns. This review systematically assesses the current literature on the ethical and cybersecurity risks of applying GAI in medical diagnostic systems, focusing on data privacy, algorithmic bias, informed consent, accountability, and system vulnerability.

Method: A systematic review was conducted following PRISMA guidelines. PubMed, Scopus, IEEE Xplore, and Web of Science were searched for peer-reviewed articles published between January 2018 and February 2025, using combinations of the terms "generative AI", "medical diagnostics", "ethics", "privacy", "bias", and "cybersecurity". Of 1,214 studies initially identified, 64 met the inclusion criteria after full-text screening. Data extraction focused on reported ethical concerns, security vulnerabilities, and mitigation strategies.

Results: Among the 64 included studies, the most frequently cited ethical risks were algorithmic bias (76.6%), lack of transparency (68.8%), and inadequate informed consent mechanisms (54.7%). Security risks included susceptibility to adversarial attacks (63%), data leakage during model training (58%), and misuse of synthetic medical data (45%). Thirty-four studies (53%) reported real-world cases or simulations in which AI systems generated false or misleading diagnoses owing to training-data bias or adversarial manipulation. Notably, 41% of studies found current regulatory frameworks insufficient to keep pace with the rapid development of GAI in clinical settings, and only 18 studies (28%) proposed concrete governance or technical mitigation strategies.
Conclusion: The use of generative AI in medical diagnostics presents significant ethical and security challenges that are inadequately addressed by current systems. As GAI adoption accelerates, there is an urgent need for robust policy frameworks, transparency standards, and AI governance models tailored to the medical domain. Multi-stakeholder collaboration, including ethicists, technologists, clinicians, and policymakers, is essential to ensure that GAI is deployed responsibly in healthcare.

Keywords

Generative AI, Medical Diagnostics
