Ethical and Security Risks of Using Generative Artificial Intelligence in Medical Diagnostics: A Systematic Review

Ali Nourmohammadi ℗, Omid Torabi, Leila Ghanbari-Afra *

Code: G-1949

Tag: Policy-making, legislation, and health management in the field of artificial intelligence

Abstract

Background & Aims: The integration of generative artificial intelligence (GAI) in medical diagnostics promises increased efficiency and accuracy. However, its deployment raises substantial ethical and security concerns. This review aims to systematically assess the current literature on the ethical and cybersecurity risks associated with the application of GAI in medical diagnostic systems, with a focus on data privacy, algorithmic bias, informed consent, accountability, and system vulnerability.

Method: A systematic review was conducted following PRISMA guidelines. Databases including PubMed, Scopus, IEEE Xplore, and Web of Science were searched for peer-reviewed articles published between January 2018 and February 2025. Search terms included combinations of "generative AI", "medical diagnostics", "ethics", "privacy", "bias", and "cybersecurity". A total of 1,214 studies were initially identified, and 64 studies met the inclusion criteria after full-text screening. Data extraction focused on reported ethical concerns, security vulnerabilities, and mitigation strategies.

Results: Among the 64 included studies, the most frequently cited ethical risks were algorithmic bias (76.6%), lack of transparency (68.8%), and inadequate informed consent mechanisms (54.7%). Security risks included susceptibility to adversarial attacks (63%), data leakage during model training (58%), and misuse of synthetic medical data (45%). Thirty-four studies (53%) reported real-world case studies or simulations in which AI systems generated false or misleading diagnoses due to training-data bias or adversarial manipulation. Notably, 41% of studies reported that current regulatory frameworks are insufficient to address the rapid development of GAI in clinical settings. Only 18 studies (28%) proposed concrete governance or technical mitigation strategies.

Conclusion: The use of generative AI in medical diagnostics presents significant ethical and security challenges that are inadequately addressed by current systems. As GAI adoption accelerates, there is an urgent need for robust policy frameworks, transparency standards, and AI governance models tailored to the medical domain. Multi-stakeholder collaboration among ethicists, technologists, clinicians, and policymakers is essential to ensure that GAI is deployed responsibly in healthcare.
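
As a rough cross-check of the figures reported in the Results, the short Python sketch below (not part of the original study) back-converts the cited percentages into approximate study counts. It assumes that every percentage is taken over the 64 included studies and that counts were rounded to the nearest whole study; the abstract only states denominators explicitly for the 53% and 28% figures.

    # Minimal sketch: convert the abstract's reported percentages into
    # approximate study counts, assuming each percentage refers to the
    # 64 included studies (an assumption for the unstated denominators).
    INCLUDED_STUDIES = 64

    reported_findings = {
        "algorithmic bias": 76.6,
        "lack of transparency": 68.8,
        "inadequate informed consent mechanisms": 54.7,
        "susceptibility to adversarial attacks": 63.0,
        "data leakage during model training": 58.0,
        "misuse of synthetic medical data": 45.0,
        "regulatory frameworks deemed insufficient": 41.0,
        "concrete mitigation strategies proposed": 28.0,
    }

    for finding, pct in reported_findings.items():
        approx_count = round(INCLUDED_STUDIES * pct / 100)
        print(f"{finding}: ~{approx_count}/{INCLUDED_STUDIES} studies ({pct}%)")

Under these assumptions the percentages map to roughly 49, 44, 35, 40, 37, 29, 26, and 18 studies, respectively, which is consistent with the counts the abstract does state (34 of 64 for 53%, 18 of 64 for 28%).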

Keywords

Generative AI, Medical Diagnostics
