On Effectiveness of the Adversarial Attacks on the Computer Systems of Biomedical Images Classification

The vulnerability of computer systems for biomedical image classification to adversarial attacks is investigated. The aim of this work is to study the effectiveness of various adversarial attack models on biomedical images and the influence of the control parameters of the algorithms that generate the attacking versions. The effectiveness of attacks produced with the projected gradient descent (PGD) algorithm, the DeepFool (DF) algorithm, and the Carlini-Wagner (CW) algorithm is investigated. Experimental studies were carried out on typical medical image classification problems using the deep neural networks VGG16, EfficientNetB2, DenseNet121, Xception, and ResNet50, on datasets of chest X-ray images and brain MRI scans. Our findings are as follows. The deep models were highly susceptible to adversarial attacks, which reduced classification accuracy on all datasets. Before applying the adversarial methods, we achieved a classification accuracy of 93.6% for brain MRI and 99.1% for chest X-rays. Under the DF attack, the accuracy of the VGG16 model showed a maximum absolute decrease of 49.8% for MRI scans and 57.3% for chest X-ray images. The PGD algorithm with the same magnitude of malicious image perturbation is less effective than the DF and CW adversarial attacks. Among the deep models considered, VGG16 achieves the best classification accuracy on these datasets and is also the most vulnerable to adversarial attacks. We hope these results will be useful for designing more robust and secure medical deep learning systems.
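The PGD attack named in the abstract works by repeatedly stepping in the direction of the loss gradient's sign and projecting the perturbed input back into an L-infinity ball of radius epsilon around the original. The following is a minimal illustrative sketch of that loop on a toy logistic-regression "classifier" with an analytically computed gradient; the function name `pgd_attack` and all parameter values are hypothetical and are not taken from the paper, where the targets are deep CNNs (VGG16, ResNet50, etc.) and gradients come from autograd.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient ascent on the logistic loss within ||x_adv - x||_inf <= eps.

    x: input vector, y: true label (0 or 1), (w, b): toy model parameters.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(x_adv @ w + b)                # model prediction in (0, 1)
        grad = (p - y) * w                        # d(loss)/d(x) for the logistic loss
        x_adv = x_adv + alpha * np.sign(grad)     # signed ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
    return x_adv

# usage: a point confidently classified as class 1 is pushed toward class 0
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([1.0, 0.5]); y = 1.0
x_adv = pgd_attack(x, y, w, b)
```

The projection step is what distinguishes PGD from a plain iterated gradient-sign attack: however many steps are taken, the perturbation can never exceed epsilon per pixel, which models the "same values of malicious image disturbances" constraint under which the paper compares PGD with DF and CW.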

Authors
Shchetinin E.Y. (1), Glushkova A.G. (2), Blinkov Y.A. (3, 4)
Publisher
Springer Verlag
Language
English
Pages
91-103
Status
Published
Year
2023
Affiliations
  • 1 Financial University under the Government of the Russian Federation
  • 2 Oxford University
  • 3 Peoples Friendship University of Russia (RUDN)
  • 4 Saratov State University
Keywords
adversarial attacks; deep learning; white-box attacks; black-box attacks; chest X-ray images; brain tumor MRI scans
Date created
28.12.2023
Date modified
28.12.2023
Permanent link
https://repository.rudn.ru/ru/records/article/record/102351/