This work investigates the vulnerability of computer systems for biomedical image classification to adversarial attacks. The aim is to study the effectiveness of various adversarial attack models on biomedical images and the influence of the control parameters of the algorithms that generate the attacking image versions. The effectiveness of attacks prepared using the projected gradient descent (PGD) algorithm, the DeepFool (DF) algorithm, and the Carlini-Wagner (CW) algorithm is investigated. Experimental studies were carried out on typical medical image classification tasks using the deep neural networks VGG16, EfficientNetB2, DenseNet121, Xception, and ResNet50, together with datasets of chest X-ray images and brain MRI scans. Our findings are as follows. The deep models proved highly susceptible to adversarial attacks, which reduced the classification accuracy of the models on all datasets. Before applying the adversarial methods, we achieved a classification accuracy of 93.6% for brain MRI and 99.1% for chest X-rays. Under the DF attack, the accuracy of the VGG16 model showed a maximum absolute decrease of 49.8% for MRI scans and 57.3% for chest X-ray images. The projected gradient descent (PGD) algorithm with the same magnitude of malicious image perturbation is less effective than the DF and CW adversarial attacks. The VGG16 deep model achieves the highest classification accuracy on the considered datasets yet is the most vulnerable to adversarial attacks among the deep models studied. We hope that these results will be useful for designing more robust and secure medical deep learning systems.
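As a minimal illustration of the simplest of the three attacks, the sketch below shows an untargeted L-infinity PGD perturbation in TensorFlow/Keras; it is not the authors' implementation, and the model handle and the values of eps, alpha, and steps are hypothetical placeholders rather than the experimental settings reported above.

```python
import tensorflow as tf

def pgd_attack(model, images, labels, eps=0.03, alpha=0.005, steps=10):
    """Untargeted PGD: iteratively nudge the inputs along the sign of the loss
    gradient, projecting back into an L-infinity ball of radius eps around the
    clean images. Hypothetical parameter values, for illustration only."""
    loss_fn = tf.keras.losses.CategoricalCrossentropy()
    adv = tf.identity(images)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(adv)
            preds = model(adv, training=False)
            loss = loss_fn(labels, preds)
        grad = tape.gradient(loss, adv)
        adv = adv + alpha * tf.sign(grad)                         # ascent step on the loss
        adv = tf.clip_by_value(adv, images - eps, images + eps)   # project into the eps-ball
        adv = tf.clip_by_value(adv, 0.0, 1.0)                     # keep pixels in valid range
    return adv

# Example usage (assumed classifier; any of the Keras models above would fit):
# model = tf.keras.models.load_model("xray_classifier.h5")
# x_adv = pgd_attack(model, x_batch, y_batch_one_hot)
```

The DF and CW attacks follow the same evaluation pattern (perturb, then re-measure accuracy) but use their own optimization procedures instead of the fixed-step sign update shown here.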