doi: 10.17586/2226-1494-2024-24-6-1066-1070
Analysis of the vulnerability of YOLO neural network models to the Fast Sign Gradient Method attack
For citation:
Teterev N.V., Trifonov V.E., Levina A.B. Analysis of the vulnerability of YOLO neural network models to the Fast Sign Gradient Method attack. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2024, vol. 24, no. 6, pp. 1066–1070 (in Russian). doi: 10.17586/2226-1494-2024-24-6-1066-1070
Abstract
The paper presents an analysis of formalized conditions for creating universal images that are falsely classified by computer vision algorithms, known as adversarial examples, for YOLO neural network models. A pattern in the successful creation of a universal destructive image is identified and studied as a function of the generated dataset on which the neural networks were trained, using the Fast Gradient Sign Method (FGSM) attack. The pattern is demonstrated for YOLOv8, YOLOv9, YOLOv10, and YOLOv11 classifier models trained on the standard COCO dataset.
Keywords: adversarial attacks, adversarial example, YOLO, COCO, dataset, neural network
Acknowledgements. The work was performed within the framework of the state assignment of the Ministry of Science and Higher Education of the Russian Federation No. 075-00003-24-01 dated 08.02.2024 (FSEE-2024-0003 project).
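The paper itself does not publish source code, but the single-step FGSM perturbation it builds on (Goodfellow et al., 2015) is straightforward to sketch. The example below is a minimal illustration, not the authors' implementation: it assumes a PyTorch classifier that maps images in [0, 1] to class logits (for instance, the underlying torch module of a YOLO classification model); the function name fgsm_attack, the stand-in model, and the epsilon value are illustrative assumptions.

```python
# Minimal FGSM sketch (illustrative only; not the authors' code).
# Assumes PyTorch and a differentiable classifier mapping images in [0, 1]
# to class logits, e.g. the torch module behind a YOLO classification model.
import torch

def fgsm_attack(model, image, label, loss_fn, epsilon=0.03):
    # Track gradients with respect to the input image.
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), label)
    model.zero_grad()
    loss.backward()
    # Single-step FGSM update: x_adv = x + epsilon * sign(grad_x L(theta, x, y)).
    adv_image = image + epsilon * image.grad.sign()
    # Clamp so the perturbed tensor remains a valid image.
    return adv_image.clamp(0.0, 1.0).detach()

# Illustrative usage with a stand-in classifier (80 output classes, as in COCO).
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(8, 80),
)
image = torch.rand(1, 3, 224, 224)   # dummy input batch in [0, 1]
label = torch.tensor([0])            # dummy ground-truth class
adv_image = fgsm_attack(model, image, label, torch.nn.CrossEntropyLoss())
```

Because the update uses only the sign of the gradient, the perturbation stays within an L-infinity ball of radius epsilon around the original image; in the setting described in the abstract, such perturbations would be evaluated against YOLOv8 through YOLOv11 classifier models trained on COCO.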