doi: 10.17586/2226-1494-2025-25-1-128-139


Detection of L0-optimized attacks via anomaly scores distribution analysis

D. A. Esipov, M. I. Basov, A. D. Kletenkova


Article in English

For citation:
Esipov D.A., Basov M.I., Kletenkova A.D. Detection of L0-optimized attacks via anomaly scores distribution analysis. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2025, vol. 25, no. 1, pp. 128–139. doi: 10.17586/2226-1494-2025-25-1-128-139


Abstract

The spread of artificial intelligence and machine learning is accompanied by a growing number of vulnerabilities and threats in systems implementing these technologies. Attacks based on malicious perturbations pose a significant threat to such systems. Various defenses have been developed against them, including an approach to detecting L0-optimized attacks on image processing neural networks using statistical analysis methods and an algorithm for detecting such attacks by threshold clipping. The disadvantage of the threshold clipping algorithm is the need to select the parameter value (cutoff threshold) separately for different attacks and datasets, which complicates its practical application. This article describes a method for detecting L0-optimized attacks on image processing neural networks through statistical analysis of the distribution of anomaly scores. To identify the distortion inherent in L0-optimized attacks, deviations from the nearest neighbors and Mahalanobis distances are computed, and a matrix of pixel anomaly scores is calculated from their values. It is assumed that the statistical distribution of pixel anomaly scores differs between attacked and non-attacked images, as well as between the perturbations embedded by different attacks. Attacks can therefore be detected by analyzing the statistical characteristics of the anomaly score distribution. The obtained characteristics are used as predictors for training anomaly detection and image classification models. The method was tested on the CIFAR-10, MNIST, and ImageNet datasets and demonstrated high quality of attack detection and classification. On the CIFAR-10 dataset, the accuracy of attack (anomaly) detection was 98.43 %, while the accuracies of binary and multiclass classification were 99.51 % and 99.07 %, respectively.
Although anomaly detection accuracy is lower than multiclass classification accuracy, anomaly detection makes it possible to distinguish attacks that are fundamentally similar to, but not contained in, the training sample. Since only the input data are used to detect and classify attacks, the proposed method can potentially be applied regardless of the model architecture or even without access to the target neural network. The method can also be applied to detect images distorted by L0-optimized attacks in a training sample.
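The pipeline described above (per-pixel anomaly scores from neighbor deviation and Mahalanobis distance, then distribution statistics as predictors) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the 8-neighbor deviation, the image-wide Mahalanobis distance, their product as the combined score, and the particular set of distribution statistics are all illustrative assumptions, and the function names are hypothetical.

```python
import numpy as np

def pixel_anomaly_scores(img):
    """Score each pixel by its deviation from the mean of its 8 neighbours,
    weighted by the Mahalanobis distance of its colour vector from the
    image-wide colour distribution (a sketch of per-pixel scoring)."""
    h, w, c = img.shape
    x = img.astype(float)
    # Mean of the 8-neighbourhood via edge-padded shifted copies
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    neigh = np.zeros_like(x)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh += pad[1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
    neigh /= 8.0
    dev = np.linalg.norm(x - neigh, axis=-1)  # deviation from nearest neighbours
    # Mahalanobis distance of each pixel's colour vector
    flat = x.reshape(-1, c)
    mu = flat.mean(axis=0)
    cov = np.cov(flat, rowvar=False) + 1e-6 * np.eye(c)  # regularized covariance
    d = flat - mu
    maha = np.sqrt(np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)).reshape(h, w)
    return dev * maha  # combined pixel anomaly-score matrix

def distribution_features(scores):
    """Statistical characteristics of the score distribution used as predictors:
    mean, standard deviation, skewness, excess kurtosis, maximum."""
    s = scores.ravel()
    mu, sigma = s.mean(), s.std()
    z = (s - mu) / sigma
    return np.array([mu, sigma, (z ** 3).mean(), (z ** 4).mean() - 3.0, s.max()])
```

The resulting feature vectors would then be fed to the anomaly detection and classification models the abstract mentions (e.g. an Isolation Forest or an SVM), trained on features from clean and attacked images.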


Keywords: artificial neural network, image processing, adversarial attack, attack detection, L0 pseudonorm, malicious perturbation, statistical analysis, anomaly score



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License