doi: 10.17586/2226-1494-2024-24-5-806-814
Comparative analysis of neural network models for felling mapping in summer satellite imagery
Article in Russian
For citation:
Melnikov A.V., Polishchuk Yu.M., Rusanov M.A., Abbazov V.R., Kochergin G.A., Kupriyanov M.A., Baisalyamova O.A., Sokolkov O.I. Comparative analysis of neural network models for felling mapping in summer satellite imagery. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2024, vol. 24, no. 5, pp. 806–814 (in Russian). doi: 10.17586/2226-1494-2024-24-5-806-814
Abstract
The study aimed to improve the efficiency of detecting and mapping felling in satellite imagery in order to identify violations of environmental regulations. Traditional methods of remote sensing data interpretation are labor-intensive and require high operator expertise. Numerous approaches have been developed to automate satellite image interpretation, including those leveraging deep learning. This work presents a comparative analysis of convolutional and transformer neural network models for the segmentation of felling in summer Sentinel-2 satellite imagery. The convolutional models evaluated were U-Net++, MA-Net, 3D U-Net, and FPN-ConvLSTM; the transformer models were SegFormer and Swin-UperNet. A key aspect was the adaptation of these models to analyze pairs of multi-temporal, multi-channel satellite images. The data preprocessing, training sample generation, and model training and evaluation procedures using the F1 metric are described. The modeling results were compared with traditional visual interpretation using GIS tools. Experiments on the territory of the Khanty-Mansiysk Autonomous Okrug showed that the F1 scores of the models ranged from 0.409 to 0.767, with the SegFormer transformer achieving the highest performance and detecting felling missed by human interpreters. Processing a 100 × 100 km image pair took 15 minutes, 16 times faster than manual interpretation, an important factor for large-scale forest monitoring. The proposed SegFormer-based felling segmentation approach can be used for rapid detection and mapping of illegal logging. Further improvements could include balancing the training dataset to cover more diverse clearing shapes and sizes, as well as incorporating partially cloudy images.
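The F1 evaluation mentioned above can be illustrated as a pixel-wise comparison of binary felling masks. This is a minimal sketch, not the authors' implementation; the mask shapes and toy values are assumptions for illustration only:

```python
import numpy as np

def pixel_f1(pred: np.ndarray, target: np.ndarray) -> float:
    """Pixel-wise F1 score between two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # correctly detected felling pixels
    fp = np.logical_and(pred, ~target).sum()  # false alarms
    fn = np.logical_and(~pred, target).sum()  # missed felling pixels
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy 4x4 masks: three of four predicted felling pixels overlap ground truth
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[0, :] = 1      # model marks the whole top row as felling
target[0, 1:] = 1   # ground truth: three pixels of that row...
target[1, 0] = 1    # ...plus one pixel the model missed
print(round(pixel_f1(pred, target), 3))  # → 0.75
```

In practice such a score would be computed over the segmentation output for each multi-temporal image pair and aggregated across the test set.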
Keywords: felling mapping, satellite imagery, deep machine learning, neural network models, image segmentation, forest area monitoring
References
- Gabdrakhmanov R.M., Kochergin G.A., Kupriianov M.A., Khamedov V.A., Sharafutdinov R.R. Register of changes in the forest fund of Khanty-Mansiysk Autonomous Okrug – Yugra. Certificate of registration of the database RU2016620648, 2016.
- Torres D.L., Turnes J.N., Soto Vega P.J., Feitosa R.Q., Silva D.E., Marcato Junior J., Almeida C. Deforestation detection with fully convolutional networks in the Amazon Forest from Landsat-8 and Sentinel-2 images. Remote Sensing, 2021, vol. 13, no. 24, pp. 5084. https://doi.org/10.3390/rs13245084
- Khan S.H., He X., Porikli F., Bennamoun M. Forest change detection in incomplete satellite images with deep neural networks. IEEE Transactions on Geoscience and Remote Sensing, 2017, vol. 55, no. 9, pp. 5407–5423. https://doi.org/10.1109/tgrs.2017.2707528
- John D., Zhang C. An attention-based U-Net for detecting deforestation within satellite sensor imagery. International Journal of Applied Earth Observation and Geoinformation, 2022, vol. 107, pp. 102685. https://doi.org/10.1016/j.jag.2022.102685
- Podoprigorova N.S., Savchenko G.A., Rabcevich K.R., Kanev A.I., Tarasov A.V., Shikohov A.N. Forest damage segmentation using machine learning methods on satellite images. Studies in Computational Intelligence, 2023, vol. 1120, pp. 380–388. https://doi.org/10.1007/978-3-031-44865-2_41
- Bychkov I.V., Ruzhnikov G.M., Fedorov R.K., Popova A.K., Avramenko Y.V. Classification of Sentinel-2 satellite images of the Baikal Natural Territory. Computer Optics, 2022, vol. 46, no. 1, pp. 90–96. (in Russian). https://doi.org/10.18287/2412-6179-co-1022
- Melnikov A.V., Kochergin G.A., Abbazov V.R., Baisalamova O.A., Rusanov M.A., Polishchuk Yu.M. A neural network model for space image segmentation in monitoring of deforestation factors. Bulletin of the South Ural State University. Series Computer Technology, Automatic Control, Radio Electronics, 2023, vol. 23, no. 3, pp. 5–15. (in Russian). https://doi.org/10.14529/ctcr230301
- Main-Knorn M., Pflug B., Louis J., Debaecker V., Müller-Wilm U., Gascon F. Sen2Cor for Sentinel-2. Proceedings of SPIE, 2017, vol. 10427, pp. 1042704. https://doi.org/10.1117/12.2278218
- Garnot V.S.F., Landrieu L. Panoptic segmentation of satellite image time series with convolutional temporal attention networks. Proc. of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 4852–4861. https://doi.org/10.1109/iccv48922.2021.00483
- Rustowicz R., Cheong R., Wang L., Ermon S., Burke M., Lobell D. Semantic segmentation of crop type in Africa: A novel dataset and analysis of deep learning methods. Proc. of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2019, pp. 75–82.
- Fan T., Wang G., Li Y., Wang H. MA-Net: A multi-scale attention network for liver and tumor segmentation. IEEE Access, 2020, vol. 8, pp. 179656–179665. https://doi.org/10.1109/access.2020.3025372
- Chamorro Martinez J.A., Cué La Rosa L.E., Feitosa R.Q., Sanches I.D., Happ P.N. Fully convolutional recurrent networks for multidate crop recognition from multitemporal image sequences. ISPRS Journal of Photogrammetry and Remote Sensing, 2021, vol. 171, pp. 188–201. https://doi.org/10.1016/j.isprsjprs.2020.11.007
- Shi X., Chen Z., Wang H., Yeung D.-Y., Wong W., Woo W. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. arXiv, 2015, arXiv:1506.04214. https://doi.org/10.48550/arXiv.1506.04214
- Xie E., Wang W., Yu Z., Anandkumar A., Alvarez J.M., Luo P. SegFormer: Simple and efficient design for semantic segmentation with transformers. arXiv, 2021, arXiv:2105.15203. https://doi.org/10.48550/arXiv.2105.15203
- Liu Z., Lin Y., Cao Y., Hu H., Wei Y., Zhang Z., Lin S., Guo B. Swin transformer: Hierarchical vision transformer using shifted windows. Proc. of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 9992–10002. https://doi.org/10.1109/iccv48922.2021.00986
- Kruitwagen L. Towards DeepSentinel: An extensible corpus of labelled Sentinel-1 and -2 imagery and a general-purpose sensor-fusion semantic embedding model. arXiv, 2021, arXiv:2102.06260. https://doi.org/10.48550/arXiv.2102.06260
- Betzalel E., Penso C., Navon A., Fetaya E. A study on the evaluation of generative models. arXiv, 2022, arXiv:2206.10935. https://doi.org/10.48550/arXiv.2206.10935