doi: 10.17586/2226-1494-2024-24-1-118-123


Monocular depth estimation for 2D mapping of simulated environments

M. Barhoum, A. A. Pyrkin


Article in English

For citation:
Barhoum M., Pyrkin A.A. Monocular depth estimation for 2D mapping of simulated environments. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2024, vol. 24, no. 1, pp. 118–123. doi: 10.17586/2226-1494-2024-24-1-118-123


Abstract
This article addresses the problem of constructing maps of 2D simulated environments. An algorithm based on monocular depth estimation is proposed that achieves accuracy comparable to methods relying on expensive sensors such as RGBD cameras and LIDARs. The problem is solved with a multi-stage approach. First, a neural network predicts a relative disparity map from the RGB stream provided by an RGBD camera. Using depth measurements from the same camera, two parameters linking the relative and absolute disparity maps are estimated in the form of a linear regression relation. Then, using only a simpler RGB camera, the neural network and the estimated scaling parameters yield an estimate of the absolute disparity map, from which an estimate of the depth map is obtained. In this way, a virtual scanner is designed that provides Cartographer SLAM with the depth information needed for environment mapping. The proposed algorithm was evaluated in a ROS 2.0 simulation of a simple mobile robot. It achieves faster depth prediction than other depth estimation algorithms, and the maps generated by our approach show a high overlap ratio with those obtained using an ideal RGBD camera. The proposed algorithm is applicable to crucial tasks for mobile robots, such as obstacle avoidance and path planning, and can be used to generate accurate cost maps, enhancing safety and adaptability in mobile robot navigation.
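A minimal sketch of the scale-and-shift estimation step described above, assuming a MiDaS-style network that outputs relative (inverse-depth) disparity; the function names, depth thresholds, and least-squares formulation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fit_scale_shift(rel_disp, depth_gt, min_depth=0.1, max_depth=10.0):
    """Estimate (s, t) such that s * rel_disp + t approximates the metric
    disparity 1 / depth_gt over valid RGBD pixels (linear regression)."""
    valid = (depth_gt > min_depth) & (depth_gt < max_depth)
    x = rel_disp[valid].ravel()              # relative disparity from the network
    y = 1.0 / depth_gt[valid].ravel()        # metric disparity from the RGBD camera
    A = np.stack([x, np.ones_like(x)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, t

def relative_to_depth(rel_disp, s, t, eps=1e-6):
    """Convert a relative disparity map to a metric depth map using the
    previously fitted scale and shift."""
    abs_disp = np.clip(s * rel_disp + t, eps, None)
    return 1.0 / abs_disp
```

In line with the abstract, (s, t) would be fitted once against the RGBD depth stream; afterwards only the RGB camera is needed, and a row of the resulting metric depth map can be projected into ranges for the virtual scanner that feeds Cartographer SLAM.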

Keywords: monocular depth estimation, mapping, linear regression, disparity maps, neural network

Acknowledgements. This paper was supported by the Ministry of Science and Higher Education of the Russian Federation (State Assignment No. 2019-0898).




This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
