DOI: 10.17586/2226-1494-2017-17-3-475-482


V. N. Shvedenko, A. S. Victorov

Article in Russian

For citation: Shvedenko V.N., Victorov A.S. Improved visual odometry method for simultaneous unmanned aerial vehicle navigation and earth surface mapping. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2017, vol. 17, no. 3, pp. 475–482 (in Russian). doi: 10.17586/2226-1494-2017-17-3-475-482


The paper considers the applicability of a visual odometry algorithm to sparse three-dimensional reconstruction and earth surface mapping. Imagery is acquired by a camera mounted on an unmanned aerial vehicle flying along a specified trajectory. The sparse three-dimensional reconstruction and mapping rely on the ability of the visual odometry algorithm to recover the geometry of specially selected landmarks by combining data from the inertial navigation system with information extracted from the earth surface photographs. Simultaneously with the reconstruction, the spatial position and orientation of the aircraft are refined; this is essential for obtaining a high-resolution earth surface reconstruction by stereophotogrammetry or, when a laser scanner is used, by point cloud alignment. We also propose a method that improves the visual odometry algorithm, increasing the accuracy of the aircraft position and orientation estimates as well as the quality of the earth surface reconstruction. As part of this improvement, an original algorithm for detecting earth surface landmarks is proposed. The modified visual odometry algorithm can find wide application in autonomous vehicle navigation and as a component of an information system for processing Earth remote sensing data.
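The abstract describes refining the vehicle's pose from inertial data and landmark observations; the keywords name an extended Kalman filter as the underlying estimator. The sketch below is not the authors' algorithm, only a minimal generic EKF over a planar pose [px, py, theta] with a range-bearing observation of one landmark at a known position; the motion model, noise matrices, and landmark geometry are all illustrative assumptions.

```python
import numpy as np

def ekf_predict(x, P, u, Q):
    """Predict step: propagate pose x = [px, py, theta] with odometry
    input u = [v, w] (forward speed, turn rate) over a unit time step.
    F is the Jacobian of the motion model at the current state."""
    px, py, th = x
    v, w = u
    x_pred = np.array([px + v * np.cos(th), py + v * np.sin(th), th + w])
    F = np.array([[1.0, 0.0, -v * np.sin(th)],
                  [0.0, 1.0,  v * np.cos(th)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, landmark, R):
    """Update step: correct the pose with a range-bearing observation
    z = [r, phi] of a landmark at the known position (lx, ly)."""
    lx, ly = landmark
    dx, dy = lx - x[0], ly - x[1]
    q = dx**2 + dy**2
    z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - x[2]])
    H = np.array([[-dx / np.sqrt(q), -dy / np.sqrt(q), 0.0],
                  [dy / q,           -dx / q,          -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```

Each landmark measurement shrinks the pose covariance, which is why adding a better landmark detector, as the paper proposes, directly improves the position and orientation estimates.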

Keywords: visual odometry method, simultaneous navigation and mapping method, extended Kalman filter, binary classifier, reinforcement learning, neural network, convergence rate

1.     Ramirez-Torres J.G., Larranaga-Cepeda A. Real-time reconstruction of heightmaps from images taken with an UAV. Robotics and Mechatronics, 2015, vol. 37, pp. 221–231. doi: 10.1007/978-3-319-22368-1_22
2.     Holz D., Behnke S. Registration of non-uniform density 3D laser scans for mapping with micro aerial vehicles. Robotics and Autonomous Systems, 2015, vol. 74, pp. 318–330. doi: 10.1016/j.robot.2015.07.021
3.     Haala N., Cavegn S. High density aerial image matching: state-of-the-art and future prospects. Proc. 23rd Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. Prague, Czech Republic, 2016, vol. 41, pp. 625–630. doi: 10.5194/isprsarchives-XLI-B4-625-2016
4.     Fuentes-Pacheco J., Ruiz-Ascencio J., Rendon-Mancha J.M. Visual simultaneous localization and mapping: a survey. Artificial Intelligence Review, 2015, vol. 43, no. 1, pp. 55–81. doi: 10.1007/s10462-012-9365-8
5.     Caballero F., Merino L., Ferruz J., Ollero A. Vision-based odometry and SLAM for medium and high altitude flying UAVs. Journal of Intelligent and Robotic Systems, 2009, vol. 54, no. 1-3, pp. 137–161. doi: 10.1007/s10846-008-9257-y
6.     Mourikis A.I., Trawny N., Roumeliotis S.I., Johnson A.E., Ansar A., Matthies L. Vision-aided inertial navigation for spacecraft entry, descent, and landing. IEEE Transactions on Robotics, 2009, vol. 25, no. 2, pp. 264–280. doi: 10.1109/TRO.2009.2012342
7.     Teuliere C., Eck L., Marchand E., Guenard N. 3D model-based tracking for UAV position control. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems. Taipei, Taiwan, 2010, no. 1, pp. 1084–1089. doi: 10.1109/IROS.2010.5649700
8.     Huang Y., Song T.L. Iterated modified gain extended Kalman filter with applications to bearings only tracking. Journal of Automation and Control Engineering, 2015, vol. 3, no. 6, pp. 475–479. doi: 10.12720/joace.3.6.475-479
9.     Civera J., Davison A.J., Montiel J.M.M. Inverse depth parametrization for monocular SLAM. IEEE Transactions on Robotics, 2008, vol. 24, no. 5, pp. 932–945. doi: 10.1109/TRO.2008.2003276
10.  Marzorati D., Matteucci M., Migliore D., Sorrenti D.G. Monocular SLAM with inverse scaling parametrization. Proc. 19th British Machine Vision Conference. Leeds, UK, 2008, vol. 24, pp. 22–94. doi: 10.5244/C.22.94
11.  Khan R., Sottile F., Spirito M.A. Hybrid positioning through extended Kalman filter with inertial data fusion. International Journal of Information and Electronics Engineering, 2013, vol. 3, no. 1, pp. 127–131. doi: 10.7763/ijiee.2013.v3.281 
12.  Steffen R. A robust iterative Kalman filter based on implicit measurement equations. Photogrammetrie, Fernerkundung, Geoinformation, 2013, no. 4, pp. 323–332. doi: 10.1127/1432-8364/2013/0180
13.  Vedaldi A., Jin H., Favaro P., Soatto S. KALMANSAC: robust filtering by consensus. Proc. 10th IEEE Int. Conf. on Computer Vision. Beijing, China, 2005, vol. 1, pp. 633–640. doi: 10.1109/ICCV.2005.130
14.  Gers F.A., Schraudolph N.N., Schmidhuber J. Learning precise timing with LSTM recurrent networks. The Journal of Machine Learning Research, 2003, vol. 3, no. 1, pp. 115–143. doi: 10.1162/153244303768966139
15.  Mnih V., Kavukcuoglu K., Silver D., Graves A., Antonoglou I., Wierstra D., Riedmiller M. Playing Atari with deep reinforcement learning. NIPS Deep Learning Workshop, 2013, arXiv:1312.5602


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
Copyright 2001-2019 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.