doi: 10.17586/2226-1494-2019-19-4-722-729


IMAGE-BASED APPROACH FOR VEHICLE MODEL RE-IDENTIFICATION

N. S. Nemcev


Article in Russian

For citation:
Nemcev N.S. Image-based approach for vehicle model re-identification. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2019, vol. 19, no. 4, pp. 722–729 (in Russian).
doi: 10.17586/2226-1494-2019-19-4-722-729


Abstract

Subject of Research. The paper studies the existing methods for extracting and comparing object features used in the task of vehicle model re-identification from an image. This task is one of the most important tasks facing automated traffic control systems; it is solved by comparing the features of the vehicle under verification with a set of features obtained earlier by the monitoring system, after which a decision is made on whether the compared samples belong to the same vehicle model or to different ones. A method is proposed for extracting and comparing feature vectors of a vehicle model from its image. The method is based on convolutional neural networks. The proposed approach is compared with existing vehicle model re-identification algorithms by the accuracy criterion. Method. The paper describes an approach for extracting a feature vector from a vehicle image and subsequently comparing it with a reference vector to assess their similarity. The approach is based on feature vector extraction with a classification convolutional neural network and on a comparison criterion for feature vectors that uses an estimate of coincident features. Main Results. The proposed vehicle model verification method demonstrates accuracy comparable to modern analogues in scenarios where the test data share the characteristics of the training data (similar camera models and camera angles, similar lighting and noise levels, and re-identified vehicle models contained in the dataset used to train the classification network). On data that differ significantly from the training set, the method has lower computational complexity, uses a smaller feature vector, and demonstrates significantly higher relative re-identification accuracy. Practical Relevance. The proposed approach is applicable in practice to the vehicle identification task in highly loaded traffic control systems.
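
Below is a minimal sketch of the pipeline described in the abstract: a classification CNN extracts a feature vector from a vehicle image, and the vector is compared with a reference vector to decide whether the two samples show the same vehicle model. The use of a torchvision AlexNet backbone (the network named in the keywords), the penultimate fully connected layer as the feature vector, the coincidence_score helper, and its tolerance and decision threshold are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: CNN feature extraction + coincident-feature comparison for vehicle
# model re-identification. Backbone, layer choice, and thresholds are assumed.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Pretrained classification network used only as a feature extractor.
backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path: str) -> torch.Tensor:
    """Return the 4096-d activations of the last hidden fully connected layer."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = backbone.features(x)
        x = backbone.avgpool(x)
        x = torch.flatten(x, 1)
        # Apply all classifier layers except the final class-score layer.
        for layer in list(backbone.classifier)[:-1]:
            x = layer(x)
    return x.squeeze(0)

def coincidence_score(query: torch.Tensor, reference: torch.Tensor,
                      tol: float = 0.1) -> float:
    """Fraction of feature components that coincide within a tolerance.

    A hypothetical stand-in for the paper's coincident-features estimate.
    """
    return (torch.abs(query - reference) < tol).float().mean().item()

# Usage: decide whether a probe image and a reference image show the same model.
# same_model = coincidence_score(extract_features("probe.jpg"),
#                                extract_features("reference.jpg")) > 0.8
```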


Keywords: visual data processing, machine learning, convolutional neural networks, feature extraction, feature comparison, AlexNet


