IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD

A. S. Potapov, V. V. Batishcheva, P. Shu-Chao


Article in Russian


Abstract

The subject of this research is deep learning methods, in which automatic construction of feature transforms takes place for pattern recognition tasks. Multilayer autoencoders were taken as the considered type of deep learning network; they perform a nonlinear feature transform, with logistic regression as the upper classification layer. To verify the hypothesis that the recognition rate of deep learning networks, which are traditionally trained layer-by-layer by gradient descent, can be improved by global optimization of their parameters, a new method has been designed and implemented. The method applies simulated annealing to tune the connection weights of the autoencoders, while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database have shown a decrease of the recognition error rate by a factor of 1.1 to 1.5 for the modified method compared to the traditional method based on local optimization. Thus, no overfitting effect appears, and the possibility of improving learning in deep networks by global optimization methods (in terms of increasing recognition probability) is confirmed. The results can be applied to improve the probability of pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular in image recognition.
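The hybrid scheme described above can be illustrated with a minimal sketch: simulated annealing proposes random perturbations of the encoder weights, and each candidate is scored by fitting a logistic-regression layer on the encoded features with gradient descent (full-batch steps here, standing in for the SGD of the paper). This is not the authors' implementation; the data is a small synthetic stand-in for MNIST, and all sizes, temperatures, and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Tiny synthetic stand-in for MNIST: 200 samples, 20 features, 2 classes.
X = rng.normal(size=(200, 20))
y = (X[:, :10].sum(axis=1) > 0).astype(int)
Y = np.eye(2)[y]  # one-hot labels

n_hidden = 8
W_enc = rng.normal(scale=0.1, size=(20, n_hidden))  # encoder weights (illustrative)

def error_rate(W):
    """Encode the data, fit the logistic-regression layer by
    gradient descent, and return the training error rate."""
    H = sigmoid(X @ W)                      # nonlinear feature transform
    W_lr = np.zeros((n_hidden, 2))
    for _ in range(200):                    # full-batch gradient steps
        P = softmax(H @ W_lr)
        W_lr -= 0.5 * H.T @ (P - Y) / len(X)
    pred = (H @ W_lr).argmax(axis=1)
    return float(np.mean(pred != y))

# Simulated annealing over the encoder weights.
T, cooling = 1.0, 0.95
current, cur_err = W_enc, error_rate(W_enc)
best_err = cur_err
for step in range(60):
    cand = current + rng.normal(scale=0.05, size=current.shape)
    err = error_rate(cand)
    # Accept improvements always; accept worse states with Boltzmann probability.
    if err < cur_err or rng.random() < np.exp((cur_err - err) / max(T, 1e-9)):
        current, cur_err = cand, err
    best_err = min(best_err, cur_err)
    T *= cooling                            # cool the temperature

print(f"best error rate found: {best_err:.3f}")
```

Because the annealer starts from the initial encoder weights and only tracks the best state seen, its final error can never exceed the purely local starting point, which mirrors the comparison made in the experiments.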


Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing

Acknowledgements. The work is supported by the Ministry of Education and Science of the Russian Federation and the Russian Federation President’s Council for Grants (grant MD-1072.2013.9), and partially financially supported by the Government of the Russian Federation (grant 074-U01).

References
1. He Y., Kavukcuoglu K., Wang Y., Szlam A., Qi Y. Unsupervised Feature Learning by Deep Sparse Coding. 2013. Available at: http://arxiv.org/pdf/1312.5783v1 (accessed 03.07.2014).
2. Arnold L., Rebecchi S., Chevallier S., Paugam-Moisy H. An introduction to deep learning. Proc. 19th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2011. Bruges, Belgium, 2011, pp. 477–488.
3. Ciresan D.C., Meier U., Masci J., Schmidhuber J. Multi-column deep neural network for traffic sign classification. Neural Networks, 2012, vol. 32, pp. 333–338. doi: 10.1016/j.neunet.2012.02.023
4. Mnih V., Kavukcuoglu K., Silver D., Graves A., Antonoglou I., Wierstra D., Riedmiller M. Playing Atari with Deep Reinforcement Learning. 2013. Available at: http://arxiv.org/pdf/1312.5602v1.pdf (accessed 03.07.2014).
5. Le Roux N., Bengio Y. Representational power of restricted Boltzmann machines and deep belief networks. Neural Computation, 2008, vol. 20, no. 6, pp. 1631–1649. doi: 10.1162/neco.2008.04-07-510
6. Gregor K., Mnih A., Blundell C., Wierstra D. Deep Autoregressive Networks. 2013. Available at: http://arxiv.org/pdf/1310.8499v2 (accessed 03.07.2014).
7. Tenenbaum J.B., Kemp C., Griffiths T.L., Goodman N.D. How to grow a mind: statistics, structure, and abstraction. Science, 2011, vol. 331, no. 6022, pp. 1279–1285. doi: 10.1126/science.1192788
8. Szegedy Ch., Zaremba W., Sutskever I., Bruna J., Erhan D., Goodfellow I., Fergus R. Intriguing properties of neural networks. 2014. Available at: http://arxiv.org/pdf/1312.6199v4 (accessed 03.07.2014).
9. Bengio Y., Lamblin P., Popovici D., Larochelle H. Greedy layer-wise training of deep networks. Advances in Neural Information Processing Systems, 2007, vol. 19, pp. 153–160.
10. Hinton G.E., Osindero S., Teh Y.-W. A fast learning algorithm for deep belief nets. Neural Computation, 2006, vol. 18, no. 7, pp. 1527–1554. doi: 10.1162/neco.2006.18.7.1527
11. Ranzato M.A., Poultney Ch., Chopra S., LeCun Y. Efficient learning of sparse representations with an energy-based model. Advances in Neural Information Processing Systems, 2007, vol. 19, pp. 1137–1144.
12. Ciresan D.C., Meier U., Gambardella L.M., Schmidhuber J. Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition. 2010. Available at: http://arxiv.org/pdf/1003.0358 (accessed 03.07.2014).
13. Tsarev F.N. Sovmestnoe primenenie geneticheskogo programmirovaniya, konechnykh avtomatov i iskusstvennykh neironnykh setei dlya postroeniya sistemy upravleniya bespilotnym letatel'nym apparatom [Application of genetic programming, finite state machines and neural nets for construction of a control system for an unmanned aircraft]. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2008, no. 8 (53), pp. 42–60.
14. Bondarenko I.B., Gatchin Yu.A., Geranichev V.N. Sintez optimal'nykh iskusstvennykh neironnykh setei s pomoshch'yu modifitsirovannogo geneticheskogo algoritma [Synthesis of optimal artificial neural networks by modified genetic algorithm]. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2012, no. 2 (78), pp. 51–55.
15. Vincent P., Larochelle H., Bengio Y., Manzagol P.-A. Extracting and composing robust features with denoising autoencoders. Proc. 25th International Conference on Machine Learning. Helsinki, Finland, 2008, pp. 1096–1103.
16. LeCun Y., Cortes C., Burges C.J.C. The MNIST Database of handwritten digits. Available at: http://yann.lecun.com/exdb/mnist (accessed 03.07.2014).
Copyright 2001-2017 ©
Scientific and Technical Journal
of Information Technologies, Mechanics and Optics.
All rights reserved.
