IMPROVEMENT OF RECOGNITION QUALITY IN DEEP LEARNING NETWORKS BY SIMULATED ANNEALING METHOD

A. S. Potapov, V. V. Batishcheva, P. Shu-Chao



Abstract

The subject of this research is deep learning methods, in which automatic construction of feature transforms takes place in pattern recognition tasks. Multilayer autoencoders were chosen as the type of deep learning network under consideration. The autoencoders perform a nonlinear feature transform, with logistic regression serving as the upper classification layer. To verify the hypothesis that the recognition rate of deep learning networks, which are traditionally trained layer by layer with gradient descent, can be improved by global optimization of their parameters, a new method was designed and implemented. The method applies simulated annealing to tune the connection weights of the autoencoders while the regression layer is simultaneously trained by stochastic gradient descent. Experiments on the standard MNIST handwritten digit database showed that the modified method reduces the recognition error rate by a factor of 1.1 to 1.5 compared with the traditional method based on local optimization. Thus, no overfitting effect appears, and the possibility of improving the learning quality of deep learning networks (in terms of increased recognition probability) by global optimization methods is confirmed. The results can be applied to improve the probability of correct pattern recognition in fields that require automatic construction of nonlinear feature transforms, in particular in image recognition.
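
Below is a minimal sketch of the scheme described in the abstract. It is an illustrative assumption, not the authors' implementation: a single-layer encoder whose weight matrix W is perturbed by simulated annealing, with the logistic regression layer refit at every annealing step. The synthetic data, layer sizes, cooling schedule, and the use of batch (rather than stochastic) gradient descent for the regression layer are all placeholders.

    # Sketch only (not the authors' code): simulated annealing perturbs the
    # encoder weights globally, while the logistic-regression output layer
    # is refit by gradient descent before each acceptance decision.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for MNIST-like data: 200 samples, 64 features, 10 classes.
    X = rng.normal(size=(200, 64))
    y = rng.integers(0, 10, size=200)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def encode(X, W):
        # Single-layer autoencoder "code" used as the nonlinear feature transform.
        return sigmoid(X @ W)

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def train_logreg(H, y, epochs=30, lr=0.5):
        # Refit the upper classification layer; batch gradient descent stands
        # in here for the stochastic variant used in the paper.
        V = np.zeros((H.shape[1], 10))
        Y = np.eye(10)[y]
        for _ in range(epochs):
            P = softmax(H @ V)
            V -= lr * H.T @ (P - Y) / len(y)
        return V

    def error_rate(W):
        # Annealing energy: classification error after retraining the top layer.
        H = encode(X, W)
        V = train_logreg(H, y)
        pred = np.argmax(H @ V, axis=1)
        return np.mean(pred != y)

    # Simulated annealing over the encoder weights W.
    W = rng.normal(scale=0.1, size=(64, 32))
    E = error_rate(W)
    T = 1.0
    for step in range(200):
        W_new = W + rng.normal(scale=0.05, size=W.shape)  # random global move
        E_new = error_rate(W_new)
        # Always accept improvements; accept worse states with prob. exp(-dE/T).
        if E_new < E or rng.random() < np.exp((E - E_new) / T):
            W, E = W_new, E_new
        T *= 0.98  # geometric cooling schedule (illustrative)
    print(f"final training error rate: {E:.3f}")

The design point mirrored from the abstract is that the annealing energy is the classification error obtained after retraining the upper layer, so global moves in the encoder weights are always evaluated jointly with the classifier rather than against a reconstruction objective alone.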


Keywords: pattern recognition, deep learning, autoencoder, logistic regression, simulated annealing

Acknowledgements. The work was supported by the Ministry of Education and Science of the Russian Federation and by the Russian Federation President's Council for Grants (grant MD-1072.2013.9), and was partially funded by the Government of the Russian Federation (grant 074-U01).

