DOI: 10.17586/2226-1494-2018-18-2-236-242


KNOWLEDGE TRANSFER FOR RUSSIAN CONVERSATIONAL TELEPHONE AUTOMATIC SPEECH RECOGNITION

A. N. Romanenko, Y. N. Matveev, W. Minker


Article in Russian

For citation: Romanenko A.N., Matveev Yu.N., Minker W. Knowledge transfer for Russian conversational telephone automatic speech recognition. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2018, vol. 18, no. 2, pp. 236–242 (in Russian). doi: 10.17586/2226-1494-2018-18-2-236-242

Abstract

This paper describes a method of knowledge transfer from an ensemble of neural network acoustic models to a student network. The method is used to reduce computational cost and improve the quality of the speech recognition system. The experiments consider two ways of generating class labels from the ensemble of models: interpolation with the alignment, and a posteriori probabilities. We also study model quality as a function of the smoothing coefficient. This coefficient is built into the output log-linear classifier of the neural network (the softmax layer) and is used both in the ensemble and in the student network. Additionally, the initial and final learning rates are analyzed. We established a relationship between the use of the smoothing coefficient for generating the a posteriori probabilities and the learning-rate parameters. Finally, applying knowledge transfer to automatic recognition of Russian conversational telephone speech reduced the WER (Word Error Rate) by 2.49% in comparison with a model trained on the alignment from the ensemble of neural networks.
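The label-generation scheme described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names, the temperature value `T` (the smoothing coefficient applied inside the softmax), and the interpolation weight `lam` between one-hot alignment labels and averaged ensemble posteriors are all assumptions chosen for illustration.

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    # The smoothing coefficient T ("temperature") flattens the output
    # distribution of the log-linear (softmax) classifier; T=1 recovers
    # the ordinary softmax.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_targets(teacher_logits_list, hard_labels, num_classes,
                         T=2.0, lam=0.5):
    # Average the smoothed posteriors of the teacher ensemble, then
    # interpolate with one-hot targets taken from the forced alignment.
    soft = np.mean([softmax_with_temperature(l, T) for l in teacher_logits_list],
                   axis=0)
    hard = np.eye(num_classes)[hard_labels]  # one-hot alignment labels
    return lam * hard + (1.0 - lam) * soft   # rows still sum to 1
```

The student network would then be trained against these interpolated targets (e.g. with a cross-entropy loss), with `lam = 1` reducing to training purely on the alignment and `lam = 0` to training purely on the ensemble posteriors.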


Keywords: knowledge transfer, smoothing coefficient, softmax, automatic speech recognition, ensemble of neural networks, student network, conversational telephone speech

Acknowledgements. This research was supported by the Ministry of Education and Science of the Russian Federation, contract No. 8.9971.2017/DAAD.

References
 
  1. Medennikov I., Prudnikov A. Advances in STC Russian spontaneous speech recognition system. Lecture Notes in Computer Science, 2016, vol. 9811, pp. 116–123. doi: 10.1007/978-3-319-43958-7_13
  2. Siohan O., Rybach D. Multitask learning and system combination for automatic speech recognition. Proc. IEEE Workshop on Automatic Speech Recognition and Understanding. Scottsdale, USA, 2015, pp. 589–595. doi: 10.1109/ASRU.2015.7404849
  3. Hartmann W., Zhang L., Barnes K. et al. Comparison of multiple system combination techniques for keyword spotting. Proc. INTERSPEECH. San Francisco, USA, 2016, pp. 1913–1917. doi: 10.21437/Interspeech.2016-1381
  4. Hinton G., Vinyals O., Dean J. Distilling the knowledge in a neural network. Proc. NIPS 2014 Deep Learning Workshop. Montreal, Canada, 2014. arXiv: 1503.02531.
  5. Dietterich T.G. Ensemble methods in machine learning. Proc. Int. Workshop on Multiple Classifier Systems. Cagliari, Italy, 2000, pp. 1–15. doi: 10.1007/3-540-45014-9_1
  6. Saon G., Kurata G., Sercu T. et al. English conversational telephone speech recognition by humans and machines. Proc. INTERSPEECH. Stockholm, Sweden, 2017, pp. 132–136. doi: 10.21437/Interspeech.2017-405
  7. Han K.J., Hahm S., Kim B.-H. et al. Deep learning-based telephony speech recognition in the wild. Proc. INTERSPEECH. Stockholm, Sweden, 2017, pp. 1323–1327. doi: 10.21437/Interspeech.2017-1695
  8. Xiong W., Wu L., Alleva F. et al. The Microsoft 2017 conversational speech recognition system. Technical Report MSR-TR-2017-39, 2017. arXiv:1708.06073
  9. Zolnay A., Schluter R., Ney H. Acoustic feature combination for robust speech recognition. Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing. Philadelphia, USA, 2005, pp. I457–I460. doi: 10.1109/ICASSP.2005.1415149
  10. Khokhlov Y., Medennikov I., Romanenko A. et al. The STC keyword search system for OpenKWS 2016 evaluation. Proc. INTERSPEECH. Stockholm, Sweden, 2017, pp. 3602–3606. doi: 10.21437/Interspeech.2017-1212
  11. Tomashenko N.A., Khokhlov Yu.Yu., Larcher A., Estève Ya., Matveev Yu. N. Gaussian mixture models for adaptation of deep neural network acoustic models in automatic speech recognition systems. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2016, vol. 16, no. 6, pp. 1063–1072. (In Russian) doi: 10.17586/2226-1494-2016-16-6-1063-1072
  12. Narang S., Elsen E., Diamos G., Sengupta S. Exploring sparsity in recurrent neural networks. Proc. International Conference on Learning Representations, ICLR. Toulon, France, 2017. arXiv:1704.05119
  13. Bucilua C., Caruana R., Niculescu-Mizil A. Model compression. Proc. 12th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining. NY, 2006, pp. 535–541. doi: 10.1145/1150402.1150464
  14. Povey D., Ghoshal A. et al. The Kaldi speech recognition toolkit. Proc. IEEE Workshop on Automatic Speech Recognition and Understanding, ASRU. Waikoloa, Hawaii, USA, 2011.
  15. Medennikov I.P. Methods, Algorithms and Software for Recognition of Russian Spontaneous Phone Speech. Dis. PhD Eng. Sci. St. Petersburg, Russia, 200 p.
  16. Povey D., Peddinti V., Galvez D. et al. Purely sequence-trained neural networks for ASR based on lattice-free MMI. Proc. INTERSPEECH. San Francisco, USA, 2016, pp. 2751–2755. doi: 10.21437/Interspeech.2016-595
  17. Ravindran S., Demirogulu C., Anderson D.V. Speech recognition using filter-bank features. Proc. 37th Conference on Signals, Systems and Computers. Pacific Grove, USA, 2003, vol. 2, pp. 1900–1903. doi: 10.1109/ACSSC.2003.1292312
  18. Hui Y., Hohmann V., Nadeu C. Acoustic features for speech recognition based on Gammatone filterbank and instantaneous frequency. Speech Communication, 2011, vol. 53, no. 5, pp. 707–715. doi: 10.1016/j.specom.2010.04.008
  19. Hermansky H. Perceptual linear predictive (PLP) analysis of speech. Journal of the Acoustical Society of America, 1990, vol. 87, no. 4, pp. 1738–1752. doi: 10.1121/1.399423
  20. Ghahremani P., BabaAli B., Povey D. et al. A pitch extraction algorithm tuned for automatic speech recognition. Proc. Int. Conf. on Acoustics, Speech and Signal Processing. Florence, Italy, 2014, pp. 2494–2498. doi: 10.1109/ICASSP.2014.6854049
  21. Dehak N., Kenny P., Dehak R. et al. Front-end factor analysis for speaker verification. IEEE Transactions on Audio, Speech and Language Processing, 2011, vol. 19, no. 4, pp. 788–798. doi: 10.1109/TASL.2010.2064307
  22. Medennikov I.P. Speaker-dependent features for spontaneous speech recognition. Scientific and Technical Journal of Information Technologies, Mechanics and Optics, 2016, vol. 16, no. 1, pp. 195–197. (In Russian) doi: 10.17586/2226-1494-2016-16-1-195-197
  23. Ko T., Peddinti V., Povey D., Khudanpur S. Audio augmentation for speech recognition. Proc. INTERSPEECH. Dresden, Germany, 2015, pp. 3586–3589.
  24. Goel V., Byrne W. Minimum Bayes-risk automatic speech recognition. Computer Speech and Language, 2000, vol. 14, no. 2, pp. 115–135. doi: 10.1006/csla.2000.0138
  25. Peddinti V., Povey D., Khudanpur S. A time delay neural network architecture for efficient modeling of long temporal contexts. Proc. INTERSPEECH. Dresden, Germany, 2015, pp. 3214–3218.



This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License
