
    Discriminative Tandem Features for HMM-based EEG Classification

    Abstract—We investigate the use of discriminative feature extractors in tandem configuration with a generative EEG classification system. Existing studies on dynamic EEG classification typically use hidden Markov models (HMMs), which lack discriminative capability. In this paper, a linear and a non-linear classifier are discriminatively trained to produce complementary input features for the conventional HMM system. Two sets of tandem features are derived from the linear discriminant analysis (LDA) projection output and the multilayer perceptron (MLP) class-posterior probabilities before being appended to the standard autoregressive (AR) features. Evaluation on a two-class motor-imagery classification task shows that both proposed tandem features yield consistent gains over the AR baseline, with significant relative improvements of 6.2% and 11.2% for the LDA and MLP features, respectively. We also explore the portability of these features across different subjects.
    Index Terms—Artificial neural network-hidden Markov models, EEG classification, brain-computer interface (BCI)
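    As an illustration of the tandem construction described above, the following minimal sketch augments frame-level AR features with an LDA projection and MLP class posteriors and models them with one Gaussian HMM per class. The scikit-learn/hmmlearn stack, the dimensions, and the random placeholder data are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (not the authors' code): tandem features for HMM-based
# EEG classification. AR frame features are augmented with an LDA projection
# and with MLP class posteriors, then modeled by one Gaussian HMM per class.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from hmmlearn.hmm import GaussianHMM  # assumed HMM backend

rng = np.random.default_rng(0)
n_frames, ar_dim = 2000, 6                       # placeholder AR feature frames
X_ar = rng.normal(size=(n_frames, ar_dim))
y = rng.integers(0, 2, size=n_frames)            # two-class motor-imagery labels

lda = LinearDiscriminantAnalysis(n_components=1).fit(X_ar, y)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X_ar, y)

# Tandem features: AR baseline appended with discriminative outputs.
X_lda_tandem = np.hstack([X_ar, lda.transform(X_ar)])
X_mlp_tandem = np.hstack([X_ar, np.log(mlp.predict_proba(X_ar) + 1e-6)])

# One Gaussian HMM per class on the tandem features; classify by log-likelihood.
models = {c: GaussianHMM(n_components=3).fit(X_mlp_tandem[y == c]) for c in (0, 1)}
test_seq = X_mlp_tandem[:100]
predicted_class = max(models, key=lambda c: models[c].score(test_seq))
```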

    Improving large vocabulary continuous speech recognition by combining GMM-based and reservoir-based acoustic modeling

    In earlier work we have shown that good phoneme recognition is possible with a so-called reservoir, a special type of recurrent neural network. In this paper, different architectures based on Reservoir Computing (RC) for large vocabulary continuous speech recognition are investigated. Besides experiments with HMM hybrids, it is shown that an RC-HMM tandem can achieve the same recognition accuracy as a classical HMM, which is a promising result for such a fairly new paradigm. It is also demonstrated that a state-level combination of the scores of the tandem and the baseline HMM leads to a significant improvement over the baseline: a relative word error rate reduction of the order of 20% is possible.
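    The state-level score combination mentioned above can be pictured as a weighted sum of per-state log scores from the two systems before Viterbi decoding. The sketch below, with an echo-state-style reservoir, placeholder GMM scores, and an illustrative interpolation weight, is an assumption-laden outline rather than the paper's actual setup.

```python
# Hypothetical sketch of state-level score combination between a GMM-based
# system and a reservoir-based (echo state network) system: per-frame,
# per-state log scores are interpolated before Viterbi decoding.
import numpy as np

rng = np.random.default_rng(0)
n_frames, feat_dim, n_reservoir, n_states = 200, 39, 300, 10

X = rng.normal(size=(n_frames, feat_dim))            # placeholder acoustic frames

# Echo state network: fixed random input and recurrent weights, leaky updates.
W_in = rng.normal(scale=0.1, size=(n_reservoir, feat_dim))
W_res = rng.normal(size=(n_reservoir, n_reservoir))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))      # spectral radius < 1
W_out = rng.normal(scale=0.1, size=(n_states, n_reservoir))  # readout (placeholder)

h = np.zeros(n_reservoir)
reservoir_scores = np.empty((n_frames, n_states))
for t in range(n_frames):
    h = 0.7 * h + 0.3 * np.tanh(W_in @ X[t] + W_res @ h)
    logits = W_out @ h
    reservoir_scores[t] = logits - np.log(np.exp(logits).sum())  # log posteriors

gmm_scores = rng.normal(size=(n_frames, n_states))   # placeholder GMM log-likelihoods

# State-level combination: weighted sum of the two log-score streams.
alpha = 0.5                                          # illustrative weight
combined = alpha * gmm_scores + (1.0 - alpha) * reservoir_scores
# `combined` would then replace the acoustic scores inside Viterbi decoding.
```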

    Combining Multiple Views for Visual Speech Recognition

    Visual speech recognition is a challenging research problem with a particular practical application of aiding audio speech recognition in noisy scenarios. Multiple camera setups can be beneficial for visual speech recognition systems in terms of improved performance and robustness. In this paper, we explore this aspect and provide a comprehensive study on combining multiple views for visual speech recognition. The thorough analysis covers fusion of all possible view-angle combinations both at the feature level and the decision level. The visual speech recognition system employed in this study extracts features through a PCA-based convolutional neural network, followed by an LSTM network. Finally, these features are processed in a tandem system, being fed into a GMM-HMM scheme. The decision fusion acts after this point by combining the Viterbi path log-likelihoods. The results show that the complementary information contained in recordings from different view angles improves the results significantly. For example, the sentence correctness on the test set is increased from 76% for the highest-performing single view (30°) to up to 83% when combining this view with the frontal and 60° view angles.
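    The decision-level fusion described here can be illustrated by combining the per-view Viterbi path log-likelihoods of each sentence hypothesis with a weighted sum. The view names, weights, and score values below are placeholders, not numbers from the paper.

```python
# Hypothetical sketch of decision-level fusion of Viterbi path log-likelihoods
# from several camera views; the best hypothesis is the one with the highest
# weighted combined score.
import numpy as np

hypotheses = ["bin blue at f two now", "bin blue at f two soon"]

# Per-view Viterbi log-likelihoods for each hypothesis (illustrative numbers).
view_scores = {
    "frontal": np.array([-412.3, -415.8]),
    "30deg":   np.array([-420.1, -419.7]),
    "60deg":   np.array([-431.0, -436.2]),
}
view_weights = {"frontal": 0.4, "30deg": 0.4, "60deg": 0.2}  # assumed weights

fused = sum(w * view_scores[v] for v, w in view_weights.items())
best_hypothesis = hypotheses[int(np.argmax(fused))]
print(best_hypothesis)
```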

    Using multiple visual tandem streams in audio-visual speech recognition

    The method called the "tandem approach" in speech recognition has been shown to increase performance by using classifier posterior probabilities as observations in a hidden Markov model. We study the effect of using visual tandem features in audio-visual speech recognition with a novel setup that uses multiple classifiers to obtain multiple visual tandem features. We adopt the approach of multi-stream hidden Markov models, where visual tandem features from two different classifiers are considered as additional streams in the model. Our experiments show that using multiple visual tandem features improves the recognition accuracy in various noise conditions. In addition, in order to handle asynchrony between audio and visual observations, we employ coupled hidden Markov models and obtain improved performance compared to the synchronous model.
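    A multi-stream HMM of the kind used here scores an observation as a weighted combination of per-stream log-likelihoods, with stream exponents that can be tuned to the noise condition. The sketch below uses Gaussian stream models and illustrative weights as assumptions, not the paper's configuration.

```python
# Hypothetical sketch of a multi-stream HMM state emission: the state
# log-likelihood is a weighted sum of the audio stream and the two visual
# tandem streams, each modeled here by a Gaussian for illustration.
import numpy as np
from scipy.stats import multivariate_normal

def stream_loglik(x, mean, cov):
    return multivariate_normal.logpdf(x, mean=mean, cov=cov)

def multistream_state_loglik(obs, state_params, weights):
    """obs/state_params are dicts keyed by stream name; weights are stream exponents."""
    return sum(weights[s] * stream_loglik(obs[s], *state_params[s]) for s in obs)

rng = np.random.default_rng(0)
obs = {
    "audio":   rng.normal(size=39),   # e.g. an MFCC frame
    "visual1": rng.normal(size=10),   # tandem posteriors from classifier 1
    "visual2": rng.normal(size=10),   # tandem posteriors from classifier 2
}
state_params = {s: (np.zeros(v.shape[0]), np.eye(v.shape[0])) for s, v in obs.items()}
weights = {"audio": 0.6, "visual1": 0.2, "visual2": 0.2}  # tuned per noise level in practice

print(multistream_state_loglik(obs, state_params, weights))
```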

    Advances in All-Neural Speech Recognition

    This paper advances the design of CTC-based all-neural (or end-to-end) speech recognizers. We propose a novel symbol inventory, and a novel iterated-CTC method in which a second system is used to transform a noisy initial output into a cleaner version. We present a number of stabilization and initialization methods we have found useful in training these networks. We evaluate our system on the commonly used NIST 2000 conversational telephony test set, and significantly exceed the previously published performance of similar systems, both with and without the use of an external language model and decoding technology.
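    A bare-bones sketch of the CTC training these systems build on is given below: a bidirectional LSTM emits per-frame symbol posteriors trained with the CTC loss. The symbol inventory size, model shape, and data are placeholders, and the paper's iterated-CTC refinement is not reproduced here.

```python
# Hypothetical sketch of a CTC-trained acoustic model (not the paper's system).
import torch
import torch.nn as nn

vocab_size, feat_dim, hidden = 50, 40, 128    # assumed symbol inventory and features

class CTCModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, vocab_size + 1)  # +1 for the CTC blank

    def forward(self, x):
        h, _ = self.rnn(x)
        return self.out(h).log_softmax(dim=-1)

model = CTCModel()
ctc = nn.CTCLoss(blank=vocab_size)            # blank is the last output index
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 200, feat_dim)             # batch of 4 utterances, 200 frames each
targets = torch.randint(0, vocab_size, (4, 30))
input_lens = torch.full((4,), 200, dtype=torch.long)
target_lens = torch.full((4,), 30, dtype=torch.long)

log_probs = model(x).transpose(0, 1)          # CTCLoss expects (T, N, C)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()
opt.step()
```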

    Handwriting recognition by using deep learning to extract meaningful features

    Recent improvements in deep learning techniques show that deep models can extract more meaningful data directly from raw signals than conventional parametrization techniques, making it possible to avoid specific feature extraction in the area of pattern recognition, especially for Computer Vision or Speech tasks. In this work, we directly use raw text line images by feeding them to Convolutional Neural Networks and deep Multilayer Perceptrons for feature extraction in a Handwriting Recognition system. The proposed recognition system, based on Hidden Markov Models hybridized with Neural Networks, has been tested with the IAM Database, achieving a considerable improvement. Work partially supported by the Spanish MINECO and FEDER funds under project TIN2017-85854-C4-2-R.
    Pastor-Pellicer, J., Castro-Bleda, M. J., España-Boquera, S., & Zamora-Martínez, F. (2019). Handwriting recognition by using deep learning to extract meaningful features. AI Communications, 32(2), 101-112. https://doi.org/10.3233/AIC-170562
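    One way to picture the feature extraction described in this paper: a normalized text line image is cut into overlapping column windows, each window is passed through a small CNN, and the resulting per-frame outputs feed a hybrid HMM/ANN recognizer. The window geometry, network shape, and alphabet size below are assumptions for illustration, not the authors' configuration.

```python
# Hypothetical sketch of CNN-based frame features for hybrid HMM/ANN
# handwriting recognition: overlapping column windows of a text line image
# are mapped to per-frame character log-posteriors.
import torch
import torch.nn as nn

n_chars, line_h, win_w = 79, 64, 20            # assumed alphabet and frame geometry

frame_cnn = nn.Sequential(                     # one frame (1 x line_h x win_w) in
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * (line_h // 4) * (win_w // 4), n_chars),
)

line = torch.rand(1, line_h, 600)              # placeholder normalized text line
step = 10
frames = torch.stack([line[:, :, i:i + win_w]  # overlapping column windows
                      for i in range(0, line.shape[-1] - win_w + 1, step)])

# Per-frame character log-posteriors; divided by priors, these would act as the
# scaled likelihoods used by a hybrid HMM/ANN decoder.
log_post = frame_cnn(frames).log_softmax(dim=-1)
print(log_post.shape)                          # (n_frames, n_chars)
```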