
    Spoken command recognition for robotics

    In this thesis, I investigate spoken command recognition technology for robotics. While high robustness is expected, the distant and noisy conditions in which the system has to operate make the task very challenging. Unlike commercial systems, which all rely on a "wake-up" word to initiate the interaction, the pipeline proposed here directly detects and recognizes commands from the continuous audio stream. To keep the task manageable despite low-resource conditions, I propose to focus on a limited set of commands, trading off the flexibility of the system against robustness.

    Domain and speaker adaptation strategies based on a multi-task regularization paradigm are explored first. More precisely, two methods are proposed which rely on a tied loss function that penalizes the distance between the outputs of several networks. The first method considers each speaker or domain as a task. A canonical task-independent network is jointly trained with task-dependent models, allowing both types of network to improve by learning from one another. While a 3.2% improvement in the frame error rate (FER) of the task-independent network is obtained, this only partially carries over to the phone error rate (PER), with a 1.5% improvement. Similarly, a second method explores the parallel training of the canonical network with a privileged model having access to i-vectors. This method proves less effective, with only a 1.2% improvement in FER.

    To make the developed technology more accessible, I also investigate the use of a sequence-to-sequence (S2S) architecture for command classification. An attention-based encoder-decoder model reduces the classification error by 40% relative to a strong convolutional neural network (CNN)-hidden Markov model (HMM) baseline, showing the relevance of S2S architectures in this context. To improve the flexibility of the trained system, I also explore strategies for few-shot learning, which make it possible to extend the set of commands with minimal data requirements. By retraining a model on the combination of original and new commands, I achieve 40.5% accuracy on the new commands with only 10 examples of each; this score rises to 81.5% with a larger set of 100 examples per new command. An alternative strategy, based on model adaptation, achieves even better scores, 68.8% and 88.4% accuracy with 10 and 100 examples respectively, while being faster to train. This high performance comes at the expense of the original categories, though, on which accuracy deteriorates. These results are very promising, as the methods make it easy to extend an existing S2S model with minimal resources.

    Finally, a full spoken command recognition system (named iCubrec) has been developed for the iCub platform. The pipeline relies on a voice activity detection (VAD) system to offer a fully hands-free experience. By segmenting only the regions that are likely to contain commands, the VAD module also greatly reduces the computational cost of the pipeline. Command candidates are then passed to the deep neural network (DNN)-HMM command recognition system for transcription. The VoCub dataset has been specifically gathered to train a DNN-based acoustic model for our task. Through multi-condition training with the CHiME4 dataset, an accuracy of 94.5% is reached on the VoCub test set. A filler model, complemented by a rejection mechanism based on a confidence score, is finally added to reject non-command speech in a live demonstration of the system.
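
    The tied-loss idea above can be made concrete with a short sketch. The snippet below is a minimal PyTorch illustration, not the thesis's actual implementation: the network sizes, the use of MSE as the output-distance penalty, and the weight `alpha` are all assumptions.

```python
# Minimal sketch of multi-task regularization with a tied loss: a canonical
# task-independent network is trained jointly with per-speaker (or per-domain)
# networks, with an extra term penalizing the distance between their outputs.
# Sizes, the MSE distance, and `alpha` are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def make_net(n_feats=40, n_hidden=256, n_phones=48):
    # Simple feed-forward acoustic model producing per-frame phone logits.
    return nn.Sequential(
        nn.Linear(n_feats, n_hidden), nn.ReLU(),
        nn.Linear(n_hidden, n_phones),
    )

canonical = make_net()
per_task = {"speaker_a": make_net(), "speaker_b": make_net()}  # hypothetical tasks
params = list(canonical.parameters()) + [
    p for m in per_task.values() for p in m.parameters()
]
opt = torch.optim.Adam(params, lr=1e-3)
alpha = 0.1  # weight of the tied (output-distance) term; assumed value

def training_step(task, feats, targets):
    """One joint update on a minibatch belonging to `task`."""
    logits_c = canonical(feats)
    logits_t = per_task[task](feats)
    # Both networks are supervised on the frame labels...
    ce = F.cross_entropy(logits_c, targets) + F.cross_entropy(logits_t, targets)
    # ...and tied together by penalizing the distance between their outputs,
    # so each can improve by learning from the other.
    tie = F.mse_loss(logits_t, logits_c)
    loss = ce + alpha * tie
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with dummy data: a batch of 8 frames with 40-dim features.
feats = torch.randn(8, 40)
targets = torch.randint(0, 48, (8,))
training_step("speaker_a", feats, targets)
```

    Under this scheme, each task-dependent network would presumably only ever see minibatches from its own speaker or domain, while the canonical network receives gradients from every task.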

    Sequence to sequence learning and its speech applications

    Recurrent neural networks (RNNs), which have attractive properties for modelling sequences, have been dominant in the speech field in recent decades. Convolutional neural networks (CNNs) have been shown to be an alternative for modelling sequences, thanks to their capacity to reduce spectral variations and model spectral correlations in acoustic features for automatic speech recognition (ASR). Recent work suggests that complex numbers could be used as a richer feature representation than the spectrum, which may benefit speech-related tasks. In this thesis, we first cover the basic concepts of machine learning and the building blocks of deep learning, and discuss popular methods capable of sequence-to-sequence modelling, especially convolutional neural networks, a well-known class of feed-forward networks. We then present two pieces of research related to sequence-to-sequence modelling of speech. First, we introduce a new approach to speech recognition with convolutional neural networks that shows results comparable to its recurrent neural network counterpart. Second, we present a new model that takes advantage of representations in the complex domain, defining complex convolutions, complex batch normalization, and complex weight initialization strategies. The new model achieves state-of-the-art results for speech spectrum prediction in a convolutional recurrent setting.
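
    The complex convolution mentioned above can be sketched in a few lines. The construction below follows the standard deep-complex-networks recipe of holding the real and imaginary parts of the kernel in two real-valued convolutions; the 1-D setting and layer sizes are illustrative assumptions rather than the thesis's exact model.

```python
# Complex convolution via two real convolutions: with input x = a + ib and
# kernel W = A + iB, the convolution is (A*a - B*b) + i(B*a + A*b).
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        # Two real-valued convolutions hold the real (A) and imaginary (B)
        # parts of the complex kernel.
        self.conv_re = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)
        self.conv_im = nn.Conv1d(in_ch, out_ch, kernel_size, **kw)

    def forward(self, x_re, x_im):
        # (A + iB) * (a + ib) = (A*a - B*b) + i(B*a + A*b)
        y_re = self.conv_re(x_re) - self.conv_im(x_im)
        y_im = self.conv_im(x_re) + self.conv_re(x_im)
        return y_re, y_im

# Usage on a dummy complex spectrum: batch of 4, 2 channels, 100 frames.
conv = ComplexConv1d(2, 8, kernel_size=3, padding=1)
re, im = conv(torch.randn(4, 2, 100), torch.randn(4, 2, 100))
print(re.shape, im.shape)  # torch.Size([4, 8, 100]) twice
```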

    Automatic Identification of Addresses: A Systematic Literature Review

    Cruz, P., Vanneschi, L., Painho, M., & Rita, P. (2022). Automatic Identification of Addresses: A Systematic Literature Review. ISPRS International Journal of Geo-Information, 11(1), 1-27. https://doi.org/10.3390/ijgi11010011

    The work by Leonardo Vanneschi, Marco Painho and Paulo Rita was supported by Fundação para a Ciência e a Tecnologia (FCT) within the project UIDB/04152/2020—Centro de Investigação em Gestão de Informação (MagIC). The work by Prof. Leonardo Vanneschi was also partially supported by FCT, Portugal, through funding of project AICE (DSAIPA/DS/0113/2019).

    Address matching continues to play a central role at various levels, through geocoding and the integration of data from different sources, with a view to supporting activities such as urban planning, location-based services, and the construction of databases like those used in census operations. However, address matching still faces several challenges, such as non-standard or incomplete address records, or addresses written in more complex languages. To better understand how current limitations can be overcome, this paper presents a systematic literature review focused on automated approaches to address matching and their evolution over time. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed, resulting in a final set of 41 papers published between 2002 and 2021, the great majority after 2017, with Chinese authors leading the way. The main findings reveal a consistent move from more traditional approaches to deep learning methods based on semantics, encoder-decoder architectures, and attention mechanisms, as well as the very recent adoption of hybrid approaches making increased use of spatial constraints and entities. The adoption of evolutionary-based approaches and of privacy-preserving methods stand out as some of the research gaps to be addressed in future studies.
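
    For readers unfamiliar with the task, a minimal sketch of the "traditional" string-similarity end of the spectrum surveyed above may help. The character-trigram representation and the 0.6 threshold are illustrative assumptions; the deep methods the review highlights replace this hand-crafted representation with learned semantic embeddings.

```python
# Traditional address matching baseline: compare two address strings by the
# cosine similarity of their character-trigram profiles.
from collections import Counter
from math import sqrt

def trigrams(address: str) -> Counter:
    s = " ".join(address.lower().split())  # normalize case and whitespace
    return Counter(s[i:i + 3] for i in range(len(s) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def match(addr1: str, addr2: str, threshold: float = 0.6) -> bool:
    return cosine(trigrams(addr1), trigrams(addr2)) >= threshold

# A non-standard record still matches its canonical form, since small edits
# leave most trigrams intact.
print(match("221B Baker St, London", "221b baker street london"))  # True
```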

    Deep Spoken Keyword Spotting: An Overview

    Spoken keyword spotting (KWS) deals with the identification of keywords in audio streams and has become a fast-growing technology thanks to the paradigm shift introduced by deep learning a few years ago. This has allowed the rapid embedding of deep KWS in a myriad of small electronic devices serving different purposes, such as the activation of voice assistants. Prospects suggest sustained growth in the social use of this technology. It is therefore not surprising that deep KWS has become a hot research topic among speech scientists, who constantly look for ways to improve KWS performance and reduce computational complexity. This context motivates this paper, in which we conduct a literature review of deep spoken KWS to assist practitioners and researchers interested in this technology. Specifically, the overview is comprehensive, covering a thorough analysis of deep KWS systems (including speech features, acoustic modeling, and posterior handling), robustness methods, applications, datasets, evaluation metrics, the performance of deep KWS systems, and audio-visual KWS. The analysis performed in this paper allows us to identify a number of directions for future research, including directions adopted from automatic speech recognition research and directions that are unique to the problem of spoken KWS.
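
    As a minimal illustration of the posterior-handling stage mentioned above, the sketch below smooths per-frame keyword posteriors with a moving average and fires a detection when the smoothed score crosses a threshold. The window length and threshold are illustrative assumptions, not values from the survey.

```python
# Posterior handling for deep KWS: smooth the acoustic model's per-frame
# keyword posteriors over a sliding window, then threshold the result.
import numpy as np

def detect_keyword(posteriors: np.ndarray, window: int = 30,
                   threshold: float = 0.8) -> list[int]:
    """Return frame indices where the smoothed keyword posterior fires.

    posteriors: shape (T,), per-frame posterior of the keyword class.
    """
    hits = []
    for t in range(len(posteriors)):
        start = max(0, t - window + 1)
        smoothed = posteriors[start:t + 1].mean()  # moving-average smoothing
        if smoothed >= threshold:
            hits.append(t)
    return hits

# Usage on a synthetic posterior stream with a burst around frames 50-80.
stream = np.zeros(120)
stream[50:80] = 0.95
print(detect_keyword(stream))  # fires once the window fills with high scores
```

    A deployed system would typically also suppress repeated firings with a lockout period after each detection; that bookkeeping is omitted here for brevity.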