
    Essential Speech and Language Technology for Dutch: Results by the STEVIN-programme

    Computational Linguistics; Germanic Languages; Artificial Intelligence (incl. Robotics); Computing Methodologies

    Towards Cognizant Hearing Aids: Modeling of Content, Affect and Attention


    Robust speech recognition with spectrogram factorisation

    Communication by speech is intrinsic to humans. Since the breakthrough of mobile devices and wireless communication, digital transmission of speech has become ubiquitous, and the distribution and storage of audio and video data have likewise grown rapidly. However, although digital systems are technically capable of recording and processing audio signals, only a fraction of systems and services can actually work with spoken input, that is, operate on the lexical content of speech. One persistent obstacle to the practical deployment of automatic speech recognition systems is inadequate robustness against noise and other interferences, which regularly corrupt signals recorded in real-world environments. Speech and diverse noises are both complex signals that are not trivially separable, and despite decades of research and a multitude of different approaches, the problem has not been solved to a sufficient extent. In particular, the mathematically ill-posed problem of separating multiple sources from a single-channel input requires advanced models and algorithms. One promising path is a composite model of long-context atoms that represents a mixture of non-stationary sources through their spectro-temporal behaviour. Algorithms derived from the family of non-negative matrix factorisations have been applied to such problems to separate and recognise individual sources such as speech. This thesis describes a set of tools developed for non-negative modelling of audio spectrograms, especially involving speech and real-world noise sources. An overview of the complete framework is provided, starting from model and feature definitions, advancing to factorisation algorithms, and finally describing different routes for separation, enhancement, and recognition tasks. Current issues and their potential solutions are discussed both theoretically and from a practical point of view.
    The included publications describe factorisation-based recognition systems that have been evaluated on publicly available speech corpora in order to determine the efficiency of various separation and recognition algorithms. Several variants and system combinations proposed in the literature are also discussed. The work covers a broad span of factorisation-based system components, which together aim to provide a practically viable solution to robust processing and recognition of speech in everyday situations.
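The factorisation at the core of such a framework can be sketched in a few lines. The following is a minimal illustration (not the thesis's actual toolset): non-negative matrix factorisation of a magnitude spectrogram V ≈ WH with the standard multiplicative updates for the KL divergence; the function name `nmf_kl` and the toy data are assumptions for illustration.

```python
import numpy as np

def nmf_kl(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Factorise a non-negative spectrogram V (freq x time) into
    spectral atoms W (freq x rank) and activations H (rank x time)
    using multiplicative updates for the KL divergence."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, T)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H

# toy example: factorise a random non-negative "spectrogram"
V = np.abs(np.random.default_rng(1).normal(size=(64, 100)))
W, H = nmf_kl(V, rank=2)
```

In a separation setting, W would hold pre-trained speech and noise atoms and only H would be updated, so that the activations attribute each spectro-temporal observation to its likely source.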

    Feature enhancement of reverberant speech by distribution matching and non-negative matrix factorization

    This paper describes a novel two-stage dereverberation feature enhancement method for noise-robust automatic speech recognition. In the first stage, an estimate of the dereverberated speech is generated by matching the distribution of the observed reverberant speech to that of clean speech, in a decorrelated transformation domain with a long temporal context in order to address the effects of reverberation. The second stage uses this dereverberated signal as an initial estimate within a non-negative matrix factorization framework, which jointly estimates a sparse representation of the clean speech signal and an estimate of the convolutional distortion. The proposed feature enhancement method, when used in conjunction with automatic speech recognizer back-end processing, is shown to improve recognition performance compared to three other state-of-the-art techniques.
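The first stage, distribution matching, is commonly realised as per-dimension empirical CDF (histogram) matching. The sketch below illustrates that general idea only; it is an assumption for illustration, not the paper's exact transformation domain or features, and the name `match_distribution` and the toy data are hypothetical.

```python
import numpy as np

def match_distribution(reverb, clean_ref):
    """Map each feature dimension of `reverb` (dims x frames) onto the
    empirical distribution of `clean_ref` via CDF matching: every value
    is replaced by the clean-reference quantile at its within-dimension rank."""
    out = np.empty_like(reverb, dtype=float)
    for d in range(reverb.shape[0]):
        ranks = np.argsort(np.argsort(reverb[d]))       # rank of each frame (0..T-1)
        quantiles = (ranks + 0.5) / reverb.shape[1]     # mid-rank quantile levels
        out[d] = np.quantile(clean_ref[d], quantiles)   # clean values at those levels
    return out

# toy stand-in: "reverberant" features are a smeared, shifted copy of clean ones
rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(3, 1000))
reverb = 0.5 * clean + rng.normal(2.0, 0.3, size=(3, 1000))
matched = match_distribution(reverb, clean)
```

The mapping is monotone per dimension, so frame ordering is preserved while the marginal statistics are pulled back towards those of clean speech.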

    Recognition of the Numbers in the Polish Language, Journal of Telecommunications and Information Technology, 2013, no. 4

    Automatic Speech Recognition is one of the most active research and application areas in today's ICT. Rapid progress in intelligent mobile systems calls for new services in which users communicate with devices through audio commands. Such systems must additionally be integrated with highly distributed infrastructures such as computational and mobile clouds, Wireless Sensor Networks (WSNs), and many others. This paper presents recent research results on the recognition of isolated words and of words in short contexts (limited to numbers) articulated in the Polish language. Compressed Sensing Theory (CST) is applied for the first time as a methodology for speech recognition. The effectiveness of the proposed methodology is demonstrated in numerical tests on both isolated words and short sentences.
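Compressed-sensing methods hinge on sparse recovery: expressing a test vector as a sparse combination of dictionary atoms (e.g., class exemplars). As a hedged illustration of that recovery step only, and not of the paper's actual pipeline, a minimal Orthogonal Matching Pursuit can be written as follows (the function name `omp` and the toy dictionary are assumptions):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily pick the atom of dictionary
    D (features x atoms) most correlated with the residual, then re-fit
    the coefficients by least squares on the selected support."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        corr = np.abs(D.T @ residual)
        corr[support] = 0.0            # never pick the same atom twice
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# toy example: exact recovery of a 2-sparse signal over an orthonormal dictionary
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(12, 12)))
x0 = np.zeros(12)
x0[3], x0[7] = 2.0, -1.0
x_hat = omp(D, D @ x0, n_nonzero=2)
```

In an exemplar-based recogniser, the recovered sparse coefficients would be pooled per word class and the class with the smallest reconstruction residual selected.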

    Multi-view machine learning methods to uncover brain-behaviour associations

    The heterogeneity of neurological and mental disorders has been a key confound in disease understanding and treatment outcome prediction, as the study of patient populations typically includes multiple subgroups that do not align with the diagnostic categories. The aim of this thesis is to investigate and extend classical multivariate methods, such as Canonical Correlation Analysis (CCA), and latent variable models, e.g., Group Factor Analysis (GFA), to uncover associations between brain and behaviour that may characterize patient populations and subgroups of patients. In the first contribution of this thesis, we applied CCA to investigate brain-behaviour associations in a sample of healthy and depressed adolescents and young adults. We found two positive-negative brain-behaviour modes of covariation, capturing externalisation/internalisation symptoms and well-being/distress. In the second contribution, I applied sparse CCA to the same dataset to present a regularised approach for investigating brain-behaviour associations in high-dimensional datasets. Here, I compared two approaches to optimising the regularisation parameters of sparse CCA and showed that the choice of optimisation strategy can have an impact on the results. In the third contribution, I extended the GFA model to mitigate some limitations of CCA, such as handling missing data. I applied the extended GFA model to investigate links between high-dimensional brain imaging and non-imaging data from the Human Connectome Project, and to predict non-imaging measures from brain functional connectivity. The results were consistent between complete and incomplete data, and replicated previously reported findings. In the final contribution of this thesis, I proposed two extensions of GFA to uncover brain-behaviour associations that characterize subgroups of subjects in an unsupervised and a supervised way, as well as to explore within-group variability at the individual level. These extensions were demonstrated using a dataset of patients with genetic frontotemporal dementia. In summary, this thesis presents multi-view methods that can be used to deepen our understanding of the latent dimensions of disease in mental/neurological disorders and potentially enable patient stratification.
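Classical CCA, the starting point of the thesis, reduces to an SVD of the whitened cross-covariance between the two views. The function below is an illustrative sketch of that textbook formulation, not the thesis code; the names, the small ridge term `reg` (added for numerical stability), and the toy data are assumptions.

```python
import numpy as np

def cca(X, Y, n_modes=2, reg=1e-3):
    """Canonical Correlation Analysis: find weights A, B maximising
    corr(X a_i, Y b_i), via the SVD of Cxx^{-1/2} Cxy Cyy^{-1/2}."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):  # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    A = inv_sqrt(Cxx) @ U[:, :n_modes]     # view-1 (e.g., brain) weights
    B = inv_sqrt(Cyy) @ Vt[:n_modes].T     # view-2 (e.g., behaviour) weights
    return A, B, s[:n_modes]               # s: canonical correlations

# toy data: two views driven by one shared latent variable
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ rng.normal(size=(1, 5)) + 0.5 * rng.normal(size=(500, 5))
Y = z @ rng.normal(size=(1, 4)) + 0.5 * rng.normal(size=(500, 4))
A, B, corrs = cca(X, Y)
```

With a genuine shared latent variable, the first canonical correlation is large while later modes capture only noise, which is the pattern the "positive-negative mode" analyses above exploit.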

    The second 'CHiME' Speech Separation and Recognition Challenge: Datasets, tasks and baselines

    Distant-microphone automatic speech recognition (ASR) remains a challenging goal in everyday environments involving multiple background sources and reverberation. This paper is intended to be a reference on the 2nd 'CHiME' Challenge, an initiative designed to analyze and evaluate the performance of ASR systems in a real-world domestic environment. Two separate tracks have been proposed: a small-vocabulary task with small speaker movements and a medium-vocabulary task without speaker movements. We discuss the rationale for the challenge and provide a detailed description of the datasets, tasks and baseline performance results for each track.

    The PASCAL CHiME Speech Separation and Recognition Challenge

    Distant-microphone speech recognition systems that operate with human-like robustness remain a distant goal. The key difficulty is that operating in everyday listening conditions entails processing a speech signal that is reverberantly mixed into a noise background composed of multiple competing sound sources. This paper describes a recent speech recognition evaluation that was designed to bring together researchers from multiple communities in order to foster novel approaches to this problem. The task was to identify keywords from sentences reverberantly mixed into audio backgrounds binaurally recorded in a busy domestic environment. The challenge was designed to model the essential difficulties of the multisource environment problem while remaining on a scale that would make it accessible to a wide audience. Compared to previous ASR evaluations, a particular novelty of the task is that the utterances to be recognised were provided in a continuous audio background rather than as pre-segmented utterances, thus allowing a range of background modelling techniques to be employed. The challenge attracted thirteen submissions. This paper describes the challenge problem, provides an overview of the systems that were entered, and compares them alongside both a baseline recognition system and human performance. The paper discusses insights gained from the challenge and lessons learnt for the design of future such evaluations.

    Design of reservoir computing systems for the recognition of noise corrupted speech and handwriting
