
    Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks

    Full text link
    In this paper we propose the utterance-level Permutation Invariant Training (uPIT) technique. uPIT is a practically applicable, end-to-end, deep-learning-based solution for speaker-independent multi-talker speech separation. Specifically, uPIT extends the recently proposed Permutation Invariant Training (PIT) technique with an utterance-level cost function, hence eliminating the need to solve an additional permutation problem during inference, which is otherwise required by frame-level PIT. We achieve this using Recurrent Neural Networks (RNNs) that, during training, minimize the utterance-level separation error, hence forcing separated frames belonging to the same speaker to be aligned to the same output stream. In practice, this allows RNNs trained with uPIT to separate multi-talker mixed speech without any prior knowledge of signal duration, number of speakers, speaker identity or gender. We evaluated uPIT on the WSJ0 and Danish two- and three-talker mixed-speech separation tasks and found that uPIT outperforms techniques based on Non-negative Matrix Factorization (NMF) and Computational Auditory Scene Analysis (CASA), and compares favorably with Deep Clustering (DPCL) and the Deep Attractor Network (DANet). Furthermore, we found that models trained with uPIT generalize well to unseen speakers and languages. Finally, we found that a single model, trained with uPIT, can handle both two-speaker and three-speaker speech mixtures.
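
    The utterance-level criterion is what separates uPIT from frame-level PIT: the output-to-speaker assignment is chosen once per utterance rather than per frame. Below is a minimal NumPy sketch of such a loss; the (S, T, F) array shapes and the plain spectral MSE are illustrative assumptions, not the paper's exact objective, which is computed on RNN-estimated masks.

        import itertools
        import numpy as np

        def upit_loss(estimates, targets):
            # estimates, targets: (S, T, F) arrays of S output streams /
            # speaker references over T frames and F frequency bins.
            # Each candidate assignment of outputs to speakers is scored
            # over the WHOLE utterance; the permutation with the lowest
            # error defines the loss, so the frames of one speaker are
            # pushed onto one output stream.
            S = estimates.shape[0]
            best = np.inf
            for perm in itertools.permutations(range(S)):
                err = np.mean([np.mean((estimates[i] - targets[p]) ** 2)
                               for i, p in enumerate(perm)])
                best = min(best, err)
            return best

        # toy usage: two speakers, 100 frames, 129 frequency bins
        rng = np.random.default_rng(0)
        est = rng.normal(size=(2, 100, 129))
        tgt = rng.normal(size=(2, 100, 129))
        print(upit_loss(est, tgt))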

    Single-Microphone Speech Enhancement and Separation Using Deep Learning

    Get PDF
    The cocktail party problem comprises the challenging task of understanding a speech signal in a complex acoustic environment, where multiple speakers and background noise signals simultaneously interfere with the speech signal of interest. A signal processing algorithm that can effectively increase the speech intelligibility and quality of speech signals in such complicated acoustic situations is highly desirable, especially for applications involving mobile communication devices and hearing assistive devices. Due to the re-emergence of machine learning techniques, today known as deep learning, the challenges involved with such algorithms might be overcome. In this PhD thesis, we study and develop deep learning-based techniques for two sub-disciplines of the cocktail party problem: single-microphone speech enhancement and single-microphone multi-talker speech separation. Specifically, we conduct an in-depth empirical analysis of the generalizability of modern deep learning-based single-microphone speech enhancement algorithms. We show that the performance of such algorithms is closely linked to the training data, and that good generalizability can be achieved with carefully designed training data. Furthermore, we propose uPIT, a deep learning-based algorithm for single-microphone speech separation, and we report state-of-the-art results on a speaker-independent multi-talker speech separation task. Additionally, we show that uPIT works well for joint speech separation and enhancement without explicit prior knowledge about the noise type or number of speakers. Finally, we show that deep learning-based speech enhancement algorithms designed to minimize the classical short-time spectral amplitude mean squared error lead to enhanced speech signals which are essentially optimal in terms of STOI, a state-of-the-art speech intelligibility estimator. Comment: PhD thesis, 233 pages.
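
    For reference, the short-time spectral amplitude MSE criterion named in the last finding can be stated in a few lines. This is a generic sketch, assuming 16 kHz audio and a 512-sample analysis window as placeholders; it is not the thesis' implementation.

        import numpy as np
        from scipy.signal import stft

        def stsa_mse(clean, enhanced, fs=16000, nperseg=512):
            # MSE between short-time spectral amplitudes: the magnitudes
            # of the two STFTs are compared and phase is ignored, which
            # is what makes the criterion "STSA".
            _, _, C = stft(clean, fs=fs, nperseg=nperseg)
            _, _, E = stft(enhanced, fs=fs, nperseg=nperseg)
            return np.mean((np.abs(C) - np.abs(E)) ** 2)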

    I hear you eat and speak: automatic recognition of eating condition and food type, use-cases, and impact on ASR performance

    Get PDF
    We propose a new recognition task in the area of computational paralinguistics: automatic recognition of eating conditions in speech, i.e., whether people are eating while speaking, and what they are eating. To this end, we introduce the audio-visual iHEARu-EAT database featuring 1.6k utterances of 30 subjects (mean age: 26.1 years, standard deviation: 2.66 years, gender balanced, German speakers), six types of food (Apple, Nectarine, Banana, Haribo Smurfs, Biscuit, and Crisps), and read as well as spontaneous speech, which is made publicly available for research purposes. We start by demonstrating that for automatic speech recognition (ASR), it pays off to know whether speakers are eating or not. We also propose automatic classification based both on brute-forced low-level acoustic features and on higher-level features related to intelligibility, obtained from an automatic speech recogniser. Prediction of the eating condition was performed with a Support Vector Machine (SVM) classifier employed in a leave-one-speaker-out evaluation framework. Results show that the binary prediction of eating condition (i.e., eating or not eating) can be easily solved independently of the speaking condition; the obtained average recalls are all above 90%. Low-level acoustic features provide the best performance on spontaneous speech, reaching up to 62.3% average recall for multi-way classification of the eating condition, i.e., discriminating the six types of food as well as not eating. The early fusion of features related to intelligibility with the brute-forced acoustic feature set improves the performance on read speech, reaching a 66.4% average recall for the multi-way classification task. Analysing features and classifier errors leads to a suitable ordinal scale for eating conditions, on which automatic regression can be performed with a determination coefficient of up to 56.2%.
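
    The described evaluation protocol, an SVM scored speaker-independently, maps directly onto standard tooling. The scikit-learn sketch below uses random placeholder data; the feature dimensionality, linear kernel and label coding are assumptions, not the paper's actual feature sets.

        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder data standing in for per-utterance acoustic features
        # (X), seven-class eating-condition labels (y: six foods plus
        # "not eating"), and the speaker ID of each utterance.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))
        y = rng.integers(0, 7, size=300)
        speakers = rng.integers(0, 30, size=300)  # 30 subjects, as in iHEARu-EAT

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        # balanced accuracy equals unweighted average recall, the metric
        # reported in the paper; each fold holds out one speaker entirely
        scores = cross_val_score(clf, X, y, groups=speakers,
                                 cv=LeaveOneGroupOut(),
                                 scoring="balanced_accuracy")
        print(f"average recall across held-out speakers: {scores.mean():.3f}")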

    Deep neural network techniques for monaural speech enhancement: state of the art analysis

    Full text link
    Deep neural network (DNN) techniques have become pervasive in domains such as natural language processing and computer vision, where they have achieved great success in tasks such as machine translation and image generation. Due to this success, these data-driven techniques have also been applied in the audio domain. More specifically, DNN models have been applied to speech enhancement to achieve denoising, dereverberation and multi-speaker separation in the monaural setting. In this paper, we review the dominant DNN techniques employed to achieve speech separation. The review covers the whole speech enhancement pipeline: feature extraction, how DNN-based tools model both global and local features of speech, and model training (supervised and unsupervised). We also review the use of pre-trained speech-enhancement models to boost the speech enhancement process. The review is geared towards covering the dominant trends in the application of DNNs to the enhancement of speech captured by a single microphone. Comment: conference.
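
    Most of the reviewed mask-based systems share one pipeline shape: extract time-frequency features, predict a mask, reconstruct the waveform. A minimal sketch follows; the STFT settings and the stand-in mask function are illustrative assumptions, with the reviewed DNN models slotting in at step 2.

        import numpy as np
        from scipy.signal import stft, istft

        def enhance_with_mask(noisy, mask_fn, fs=16000, nperseg=512):
            # Generic mask-based pipeline: (1) feature extraction via
            # STFT, (2) a model maps the magnitude spectrogram to a
            # [0, 1] time-frequency mask (a trained DNN would go here),
            # (3) reconstruction from the masked spectrum, reusing the
            # noisy phase.
            _, _, X = stft(noisy, fs=fs, nperseg=nperseg)
            mask = mask_fn(np.abs(X))
            _, enhanced = istft(mask * X, fs=fs, nperseg=nperseg)
            return enhanced

        # trivial stand-in for the DNN: a fixed noise-floor suppression mask
        dummy_mask = lambda mag: np.clip(1.0 - 0.1 / (mag + 1e-8), 0.0, 1.0)
        noisy = np.random.default_rng(1).normal(size=16000)
        out = enhance_with_mask(noisy, dummy_mask)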

    "Can you hear me now?":Automatic assessment of background noise intrusiveness and speech intelligibility in telecommunications

    Get PDF
    This thesis deals with signal-based methods that predict how listeners perceive speech quality in telecommunications. Such tools, called objective quality measures, are of great interest in the telecommunications industry to evaluate how new or deployed systems affect the end-user quality of experience. Two widely used measures, ITU-T Recommendations P.862 "PESQ" and P.863 "POLQA", predict the overall listening quality of a speech signal as it would be rated by an average listener, but do not provide further insight into the composition of that score. This is in contrast to modern telecommunication systems, in which components such as noise reduction or speech coding process speech and non-speech signal parts differently. Therefore, there has been growing interest in objective measures that assess different quality features of speech signals, allowing for a more nuanced analysis of how these components affect quality. In this context, the present thesis addresses the objective assessment of two quality features: background noise intrusiveness and speech intelligibility. The perception of background noise is investigated with newly collected datasets, including signals that go beyond the traditional telephone bandwidth, as well as Lombard (effortful) speech. We analyze listener scores for noise intrusiveness, and their relation to scores for perceived speech distortion and overall quality. We then propose a novel objective measure of noise intrusiveness that uses a sparse representation of noise as a model of high-level auditory coding. The proposed approach is shown to yield results that correlate highly with listener scores, without requiring training data. With respect to speech intelligibility, we focus on the case where the signal is degraded by strong background noise or very low bit-rate coding. Considering that listeners use prior linguistic knowledge when assessing intelligibility, we propose an objective measure that works at the phoneme level and compares phoneme class-conditional probability estimates. The proposed approach is evaluated on a large corpus of recordings from public safety communication systems that use low bit-rate coding, and is further extended to the assessment of synthetic speech, showing its applicability to a large range of distortion types. The effectiveness of both measures is evaluated with standardized performance metrics, using corpora that follow established recommendations for subjective listening tests.
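
    The idea of a sparse representation of noise can be pictured with generic dictionary learning: noise spectra are encoded as sparse combinations of learned spectral atoms. The sketch below is only a conceptual analogue, with random placeholder spectra and arbitrary dictionary size and sparsity; it is not the perceptually motivated measure developed in the thesis.

        import numpy as np
        from sklearn.decomposition import DictionaryLearning

        # Placeholder "noise spectra": one short-time magnitude spectrum
        # per row, standing in for real noise recordings.
        rng = np.random.default_rng(1)
        noise_spectra = np.abs(rng.normal(size=(200, 64)))

        # Learn a dictionary of spectral atoms and encode each frame with
        # a handful of them (orthogonal matching pursuit); the activity
        # of the sparse code serves here as a crude salience proxy.
        dl = DictionaryLearning(n_components=32, transform_algorithm="omp",
                                transform_n_nonzero_coefs=4, random_state=0)
        codes = dl.fit_transform(noise_spectra)
        print("mean sparse-code activity:",
              float(np.abs(codes).sum(axis=1).mean()))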

    Efficient Acquisition and Denoising of Full-Range Event-Related Potentials Following Transient Stimulation of the Auditory Pathway

    Get PDF
    This body of work relates to recent advances in the field of human auditory event-related potentials (ERP), specifically fast, deconvolution-based ERP acquisition as well as single-response-based preprocessing, denoising and subsequent analysis methods. Its goal is to contribute a cohesive set of methods facilitating the fast, reliable acquisition of the whole electrophysiological response generated by the auditory pathway, from the brainstem to the cortex, following transient acoustical stimulation. The present manuscript is divided into three sequential areas of investigation. First, the general feasibility of simultaneously acquiring auditory brainstem, middle-latency and late ERP single responses is demonstrated using recordings from 15 normal-hearing subjects. Favourable acquisition parameters (i.e., sampling rate, bandpass filter settings and interstimulus intervals) are established, followed by signal analysis of the resulting ERP in terms of their dominant intrinsic scales to determine the properties of an optimal signal representation with maximally reduced sample count by means of nonlinear resampling on a logarithmic timebase. This way, a compression ratio of 16.59 is achieved. Time-scale analysis of the linear-time and logarithmic-time ERP single responses is employed to demonstrate that no important information is lost during compressive resampling, which is additionally supported by a comparative evaluation of the resulting average waveforms: all prominent waves remain visible, with their characteristic latencies and amplitudes essentially unaffected by the resampling process. The linear-time and resampled logarithmic-time signal representations are comparatively investigated regarding their susceptibility to the types of physiological and technical noise frequently contaminating ERP recordings. While in principle there already exists a plethora of well-investigated approaches towards denoising ERP single-response representations to improve signal quality and/or reduce the necessary acquisition times, the substantially altered noise characteristics of the resampled logarithmic-time single-response representations, as opposed to their linear-time equivalent, necessitate a reevaluation of the available methods on this type of data. Additionally, two novel, efficient denoising algorithms, based on transform-coefficient manipulation in the sinogram domain and on an analytic, discrete wavelet filterbank, are proposed and subjected to a comparative performance evaluation together with two established denoising methods. To facilitate a thorough comparison, the real-world ERP dataset obtained in the first part of this work is employed alongside synthetic data generated using a phenomenological ERP model evaluated at different signal-to-noise ratios (SNR), with individual gains in multiple outcome metrics being used to objectively assess algorithm performance. Results suggest that the proposed denoising algorithms substantially outperform the state-of-the-art methods in terms of the employed outcome metrics as well as their respective processing times. Furthermore, an efficient stimulus sequence optimization method for use with deconvolution-based ERP acquisition is introduced, which achieves consistent noise attenuation within a broad designated frequency range.
    A novel stimulus presentation paradigm for the fast, interleaved acquisition of auditory brainstem, middle-latency and late responses, featuring alternating periods of optimized, high-rate deconvolution sequences and subsequent low-rate stimulation, is proposed and investigated in 20 normal-hearing subjects. Deconvolved sequence responses containing early and middle-latency ERP components are fused with the subsequent late responses using a time-frequency-resolved weighted averaging method based on cross-trial regularity, yielding a uniform SNR of the full-range auditory ERP across the investigated timescales. The obtained average ERP waveforms exhibit morphologies consistent with both literature values and the reference recordings obtained in the first part of this manuscript, with all prominent waves visible in the grand average waveforms. The novel stimulation approach cuts acquisition time by a factor of 3.4 while at the same time yielding a substantial gain in the SNR of the obtained ERP data. Results suggest that the proposed interleaved stimulus presentation and the associated postprocessing methodology are suitable for the fast, reliable extraction of full-range neural correlates of auditory processing in future studies.
    Bundesministerium für Bildung und Forschung | Bimodal Fusion - Eine neurotechnologische Optimierungsarchitektur für integrierte bimodale Hörsysteme | 2016-201
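
    The nonlinear resampling step from the first part can be pictured in a few lines of NumPy: latencies are sampled geometrically, so the fast early brainstem waves keep fine resolution while the slow cortical waves are represented with few samples. This is a sketch only; the thesis' exact time mapping, anti-alias filtering and the reported compression ratio of 16.59 are not reproduced.

        import numpy as np

        def log_resample(x, fs, t_start, n_out):
            # x: uniformly sampled single response starting t_start
            # seconds after stimulus onset. Returns n_out samples at
            # log-spaced latencies: dense early (brainstem), sparse
            # late (cortical), for a much smaller sample count.
            t_lin = t_start + np.arange(len(x)) / fs
            t_log = np.geomspace(t_start, t_lin[-1], n_out)
            return t_log, np.interp(t_log, t_lin, x)

        # e.g. 10-500 ms post-stimulus at 16 kHz, compressed roughly 16x
        fs = 16000
        x = np.random.default_rng(2).normal(size=int(0.49 * fs))
        t_log, x_log = log_resample(x, fs, t_start=0.010, n_out=len(x) // 16)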

    Optimizing Stimulation Strategies in Cochlear Implants for Music Listening

    Get PDF
    Most cochlear implant (CI) strategies are optimized for speech characteristics, while music enjoyment remains significantly below normal-hearing performance. In this thesis, electrical stimulation strategies in CIs are analyzed for music input. A simulation chain consisting of two parallel paths, simulating normal hearing and electrical hearing respectively, is utilized. One thesis objective is to configure and develop the sound processor of the CI chain to analyze different compression and channel selection strategies to optimally capture the characteristics of music signals. A new set of knee points (KPs) for the compression function is investigated together with clustering of frequency bands. The N-of-M electrode selection strategy models the effect of a psychoacoustic masking threshold. In order to evaluate the performance of the CI model, the normal hearing model is considered the true reference. Similarity among the resulting neurograms of the respective models is measured using the image analysis method Neurogram Similarity Index Measure (NSIM). The validation and resolution of NSIM is another objective of the thesis. Results indicate that NSIM is sensitive to no-activity regions in the neurograms and has difficulties capturing small CI changes, i.e. compression settings. Further verification of the model setup is suggested, together with investigating an alternative optimal electric hearing reference and/or objective similarity measure.
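
    NSIM is an SSIM-style image comparison applied to neurograms. A minimal sketch keeping the usual luminance and structure terms is shown below; the window size and the c1/c2 constants are placeholder choices, not the settings used in the thesis.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def nsim(a, b, win=3, c1=0.01, c2=0.03):
            # SSIM-style similarity of two neurograms (time-frequency
            # images of simulated neural firing): local means give a
            # luminance term, local (co)variances give a structure
            # term; their product is averaged over the whole image.
            mu_a, mu_b = uniform_filter(a, win), uniform_filter(b, win)
            var_a = np.clip(uniform_filter(a * a, win) - mu_a ** 2, 0, None)
            var_b = np.clip(uniform_filter(b * b, win) - mu_b ** 2, 0, None)
            cov = uniform_filter(a * b, win) - mu_a * mu_b
            lum = (2 * mu_a * mu_b + c1) / (mu_a ** 2 + mu_b ** 2 + c1)
            struct = (cov + c2) / (np.sqrt(var_a * var_b) + c2)
            return float(np.mean(lum * struct))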

    On the Relationship Between Short-Time Objective Intelligibility and Short-Time Spectral-Amplitude Mean-Square Error for Speech Enhancement

    Get PDF
    The majority of deep neural network (DNN) based speech enhancement algorithms rely on the mean-square error (MSE) criterion of short-time spectral amplitudes (STSA), which has no apparent link to human perception, e.g. speech intelligibility. Short-Time Objective Intelligibility (STOI), a popular state-of-the-art speech intelligibility estimator, on the other hand, relies on linear correlation of speech temporal envelopes. This raises the question of whether a DNN training criterion based on envelope linear correlation (ELC) can lead to improved speech intelligibility performance of DNN-based speech enhancement algorithms compared to algorithms based on the STSA-MSE criterion. In this paper we derive that, under certain general conditions, the STSA-MSE and ELC criteria are practically equivalent, and we provide empirical data to support our theoretical results. Furthermore, our experimental findings suggest that the standard STSA minimum-MSE estimator is near optimal if the objective is to enhance noisy speech in a manner which is optimal with respect to the STOI speech intelligibility estimator.
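
    The ELC quantity at the heart of the paper reduces to a normalized inner product of mean-subtracted envelopes. A sketch of that quantity for a single pair of envelope segments follows; STOI itself averages such correlations over one-third octave bands and short segments.

        import numpy as np

        def envelope_linear_correlation(clean_env, proc_env):
            # Pearson-style linear correlation between the temporal
            # envelope of clean speech and that of processed speech;
            # STOI averages this kind of quantity over frequency bands
            # and roughly 384 ms segments.
            x = clean_env - clean_env.mean()
            y = proc_env - proc_env.mean()
            denom = np.linalg.norm(x) * np.linalg.norm(y)
            return float(x @ y / denom) if denom > 0 else 0.0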