9,745 research outputs found

    A Review of Deep Learning Techniques for Speech Processing

    Full text link
    The field of speech processing has undergone a transformative shift with the advent of deep learning. The use of multiple processing layers has enabled the creation of models capable of extracting intricate features from speech data, driving substantial advances in automatic speech recognition, text-to-speech synthesis, and emotion recognition. Deep learning techniques have opened up new avenues for research and innovation in speech processing, with far-reaching implications for a range of industries and applications. This review paper provides a comprehensive overview of the key deep learning models and their applications in speech-processing tasks. We begin by tracing the evolution of speech processing research, from early approaches such as MFCCs and HMMs to more recent deep learning architectures such as CNNs, RNNs, transformers, conformers, and diffusion models. We categorize the approaches and compare their strengths and weaknesses for solving speech-processing tasks. Furthermore, we extensively cover the speech-processing tasks, datasets, and benchmarks used in the literature and describe how different deep-learning networks have been utilized to tackle these tasks. Additionally, we discuss the challenges and future directions of deep learning in speech processing, including the need for more parameter-efficient, interpretable models and the potential of deep learning for multimodal speech processing. By examining the field's evolution, comparing and contrasting different approaches, and highlighting future directions and challenges, we hope to inspire further research in this exciting and rapidly advancing field.
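
    As a concrete illustration of the classical front end this review starts from, the sketch below extracts MFCC features with librosa; the audio path and parameter values are placeholders chosen for the example, not settings taken from the paper.

```python
# Minimal sketch: MFCC extraction, the classical hand-crafted front end that
# preceded learned deep representations. Assumes librosa is installed;
# "speech.wav" and the parameter values are illustrative placeholders.
import librosa

y, sr = librosa.load("speech.wav", sr=16000)        # waveform resampled to 16 kHz
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape: (13, num_frames)
print(mfcc.shape)
```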

    Learning disentangled speech representations

    Get PDF
    A variety of informational factors are contained within the speech signal, and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, some methods capture more than one informational factor at the same time, such as speaker identity, spoken content, and prosody. The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and be independent of the task at hand. The learned representations should also be able to answer counter-factual questions. In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed, and in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks. This thesis explores a variety of use-cases for disentangled representations, including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
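
    As a toy illustration of re-assembling disentangled factors (the voice-conversion example mentioned above), the sketch below combines a content representation from one utterance with a speaker representation from another; the encoder and decoder modules are untrained placeholders, not the models studied in the thesis.

```python
# Toy sketch of recombining disentangled factors for voice conversion:
# spoken content comes from a source utterance, speaker identity from a
# target speaker. All modules are untrained placeholders for illustration.
import torch
import torch.nn as nn

content_encoder = nn.GRU(input_size=80, hidden_size=128, batch_first=True)
speaker_encoder = nn.Sequential(nn.Linear(80, 64), nn.ReLU(), nn.Linear(64, 64))
decoder = nn.Linear(128 + 64, 80)       # maps [content; speaker] back to mel frames

src_mel = torch.randn(1, 200, 80)       # source utterance (content to keep)
ref_mel = torch.randn(1, 150, 80)       # target-speaker reference (identity to use)

content, _ = content_encoder(src_mel)                         # (1, 200, 128)
speaker = speaker_encoder(ref_mel).mean(dim=1)                # (1, 64), utterance level
speaker = speaker.unsqueeze(1).expand(-1, content.size(1), -1)
converted = decoder(torch.cat([content, speaker], dim=-1))    # (1, 200, 80)
print(converted.shape)
```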

    SCNet: Sparse Compression Network for Music Source Separation

    Full text link
    Deep learning-based methods have made significant achievements in music source separation. However, obtaining good results while maintaining low model complexity remains challenging in super wide-band music source separation. Previous works either overlook the differences between subbands or inadequately address the problem of information loss when generating subband features. In this paper, we propose SCNet, a novel frequency-domain network that explicitly splits the spectrogram of the mixture into several subbands and introduces a sparsity-based encoder to model the different frequency bands. We use a higher compression ratio on subbands with less information to improve the information density and focus modeling capacity on subbands with more information. In this way, separation performance can be significantly improved at lower computational cost. Experimental results show that the proposed model achieves a signal-to-distortion ratio (SDR) of 9.0 dB on the MUSDB18-HQ dataset without using extra data, outperforming state-of-the-art methods. Specifically, SCNet's CPU inference time is only 48% of that of HT Demucs, one of the previous state-of-the-art models. Comment: Accepted by ICASSP 202
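
    To make the subband idea concrete, the sketch below splits a magnitude spectrogram into frequency bands and compresses the higher bands more aggressively than the lower ones; the band edges, output sizes, and module structure are illustrative assumptions, not the published SCNet configuration.

```python
# Illustrative sketch of per-subband compression: low-frequency bands keep
# more channels, high-frequency bands (typically less informative) are
# compressed harder. All values are placeholders, not SCNet's actual ones.
import torch
import torch.nn as nn

class SubbandCompressor(nn.Module):
    def __init__(self, band_edges=(0, 128, 256, 513), out_dims=(128, 64, 32)):
        super().__init__()
        self.band_edges = band_edges
        self.proj = nn.ModuleList([
            nn.Linear(band_edges[i + 1] - band_edges[i], out_dims[i])
            for i in range(len(out_dims))
        ])

    def forward(self, spec):                        # spec: (batch, frames, freq_bins)
        bands = []
        for i, proj in enumerate(self.proj):
            lo, hi = self.band_edges[i], self.band_edges[i + 1]
            bands.append(proj(spec[..., lo:hi]))    # per-band compression
        return torch.cat(bands, dim=-1)             # (batch, frames, sum(out_dims))

spec = torch.abs(torch.randn(2, 100, 513))          # stand-in for |STFT|, 513 bins
print(SubbandCompressor()(spec).shape)              # torch.Size([2, 100, 224])
```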

    Artificial Intelligence for Multimedia Signal Processing

    Get PDF
    Artificial intelligence technologies are being actively applied to broadcasting and multimedia processing. A great deal of research has been conducted across a wide variety of fields, such as content creation, transmission, and security, and over the past two to three years these efforts have aimed to improve the compression efficiency of image, video, speech, and other data in areas related to MPEG media processing technology. Technologies for media creation, processing, editing, and scenario generation are also very important areas of research in multimedia processing and engineering. This book collects topics spanning advanced computational intelligence algorithms and technologies for emerging multimedia signal processing, including computer vision, speech/sound/text processing, and content analysis/information mining.

    A Survey on Generative Diffusion Model

    Full text link
    Deep learning shows excellent potential in generation tasks thanks to deep latent representations. Generative models are classes of models that can generate observations randomly according to certain implied parameters. Recently, the diffusion model has become a rising class of generative models owing to its powerful generation ability, and great achievements have been reached. Beyond computer vision, speech generation, bioinformatics, and natural language processing, more applications remain to be explored in this field. However, the diffusion model has inherent drawbacks: a slow generation process, restriction to single data types, low likelihood, and the inability to perform dimension reduction, which have motivated many enhanced works. This survey summarizes the field of the diffusion model. We first state the main problem with two landmark works -- DDPM and DSM -- and a unified landmark work -- Score SDE. Then, we present improved techniques for existing problems in the diffusion-based model field, including model speed-up, data structure diversification, likelihood optimization, and dimension reduction. Regarding existing models, we also provide a benchmark of FID score, IS, and NLL at specific NFE. Moreover, applications of diffusion models are introduced, including computer vision, sequence modeling, audio, and AI for science. Finally, there is a summarization of this field together with limitations and further directions. A summary of existing well-classified methods is in our GitHub: https://github.com/chq1155/A-Survey-on-Generative-Diffusion-Model
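
    To ground the survey's starting point, the sketch below implements the standard DDPM training objective (noise a sample at a random timestep and regress the noise); the denoiser is a placeholder MLP and the schedule values are common defaults, not settings from any surveyed paper.

```python
# Minimal sketch of the DDPM training objective: sample a timestep, corrupt
# x0 with Gaussian noise via the closed-form forward process, and train a
# network to predict that noise. The denoiser is a placeholder MLP.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)        # cumulative product of alphas

denoiser = nn.Sequential(nn.Linear(64 + 1, 256), nn.ReLU(), nn.Linear(256, 64))

x0 = torch.randn(8, 64)                              # stand-in for clean data
t = torch.randint(0, T, (8,))
eps = torch.randn_like(x0)
a = alpha_bar[t].unsqueeze(1)
x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps           # forward process q(x_t | x0)

eps_pred = denoiser(torch.cat([x_t, t.unsqueeze(1) / T], dim=1))
loss = nn.functional.mse_loss(eps_pred, eps)         # simple DDPM loss
loss.backward()
```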

    Deep Learning Based Speech Enhancement and Its Application to Speech Recognition

    Get PDF
    Speech enhancement is the task that aims to improve the quality and intelligibility of a speech signal degraded by ambient noise and room reverberation. Speech enhancement algorithms are used extensively in many audio and communication systems, including mobile handsets, speech recognition, speaker verification systems, and hearing aids. Recently, deep learning has achieved great success in many applications, such as computer vision, natural language processing, and speech recognition. Speech enhancement methods have been introduced that use deep-learning techniques, as these techniques are capable of learning complex hierarchical functions from large-scale training data. This dissertation investigates deep learning based speech enhancement and its application to robust Automatic Speech Recognition (ASR). We start our work by exploring generative adversarial network (GAN) based speech enhancement, with techniques that extract information about the noise to aid in the reconstruction of the speech signals. The proposed framework, referred to as ForkGAN, is a novel generative adversarial learning-based framework that combines deep learning with conventional noise reduction techniques. We further extend ForkGAN to M-ForkGAN, which integrates feature mapping and mask learning into a unified framework using ForkGAN. Another variant of ForkGAN, named S-ForkGAN, operates on spectral-domain features and can be applied directly to ASR. Systematic evaluations demonstrate the effectiveness of the proposed approaches. Then, we propose a novel multi-stage learning speech enhancement system. Each stage comprises a self-attention (SA) block followed by stacks of temporal convolutional network (TCN) blocks with doubling dilation factors. Each stage generates a prediction that is refined in a subsequent stage. A fusion block is inserted at the input of later stages to re-inject original information. Moreover, we design several multi-scale architectures with perceptual loss. Experiments show that our proposed architectures achieve state-of-the-art performance on several public datasets. Recently, models that learn the noisy-to-clean acoustic mapping have been enhanced by including auxiliary information such as visual cues, phonetic and linguistic information, and speaker information. We propose a novel speaker-aware speech enhancement (SASE) method that extracts speaker information from a clean reference using long short-term memory (LSTM) layers, and then uses a convolutional recurrent neural network (CRN) to embed the extracted speaker information. The SASE framework is extended with a self-attention mechanism. It is shown that a few seconds of clean reference speech is sufficient, and that the proposed SASE method performs well for a wide range of scenarios. Even though speech enhancement methods based on deep learning have demonstrated state-of-the-art performance compared with conventional methodologies, current deep learning approaches rely heavily on supervised learning, which requires a large number of noisy- and clean-speech sample pairs for training. This is generally not practical in a realistic environment, where one cannot simultaneously obtain both noisy and clean speech samples. Thus, most speech enhancement approaches are trained with simulated noisy speech and clean targets. In addition, it would be hard to collect a large-scale dataset for low-resource languages.
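
    To illustrate one building block named above, the sketch below stacks TCN blocks whose dilation factor doubles at each block so the receptive field grows quickly; the channel count, depth, and kernel size are illustrative assumptions, not the dissertation's configuration.

```python
# Sketch of a stack of temporal convolutional network (TCN) blocks with
# doubling dilation factors, one ingredient of the multi-stage enhancer
# described above. Sizes are illustrative placeholders.
import torch
import torch.nn as nn

class TCNBlock(nn.Module):
    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size=3,
                              dilation=dilation, padding=dilation)  # keeps length
        self.act = nn.PReLU()

    def forward(self, x):
        return x + self.act(self.conv(x))            # residual connection

channels, num_blocks = 64, 6
tcn = nn.Sequential(*[TCNBlock(channels, 2 ** i) for i in range(num_blocks)])

x = torch.randn(4, channels, 400)                    # (batch, channels, frames)
print(tcn(x).shape)                                  # torch.Size([4, 64, 400])
```
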
We propose a novel noise-to-noise speech enhancement (N2N-SE) method that addresses the lack of parallel noisy-clean training data by leveraging signal reconstruction techniques that use only corrupted speech. The proposed N2N-SE framework includes a noise conversion module, an auto-encoder that learns to mix noise with speech, and a speech enhancement module that learns to reconstruct corrupted speech signals. In addition to additive noise, speech is also affected by reverberation, which is caused by the attenuated and delayed reflections of sound waves. These distortions, particularly when combined, can severely degrade speech intelligibility for human listeners and impact applications such as automatic speech recognition (ASR) and speaker recognition. Thus, effective speech denoising and dereverberation will benefit both speech processing applications and human listeners. We investigate deep-learning based approaches for both speech dereverberation and speech denoising using a cascade Conformer architecture. The experimental results show that the proposed cascade Conformer is effective at suppressing both noise and reverberation.
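
    As a hedged sketch of the noise-to-noise idea (training without clean targets), the code below performs one generic Noise2Noise-style update in which the enhancer maps one corrupted version of an utterance toward an independently corrupted version; the tiny convolutional enhancer and the synthetic data are stand-ins, not the N2N-SE modules from the dissertation.

```python
# Generic Noise2Noise-style training step: no clean targets, the network is
# asked to map one corruption of an utterance toward another, independent
# corruption. The enhancer and data here are illustrative stand-ins.
import torch
import torch.nn as nn

enhancer = nn.Sequential(nn.Conv1d(1, 16, 9, padding=4), nn.ReLU(),
                         nn.Conv1d(16, 1, 9, padding=4))
opt = torch.optim.Adam(enhancer.parameters(), lr=1e-3)

speech = torch.randn(8, 1, 16000)                    # stand-in for real utterances
noisy_a = speech + 0.1 * torch.randn_like(speech)    # two independent corruptions
noisy_b = speech + 0.1 * torch.randn_like(speech)

loss = nn.functional.mse_loss(enhancer(noisy_a), noisy_b)
loss.backward()
opt.step()
```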

    Pathway to Future Symbiotic Creativity

    Full text link
    This report presents a comprehensive view of our vision of the development path of human-machine symbiotic art creation. We propose a classification of creative systems into a hierarchy of five classes, showing the pathway of creativity evolving from a mimic-human artist (Turing Artist) to a machine artist in its own right. We begin with an overview of the limitations of Turing Artists, then focus on the top two levels, Machine Artists, emphasizing machine-human communication in art creation. In art creation, machines need to understand humans' mental states, including desires, appreciation, and emotions, and humans also need to understand machines' creative capabilities and limitations. The rapid development of immersive environments and their further evolution into the new concept of the metaverse enable symbiotic art creation through unprecedented flexibility of bi-directional communication between artists and art manifestation environments. By examining the latest sensor and XR technologies, we illustrate a novel way of collecting art data to form the basis of a new form of human-machine bidirectional communication and understanding in art creation. Based on such communication and understanding mechanisms, we propose a novel framework for building future Machine Artists, guided by the philosophy that a human-compatible AI system should be based on the "human-in-the-loop" principle rather than the traditional "end-to-end" dogma. By proposing a new form of inverse reinforcement learning model, we outline the platform design of machine artists, demonstrate its functions, and showcase some examples of technologies we have developed. We also provide a systematic exposition of the ecosystem for AI-based symbiotic art forms and communities, with an economic model built on NFT technology. Ethical issues for the development of machine artists are also discussed.

    Improving Dysarthric Speech Recognition by Enriching Training Datasets

    Get PDF
    Dysarthria is a motor speech disorder that results from disruptions in the neuro-motor interface; it is characterised by poor articulation of phonemes and hypernasality and is characteristically different from typical speech. Many modern automatic speech recognition systems focus on a narrow range of speech diversity and, as a consequence, exclude groups of speakers who deviate in gender, race, age, or speech impairment when training datasets are built. This study attempts to develop an automatic speech recognition system that deals with dysarthric speech using limited dysarthric speech data. Speech utterances collected from the TORGO database are used to conduct experiments on a wav2vec 2.0 model trained only on the LibriSpeech 960h dataset, to obtain a baseline word error rate (WER) when recognising dysarthric speech. A version of the model fine-tuned on multi-language datasets was tested to see if it would improve accuracy, and it achieved a top reduction of 24.15% in the WER for one of the male dysarthric speakers in the dataset. Transfer learning with speech recognition models and preprocessing dysarthric speech to improve its intelligibility using generative adversarial networks were limited in their potential because no dysarthric speech dataset of adequate size was available. The main conclusion drawn from this study is that a large, diverse dysarthric speech dataset, comparable in size to the datasets (such as LibriSpeech) used to train machine learning ASR systems and containing different types of speech, scripted and unscripted, is required to improve performance.
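
    The baseline measurement described above (scoring a LibriSpeech-trained wav2vec 2.0 model with WER) can be sketched as follows; the audio path and reference transcript are placeholders and TORGO data must be obtained separately, so this shows only the general evaluation recipe, not the study's exact setup.

```python
# Hedged sketch: transcribe one utterance with a LibriSpeech-trained
# wav2vec 2.0 model and score it against a reference with word error rate.
# The file path and reference text are placeholders.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
from jiwer import wer

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

audio, sr = librosa.load("utterance.wav", sr=16000)  # placeholder path
inputs = processor(audio, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits
pred_ids = torch.argmax(logits, dim=-1)
hypothesis = processor.batch_decode(pred_ids)[0]

reference = "THE QUICK BROWN FOX"                    # placeholder reference transcript
print(wer(reference, hypothesis))
```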

    Sequential decision modeling in uncertain conditions

    Full text link
    This thesis presents a sequence of approaches to structured decision modeling - that is, it proposes generative solutions to tasks with multiple inputs and outputs featuring complicated interactions between input and output elements. Crucially, these problems also involve a high degree of uncertainty about the correct outcome: many largely equivalent but structurally different outcomes can be considered equally correct. The thesis presents four articles on these topics, focusing in particular on the domains of text-to-speech synthesis, symbolic music generation, text processing, automatic speech recognition, and speech-text representation learning. Each article presents a particular approach to solving problems in its respective domain, centered on proposing and understanding deep learning architectures for that domain. The deep learning techniques used in these articles are broadly applicable, flexible, and powerful enough that these general approaches may find application in other areas; however, we remain focused on the domains discussed in each respective article. The first article presents an approach allowing flexible phonetic and character control of a text-to-speech system, utilizing an efficient "swap-out" method for blending representations at the word level. This blending allows smooth control over input conditions and also strengthens character-only pronunciations, resulting in a preference for the blended text-to-speech system in A/B testing, compared to equivalent baselines even when using the same input modalities. The second article focuses on another text-to-speech system, this time centered on building multi-scale representations of speech audio using a structured decomposition of audio features. In particular, this work focuses on a compute-efficient methodology, while building on prior work that requires a much greater computational budget than the proposed system. In order to perform text-to-speech synthesis effectively under these computational constraints, a number of new components are constructed and integrated, resulting in an efficient model for text-to-speech synthesis. The third article presents a new non-autoregressive model for modeling symbolic sequences. This model uses iterative prediction and re-estimation to build structured outputs that respect numerous constraints in the underlying sequence domain. The model is applied to symbolic music modeling and text modeling, showing excellent performance, particularly in limited-data generative settings. The final article in this thesis focuses on understanding the speech-text representations learned by a text-injected speech recognition system from prior literature.
Through a systematic series of studies utilizing pre-trained text and duration models, qualitative relations between text and speech sequences, and performance studies in few-shot cross-modal retrieval, we reveal a number of crucial properties underlying the performance of this system, paving the way for future algorithmic development. In addition, model variants built during this study achieve impressive performance on a number of benchmark tasks using partially frozen and transferred parameters.
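
    The iterative prediction and re-estimation idea from the third article can be illustrated with a generic masked-refinement loop: all positions start masked, each pass re-predicts them, and only the most confident predictions are committed. The tiny transformer below is an untrained placeholder and the commit schedule is an assumption, not the thesis model.

```python
# Generic iterative refinement over a symbolic sequence: start fully masked,
# re-estimate every position each pass, and commit the most confident
# predictions. The model is an untrained placeholder for illustration.
import torch
import torch.nn as nn

vocab, length, mask_id, steps = 100, 16, 0, 4
embed = nn.Embedding(vocab, 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(64, vocab)

tokens = torch.full((1, length), mask_id)            # start fully masked
masked = torch.ones(1, length, dtype=torch.bool)

with torch.no_grad():
    for step in range(steps):
        logits = head(encoder(embed(tokens)))        # re-estimate every position
        probs, preds = logits.softmax(-1).max(-1)
        target = length * (step + 1) // steps        # positions fixed after this pass
        k = target - int((~masked).sum())            # how many to commit now
        conf = probs.masked_fill(~masked, -1.0)      # rank only still-masked slots
        idx = conf.topk(k, dim=-1).indices
        tokens[0, idx[0]] = preds[0, idx[0]]         # keep the most confident
        masked[0, idx[0]] = False

print(tokens)
```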