649 research outputs found

    Robust Image Recognition Based on a New Supervised Kernel Subspace Learning Method

    Get PDF
    Doctoral thesis defended on 13 September 2019. Image recognition is a term for computer technologies that can recognize certain people, objects or other targeted subjects through the use of algorithms and machine learning concepts. Face recognition is one of the most popular techniques for establishing a person's identity. This study develops a new non-linear subspace learning method, "supervised kernel locality-based discriminant neighborhood embedding" (SKLDNE), which performs data classification by learning an optimal embedded subspace from a high-dimensional input space. In this approach, not only is the nonlinear and complex variation of face images effectively represented using nonlinear kernel mapping, but local structure information of data from the same class and discriminant information from distinct classes are also simultaneously preserved to further improve the final classification performance. Moreover, to evaluate the robustness of the proposed method, it was compared with several well-known pattern recognition methods through comprehensive experiments on six publicly accessible datasets. Although this research focuses particularly on face recognition, two non-face databases are also included to investigate the generality of the algorithm. Experimental results reveal that our method consistently outperforms its competitors across a wide range of dimensionalities on all the datasets. The SKLDNE method reaches a 100 percent recognition rate for Tn=17 on the Sheffield, 9 on the Yale, 8 on the ORL, 7 on the Finger Vein and 11 on the Finger Knuckle datasets respectively, while the results are much lower for the other methods. This demonstrates the robustness and effectiveness of the proposed method.
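    The abstract does not spell out SKLDNE's exact objective, so the following is only a generic sketch of the supervised kernel neighborhood-embedding family it belongs to: an RBF kernel, binary within-class and between-class k-NN graphs, and a generalized eigenproblem that favors between-class separation while preserving within-class locality. All function names, the kernel choice, and the parameters are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then a Gaussian (RBF) kernel.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_dne_fit(X, y, dim=10, k=5, gamma=1.0, eps=1e-6):
    """Generic kernelized discriminant neighborhood embedding (assumed form).

    Builds within-class and between-class k-NN affinity graphs, then solves
    a generalized eigenproblem in the kernel-induced feature space.
    """
    n = len(X)
    K = rbf_kernel(X, X, gamma)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W_w = np.zeros((n, n))
    W_b = np.zeros((n, n))
    for i in range(n):
        order = np.argsort(d2[i])
        order = order[order != i]
        same = [j for j in order if y[j] == y[i]][:k]   # within-class neighbors
        diff = [j for j in order if y[j] != y[i]][:k]   # between-class neighbors
        W_w[i, same] = W_w[same, i] = 1.0
        W_b[i, diff] = W_b[diff, i] = 1.0
    L_w = np.diag(W_w.sum(1)) - W_w   # within-class graph Laplacian
    L_b = np.diag(W_b.sum(1)) - W_b   # between-class graph Laplacian
    # Separate classes while preserving local within-class structure:
    # generalized eigenvectors of (K L_b K, K L_w K + eps*I).
    A = K @ L_b @ K
    B = K @ L_w @ K + eps * np.eye(n)
    vals, vecs = eigh(A, B)
    coef = vecs[:, ::-1][:, :dim]     # top `dim` eigenvectors
    return coef

def kernel_dne_transform(coef, X_train, X_new, gamma=1.0):
    # Project new samples through the kernel against the training set.
    return rbf_kernel(X_new, X_train, gamma) @ coef
```

    After fitting, a nearest-neighbor classifier in the embedded subspace would play the role of the final recognizer, which is the usual evaluation protocol for methods of this kind.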

    Face Recognition: Issues, Methods and Alternative Applications

    Get PDF
    Face recognition, as one of the most successful applications of image analysis, has recently gained significant attention, owing in part to the availability of feasible technologies, including mobile solutions. Research in automatic face recognition has been conducted since the 1960s, but the problem is still largely unsolved. The last decade has brought significant progress owing to advances in face modelling and analysis techniques. Although systems have been developed for face detection and tracking, reliable face recognition still poses a great challenge to computer vision and pattern recognition researchers. There are several reasons for the recent increased interest in face recognition, including rising public concern for security, the need for identity verification in the digital world, and the role of face analysis and modelling techniques in multimedia data management and computer entertainment. In this chapter, we discuss face recognition processing, including major components such as face detection, tracking, alignment and feature extraction, and point out the technical challenges of building a face recognition system. We focus on the most successful solutions available so far. The final part of the chapter describes selected face recognition methods and applications and their potential use in areas not related to face recognition.
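    To make the pipeline described above concrete, here is a minimal sketch of the matching stage that follows detection and alignment: enroll identities as feature vectors and identify a probe by nearest cosine similarity with a rejection threshold. The `embed` stand-in, the `FaceGallery` class, and the threshold value are all hypothetical; a real system would replace `embed` with a trained descriptor run on the aligned crop.

```python
import numpy as np

def embed(face: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: flatten and L2-normalize the pixels.

    A real system would run a trained descriptor (e.g., a CNN embedding)
    on the detected, aligned face crop; this placeholder only keeps the
    sketch runnable.
    """
    v = face.astype(np.float64).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

class FaceGallery:
    """Identification by nearest cosine similarity against enrolled faces.

    Assumes all inputs are aligned crops of the same size.
    """
    def __init__(self, threshold: float = 0.8):
        self.ids = []
        self.vecs = []
        self.threshold = threshold

    def enroll(self, identity: str, face: np.ndarray) -> None:
        self.ids.append(identity)
        self.vecs.append(embed(face))

    def identify(self, face: np.ndarray):
        q = embed(face)
        sims = np.stack(self.vecs) @ q        # cosine similarity of unit vectors
        best = int(np.argmax(sims))
        if sims[best] < self.threshold:       # below threshold: unknown face
            return None, float(sims[best])
        return self.ids[best], float(sims[best])
```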

    Symmetric Subspace Learning for Image Analysis

    Get PDF

    λ”₯λŸ¬λ‹μ„ ν™œμš©ν•œ μŠ€νƒ€μΌ μ μ‘ν˜• μŒμ„± ν•©μ„± 기법

    Get PDF
    Ph.D. thesis -- Seoul National University, College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: Nam Soo Kim. Neural network-based speech synthesis techniques have developed considerably over the years. Although neural speech synthesis achieves remarkable generated speech quality, problems remain, such as limited modeling power in neural statistical parametric speech synthesis systems, and style expressiveness and attention robustness in end-to-end speech synthesis systems. This thesis proposes novel alternatives to resolve these drawbacks of the conventional neural speech synthesis system. In the first approach, we propose an adversarially trained variational recurrent neural network (AdVRNN), which applies a variational recurrent neural network (VRNN) to represent the variability of natural speech for acoustic modeling in neural statistical parametric speech synthesis, and uses an adversarial learning scheme during training to overcome the oversmoothing problem. Experimental results show that the proposed AdVRNN-based method outperforms conventional RNN-based techniques. In the second approach, we propose a novel style modeling method employing a mutual information neural estimator (MINE) in a style-adaptive end-to-end speech synthesis system. MINE is used to increase target-style information and suppress text information in the style embedding by adding a MINE term to the loss function. The experimental results show that the MINE-based method achieves promising performance in both speech quality and style similarity for the global style token (GST) Tacotron. In the third approach, we propose a novel attention method for end-to-end speech synthesis, called memory attention, inspired by the gating mechanism of long short-term memory (LSTM). Leveraging the sequence modeling power of LSTM's gating technique, memory attention obtains stable alignment from content-based and location-based features. We evaluate memory attention against various conventional attention techniques in single-speaker and emotional speech synthesis scenarios and conclude that it can robustly generate speech with large variability. In the last approach, we propose selective multi-attention (SMA) for style-adaptive end-to-end speech synthesis systems. A conventional single attention model may limit the expressivity needed to represent the many alignment paths that arise across styles. To achieve variation in attention alignment, we use a multi-attention model with a selection network: the multi-attention generates candidates for the target style, and the selection network chooses the most appropriate attention among them. The experimental results show that selective multi-attention outperforms conventional single-attention techniques in multi-speaker and emotional speech synthesis.
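    The second approach trains the style embedding with a mutual-information term estimated by MINE. As a concrete illustration, here is a minimal PyTorch sketch of the standard MINE statistics network and its Donsker-Varadhan lower bound; the network size, names, and the way the bound enters the synthesis loss are assumptions rather than the thesis's exact formulation.

```python
import math
import torch
import torch.nn as nn

class MINE(nn.Module):
    """Statistics network T(x, z) for the Donsker-Varadhan bound:
    I(X; Z) >= E_joint[T(x, z)] - log E_marginal[exp(T(x, z))]."""
    def __init__(self, dim_x: int, dim_z: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_z, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def mi_lower_bound(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Joint term: x and z kept in their true pairing.
        joint = self.net(torch.cat([x, z], dim=-1)).mean()
        # Marginal term: shuffle z across the batch to approximate p(x)p(z).
        z_perm = z[torch.randperm(z.size(0))]
        marg = torch.logsumexp(self.net(torch.cat([x, z_perm], dim=-1)),
                               dim=0) - math.log(x.size(0))
        return (joint - marg).squeeze()
```

    Under this reading, the total loss might take a form like tts_loss - Ξ»_style Β· IΜ‚(style_target, style_emb) + Ξ»_text Β· IΜ‚(text_emb, style_emb): the first MINE term pulls target-style information into the embedding while the second suppresses text information, matching the abstract's description; the weights and exact pairings are hypothetical.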
λ³Έ λ…Όλ¬Έμ—μ„œλŠ” μ΄λŸ¬ν•œ 기쑴의 λ”₯λŸ¬λ‹ 기반 μŒμ„± ν•©μ„± μ‹œμŠ€ν…œμ˜ 단점을 ν•΄κ²°ν•  μƒˆλ‘œμš΄ λŒ€μ•ˆμ„ μ œμ•ˆν•œλ‹€. 첫 번째 μ ‘κ·Όλ²•μœΌλ‘œμ„œ, λ‰΄λŸ΄ 톡계적 νŒŒλΌλ―Έν„° λ°©μ‹μ˜ 음ν–₯ λͺ¨λΈλ§μ„ κ³ λ„ν™”ν•˜κΈ° μœ„ν•œ adversarially trained variational recurrent neural network (AdVRNN) 기법을 μ œμ•ˆν•œλ‹€. AdVRNN 기법은 VRNN을 μŒμ„± 합성에 μ μš©ν•˜μ—¬ μŒμ„±μ˜ λ³€ν™”λ₯Ό stochastic ν•˜κ³  μžμ„Έν•˜κ²Œ λͺ¨λΈλ§ν•  수 μžˆλ„λ‘ ν•˜μ˜€λ‹€. λ˜ν•œ, μ λŒ€μ  ν•™μŠ΅μ (adversarial learning) 기법을 ν™œμš©ν•˜μ—¬ oversmoothing 문제λ₯Ό μ΅œμ†Œν™” μ‹œν‚€λ„λ‘ ν•˜μ˜€λ‹€. μ΄λŸ¬ν•œ μ œμ•ˆλœ μ•Œκ³ λ¦¬μ¦˜μ€ 기쑴의 μˆœν™˜ 신경망 기반의 음ν–₯ λͺ¨λΈκ³Ό λΉ„κ΅ν•˜μ—¬ μ„±λŠ₯이 ν–₯상됨을 ν™•μΈν•˜μ˜€λ‹€. 두 번째 μ ‘κ·Όλ²•μœΌλ‘œμ„œ, μŠ€νƒ€μΌ μ μ‘ν˜• μ’…λ‹¨ν˜• μŒμ„± ν•©μ„± 기법을 μœ„ν•œ μƒν˜Έ μ •λ³΄λŸ‰ 기반의 μƒˆλ‘œμš΄ ν•™μŠ΅ 기법을 μ œμ•ˆν•œλ‹€. 기쑴의 global style token(GST) 기반의 μŠ€νƒ€μΌ μŒμ„± ν•©μ„± κΈ°λ²•μ˜ 경우, 비지도 ν•™μŠ΅μ„ μ‚¬μš©ν•˜λ―€λ‘œ μ›ν•˜λŠ” λͺ©ν‘œ μŠ€νƒ€μΌμ΄ μžˆμ–΄λ„ 이λ₯Ό μ€‘μ μ μœΌλ‘œ ν•™μŠ΅μ‹œν‚€κΈ° μ–΄λ €μ› λ‹€. 이λ₯Ό ν•΄κ²°ν•˜κΈ° μœ„ν•΄ GST의 좜λ ₯κ³Ό λͺ©ν‘œ μŠ€νƒ€μΌ μž„λ² λ”© λ²‘ν„°μ˜ μƒν˜Έ μ •λ³΄λŸ‰μ„ μ΅œλŒ€ν™” ν•˜λ„λ‘ ν•™μŠ΅ μ‹œν‚€λŠ” 기법을 μ œμ•ˆν•˜μ˜€λ‹€. μƒν˜Έ μ •λ³΄λŸ‰μ„ μ’…λ‹¨ν˜• λͺ¨λΈμ˜ μ†μ‹€ν•¨μˆ˜μ— μ μš©ν•˜κΈ° μœ„ν•΄μ„œ mutual information neural estimator(MINE) 기법을 λ„μž…ν•˜μ˜€κ³  λ‹€ν™”μž λͺ¨λΈμ„ 톡해 기쑴의 GST 기법에 λΉ„ν•΄ λͺ©ν‘œ μŠ€νƒ€μΌμ„ 보닀 μ€‘μ μ μœΌλ‘œ ν•™μŠ΅μ‹œν‚¬ 수 μžˆμŒμ„ ν™•μΈν•˜μ˜€λ‹€. μ„Έλ²ˆμ§Έ μ ‘κ·Όλ²•μœΌλ‘œμ„œ, κ°•μΈν•œ μ’…λ‹¨ν˜• μŒμ„± ν•©μ„±μ˜ μ–΄ν…μ…˜μΈ memory attention을 μ œμ•ˆν•œλ‹€. Long-short term memory(LSTM)의 gating κΈ°μˆ μ€ sequenceλ₯Ό λͺ¨λΈλ§ν•˜λŠ”데 높은 μ„±λŠ₯을 보여왔닀. μ΄λŸ¬ν•œ κΈ°μˆ μ„ μ–΄ν…μ…˜μ— μ μš©ν•˜μ—¬ λ‹€μ–‘ν•œ μŠ€νƒ€μΌμ„ 가진 μŒμ„±μ—μ„œλ„ μ–΄ν…μ…˜μ˜ λŠκΉ€, 반볡 등을 μ΅œμ†Œν™”ν•  수 μžˆλŠ” 기법을 μ œμ•ˆν•œλ‹€. 단일 ν™”μžμ™€ 감정 μŒμ„± ν•©μ„± 기법을 ν† λŒ€λ‘œ memory attention의 μ„±λŠ₯을 ν™•μΈν•˜μ˜€μœΌλ©° κΈ°μ‘΄ 기법 λŒ€λΉ„ 보닀 μ•ˆμ •μ μΈ μ–΄ν…μ…˜ 곑선을 얻을 수 μžˆμŒμ„ ν™•μΈν•˜μ˜€λ‹€. λ§ˆμ§€λ§‰ μ ‘κ·Όλ²•μœΌλ‘œμ„œ, selective multi-attention (SMA)을 ν™œμš©ν•œ μŠ€νƒ€μΌ μ μ‘ν˜• μ’…λ‹¨ν˜• μŒμ„± ν•©μ„± μ–΄ν…μ…˜ 기법을 μ œμ•ˆν•œλ‹€. 기쑴의 μŠ€νƒ€μΌ μ μ‘ν˜• μ’…λ‹¨ν˜• μŒμ„± ν•©μ„±μ˜ μ—°κ΅¬μ—μ„œλŠ” 낭독체 λ‹¨μΌν™”μžμ˜ κ²½μš°μ™€ 같은 단일 μ–΄ν…μ…˜μ„ μ‚¬μš©ν•˜μ—¬ μ™”λ‹€. ν•˜μ§€λ§Œ μŠ€νƒ€μΌ μŒμ„±μ˜ 경우 보닀 λ‹€μ–‘ν•œ μ–΄ν…μ…˜ ν‘œν˜„μ„ μš”κ΅¬ν•œλ‹€. 이λ₯Ό μœ„ν•΄ 닀쀑 μ–΄ν…μ…˜μ„ ν™œμš©ν•˜μ—¬ 후보듀을 μƒμ„±ν•˜κ³  이λ₯Ό 선택 λ„€νŠΈμ›Œν¬λ₯Ό ν™œμš©ν•˜μ—¬ 졜적의 μ–΄ν…μ…˜μ„ μ„ νƒν•˜λŠ” 기법을 μ œμ•ˆν•œλ‹€. 
SMA 기법은 기쑴의 μ–΄ν…μ…˜κ³Όμ˜ 비ꡐ μ‹€ν—˜μ„ ν†΅ν•˜μ—¬ 보닀 λ§Žμ€ μŠ€νƒ€μΌμ„ μ•ˆμ •μ μœΌλ‘œ ν‘œν˜„ν•  수 μžˆμŒμ„ ν™•μΈν•˜μ˜€λ‹€.1 Introduction 1 1.1 Background 1 1.2 Scope of thesis 3 2 Neural Speech Synthesis System 7 2.1 Overview of a Neural Statistical Parametric Speech Synthesis System 7 2.2 Overview of End-to-end Speech Synthesis System 9 2.3 Tacotron2 10 2.4 Attention Mechanism 12 2.4.1 Location Sensitive Attention 12 2.4.2 Forward Attention 13 2.4.3 Dynamic Convolution Attention 14 3 Neural Statistical Parametric Speech Synthesis using AdVRNN 17 3.1 Introduction 17 3.2 Background 19 3.2.1 Variational Autoencoder 19 3.2.2 Variational Recurrent Neural Network 20 3.3 Speech Synthesis Using AdVRNN 22 3.3.1 AdVRNN based Acoustic Modeling 23 3.3.2 Training Procedure 24 3.4 Experiments 25 3.4.1 Objective performance evaluation 28 3.4.2 Subjective performance evaluation 29 3.5 Summary 29 4 Speech Style Modeling Method using Mutual Information for End-to-End Speech Synthesis 31 4.1 Introduction 31 4.2 Background 33 4.2.1 Mutual Information 33 4.2.2 Mutual Information Neural Estimator 34 4.2.3 Global Style Token 34 4.3 Style Token end-to-end speech synthesis using MINE 35 4.4 Experiments 36 4.5 Summary 38 5 Memory Attention: Robust Alignment using Gating Mechanism for End-to-End Speech Synthesis 45 5.1 Introduction 45 5.2 BACKGROUND 48 5.3 Memory Attention 49 5.4 Experiments 52 5.4.1 Experiments on Single Speaker Speech Synthesis 53 5.4.2 Experiments on Emotional Speech Synthesis 56 5.5 Summary 59 6 Selective Multi-attention for style-adaptive end-to-End Speech Syn-thesis 63 6.1 Introduction 63 6.2 BACKGROUND 65 6.3 Selective multi-attention model 66 6.4 EXPERIMENTS 67 6.4.1 Multi-speaker speech synthesis experiments 68 6.4.2 Experiments on Emotional Speech Synthesis 73 6.5 Summary 77 7 Conclusions 79 Bibliography 83 μš”μ•½ 93 κ°μ‚¬μ˜ κΈ€ 95Docto
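    The selective multi-attention of the last approach can likewise be sketched. Below, several parallel attention modules each propose a candidate context vector, and a small selection network scores the candidates given the style embedding; a soft, differentiable selection is shown here, though the thesis's selection network may make a harder choice. The module interface, dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn

class SelectiveMultiAttention(nn.Module):
    """K parallel attention modules propose candidate contexts; a selection
    network weights them conditioned on the style embedding."""
    def __init__(self, attentions, dim_ctx: int, dim_style: int):
        super().__init__()
        # Each module is assumed to map (query, memory) -> (context, alignment).
        self.attentions = nn.ModuleList(attentions)
        self.selector = nn.Sequential(
            nn.Linear(dim_ctx + dim_style, 64),
            nn.Tanh(),
            nn.Linear(64, 1),
        )

    def forward(self, query, memory, style):
        ctxs = []
        for attn in self.attentions:
            ctx, _ = attn(query, memory)
            ctxs.append(ctx)
        ctxs = torch.stack(ctxs, dim=1)                       # (B, K, dim_ctx)
        style_rep = style.unsqueeze(1).expand(-1, ctxs.size(1), -1)
        scores = self.selector(torch.cat([ctxs, style_rep], dim=-1)).squeeze(-1)
        weights = torch.softmax(scores, dim=1)                # soft selection over K
        return (weights.unsqueeze(-1) * ctxs).sum(dim=1)      # blended context
```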

    Meta-learning with Latent Space Clustering in Generative Adversarial Network for Speaker Diarization

    Full text link
    The performance of most speaker diarization systems with x-vector embeddings is vulnerable to noisy environments and lacks domain robustness. Earlier work on speaker diarization using a generative adversarial network (GAN) with an encoder network (ClusterGAN) to project input x-vectors into a latent space has shown promising performance on meeting data. In this paper, we extend the ClusterGAN network to improve diarization robustness and enable rapid generalization across various challenging domains. To this end, we take the pre-trained encoder from the ClusterGAN and fine-tune it using prototypical loss (meta-ClusterGAN or MCGAN) under the meta-learning paradigm. Experiments are conducted on CALLHOME telephonic conversations, AMI meeting data, the DIHARD II dev set, which includes a challenging multi-domain corpus, and two child-clinician interaction corpora (ADOS, BOSCC) related to the autism spectrum disorder domain. Extensive analyses of the experimental data investigate the effectiveness of the proposed ClusterGAN and MCGAN embeddings over x-vectors. The results show that the proposed embeddings with a normalized maximum eigengap spectral clustering (NME-SC) back-end consistently outperform the Kaldi state-of-the-art x-vector diarization system. Finally, we employ embedding fusion with x-vectors to further improve diarization performance, achieving a relative diarization error rate (DER) improvement of 6.67% to 53.93% on the aforementioned datasets using the proposed fused embeddings over x-vectors. In addition, the MCGAN embeddings outperform x-vectors and ClusterGAN in speaker-count estimation and short-segment diarization on telephonic data. Comment: Submitted to IEEE/ACM Transactions on Audio, Speech, and Language Processing.
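    The MCGAN fine-tuning relies on prototypical loss under the meta-learning paradigm. Here is a minimal sketch of that episodic loss (Snell et al., 2017) applied to speaker embeddings, with speakers playing the role of classes; the encoder itself and the episode sampling are omitted, and the tensor names are assumptions.

```python
import torch
import torch.nn.functional as F

def prototypical_loss(support: torch.Tensor, support_labels: torch.Tensor,
                      query: torch.Tensor, query_labels: torch.Tensor) -> torch.Tensor:
    """Episodic prototypical loss over speaker embeddings.

    Each speaker's prototype is the mean of its support embeddings; query
    embeddings are classified by negative squared Euclidean distance to
    the prototypes, trained with cross-entropy.
    """
    speakers = support_labels.unique()
    protos = torch.stack([support[support_labels == s].mean(dim=0)
                          for s in speakers])                  # (n_spk, dim)
    d2 = torch.cdist(query, protos) ** 2                       # (n_query, n_spk)
    # Map each query label to its index in the episode's speaker list.
    target = torch.stack([(speakers == y).nonzero().squeeze()
                          for y in query_labels])
    return F.cross_entropy(-d2, target)
```

    In the paper's setup this loss would be backpropagated into the pre-trained ClusterGAN encoder, pulling same-speaker latents toward their prototype and pushing different speakers apart, which is what makes the embeddings cluster-friendly for the NME-SC back-end.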

    A Review of Deep Learning Techniques for Speech Processing

    Full text link
    The field of speech processing has undergone a transformative shift with the advent of deep learning. The use of multiple processing layers has enabled the creation of models capable of extracting intricate features from speech data. This development has paved the way for unparalleled advances in automatic speech recognition, text-to-speech synthesis, and emotion recognition, propelling the performance of these tasks to unprecedented heights. The power of deep learning techniques has opened up new avenues for research and innovation in the field of speech processing, with far-reaching implications for a range of industries and applications. This review paper provides a comprehensive overview of the key deep learning models and their applications in speech-processing tasks. We begin by tracing the evolution of speech processing research, from early approaches such as MFCC features and HMMs to more recent advances in deep learning architectures such as CNNs, RNNs, transformers, conformers, and diffusion models. We categorize the approaches and compare their strengths and weaknesses for solving speech-processing tasks. Furthermore, we extensively cover the speech-processing tasks, datasets, and benchmarks used in the literature and describe how different deep-learning networks have been utilized to tackle them. Additionally, we discuss the challenges and future directions of deep learning in speech processing, including the need for more parameter-efficient, interpretable models and the potential of deep learning for multimodal speech processing. By examining the field's evolution, comparing and contrasting different approaches, and highlighting future directions and challenges, we hope to inspire further research in this exciting and rapidly advancing field.