
    Audio Deepfake Detection: A Survey

    Audio deepfake detection is an active, emerging topic. A growing body of literature has studied deepfake detection algorithms and achieved effective performance, yet the problem is far from solved. Although some reviews exist, no comprehensive survey has provided researchers with a systematic overview of these developments together with a unified evaluation. Accordingly, in this survey paper we first highlight the key differences across various types of deepfake audio, then outline and analyse competitions, datasets, features, classifiers, and evaluation of state-of-the-art approaches. For each aspect, the basic techniques, advanced developments, and major challenges are discussed. In addition, we perform a unified comparison of representative features and classifiers on the ASVspoof 2021, ADD 2023, and In-the-Wild datasets for audio deepfake detection. The survey shows that future research should address the lack of large-scale in-the-wild datasets, the poor generalization of existing detection methods to unknown fake attacks, and the interpretability of detection results.
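    The unified evaluation mentioned above relies on a common metric; benchmarks such as ASVspoof conventionally report the equal error rate (EER). As a hedged illustration (not taken from the survey), a minimal Python sketch of computing EER from detector scores:

```python
# Minimal sketch: equal error rate (EER), the standard metric in audio
# deepfake detection benchmarks such as ASVspoof. Assumes higher
# scores mean "more likely bona fide".
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """labels: 1 = bona fide, 0 = fake; scores: detector outputs."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    # EER is the operating point where the false-positive and
    # false-negative rates cross.
    idx = np.nanargmin(np.abs(fpr - fnr))
    return (fpr[idx] + fnr[idx]) / 2

# Toy usage with synthetic scores (real systems score utterances).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 1000)
scores = labels + rng.normal(0, 0.8, 1000)
print(f"EER: {equal_error_rate(labels, scores):.3f}")
```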

    CNN Modeling for Speech-Based Emotion Detection in Indonesian

    Technological developments show a growing need for devices capable of intelligent human-computer interaction. One example is computer-based emotion recognition, which requires the ability to recognize, interpret, and respond to emotions expressed in speech. To date, however, research on Indonesian-language speech emotion recognition (SER) remains scarce, owing to the limited availability of Indonesian-language corpora for SER. In this study, an SER system was built using a dataset taken from an Indonesian-language TV series. The system was designed to classify four emotion labels: angry, happy, neutral, and sad. It was implemented with deep learning, specifically a CNN. The input is a combination of three features: MFCC, fundamental frequency (F0), and RMS energy (RMSE). In the experiments, the best result for the Indonesian SER system was obtained with MFCC + fundamental frequency as input, reaching 85% accuracy, while MFCC alone reached 83%. Forcing the combination of all three inputs (MFCC + F0 + RMSE) degraded performance to 78% accuracy, and the lowest accuracy, 72%, was obtained with MFCC + RMSE. This initial study is expected to give SER researchers a picture of how to select speech-signal features as inputs in their experiments and to ease the next steps of their research.
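    To make the feature choices concrete, here is a minimal sketch (not the authors' code) of extracting the three inputs named above with librosa; the file path, sampling rate, and frame parameters are illustrative assumptions:

```python
# Minimal sketch of the three inputs named in the abstract: MFCC,
# fundamental frequency (F0), and RMS energy, extracted with librosa.
# File path and parameters are illustrative, not the authors' setup.
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical file

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # (13, frames)
f0, _, _ = librosa.pyin(y, fmin=65, fmax=600, sr=sr)    # F0 per frame
rms = librosa.feature.rms(y=y)                          # (1, frames)

# Align frame counts and stack; unvoiced frames have F0 = NaN.
n = min(mfcc.shape[1], len(f0), rms.shape[1])
features = np.vstack([mfcc[:, :n], np.nan_to_num(f0[:n])[None, :],
                      rms[:, :n]])                      # (15, n)
```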

    Transfer Learning for Improved Audio-Based Human Activity Recognition

    Human activities are accompanied by characteristic sound events, the processing of which might provide valuable information for automated human activity recognition. This paper presents a novel approach addressing the case where one or more human activities are associated with limited audio data, resulting in a potentially highly imbalanced dataset. Data augmentation is based on transfer learning; more specifically, the proposed method: (a) identifies the classes which are statistically close to the ones associated with limited data; (b) learns a multiple-input, multiple-output transformation; and (c) transforms the data of the closest classes so that it can be used for modeling the ones associated with limited data. Furthermore, the proposed framework includes a feature set extracted from signal representations of diverse domains, i.e., temporal, spectral, and wavelet. Extensive experiments demonstrate the relevance of the proposed data augmentation approach under a variety of generative recognition schemes.
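    As a rough reading of steps (a)-(c), one might pick the statistically closest donor class by the distance between class feature means and fit a linear multiple-input, multiple-output map by least squares. The NumPy sketch below is an assumption about the mechanics, not the paper's exact method:

```python
# Rough sketch of the augmentation idea: (a) find the class whose
# feature statistics are closest to the rare class, (b) fit a linear
# multiple-input, multiple-output map from donor to rare-class
# features, (c) transform donor samples into synthetic rare samples.
# Illustrative reading of the abstract, not the paper's method.
import numpy as np

def closest_class(rare_feats, class_feats):
    """Pick the donor class whose mean feature vector is nearest."""
    mu = rare_feats.mean(axis=0)
    dists = {c: np.linalg.norm(X.mean(axis=0) - mu)
             for c, X in class_feats.items()}
    return min(dists, key=dists.get)

def augment(rare_feats, donor_feats):
    """Least-squares MIMO map from donor space to rare-class space."""
    n = min(len(rare_feats), len(donor_feats))
    W, *_ = np.linalg.lstsq(donor_feats[:n], rare_feats[:n], rcond=None)
    return donor_feats @ W  # synthetic rare-class samples
```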

    Multimodal Sentiment Analysis Based on Deep Learning: Recent Progress

    Multimodal sentiment analysis is an important research topic in the field of NLP, aiming to analyze speakers' sentiment tendencies through features extracted from the textual, visual, and acoustic modalities. Its main methods are based on machine learning and deep learning. Machine learning-based methods rely heavily on labeled data, whereas deep learning-based methods can overcome this shortcoming and capture in-depth semantic information, per-modality characteristics, and the interactions between multimodal data. In this paper, we survey the deep learning-based methods, covering the fusion of text and image as well as the fusion of text, image, audio, and video. Specifically, we discuss the main problems of these methods and future directions. Finally, we review work on multimodal sentiment analysis in conversation.
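    As one concrete (hypothetical) instance of deep multimodal fusion, a simple late-fusion head that concatenates per-modality embeddings before classifying sentiment; the embedding dimensions are illustrative, and real systems use pretrained encoders per modality:

```python
# Minimal late-fusion sketch in PyTorch: concatenate text, visual,
# and acoustic embeddings, then classify sentiment. Dimensions are
# illustrative assumptions.
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, d_text=768, d_vis=512, d_aud=128, n_classes=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(d_text + d_vis + d_aud, 256),
            nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, text, vis, aud):
        # Late fusion: each modality is encoded separately upstream,
        # then joined only at the classification head.
        return self.head(torch.cat([text, vis, aud], dim=-1))

model = LateFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 512), torch.randn(4, 128))
```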

    Signal Processing Using Non-invasive Physiological Sensors

    Non-invasive biomedical sensors monitor physiological parameters from the human body for potential future therapies and healthcare solutions. Today, a critical factor in providing a cost-effective healthcare system is improving patients' quality of life and mobility, which can be achieved by developing non-invasive sensor systems that can be deployed at the point of care, used at home, or integrated into wearable devices for long-term data collection. Another integral part of a cost-effective healthcare system is the signal processing of the data recorded with non-invasive biomedical sensors. In this book, we aimed to attract researchers interested in applying signal processing methods to different biomedical signals, such as the electroencephalogram (EEG), electromyogram (EMG), functional near-infrared spectroscopy (fNIRS), electrocardiogram (ECG), galvanic skin response, pulse oximetry, and photoplethysmogram (PPG). We encouraged new signal processing methods, or novel applications of existing methods to physiological signals, to help healthcare providers make better decisions.

    Deep spiking neural networks with applications to human gesture recognition

    Spiking neural networks (SNNs), the third generation of artificial neural networks (ANNs), are a class of event-driven neuromorphic algorithms with a potentially wide range of application domains, applicable to a variety of extremely low-power neuromorphic hardware. The work presented in this thesis addresses the challenges of human gesture recognition using novel SNN algorithms. It discusses the design of these algorithms for both visual- and auditory-domain human gesture recognition, as well as event-based pre-processing toolkits for audio signals. On the visual side, a novel SNN-based event-driven hand gesture recognition system is proposed. Its spiking recurrent convolutional neural network (SCRNN) design combines a designed convolution operation with recurrent connectivity to maintain spatial and temporal relations in address-event-representation (AER) data, and is shown to be effective in a hand gesture recognition experiment. The proposed SCRNN architecture can achieve arbitrary temporal resolution, which means it can exploit temporal correlations between event collections. The design uses a backpropagation-based training algorithm and does not suffer from gradient vanishing/explosion problems. On the audio side, a novel end-to-end spiking speech emotion recognition (SER) system is proposed. It employs MFCCs as its main speech features and a self-designed latency coding algorithm to efficiently convert the raw signal into AER input suitable for an SNN. A two-layer spiking recurrent architecture is proposed to address temporal correlations between spike trains. The robustness of this system is supported by several open public datasets, which demonstrate state-of-the-art recognition accuracy alongside significant reductions in network size, computational cost, and training time. In addition to directly contributing to neuromorphic SER, this thesis proposes a novel speech-coding algorithm based on the working mechanism of the human auditory system. The algorithm mimics the functionality of the cochlea and provides an alternative method of event-data acquisition for audio. The algorithm is then further simplified and extended into a speech-enhancement application used jointly with the proposed SER system. This speech-enhancement method uses a lateral inhibition mechanism as a frequency coincidence detector to remove uncorrelated noise in the time-frequency spectrum, and experiments show it to be effective for up to six types of noise.
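    The latency coding idea, in which stronger inputs fire earlier, can be illustrated in a few lines. This NumPy sketch is a generic first-spike latency encoder, not the thesis's exact algorithm:

```python
# Generic first-spike latency coding sketch: each feature value is
# mapped to a spike time within a window; larger values spike earlier.
# Illustrative only; the thesis's own coding algorithm may differ.
import numpy as np

def latency_encode(features, t_max=100):
    """Map features to integer spike times in [0, t_max)."""
    x = np.asarray(features, dtype=float)
    x = (x - x.min()) / (x.max() - x.min() + 1e-9)  # normalize to [0, 1]
    return np.round((1.0 - x) * (t_max - 1)).astype(int)  # strong -> early

# Example: encode one frame of MFCC-like features into spike times.
spike_times = latency_encode(np.random.randn(13))
```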

    Characterization of Infant Cries for the Early Diagnosis of Various Pathologies

    The use of cry signals in diagnosis rests on theories proposed by various researchers in the field, whose main objective was the spectrographic analysis and modeling of cry signals. They demonstrated that the acoustic characteristics of newborn cries are linked to particular medical conditions. This thesis aims to improve the accuracy of pathological cry recognition by combining several acoustic parameters derived from spectrographic analysis with parameters that characterize the vocal folds and the vocal tract. Acoustic features representing the vocal tract have been widely used for cry classification, whereas vocal-fold features for automatic cry recognition, and effective techniques for extracting them, have not been exploited. To this end, we first performed a qualitative characterization of the cries of healthy and sick newborns using features defined in the literature that describe the behavior of the vocal folds and the vocal tract during crying. This step allowed us to identify the features most important for differentiating the pathological cries studied. To extract the selected features, we implemented efficient measurement methods that avoid over- and under-estimation of the features. The quantification approach proposed and used in this work facilitates automatic cry analysis and allows these features to be used effectively in the diagnostic system. We also carried out experimental tests to validate all the approaches introduced in this thesis. The results are satisfactory and show an improvement in the recognition of cries by pathology. The work is presented in this thesis as three articles published in different journals; two further articles published in peer-reviewed conference proceedings are given in the appendices.
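    As an illustrative sketch of the two feature families the thesis combines, one can extract a vocal-fold cue (fundamental frequency) and vocal-tract cues (formant estimates from LPC roots) with librosa; the file, frame position, and search ranges are assumptions, not the thesis's method:

```python
# Illustrative sketch (not the thesis's method) of the two feature
# families combined here: a vocal-fold cue (fundamental frequency)
# and vocal-tract cues (formant estimates from LPC roots).
import numpy as np
import librosa

y, sr = librosa.load("cry.wav", sr=16000)   # hypothetical recording

# Vocal folds: F0 contour of the cry (search range is an assumption).
f0, _, _ = librosa.pyin(y, fmin=200, fmax=1000, sr=sr)

# Vocal tract: formants of one short windowed frame via LPC roots.
frame = y[8000:8000 + 512] * np.hamming(512)
a = librosa.lpc(frame, order=12)
roots = [r for r in np.roots(a) if np.imag(r) > 0]
formants_hz = sorted(np.angle(r) * sr / (2 * np.pi) for r in roots)
```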

    A survey of the application of soft computing to investment and financial trading


    Voice Spoofing Countermeasures: Taxonomy, State-of-the-art, experimental analysis of generalizability, open challenges, and the way forward

    Malicious actors may use various voice-spoofing attacks to fool automatic speaker verification (ASV) systems, and even use them to spread misinformation. Various countermeasures have been proposed to detect these spoofing attacks. Given the extensive work on spoofing detection in ASV systems over the last 6-7 years, there is a need to classify the research and perform qualitative and quantitative comparisons of state-of-the-art countermeasures. Additionally, no existing survey has reviewed integrated solutions to voice spoofing evaluation and speaker verification, adversarial/anti-forensics attacks on spoofing countermeasures and on ASV itself, or unified solutions that detect multiple attacks with a single model. Further, no work has provided an apples-to-apples comparison of published countermeasures to assess their generalizability by evaluating them across corpora. In this work, we review the literature on spoofing detection using hand-crafted features, deep learning, end-to-end, and universal spoofing countermeasure solutions to detect speech synthesis (SS), voice conversion (VC), and replay attacks. We also review integrated solutions to voice spoofing evaluation and speaker verification, and adversarial and anti-forensics attacks on voice countermeasures and ASV. The limitations and challenges of the existing spoofing countermeasures are also presented. We report the performance of these countermeasures on several datasets and evaluate them across corpora. For the experiments, we employ the ASVspoof2019 and VSDC datasets along with GMM, SVM, CNN, and CNN-GRU classifiers. (For reproducibility, the code of the test bed can be found in our GitHub repository.)
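    To make the cross-corpus protocol concrete, here is a minimal hedged sketch of the classic GMM countermeasure baseline: fit one model per class on features from a training corpus, then score utterances from a different corpus by log-likelihood ratio. Data loading and feature extraction are placeholders, not the paper's test bed:

```python
# Minimal sketch of a GMM countermeasure evaluated across corpora:
# train bona fide / spoof GMMs on one dataset's features, then score
# a different dataset by log-likelihood ratio. Inputs are placeholder
# arrays, not the paper's actual test bed.
from sklearn.mixture import GaussianMixture

def train_cm(bona_feats, spoof_feats, n_components=64):
    """bona_feats, spoof_feats: (n_frames, n_features) training arrays."""
    gmm_b = GaussianMixture(n_components, covariance_type="diag").fit(bona_feats)
    gmm_s = GaussianMixture(n_components, covariance_type="diag").fit(spoof_feats)
    return gmm_b, gmm_s

def llr_score(gmm_b, gmm_s, utt_feats):
    """Per-utterance log-likelihood ratio; higher = more likely bona fide."""
    return gmm_b.score(utt_feats) - gmm_s.score(utt_feats)
```

    Scores produced this way on a held-out corpus can be summarized with the EER computation sketched earlier in this listing.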