144 research outputs found

    Deep Generative Variational Autoencoding for Replay Spoof Detection in Automatic Speaker Verification

    Automatic speaker verification (ASV) systems are highly vulnerable to presentation attacks, also called spoofing attacks. Replay is among the simplest attacks to mount - yet difficult to detect reliably. The generalization failure of spoofing countermeasures (CMs) has driven the community to study various alternative deep learning CMs. The majority of them are supervised approaches that learn a human-spoof discriminator. In this paper, we advocate a different, deep generative approach that leverages powerful unsupervised manifold learning for classification. The potential benefits include the possibility to sample new data and to obtain insights into the latent features of genuine and spoofed speech. To this end, we propose variational autoencoders (VAEs) as an alternative backend for replay attack detection, via three alternative models that differ in their class-conditioning. The first, similar to the use of Gaussian mixture models (GMMs) in spoof detection, trains two VAEs independently - one for each class. The second trains a single conditional model (C-VAE) by injecting a one-hot class label vector into the encoder and decoder networks. Our final proposal integrates an auxiliary classifier to guide the learning of the latent space. Our experimental results using constant-Q cepstral coefficient (CQCC) features on the ASVspoof 2017 and 2019 physical access subtask datasets indicate that the C-VAE offers substantial improvement over training two separate VAEs, one per class. On the 2019 dataset, the C-VAE outperforms the VAE and the baseline GMM by an absolute 9-10% in both the equal error rate (EER) and tandem detection cost function (t-DCF) metrics. Finally, we propose VAE residuals --- the absolute difference between the original input and its reconstruction --- as features for spoofing detection. The proposed frontend approach, augmented with a convolutional neural network classifier, demonstrates substantial improvement over the VAE backend use case.
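The C-VAE conditioning described above - concatenating a one-hot class label to the inputs of both the encoder and the decoder - and the proposed residual feature can be sketched as follows. This is a minimal, untrained NumPy toy with linear layers and made-up dimensions, only illustrating the data flow, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(label, n_classes=2):
    """Encode the class label (0 = genuine, 1 = spoof) as a one-hot vector."""
    v = np.zeros(n_classes)
    v[label] = 1.0
    return v

class ToyCVAE:
    """Minimal linear C-VAE: the one-hot label is concatenated to the
    input of both the encoder and the decoder (the C-VAE conditioning)."""
    def __init__(self, x_dim, z_dim=8, n_classes=2):
        in_enc = x_dim + n_classes
        self.W_mu = rng.normal(0, 0.1, (z_dim, in_enc))
        self.W_logvar = rng.normal(0, 0.1, (z_dim, in_enc))
        self.W_dec = rng.normal(0, 0.1, (x_dim, z_dim + n_classes))

    def encode(self, x, y):
        h = np.concatenate([x, one_hot(y)])
        return self.W_mu @ h, self.W_logvar @ h

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps (the usual VAE reparameterization trick)
        return mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)

    def decode(self, z, y):
        return self.W_dec @ np.concatenate([z, one_hot(y)])

    def reconstruct(self, x, y):
        mu, logvar = self.encode(x, y)
        return self.decode(self.reparameterize(mu, logvar), y)

# The proposed "VAE residual" feature: |x - x_hat|
cqcc_frame = rng.normal(size=30)          # stand-in for a CQCC feature vector
vae = ToyCVAE(x_dim=30)
x_hat = vae.reconstruct(cqcc_frame, y=1)  # condition on the spoof class
residual = np.abs(cqcc_frame - x_hat)
print(residual.shape)
```

In the paper's setting, such residuals would be fed to a convolutional classifier; here the shapes only illustrate how the label conditioning and residual computation fit together.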

    Embedded Based Smart ICU-For Intelligent Patient Monitoring

    Smart ICUs are networks of audio-visual communication and computer systems that link critical care doctors and nurses (intensivists) to intensive care units (ICUs) in other, remote hospitals. The intensivists in the “command center” can communicate by voice with the remote ICU personnel and can receive video communication and clinical data about the patients. Direct patient care is provided by the doctors and nurses in the remote ICU, who do not have to be intensivists themselves. In recent years there has been an increase in the number of patients needing ICU care without a corresponding increase in the supply of intensivists. Smart ICUs can be a valuable resource for hospitals that need to expand capacity and improve care for a growing elderly population. Evidence from some early-adopter hospitals indicates that they can leverage management of patient care by intensivists, reduce mortality rates, and reduce length of stay (LOS). However, positive outcomes appear to depend on the organizational environment into which the Smart ICU is introduced. The dramatic improvements in mortality and LOS reported by some early-adopter hospitals have not been matched in most. The limited research available suggests that the best outcomes may occur in ICUs that: can make organizational arrangements to support the management of patient care by intensivists using the Smart ICU; have little or no intensivist staff available to them in the absence of the Smart ICU; have relatively high severity-adjusted mortality and LOS rates; or are located in remote or rural areas where safe and efficient transfer of patients to regional centers for advanced critical care is difficult. A Smart ICU connects a central command center staffed by intensivists with patients in distant ICUs. Continuous, real-time audio, video, and electronic reports of vital signs connect the command center to the patients’ bedsides. Computer-managed decision support systems track each patient’s status, giving alerts when negative trends are detected and when changes in treatment patterns are scheduled. The patient data include physiological status (e.g., ECG and blood oxygenation), treatment (e.g., the infusion rate for a specific medicine or the settings on a respirator), and medical records.
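The trend-alerting behaviour of such decision support can be sketched with a toy monitor. The window size, threshold, and SpO2 example below are illustrative assumptions for demonstration only, not clinical rules:

```python
from collections import deque

def make_trend_alert(window=5, max_drop=4.0):
    """Return a monitor that flags a negative trend when a vital sign
    (here an SpO2 percentage) falls by more than `max_drop` across the
    last `window` readings. Thresholds are illustrative, not clinical."""
    readings = deque(maxlen=window)
    def check(value):
        readings.append(value)
        if len(readings) == window and readings[0] - value > max_drop:
            return f"ALERT: drop of {readings[0] - value:.1f} over last {window} readings"
        return None
    return check

spo2_monitor = make_trend_alert()
alerts = [spo2_monitor(v) for v in [98, 98, 97, 96, 95, 93, 91]]
print([a is not None for a in alerts])
```

A real system would track many signals per patient and route alerts to the command center; this only shows the windowed-trend idea.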

    Subband modeling for spoofing detection in automatic speaker verification

    Spectrograms - time-frequency representations of audio signals - have found widespread use in neural network-based spoofing detection. While deep models are typically trained on the fullband spectrum of the signal, we argue that not all frequency bands are useful for this task. In this paper, we systematically investigate the impact and importance of different subbands for replay spoofing detection on two benchmark datasets: ASVspoof 2017 v2.0 and ASVspoof 2019 PA. We propose a joint subband modelling framework that employs n different sub-networks to learn subband-specific features; these are later combined and passed to a classifier, and the whole network's weights are updated during training. Our findings on the ASVspoof 2017 dataset suggest that the most discriminative information lies in the first and the last 1 kHz frequency bands, and the joint model trained on these two subbands shows the best performance, outperforming the baselines by a large margin. However, these findings do not generalise to the ASVspoof 2019 PA dataset, indicating that the datasets available for training these models do not reflect real-world replay conditions and that careful design of datasets for training replay spoofing countermeasures is needed.
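Splitting a spectrogram's frequency bins into roughly 1 kHz subbands for separate sub-networks can be sketched as follows. The sample rate and FFT size are illustrative assumptions, not the paper's exact frontend settings:

```python
import numpy as np

def subband_slices(sample_rate, n_fft, band_hz=1000.0):
    """Split FFT bins into contiguous ~band_hz-wide subbands and return
    (start, stop) bin indices per band; leftover bins (e.g. the Nyquist
    bin) are folded into the last band."""
    n_bins = n_fft // 2 + 1
    hz_per_bin = sample_rate / n_fft
    bins_per_band = int(round(band_hz / hz_per_bin))
    bands, start = [], 0
    while start + bins_per_band <= n_bins:
        bands.append((start, start + bins_per_band))
        start += bins_per_band
    if start < n_bins:
        s, _ = bands[-1]
        bands[-1] = (s, n_bins)
    return bands

# 16 kHz audio with a 512-point FFT gives 257 bins covering 0-8 kHz
bands = subband_slices(16000, 512)
first, last = bands[0], bands[-1]

# Route only the first and last ~1 kHz bands to separate sub-networks,
# the combination found most discriminative on ASVspoof 2017.
spec = np.random.rand(257, 100)              # (freq bins, time frames)
x_low = spec[first[0]:first[1], :]
x_high = spec[last[0]:last[1], :]
print(len(bands), x_low.shape, x_high.shape)
```

Each slice would feed its own sub-network, with the learned subband features concatenated before the final classifier.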

    Analysing the predictions of a CNN-based replay spoofing detection system

    Playing recorded speech samples of an enrolled speaker - a "replay attack" - is a simple approach to bypassing an automatic speaker verification (ASV) system. The vulnerability of ASV systems to such attacks has been acknowledged and studied, but there has been no research into what spoofing detection systems are actually learning to discriminate. In this paper, we analyse the local behaviour of a replay spoofing detection system based on convolutional neural networks (CNNs), adapted from a state-of-the-art CNN (LCNN-FFT) submitted to the ASVspoof 2017 challenge. We generate temporal and spectral explanations for the model's predictions using the SLIME algorithm. Our findings suggest that in most spoofed instances the model uses information in the first 400 milliseconds of each audio instance to make the class prediction. Knowledge of the characteristics that spoofing detection systems exploit can help build less vulnerable ASV systems and spoofing detection systems, as well as better evaluation databases.
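A simplified, occlusion-style probe in the spirit of SLIME's temporal explanations can be sketched as follows. This is not the actual SLIME algorithm (which fits a local linear surrogate model over perturbed segments); the classifier here is a deliberately trivial stand-in:

```python
import numpy as np

def temporal_importance(spec, predict, n_segments=10):
    """Occlusion-style probe: zero out each time segment in turn and
    record how much the model's score drops, as a rough indicator of
    which temporal region the prediction relies on."""
    n_frames = spec.shape[1]
    edges = np.linspace(0, n_frames, n_segments + 1).astype(int)
    base = predict(spec)
    drops = []
    for i in range(n_segments):
        occluded = spec.copy()
        occluded[:, edges[i]:edges[i + 1]] = 0.0
        drops.append(base - predict(occluded))
    return np.array(drops)

# Dummy "model" that keys on energy in the earliest frames, standing in
# for a trained spoofing detector whose score emphasizes early audio.
def toy_score(spec):
    return spec[:, :40].mean()

spec = np.random.rand(64, 400)   # (freq, time), e.g. ~4 s at 10 ms hop
drops = temporal_importance(spec, toy_score)
print(drops.argmax())            # which segment mattered most
```

For this toy model the first segment dominates, mirroring the paper's finding that the first ~400 ms carries most of the decisive information.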

    Ensemble Models for Spoofing Detection in Automatic Speaker Verification

    Detecting spoofing attempts against automatic speaker verification (ASV) systems is challenging, especially when using only one modelling approach. For robustness, we use both deep neural networks and traditional machine learning models and combine them into ensemble models through logistic regression. They are trained to detect logical access (LA) and physical access (PA) attacks on the dataset released as part of the ASV Spoofing and Countermeasures Challenge 2019. We propose dataset partitions that ensure different attack types are present during training and validation, improving system robustness. Our ensemble model outperforms all our single models and the challenge baselines for both attack types. We investigate why some models on the PA dataset strongly outperform others and find that spoofed recordings in the dataset tend to have longer silences at the end than genuine ones. With these silences removed, the PA task becomes much more challenging: the tandem detection cost function (t-DCF) of our best single model rises from 0.1672 to 0.5018 and the equal error rate (EER) increases from 5.98% to 19.8% on the development set.
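The logistic-regression fusion of per-model scores can be sketched on toy data. The two "systems" and their score distributions below are invented for illustration; a real setup would fuse scores from the trained single models:

```python
import numpy as np

def fuse_scores(scores, labels, lr=0.5, epochs=500):
    """Fit logistic-regression fusion weights over per-model scores via
    plain gradient descent, a minimal stand-in for the paper's ensemble
    combination. scores: (n_trials, n_models); labels: 1 = genuine, 0 = spoof."""
    X = np.hstack([scores, np.ones((len(scores), 1))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))                # sigmoid
        w -= lr * X.T @ (p - labels) / len(labels)      # cross-entropy gradient
    return w

def fused_score(scores, w):
    X = np.hstack([scores, np.ones((len(scores), 1))])
    return X @ w  # use the log-odds as the fused detection score

rng = np.random.default_rng(1)
# Toy scores from two hypothetical single systems
genuine = rng.normal([2.0, 1.5], 1.0, (200, 2))
spoof = rng.normal([-1.0, -0.5], 1.0, (200, 2))
scores = np.vstack([genuine, spoof])
labels = np.r_[np.ones(200), np.zeros(200)]

w = fuse_scores(scores, labels)
acc = ((fused_score(scores, w) > 0) == labels).mean()
print(acc)
```

The fused log-odds score can then be thresholded, or evaluated with EER and t-DCF just like any single-system score.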