
    Privacy-preserving iVector-based speaker verification

    This work introduces an efficient algorithm for privacy-preserving (PP) voice verification based on iVector and linear discriminant analysis techniques. The research considers a scenario in which users enrol their voice biometric to access different services (e.g., banking). Once enrolment is complete, users can verify themselves with their voice-print instead of an alphanumeric password. Since a voice-print is unique to each person, storing it on a third-party server raises several privacy concerns. To address this challenge, this work proposes a novel randomisation-based technique for voice authentication that allows the user to enrol and verify their voice in the randomised domain. To achieve this, the iVector-based voice verification technique has been redesigned to operate in the randomised domain. The proposed algorithm is validated on a well-known speech dataset; it neither compromises authentication accuracy nor adds complexity due to the randomisation operations.
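A key enabler for verifying in a randomised domain is that cosine scoring between i-vectors is invariant under an orthogonal random transform. A minimal pure-Python sketch of that property (the 4-dimensional "i-vectors" and the rotation construction are illustrative assumptions, not the paper's actual algorithm):

```python
import math
import random

def gram_schmidt(vectors):
    """Orthonormalise a list of vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            dot = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - dot * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        basis.append([wi / norm for wi in w])
    return basis

def make_random_orthogonal(dim, seed):
    """A user-specific random orthogonal matrix (rows are orthonormal)."""
    rng = random.Random(seed)
    return gram_schmidt([[rng.gauss(0, 1) for _ in range(dim)] for _ in range(dim)])

def rotate(matrix, v):
    return [sum(r * vi for r, vi in zip(row, v)) for row in matrix]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

enrol = [0.4, -1.2, 0.7, 2.1]   # toy "i-vector" from enrolment
probe = [0.5, -1.0, 0.9, 1.8]   # toy "i-vector" from a verification attempt

R = make_random_orthogonal(4, seed=42)
score_plain = cosine(enrol, probe)
score_randomised = cosine(rotate(R, enrol), rotate(R, probe))
print(abs(score_plain - score_randomised) < 1e-9)  # rotation preserves the score
```

Because the server only ever sees rotated vectors, the score can be computed entirely in the randomised domain without revealing the original voice-print.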

    Multi-biometric templates using fingerprint and voice

    As biometrics gains popularity, there is increasing concern about privacy and the misuse of biometric data held in central repositories. Biometric verification systems also face challenges arising from noise and intra-class variations. To tackle both problems, a multimodal biometric verification system combining fingerprint and voice modalities is proposed. The system combines the two modalities at the template level using multibiometric templates. The fusion of fingerprint and voice data alleviates privacy concerns by hiding the fingerprint's minutiae points among artificial points generated from features of the speaker's spoken utterance. Equal error rates are observed to be under 2% for a system in which 600 utterances from 30 people were processed and fused with a database of 400 fingerprints from 200 individuals. Accuracy is improved compared to previous voice-verification results on the same speaker database.
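One way to picture this kind of template-level fusion is a chaff-point scheme: genuine minutiae are mixed with decoy points derived from voice features, so an attacker reading the template cannot tell which points are real. The sketch below is a loose illustration with assumed toy data; the feature-to-chaff derivation is invented for the example and is not the authors' construction:

```python
import random

def voice_chaff(voice_features, grid=300):
    """Derive reproducible chaff points (x, y, angle) from voice features.
    The feature values and this derivation are illustrative assumptions."""
    chaff = []
    for f in voice_features:
        g = random.Random(round(f * 10000))  # feature-seeded generator
        chaff.append((g.randrange(grid), g.randrange(grid), g.randrange(360)))
    return chaff

def build_template(minutiae, voice_features, seed=0):
    """Hide genuine minutiae among voice-derived chaff points by shuffling."""
    template = list(minutiae) + voice_chaff(voice_features)
    random.Random(seed).shuffle(template)
    return template

minutiae = [(12, 40, 75), (88, 19, 210), (140, 230, 15)]  # toy (x, y, angle) points
features = [0.271, -0.553, 0.914, 0.102]                  # toy voice features
template = build_template(minutiae, features)
```

Only someone who can regenerate the chaff from a matching voice sample can separate the genuine minutiae from the decoys.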

    Voice Verification System Based on Bark-frequency Cepstral Coefficient

    Data verification systems are evolving towards more natural, biometric approaches. In daily interactions, humans use the voice to communicate, and voice characteristics also serve to identify who is speaking. The problem is that background noise, together with the unique signal characteristics of each person, makes the speaker-classification process more complex. To identify the speaker, we need to understand the feature-extraction process for speech signals. We developed a technique to extract each speaker's voice characteristics based on spectral analysis; this research is useful for the development of biometric-based security applications. First, the voiced signal is separated from pauses using voice activity detection. The voice characteristics are then extracted as bark-frequency cepstral coefficients, and the resulting set of cepstral coefficients is classified by speaker using an artificial neural network. Accuracy reached about 82% in the voice recognition process with 10 speakers, while the highest accuracy was 93% with only 1 speaker.
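A bark-frequency cepstral front end maps linear frequency onto the Bark scale, takes log filterbank energies, and decorrelates them with a DCT. A sketch of the two numeric building blocks, using Zwicker's approximation of the Bark scale (the toy filterbank energies are illustrative assumptions):

```python
import math

def hz_to_bark(f_hz):
    """Zwicker's approximation of the Bark scale."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

def dct_ii(x):
    """DCT-II, turning log filterbank energies into cepstral coefficients."""
    n = len(x)
    return [sum(xi * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, xi in enumerate(x))
            for k in range(n)]

# Bark-band centre frequencies compress at high Hz, mimicking human hearing.
for f in (100, 500, 1000, 4000, 8000):
    print(f, round(hz_to_bark(f), 2))

energies = [2.0, 1.5, 0.9, 0.4]  # toy bark-band filterbank energies
bfcc = dct_ii([math.log(e) for e in energies])
```

In a full pipeline the energies would come from triangular filters spaced uniformly in Bark, applied to each frame's power spectrum.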

    Voice recognition through the use of Gabor transform and heuristic algorithm

    The increasingly popular use of verification methods based on personal characteristics such as the eyeball, fingerprint, or voice makes the invention of more accurate and irrefutable methods urgent. In this work we present voice verification based on the Gabor transform. The proposed approach involves creating a spectrogram, which serves as a habitat for the population of a selected heuristic algorithm. The heuristic performs feature extraction, enabling identity verification with a classical neural network. The results of the research are presented and discussed to show the efficiency of the proposed methodology.
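The Gabor transform is, in effect, a short-time Fourier transform with a Gaussian window, and the resulting spectrogram is the "habitat" that the heuristic population searches. A minimal sketch (naive DFT and illustrative frame sizes; a real system would use an FFT):

```python
import math

def gaussian_window(n, sigma=0.4):
    """Gaussian (Gabor) window; sigma is relative to the half-width."""
    c = (n - 1) / 2.0
    return [math.exp(-0.5 * ((i - c) / (sigma * c)) ** 2) for i in range(n)]

def gabor_spectrogram(signal, frame_len, hop):
    """Magnitude spectrogram from Gaussian-windowed frames (naive DFT)."""
    win = gaussian_window(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [s * w for s, w in zip(signal[start:start + frame_len], win)]
        spec = []
        for k in range(frame_len // 2 + 1):
            re = sum(x * math.cos(-2 * math.pi * k * i / frame_len)
                     for i, x in enumerate(frame))
            im = sum(x * math.sin(-2 * math.pi * k * i / frame_len)
                     for i, x in enumerate(frame))
            spec.append(math.hypot(re, im))
        frames.append(spec)
    return frames

# A pure tone at DFT bin 4 should dominate that bin of the spectrogram.
N = 32
signal = [math.sin(2 * math.pi * 4 * i / N) for i in range(2 * N)]
spec = gabor_spectrogram(signal, frame_len=N, hop=N // 2)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

Each time-frequency cell of `spec` is a position the heuristic's individuals can occupy while searching for discriminative features.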

    CALIPER: Continuous Authentication Layered with Integrated PKI Encoding Recognition

    Full text link
    Architectures relying on continuous authentication require a secure way to challenge the user's identity without trusting that the Continuous Authentication Subsystem (CAS) has not been compromised, i.e., that the response to the layer which manages service/application access is not fake. In this paper, we introduce the CALIPER protocol, in which a separate Continuous Access Verification Entity (CAVE) directly challenges the user's identity in a continuous authentication regime. Instead of simply returning authentication probabilities or confidence scores, CALIPER's CAS uses live hard and soft biometric samples from the user to extract a cryptographic private key embedded in a challenge posed by the CAVE. The CAS then uses this key to sign a response to the CAVE. CALIPER supports multiple modalities, key lengths, and security levels and can be applied in two scenarios: one where the CAS must authenticate its user to a CAVE running on a remote server (device-server) for access to remote application data, and another where the CAS must authenticate its user to a locally running trusted computing module (TCM) for access to local application data (device-TCM). We further demonstrate that CALIPER can leverage device hardware resources to enable privacy and security even when the device's kernel is compromised, and we show how this authentication protocol can even be expanded to obfuscate direct kernel object manipulation (DKOM) malware. Comment: Accepted to the CVPR 2016 Biometrics Workshop.
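The challenge-response core of such a protocol can be sketched as follows. CALIPER itself extracts a cryptographic private key from live biometric samples and signs with it; the sketch below substitutes a fixed placeholder key and an HMAC for the asymmetric signature step, so it illustrates only the CAVE-challenge/CAS-response flow, not the paper's key-extraction mechanism:

```python
import hashlib
import hmac
import os

def cave_issue_challenge():
    """CAVE generates a fresh random challenge nonce."""
    return os.urandom(16)

def cas_sign_response(extracted_key, challenge):
    """CAS 'signs' the challenge with the key it recovered from biometric
    samples (HMAC stands in for the private-key signature used by CALIPER)."""
    return hmac.new(extracted_key, challenge, hashlib.sha256).digest()

def cave_verify(expected_key, challenge, response):
    """CAVE checks the response in constant time."""
    expected = hmac.new(expected_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = b"placeholder-for-biometric-extracted-key"  # assumption, not the real scheme
ch = cave_issue_challenge()
ok = cave_verify(key, ch, cas_sign_response(key, ch))
```

A compromised CAS that cannot produce the biometric-derived key cannot forge a valid response, which is the property the protocol relies on.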

    Synthetic speech detection and audio steganography in VoIP scenarios

    Distinguishing synthetic from human voice draws on the techniques of current biometric voice-recognition systems, which prevent one person's voice, whether used with good or bad intentions, from being confused with someone else's. Steganography makes it possible to hide a message inside an innocuous file (usually audio, video, or image) in such a way as not to raise suspicion in any external observer. This article proposes two methods, applicable in a hypothetical VoIP scenario, which allow us to distinguish synthetic speech from a human voice and to insert, within the Comfort Noise generated in the pauses of a voice conversation, a hidden text message. The first method builds on existing studies of Modulation Features for the temporal analysis of speech signals, while the second proposes a technique derived from Direct Sequence Spread Spectrum, which distributes the energy of the signal to be hidden over a wider transmission band. Due to space limits, this paper is only an extended abstract; the full version will contain further details on our research.
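The Direct Sequence Spread Spectrum idea behind the second method can be sketched as follows: each message bit is spread over many chips of a shared pseudo-noise sequence, added at low gain to the cover signal, and recovered by correlating against the same sequence. All parameters here (chip count, gain, the Gaussian "comfort noise" cover) are illustrative assumptions:

```python
import random

CHIPS = 64  # chips per message bit (spreading factor)

def pn_sequence(n, seed):
    """Shared +/-1 pseudo-noise sequence; the seed acts as the stego key."""
    rng = random.Random(seed)
    return [rng.choice((-1.0, 1.0)) for _ in range(n)]

def embed(cover, bits, seed, gain=0.05):
    """Spread each bit over CHIPS chips and add it faintly to the cover."""
    pn = pn_sequence(CHIPS * len(bits), seed)
    stego = list(cover)
    for i, bit in enumerate(bits):
        sign = 1.0 if bit else -1.0
        for j in range(CHIPS):
            stego[i * CHIPS + j] += gain * sign * pn[i * CHIPS + j]
    return stego

def extract(stego, n_bits, seed):
    """Despread by correlating each bit's chips with the PN sequence."""
    pn = pn_sequence(CHIPS * n_bits, seed)
    bits = []
    for i in range(n_bits):
        corr = sum(stego[i * CHIPS + j] * pn[i * CHIPS + j] for j in range(CHIPS))
        bits.append(1 if corr > 0 else 0)
    return bits

rng = random.Random(1)
cover = [rng.gauss(0, 0.01) for _ in range(CHIPS * 8)]  # toy comfort-noise frame
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits, seed=7)
recovered = extract(stego, len(bits), seed=7)
```

Because the per-bit correlation of the PN sequence with the cover noise is small compared with `gain * CHIPS`, the bits survive the embedding while the added energy stays buried in the noise floor.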