
    Multi-biometric templates using fingerprint and voice

    As biometrics gains popularity, there is increasing concern about privacy and the misuse of biometric data held in central repositories. Biometric verification systems also face challenges arising from noise and intra-class variation. To address both problems, a multimodal biometric verification system combining fingerprint and voice modalities is proposed. The system fuses the two modalities at the template level, producing multibiometric templates. Fusing fingerprint and voice data reduces the privacy concerns by hiding the fingerprint's minutiae points among artificial points generated from features of the speaker's spoken utterance. Equal error rates below 2% are observed for a system in which 600 utterances from 30 speakers were processed and fused with a database of 400 fingerprints from 200 individuals. Accuracy is improved over previous results for voice verification on the same speaker database.
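    The abstract does not spell out the template-level fusion scheme, so the following Python sketch only illustrates the general idea it describes: genuine fingerprint minutiae are hidden among artificial (chaff) points whose positions are derived from the voice features, so that only a matching utterance lets a verifier separate the two sets. All function names, the grid size, and the chaff count are assumptions for illustration, not the paper's method.

    import hashlib
    import random
    import struct

    def _seed_from_voice(voice_features):
        # Quantise the voice feature vector and hash it into a reproducible seed,
        # so the same utterance regenerates the same artificial points.
        data = b"".join(struct.pack("<i", int(round(f * 100))) for f in voice_features)
        return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

    def _chaff_points(voice_features, grid=256, n_chaff=200):
        # Artificial points placed deterministically from the voice-derived seed.
        rng = random.Random(_seed_from_voice(voice_features))
        points = set()
        while len(points) < n_chaff:
            points.add((rng.randrange(grid), rng.randrange(grid)))
        return points

    def build_template(minutiae, voice_features, grid=256, n_chaff=200):
        """Mix genuine (x, y) minutiae with voice-derived chaff points."""
        mixed = set(minutiae) | _chaff_points(voice_features, grid, n_chaff)
        return sorted(mixed)  # sorting removes any insertion-order clue

    def verify(template, voice_features, probe_minutiae,
               grid=256, n_chaff=200, threshold=0.6):
        """Remove the chaff implied by the claimed voice features and match the
        probe fingerprint against the remaining candidate points."""
        candidates = set(template) - _chaff_points(voice_features, grid, n_chaff)
        if not candidates:
            return False
        overlap = len(candidates & set(probe_minutiae)) / len(candidates)
        return overlap >= threshold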

    Multimodal person recognition for human-vehicle interaction

    Next-generation vehicles will undoubtedly feature biometric person recognition as part of the effort to improve the driving experience. Today's technology, however, prevents such systems from operating satisfactorily under adverse conditions. A proposed framework achieves person recognition by combining different biometric modalities and is borne out in two case studies.
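    The abstract does not say how the modalities are combined, so the short Python sketch below shows weighted score-level fusion, one common way to merge per-modality match scores in a vehicle cabin; the modality names, weights, and threshold are illustrative assumptions only.

    def fuse_scores(modal_scores, weights=None, threshold=0.5):
        """Weighted-sum fusion of per-modality similarity scores in [0, 1].

        modal_scores -- dict mapping modality name to a match score
        weights      -- optional dict of per-modality weights (default: equal)
        Returns (accept_decision, fused_score).
        """
        if weights is None:
            weights = {m: 1.0 for m in modal_scores}
        total = sum(weights[m] for m in modal_scores)
        fused = sum(weights[m] * s for m, s in modal_scores.items()) / total
        return fused >= threshold, fused

    # Example: the speech score is down-weighted to reflect cabin noise.
    accepted, fused = fuse_scores({"face": 0.82, "speech": 0.55},
                                  weights={"face": 0.7, "speech": 0.3})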

    Study to determine potential flight applications and human factors design guidelines for voice recognition and synthesis systems

    A study was conducted to determine potential commercial aircraft flight deck applications and implementation guidelines for voice recognition and synthesis. First, a survey of voice recognition and synthesis technology was undertaken to develop a working knowledge base. Numerous potential aircraft and simulator flight deck voice applications were then identified, and each proposed application was rated on a number of criteria to produce an overall payoff rating. The potential voice recognition applications fell into five general categories: programming, interrogation, data entry, switch and mode selection, and continuous/time-critical action control. The ratings showed the first three categories to be the most promising for flight deck operations. Possible applications of voice synthesis systems were categorized as automatic or pilot-selectable, and many were rated as potentially beneficial. In addition, voice system implementation guidelines and pertinent performance criteria are proposed. Finally, the findings of this study are compared with those of a recent NASA study of a 1995 transport concept.
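    The abstract says each candidate application was rated on several criteria to produce an overall payoff rating but does not describe the scheme; the Python sketch below assumes a simple weighted aggregate with hypothetical criteria and weights, purely to make the idea concrete.

    # Hypothetical criteria and weights; the study's actual rating scheme is not
    # described in the abstract, so this only illustrates a weighted payoff score.
    CRITERIA_WEIGHTS = {
        "workload_reduction": 0.4,
        "error_reduction": 0.3,
        "time_savings": 0.2,
        "ease_of_implementation": 0.1,
    }

    def payoff_rating(criterion_scores):
        """Combine per-criterion ratings (e.g. on a 1-5 scale) into one payoff value."""
        return sum(CRITERIA_WEIGHTS[c] * s for c, s in criterion_scores.items())

    # Example: rating a hypothetical data-entry application.
    print(payoff_rating({"workload_reduction": 4, "error_reduction": 5,
                         "time_savings": 3, "ease_of_implementation": 2}))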

    Mobile phones: a trade-off between speech intelligibility and exposure to noise levels and to radio-frequency electromagnetic fields

    When making phone calls, cellphone and smartphone users are exposed to radio-frequency (RF) electromagnetic fields (EMFs) and sound pressure simultaneously. Speech intelligibility during mobile phone calls is related to the sound pressure level of speech relative to any background sounds, and also to the RF-EMF exposure, since signal quality is correlated with RF-EMF strength. In addition, speech intelligibility, sound pressure level, and RF-EMF exposure all depend on how the call is made (in speaker mode, with the phone held at the ear, or with a headset). This study determines the relationship between speech intelligibility, sound exposure, and RF-EMF exposure. To this end, the transmitted RF power was recorded during phone calls made by 53 subjects in three controlled exposure scenarios: calling with the phone at the ear, calling in speaker mode, and calling with a headset. The emitted power is directly proportional to the RF-EMF exposure and is translated into a specific absorption rate using numerical simulations. Sound pressure levels were recorded and speech intelligibility was assessed simultaneously during each phone call. The results show that RF-EMF exposure, quantified as the specific absorption in the head, is reduced when speaker mode or a headset is used compared with calling next to the ear. Personal exposure to sound pressure is also found to be highest when the phone is held next to the ear. On the other hand, when background noise is present, speech intelligibility is found to be best when calling with the phone next to the ear compared with the other studied conditions.
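    The abstract states that the recorded emitted power is directly proportional to the RF-EMF exposure and is converted into a specific absorption rate (SAR) through numerical simulations. The Python sketch below assumes that such simulations yield one SAR-per-watt factor per calling scenario; the factor values and scenario names are placeholders, not the study's results.

    # Hypothetical SAR-per-watt scaling factors ((W/kg) per W of emitted power)
    # for each calling scenario, as would come out of numerical dosimetry
    # simulations; these values are placeholders, not the study's findings.
    SAR_PER_WATT = {"at_ear": 5.0, "speaker_mode": 0.4, "headset": 0.3}

    def head_sar(mean_emitted_power_w, scenario):
        """Scale the mean emitted power recorded during a call to an estimated
        specific absorption rate in the head for the given scenario."""
        return SAR_PER_WATT[scenario] * mean_emitted_power_w

    # Example: the same 10 mW average output implies very different head exposure.
    for scenario in SAR_PER_WATT:
        print(scenario, round(head_sar(0.010, scenario), 4), "W/kg")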