
ENHANCED SPEAKER RECOGNITION BASED ON INTRA-MODAL FUSION AND ACCENT MODELING

By Srikanth Mangayyagari, Tanmoy Islam and Ravi Sankar

Abstract

Speaker recognition systems, even though they have been around for four decades, have not been widely adopted as standalone biometric security systems because of their unacceptably low performance, i.e., high false acceptance and rejection rates. Research has shown that speaker recognition performance can be enhanced through hybrid fusion (HF) of likelihood scores generated by arithmetic harmonic sphericity (AHS) and hidden Markov model (HMM) techniques [1]. Performance improvements of 22 % and 6 % true acceptance rate (TAR) at 5 % false acceptance rate (FAR) were observed when evaluated on two different datasets – the YOHO and USF multi-modal biometric datasets, respectively. In this paper, we present a model that combines accent information from an accent classification (AC) system with the HF system in order to further increase the speaker recognition rate. The proposed system achieved performance improvements of 17 % and 15 % TAR at an FAR of 3 % when evaluated on the SAA and USF datasets. The accent incorporation method discussed in this work can also be applied to any other speaker recognition system.
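The abstract describes score-level fusion: likelihood scores from the AHS and HMM recognizers are combined, and accent-classification output is then incorporated to bias the decision. A minimal sketch of weighted-sum score fusion is given below; the function names, normalization choice (min–max), and weights are illustrative assumptions, not the authors' actual parameters.

```python
# Hypothetical sketch of score-level fusion as described in the abstract.
# Each subsystem (HMM, AHS, accent classifier) produces one score per
# enrolled speaker; scores are normalized and combined with fixed weights.
# All names and weight values here are assumptions for illustration only.

def min_max_normalize(scores):
    """Scale raw likelihood scores to the range [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(hmm_scores, ahs_scores, accent_scores,
                w_hmm=0.5, w_ahs=0.3, w_accent=0.2):
    """Weighted-sum fusion of per-speaker scores from three subsystems."""
    h = min_max_normalize(hmm_scores)
    a = min_max_normalize(ahs_scores)
    c = min_max_normalize(accent_scores)
    return [w_hmm * sh + w_ahs * sa + w_accent * sc
            for sh, sa, sc in zip(h, a, c)]

# Usage: pick the enrolled speaker with the highest fused score.
fused = fuse_scores(hmm_scores=[0.9, 0.2, 0.5],
                    ahs_scores=[0.8, 0.1, 0.4],
                    accent_scores=[0.7, 0.3, 0.2])
best_speaker = max(range(len(fused)), key=lambda i: fused[i])
```

In practice the fusion weights would be tuned on a development set to trade off TAR against FAR, and the accept/reject threshold on the fused score determines the operating point (e.g., the 3 % FAR reported in the paper).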

Index Terms: Speaker recognition, Accent classification, HMM, GMM, Fusion, Biometrics
Year: 2012
OAI identifier: oai:CiteSeerX.psu:10.1.1.214.2608
Provided by: CiteSeerX
