2 research outputs found

    Automatic Speech Recognition in Mobile Customer Care Service

    In this project, an automatic speech recognition system is applied to mobile customer care services. In existing services, customers have to wait four to five minutes to reach the option they want to inquire about. To address this, incoming calls are filtered: callers who require particular information are dynamically routed to a speech recognition system that identifies the type of enquiry chosen. Speaker recognition dynamically identifies the individual speaking by analyzing the speech waveform, so the system can verify that the voice belongs to a recognized user. It also enables access to services such as telephone banking, mobile shopping, and database services, and helps secure confidential information.

    IMPROVING ROBUSTNESS AGAINST REVERBERATION FOR AUTOMATIC SPEECH RECOGNITION

    ABSTRACT Reverberation is a phenomenon observed in almost all enclosed environments. Human listeners rarely experience problems in comprehending speech in reverberant environments, but automatic speech recognition (ASR) systems often suffer increased error rates under such conditions. In this work, we explore the role of robust acoustic features, motivated by human speech perception studies, for building ASR systems robust to reverberation effects. Using the dataset distributed for the "Automatic Speech Recognition In Reverberant Environments" (ASpIRE-2015) challenge organized by IARPA, we explore Gaussian mixture models (GMMs), deep neural networks (DNNs), and convolutional deep neural networks (CDNNs) as candidate acoustic models for recognizing continuous speech in reverberant environments. We demonstrate that DNN-based systems trained with robust features offer significant reductions in word error rate (WER) compared to systems trained with baseline mel-filterbank features. We present a novel time-frequency convolutional neural network (TFCNN) framework that performs convolution on the feature space across both the time and frequency scales, which we found to consistently outperform the CDNN systems for all feature sets across all testing conditions. Finally, we show that further WER reduction is achievable through system fusion of n-best lists from multiple systems.
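    The core TFCNN idea — convolving a spectrogram-like feature map along both the time axis and the frequency axis, rather than only one — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the kernels here are fixed toy smoothing filters and the spectrogram dimensions are hypothetical, whereas a real TFCNN learns its filters during training.

    ```python
    import numpy as np

    def conv1d_valid(x, kernel):
        """'Valid'-mode 1-D filtering of each row of x with a symmetric kernel."""
        k = len(kernel)
        out = np.empty((x.shape[0], x.shape[1] - k + 1))
        for i in range(out.shape[1]):
            out[:, i] = x[:, i:i + k] @ kernel
        return out

    # Toy mel spectrogram: 40 frequency bands x 100 time frames (sizes are illustrative).
    spec = np.random.default_rng(0).standard_normal((40, 100))

    time_kernel = np.array([0.25, 0.5, 0.25])  # toy filter applied along time
    freq_kernel = np.array([0.25, 0.5, 0.25])  # toy filter applied along frequency

    # Time convolution: filter each frequency band across frames.
    time_feats = conv1d_valid(spec, time_kernel)        # shape (40, 98)
    # Frequency convolution: filter each frame across bands (transpose trick).
    freq_feats = conv1d_valid(spec.T, freq_kernel).T    # shape (38, 100)

    print(time_feats.shape, freq_feats.shape)
    ```

    In a full TFCNN, the two resulting feature maps would be pooled and concatenated before being fed to the fully connected layers of the acoustic model; here the sketch only shows why the two convolution directions produce complementary views of the same input.
    
    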
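    Word error rate, the metric reported throughout the abstract, is the word-level edit distance (substitutions + insertions + deletions) between hypothesis and reference, normalized by the reference length. A self-contained sketch of the standard dynamic-programming computation (the example sentences are made up):

    ```python
    def word_error_rate(reference, hypothesis):
        """WER = (substitutions + insertions + deletions) / #reference words."""
        ref, hyp = reference.split(), hypothesis.split()
        # dp[i][j] = edit distance between ref[:i] and hyp[:j]
        dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            dp[i][0] = i
        for j in range(len(hyp) + 1):
            dp[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                               dp[i][j - 1] + 1,          # insertion
                               dp[i - 1][j - 1] + cost)   # substitution or match
        return dp[-1][-1] / len(ref)

    # "sat"->"sit" is a substitution and "the" is deleted: 2 edits over 6 words.
    print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))
    ```

    System fusion as described in the abstract then amounts to rescoring or voting over the n-best hypothesis lists of several such systems and keeping the candidate that minimizes expected WER.
    
    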