24 research outputs found

    Fully Learnable Front-End for Multi-Channel Acoustic Modeling using Semi-Supervised Learning

    In this work, we investigated the teacher-student training paradigm to train a fully learnable multi-channel acoustic model for far-field automatic speech recognition (ASR). Using a large offline teacher model trained on beamformed audio, we trained a simpler multi-channel student acoustic model for use in the speech recognition system. For the student, both the multi-channel feature extraction layers and the higher classification layers were jointly trained using the logits from the teacher model. In our experiments, compared to a baseline model trained on about 600 hours of transcribed data, a relative word error rate (WER) reduction of about 27.3% was achieved when using an additional 1800 hours of untranscribed data. We also investigated the benefit of pre-training the multi-channel front end to output the beamformed log-mel filterbank energies (LFBE) using an L2 loss. We found that pre-training improves the WER by 10.7% compared to a multi-channel model whose front end is directly initialized with beamformer and mel-filterbank coefficients. Finally, combining pre-training and teacher-student training produces a WER reduction of 31% compared to our baseline.
    Comment: To appear in ICASSP 202
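    The two training signals described above lend themselves to a short illustration. Below is a minimal PyTorch sketch of a distillation loss on teacher logits and an L2 pre-training loss for the front end; the function names, temperature hyperparameter, and tensor shapes are assumptions for illustration, not the paper's exact setup.

    ```python
    # Minimal sketch of the two training signals described above. All names,
    # the temperature, and the shapes are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def teacher_student_loss(student_logits, teacher_logits, temperature=1.0):
        """Student matches the teacher's softened output distribution (KL)."""
        t = temperature
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        return F.kl_div(student_log_probs, teacher_probs,
                        reduction="batchmean") * t * t

    def frontend_pretraining_loss(frontend_out, beamformed_lfbe):
        """L2 pre-training: the learnable front end regresses beamformed LFBE."""
        return F.mse_loss(frontend_out, beamformed_lfbe)

    # Example shapes: 8 frames of 3000-way classifier logits, 64-dim LFBE targets.
    s, tch = torch.randn(8, 3000), torch.randn(8, 3000)
    print(teacher_student_loss(s, tch, temperature=2.0).item())
    print(frontend_pretraining_loss(torch.randn(8, 64), torch.randn(8, 64)).item())
    ```

    Because the teacher's logits serve as soft targets, the 1800 hours of untranscribed audio require no human labels in this scheme.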

    Capacity and Coding for 2D Channels

    Consider a piece of information printed on paper and scanned in the form of an image. The printer, scanner, and paper naturally form a communication channel, where the printer is the sender, the scanner is the receiver, and the paper is the medium of communication. The channel created in this way is quite complicated and maps 2D input patterns to 2D output patterns. Inter-symbol interference is introduced as a result of printing and scanning: during printing, ink can spread out from neighboring pixels, and scanning introduces interference because each pixel has finite size and the scanner does not have infinite resolution. Other degradations in the process can be modeled as noise in the system. The scanner may also introduce spherical aberration due to the lensing effect. Finally, the scanned image might not be aligned exactly below the scanner, which may lead to rotation and translation of the image. In this work, we present a coding scheme for this channel and possible solutions for a few of the distortions stated above. Our solution consists of the structure, encoding, and decoding scheme for the code, a scheme to undo the rotational distortion, and an equalization method. The motivation is the question: what is the information capacity of paper? The purpose is to find out how much data can be printed out and retrieved successfully. This question has potential practical impact on the design of 2D bar codes, which is why encodability is a desired feature, but there are a number of other useful applications as well. We successfully decoded 41.435 kB of data printed on a paper of size 6.7 × 6.7 inches using a Xerox Phasor 550 printer and a Canon CanoScan LiDE200 scanner. As described in the last chapter, the capacity of the paper using this channel is therefore greater than 0.9230 kB per square inch. The main contribution of the thesis lies in constructing the entire system and testing its performance. Since the focus is on encodable and practically implementable schemes, the proposed encoding method is compared with another well-known and easily encodable code, namely the repeat-accumulate code.
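    As a rough illustration of the channel model described in this abstract, the sketch below simulates 2D inter-symbol interference as convolution of the printed bit pattern with a point-spread function plus additive scanner noise, and reproduces the reported areal-density arithmetic. The PSF and noise level are invented for illustration, not the thesis's measured values.

    ```python
    # Toy print-scan channel: ink spread and finite scanner resolution modeled
    # as a 2D convolution with a point-spread function, plus Gaussian noise.
    # The PSF weights and noise scale are illustrative assumptions.
    import numpy as np
    from scipy.signal import convolve2d

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, size=(64, 64)).astype(float)  # printed pattern

    # Ink from each pixel bleeds into its neighbors (normalized 3x3 PSF).
    psf = np.array([[0.05, 0.10, 0.05],
                    [0.10, 0.40, 0.10],
                    [0.05, 0.10, 0.05]])

    scanned = convolve2d(bits, psf, mode="same", boundary="symm")
    scanned += rng.normal(scale=0.05, size=scanned.shape)  # scanner noise

    # Reported density: 41.435 kB over a 6.7 x 6.7 inch page.
    print(41.435 / (6.7 * 6.7))  # ~0.9231 kB per square inch
    ```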

    Multi-Stage Multi-Modal Pre-Training for Automatic Speech Recognition

    Recent advances in machine learning have demonstrated that multi-modal pre-training can improve automatic speech recognition (ASR) performance compared to randomly initialized models, even when the models are fine-tuned on uni-modal tasks. Existing multi-modal pre-training methods for ASR have primarily focused on single-stage pre-training, where a single unsupervised task is used for pre-training followed by fine-tuning on the downstream task. In this work, we introduce a novel method combining multi-modal and multi-task unsupervised pre-training with a translation-based supervised mid-training stage. We empirically demonstrate that such a multi-stage approach leads to relative word error rate (WER) improvements of up to 38.45% over baselines on both Librispeech and SUPERB. Additionally, we share several important findings for choosing pre-training methods and datasets.
    Comment: Accepted in LREC-COLING 2024 - The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
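    The three-stage schedule can be summarized in outline form. The sketch below is a hypothetical skeleton: the method names (unsupervised_losses, translation_loss, asr_loss, step) stand in for the paper's actual tasks, datasets, and optimizers, which are not reproduced here.

    ```python
    # Hypothetical skeleton of the multi-stage schedule described above.
    # The model is assumed to expose one loss per stage and an optimizer step.
    def train_multistage(model, unlabeled_multimodal, translation_pairs, asr_data):
        # Stage 1: multi-modal, multi-task unsupervised pre-training
        # (e.g., self-supervised objectives over audio and text streams).
        for batch in unlabeled_multimodal:
            loss = model.unsupervised_losses(batch)  # sum of pre-training tasks
            loss.backward()
            model.step()

        # Stage 2: translation-based supervised mid-training.
        for src, tgt in translation_pairs:
            loss = model.translation_loss(src, tgt)
            loss.backward()
            model.step()

        # Stage 3: fine-tune on the downstream ASR task.
        for audio, transcript in asr_data:
            loss = model.asr_loss(audio, transcript)
            loss.backward()
            model.step()
        return model
    ```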

    Cross-utterance ASR Rescoring with Graph-based Label Propagation

    We propose a novel approach for ASR N-best hypothesis rescoring with graph-based label propagation, leveraging cross-utterance acoustic similarity. In contrast to conventional neural language model (LM) based ASR rescoring/reranking models, our approach focuses on acoustic information and conducts the rescoring collaboratively among utterances rather than individually. Experiments on the VCTK dataset demonstrate that our approach consistently improves ASR performance, as well as fairness across speaker groups with different accents. Our approach provides a low-cost solution for mitigating the majoritarian bias of ASR systems, without the need to train new domain- or accent-specific models.
    Comment: To appear in IEEE ICASSP 202
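    For intuition, a standard label-propagation iteration over an utterance-similarity graph might look like the sketch below. The Gaussian affinity kernel, the damping factor alpha, and the iteration count are illustrative assumptions rather than the paper's exact graph construction.

    ```python
    # Standard label propagation over an acoustic-similarity graph: each
    # utterance contributes k N-best scores and one acoustic embedding.
    # Kernel choice, alpha, and iteration count are assumptions.
    import numpy as np

    def propagate_scores(embeddings, init_scores, alpha=0.8, iters=20):
        """embeddings: (n, d) acoustic embeddings; init_scores: (n, k) scores."""
        # Gaussian-kernel affinity between utterances, zero self-affinity.
        d2 = ((embeddings[:, None] - embeddings[None, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (d2.mean() + 1e-12))
        np.fill_diagonal(w, 0.0)
        w /= w.sum(axis=1, keepdims=True) + 1e-12  # row-normalize

        scores = init_scores.copy()
        for _ in range(iters):  # damped propagation toward neighbors' scores
            scores = alpha * (w @ scores) + (1 - alpha) * init_scores
        return scores  # rescored N-best lists, one row per utterance

    rescored = propagate_scores(np.random.randn(10, 32), np.random.randn(10, 5))
    ```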

    Turn-taking and Backchannel Prediction with Acoustic and Large Language Model Fusion

    We propose an approach for continuous prediction of turn-taking and backchanneling locations in spoken dialogue by fusing a neural acoustic model with a large language model (LLM). Experiments on the Switchboard human-human conversation dataset demonstrate that our approach consistently outperforms single-modality baseline models. We also develop a novel multi-task instruction fine-tuning strategy to further benefit from LLM-encoded knowledge of the tasks and conversational contexts, leading to additional improvements. Our approach demonstrates the potential of combining LLMs and acoustic models for more natural, conversational interaction between humans and speech-enabled AI agents.
    Comment: To appear in IEEE ICASSP 202
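    One plausible way to fuse the two modalities is a late-fusion classifier over concatenated representations, as in the hypothetical sketch below; the dimensions, the three-way label set, and the MLP head are assumptions, not the paper's exact architecture.

    ```python
    # Hypothetical late fusion: concatenate per-frame acoustic encoder states
    # with an LLM context embedding and classify each frame as one of
    # {continue, turn-shift, backchannel}. Dimensions are assumptions.
    import torch
    import torch.nn as nn

    class FusionHead(nn.Module):
        def __init__(self, acoustic_dim=256, llm_dim=768, n_classes=3):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(acoustic_dim + llm_dim, 256), nn.ReLU(),
                nn.Linear(256, n_classes),
            )

        def forward(self, acoustic_states, llm_context):
            # acoustic_states: (batch, frames, acoustic_dim)
            # llm_context:     (batch, llm_dim), broadcast across all frames
            ctx = llm_context[:, None, :].expand(-1, acoustic_states.size(1), -1)
            return self.proj(torch.cat([acoustic_states, ctx], dim=-1))

    head = FusionHead()
    logits = head(torch.randn(2, 100, 256), torch.randn(2, 768))  # (2, 100, 3)
    ```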