Efficient Embedded Speech Recognition for Very Large Vocabulary Mandarin Car-Navigation Systems
Automatic speech recognition (ASR) for a very large vocabulary of isolated words is a difficult task on a resource-limited embedded device. This paper presents a novel fast decoding algorithm for a Mandarin speech recognition system that can simultaneously process hundreds of thousands of items while maintaining high recognition accuracy. The proposed algorithm constructs a semi-tree search network based on Mandarin pronunciation rules to avoid duplicate syllable matching and redundant memory use. On top of a two-stage fixed-width beam-search baseline system, the algorithm employs a variable beam-width pruning strategy and a frame-synchronous word-level pruning strategy to significantly reduce recognition time. The algorithm targets in-car navigation systems in China and was simulated on a standard PC workstation. Experimental results show that the proposed method reduces recognition time nearly 6-fold and memory size nearly 2-fold compared to the baseline system, with less than 1% accuracy degradation on a 200,000-word recognition task.
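The abstract does not give implementation details, but the variable beam-width pruning idea it describes can be illustrated by a toy frame-synchronous beam search whose pruning threshold tightens whenever too many hypotheses survive. This is a minimal sketch under assumed names (`Hypothesis`, `expand`, the decay factor), not the paper's actual decoder:

```python
# Illustrative frame-synchronous beam search with a variable beam width.
# The beam tightens when too many hypotheses fall inside it, loosely
# mimicking the variable beam-width pruning strategy in the abstract.
# All names and parameters here are hypothetical.

from dataclasses import dataclass

@dataclass
class Hypothesis:
    tokens: tuple   # partial syllable/word sequence
    score: float    # accumulated log-probability

def beam_search(frames, expand, base_beam=10.0, min_beam=4.0, max_active=50):
    """frames: iterable of acoustic frames; expand(hyp, frame) yields
    (token, log_prob) continuations for a hypothesis at this frame."""
    active = [Hypothesis(tokens=(), score=0.0)]
    for frame in frames:
        candidates = []
        for hyp in active:
            for token, logp in expand(hyp, frame):
                candidates.append(Hypothesis(hyp.tokens + (token,), hyp.score + logp))
        best = max(c.score for c in candidates)
        # Variable beam width: shrink the beam until the number of
        # surviving hypotheses is bounded, keeping decoding time bounded.
        beam = base_beam
        survivors = [c for c in candidates if c.score >= best - beam]
        while len(survivors) > max_active and beam > min_beam:
            beam *= 0.8
            survivors = [c for c in candidates if c.score >= best - beam]
        active = sorted(survivors, key=lambda c: -c.score)[:max_active]
    return max(active, key=lambda c: c.score)
```

With a toy `expand` that always prefers one token, the search returns the highest-scoring path; a real system would also need the word-level pruning the abstract mentions.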
RNA Unwinding by the Trf4/Air2/Mtr4 Polyadenylation (TRAMP) Complex
Many RNA-processing events in the cell nucleus involve the Trf4/Air2/Mtr4 polyadenylation (TRAMP) complex, which contains the poly(A) polymerase Trf4p, the Zn-knuckle protein Air2p, and the RNA helicase Mtr4p. TRAMP polyadenylates RNAs designated for processing by the nuclear exosome. In addition, TRAMP functions as an exosome cofactor during RNA degradation, and it has been speculated that this role involves disruption of RNA secondary structure. However, it is unknown whether TRAMP displays RNA unwinding activity. It is also not clear how unwinding would be coordinated with polyadenylation and with the function of the RNA helicase Mtr4p in modulating poly(A) addition. Here, we show that TRAMP robustly unwinds RNA duplexes. The unwinding activity of Mtr4p is significantly stimulated by Trf4p/Air2p, but the stimulation of Mtr4p does not depend on ongoing polyadenylation. Nonetheless, polyadenylation enables TRAMP to unwind RNA substrates that it otherwise cannot separate. Moreover, TRAMP displays optimal unwinding activity on substrates with a minimal Mtr4p binding site composed of adenylates. Our results suggest a model for coordination between unwinding and polyadenylation activities by TRAMP that reveals remarkable synergy between helicase and poly(A) polymerase.
RNN Language Model with Word Clustering and Class-based Output Layer
The recurrent neural network language model (RNNLM) has shown significant promise for statistical language modeling. In this work, a new class-based output layer method is introduced to further improve the RNNLM. In this method, word class information is incorporated into the output layer by using the Brown clustering algorithm to estimate a class-based language model. Experimental results show that the new output layer with word clustering not only clearly accelerates convergence but also reduces the perplexity and word error rate in large-vocabulary continuous speech recognition.
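The class-based output layer described above factors the word probability as P(w | h) = P(class(w) | h) · P(w | class(w), h), so the softmax only normalizes over classes and over the words inside one class. The sketch below shows just that factorization with a fixed toy word-to-class mapping standing in for Brown clustering; all names and scores are illustrative, not the paper's model:

```python
# Toy class-based output layer: P(w | h) = P(class(w) | h) * P(w | class(w), h).
# In the paper the class assignment comes from Brown clustering; here a fixed
# toy mapping stands in for it. All names are illustrative.

import math

def softmax(scores):
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

def class_factored_prob(word, class_scores, word_scores, word2class):
    """class_scores: output scores over classes; word_scores: output scores
    over words; word2class: Brown-style word -> class mapping."""
    c = word2class[word]
    p_class = softmax(class_scores)[c]
    # Normalize word scores only within the word's class: this is the
    # saving that makes class-based output layers fast for large vocabularies.
    in_class = {w: s for w, s in word_scores.items() if word2class[w] == c}
    p_word_given_class = softmax(in_class)[word]
    return p_class * p_word_given_class
```

Because each class-conditional softmax sums to one, the factored probabilities still form a proper distribution over the full vocabulary, at roughly O(|classes| + |class size|) cost per word instead of O(|vocabulary|).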
Homogeneous Ensemble Phonotactic Language Recognition Based on SVM Supervector Reconstruction
Acoustic and phonotactic systems are currently the most widely used approaches to spoken language recognition (SLR). To achieve better performance, researchers combine multiple subsystems, with results often much better than those of any single SLR system. Phonotactic SLR subsystems may vary in their acoustic feature vectors or may include multiple language-specific phone recognizers and different acoustic models. These methods achieve good performance, but usually at high computational cost. In this paper, a new diversification for phonotactic language recognition systems is proposed using vector space models by support vector machine (SVM) supervector reconstruction (SSR). In this architecture, the subsystems share the same feature extraction, decoding, and N-gram counting preprocessing steps, but each models in a different vector space via an SSR algorithm, without significant additional computation. We term this a homogeneous ensemble phonotactic language recognition (HEPLR) system. The system integrates three different SVM supervector reconstruction algorithms: relative, functional, and perturbing SVM supervector reconstruction. All of the algorithms are combined using a linear discriminant analysis-maximum mutual information (LDA-MMI) backend to improve language recognition evaluation (LRE) accuracy. Evaluated on the National Institute of Standards and Technology (NIST) LRE 2009 task, the proposed HEPLR system achieves better performance than a baseline phone recognition-vector space modeling (PR-VSM) system with minimal extra computational cost. The HEPLR system achieves equal error rates (EERs) of 1.39%, 3.63%, and 14.79% for the 30-, 10-, and 3-s test conditions, representing relative improvements of 6.06%, 10.15%, and 10.53% over the baseline system, respectively.
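The shared front end the abstract describes (decoding plus N-gram counting) produces the supervector on which the SSR variants operate. As a rough illustration of phonotactic vector-space modeling only, a decoded phone sequence can be mapped to a normalized n-gram count supervector; the function and its names are assumptions for this sketch, not the paper's algorithms:

```python
# Minimal sketch of the phonotactic vector-space-modeling front end: a
# decoded phone sequence is mapped to a fixed-order vector of normalized
# n-gram frequencies (a "supervector"). The SSR variants in the paper would
# then transform such vectors; names here are illustrative.

from collections import Counter
from itertools import product

def ngram_supervector(phones, vocab, n=2):
    """Map a phone sequence to a fixed-dimension vector of n-gram frequencies,
    one dimension per possible n-gram over the phone vocabulary."""
    dims = [" ".join(g) for g in product(vocab, repeat=n)]
    counts = Counter(" ".join(phones[i:i + n]) for i in range(len(phones) - n + 1))
    total = sum(counts.values()) or 1  # avoid division by zero on short input
    return [counts[d] / total for d in dims]
```

Each utterance becomes one point in a |vocab|^n-dimensional space; an SVM trained on such vectors separates languages, and reconstructing the supervector in different ways (the SSR idea) yields diverse subsystems from the same counts.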
Summing Radiative Corrections to the Effective Potential
When one uses the Coleman-Weinberg renormalization condition, the effective potential V in the massless λφ^4 theory with O(N) symmetry is completely determined by the renormalization group functions. It has been shown how the (p+1)-order renormalization group functions determine the sum of all the N^pLL order contributions to V to all orders in the loop expansion. We discuss here how, in addition to fixing the N^pLL contributions to V, the (p+1)-order renormalization group functions can also be used to determine portions of the N^{p+n}LL contributions to V. When these contributions are summed to all orders, the singularity structure of V is altered. An alternate rearrangement of the contributions to V in powers of λ, when the extremum condition dV/dφ = 0 is combined with the renormalization group equation, shows that either φ = 0 or V is independent of φ. This conclusion is supported by showing that the LL, NLL, and N^2LL contributions to V become progressively less dependent on φ.
Comment: 16 pages; added 2 figures and 2 tables; references revised
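The structure being resummed can be made concrete in the notation standard for such leading-logarithm analyses (the notation below is assumed for illustration, not taken from the abstract):

```latex
% Illustrative (assumed) notation: the effective potential of massless
% \lambda\phi^4 theory expanded in the coupling and the logarithm
% L = \ln(\phi^2/\mu^2); the T_{n,m} are pure numbers.
V(\lambda, L) = \phi^{4} \sum_{n=0}^{\infty} \lambda^{n+1} \sum_{m=0}^{n} T_{n,m}\, L^{m},
\qquad L \equiv \ln\frac{\phi^{2}}{\mu^{2}} .

% Regrouped by N^pLL order: S_p collects all terms with m = n - p,
V = \phi^{4} \sum_{p=0}^{\infty} \lambda^{p+1}\, S_{p}(\lambda L) .

% The renormalization group equation that fixes the S_p order by order:
\left( \mu \frac{\partial}{\partial \mu}
     + \beta(\lambda) \frac{\partial}{\partial \lambda}
     + \gamma(\lambda)\, \phi \frac{\partial}{\partial \phi} \right) V = 0 .
```

Solving the renormalization group equation order by order in λ shows why the (p+1)-loop functions β and γ determine S_p completely, which is the summation the abstract refers to.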
