
    Homogenous Ensemble Phonotactic Language Recognition Based on SVM Supervector Reconstruction

    Acoustic spoken language recognition (SLR) and phonotactic SLR systems are currently the most widely used language recognition systems. To achieve better performance, researchers combine multiple subsystems, and the combined results are often much better than those of any single SLR system. Phonotactic SLR subsystems may vary in their acoustic feature vectors, or may include multiple language-specific phone recognizers and different acoustic models. These methods achieve good performance but usually come at a high computational cost. In this paper, a new way of diversifying phonotactic language recognition systems is proposed, using vector space models built by support vector machine (SVM) supervector reconstruction (SSR). In this architecture, the subsystems share the same feature extraction, decoding, and N-gram counting preprocessing steps, but each models a different vector space produced by an SSR algorithm, without significant additional computation. We term this a homogeneous ensemble phonotactic language recognition (HEPLR) system. The system integrates three SVM supervector reconstruction algorithms: relative, functional, and perturbing SVM supervector reconstruction. All of them are combined through a linear discriminant analysis-maximum mutual information (LDA-MMI) backend to improve language recognition evaluation (LRE) accuracy. Evaluated on the National Institute of Standards and Technology (NIST) LRE 2009 task, the proposed HEPLR system achieves better performance than a baseline phone recognition-vector space modeling (PR-VSM) system at minimal extra computational cost. The HEPLR system yields equal error rates (EERs) of 1.39%, 3.63%, and 14.79% for the 30-, 10-, and 3-s test conditions, representing relative improvements of 6.06%, 10.15%, and 10.53% over the baseline system.
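    As a rough illustration of the PR-VSM pipeline the abstract builds on, the sketch below maps decoded phone strings to N-gram probability supervectors and trains a linear SVM. The toy phone sequences, labels, and the scikit-learn classifier are assumptions for demonstration, not the paper's implementation; an SSR variant would insert an extra transform on the supervectors before training.

        # Minimal PR-VSM-style sketch: decoded phone strings -> N-gram count
        # supervectors -> linear SVM. All data here is toy/illustrative.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import LinearSVC

        # Toy decoded phone sequences (space-separated phone tokens per utterance).
        utterances = ["ah eh ah ow", "eh eh ah ow", "ow ah ah eh", "ow ow eh ah"]
        labels = [0, 0, 1, 1]  # 0 = language A, 1 = language B (toy labels)

        # Shared N-gram counting step (unigrams + bigrams here).
        vectorizer = CountVectorizer(ngram_range=(1, 2))
        counts = vectorizer.fit_transform(utterances).toarray().astype(float)

        # Normalize counts to probabilities to form the supervector; a
        # relative/functional/perturbing SSR transform would re-map this
        # vector here, yielding one ensemble subsystem per transform.
        supervectors = counts / counts.sum(axis=1, keepdims=True)

        svm = LinearSVC().fit(supervectors, labels)
        print(svm.predict(supervectors))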

    Signals of a Light Dark Force in the Galactic Center

    Recent evidence for an excess of gamma rays in the GeV energy range from around the Galactic Center has refocused attention on models of dark matter in the low-mass regime ($m_\chi \lesssim m_Z/2$). Because this is an experimentally well-trod energy range, it can be a challenge to develop simple models that explain this excess while remaining consistent with other experimental constraints. We reconsider models where the dark matter couples to a dark photon, which has a weak kinetic mixing with the Standard Model photon, or to scalars with a weak mixing with the Higgs boson. We focus on the light ($\lesssim 1.5~\mathrm{GeV}$) dark-mediator mass regime. Annihilations into the dark mediators can produce observable gamma rays through decays to $\pi^0$, through radiative processes when decaying to charged particles ($e^+e^-$, $\mu^+\mu^-$, ...), and through subsequent interactions of high-energy $e^+e^-$ with gas and light. However, these models give no signal of $\bar{p}$ production, which is kinematically forbidden. We find that in these models the shape of the resulting gamma-ray spectrum can provide a good fit to the excess at the Galactic Center. We discuss further constraints from AMS-02 and find regions of compatibility.
    Comment: 39 pages, 14 figures, references updated and discussion of CMB constraints included
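    For the kinematic statement above, a one-line check (writing the light mediator as $\phi$, a label assumed here rather than taken from the paper):

        m_\phi \lesssim 1.5~\mathrm{GeV} < 2 m_p \simeq 1.877~\mathrm{GeV}
        \;\Longrightarrow\; \phi \to p\bar{p} \text{ is kinematically closed,}

    so mediator decays cannot produce antiprotons, while the $\pi^0$ channel ($2 m_{\pi^0} \simeq 0.27~\mathrm{GeV}$) and the light lepton channels remain open.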

    Analytical Solution for the SU(2) Hedgehog Skyrmion and Static Properties of Nucleons

    An analytical solution for the symmetric Skyrmion is proposed for the SU(2) Skyrme model, taking the form of a hybrid between a kink-like solution and the one given by the instanton method. The static properties of nucleons are then computed within the framework of collective quantization of the Skyrme model, in good agreement with those given by the exact numerical solution. Comparisons with previous results as well as with experimental values are also given.
    Comment: 4 pages, 2 figures, submitted to Phys. Lett.
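    For reference, the hedgehog configuration named in the title is the standard ansatz

        U(\vec{r}) = \exp\!\left( i\, F(r)\, \hat{r} \cdot \vec{\tau} \right),
        \qquad F(0) = \pi, \quad F(\infty) = 0,

    where $\vec{\tau}$ are the Pauli matrices and the boundary conditions fix baryon number $B = 1$; the analytical solution described above amounts to a closed form for the profile function $F(r)$.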

    RNN Language Model with Word Clustering and Class-based Output Layer

    The recurrent neural network language model (RNNLM) has shown significant promise for statistical language modeling. In this work, a new class-based output-layer method is introduced to further improve the RNNLM. In this method, word-class information is incorporated into the output layer by using the Brown clustering algorithm to estimate a class-based language model. Experimental results show that the new output layer with word clustering not only clearly speeds up convergence but also reduces perplexity and word error rate in large-vocabulary continuous speech recognition.
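    A minimal sketch of the class-based output factorization described above, $P(w \mid h) = P(c(w) \mid h)\, P(w \mid c(w), h)$, which replaces one softmax over the full vocabulary with two much smaller ones. The toy vocabulary, hidden state, and hand-assigned word-to-class map (standing in for Brown clusters) are assumptions for illustration:

        # Class-based output layer: P(w | h) = P(class | h) * P(w | class, h).
        # Everything below (vocabulary, classes, weights) is a toy stand-in;
        # in the paper the word classes come from Brown clustering.
        import numpy as np

        rng = np.random.default_rng(0)

        vocab = ["the", "cat", "dog", "runs", "sleeps", "quickly"]
        word2class = {"the": 0, "cat": 1, "dog": 1,
                      "runs": 2, "sleeps": 2, "quickly": 2}
        n_classes, hidden_dim = 3, 8

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        h = rng.standard_normal(hidden_dim)  # RNN hidden state (toy)
        W_class = rng.standard_normal((n_classes, hidden_dim))
        class_words = {c: [w for w in vocab if word2class[w] == c]
                       for c in range(n_classes)}
        # One small output matrix per class, over just that class's words.
        W_word = {c: rng.standard_normal((len(ws), hidden_dim))
                  for c, ws in class_words.items()}

        def word_prob(word):
            c = word2class[word]
            p_class = softmax(W_class @ h)[c]            # P(class | h)
            idx = class_words[c].index(word)
            p_word = softmax(W_word[c] @ h)[idx]         # P(word | class, h)
            return p_class * p_word

        print(word_prob("cat"))
        print(sum(word_prob(w) for w in vocab))  # sums to 1.0 over the vocabulary

    Each softmax here is over at most a few units instead of the whole vocabulary, which is what yields the training speedup the abstract reports.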