
    Large Margin Neural Language Model

    We propose a large margin criterion for training neural language models. Conventionally, neural language models are trained by minimizing perplexity (PPL) on grammatical sentences. However, we demonstrate that PPL may not be the best metric to optimize in some tasks, and further propose a large margin formulation. The proposed method aims to enlarge the margin between the "good" and "bad" sentences in a task-specific sense. It is trained end-to-end and can be widely applied to tasks that involve re-scoring of generated text. Compared with minimum-PPL training, our method gains up to 1.1 WER reduction for speech recognition and 1.0 BLEU increase for machine translation.
    Comment: 9 pages. Accepted as a long paper in EMNLP 2018.
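    The idea of enlarging the margin between "good" and "bad" sentences lends itself to a pairwise hinge loss over language-model scores. Below is a minimal PyTorch sketch of that idea; the callable `lm`, the helper `lm_log_prob`, and the fixed `margin` are illustrative assumptions, not the paper's exact objective, which pairs hypotheses in a task-specific way.

```python
import torch
import torch.nn.functional as F

def lm_log_prob(lm, ids: torch.Tensor) -> torch.Tensor:
    """Sum of next-token log-probabilities of a sentence under the LM.

    `lm` is assumed (hypothetically) to map a (T-1,) prefix of token
    ids to (T-1, vocab) next-token logits.
    """
    logits = lm(ids[:-1])
    logp = F.log_softmax(logits, dim=-1)
    # Pick the log-probability of each actual next token and sum.
    return logp.gather(-1, ids[1:].unsqueeze(-1)).sum()

def large_margin_loss(lm, good_ids, bad_ids, margin=1.0):
    """Hinge loss: push the 'good' hypothesis to score at least
    `margin` above the 'bad' one, rather than minimizing PPL alone."""
    gap = lm_log_prob(lm, good_ids) - lm_log_prob(lm, bad_ids)
    return F.relu(margin - gap)
```

    In a re-scoring setting, `good_ids`/`bad_ids` would come from pairs of generated hypotheses ranked by the task metric (WER or BLEU), so the loss is zero once the good hypothesis already wins by the margin.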

    Dirac series of E_{7(-5)}

    Using the sharpened Helgason-Johnson bound, this paper classifies all the irreducible unitary representations with non-zero Dirac cohomology of E_{7(-5)}. As an application, we find that the cancellation between the even part and the odd part of the Dirac cohomology continues to happen for certain unitary representations of E_{7(-5)}. Assuming the infinitesimal character is integral, we further improve the Helgason-Johnson bound for E_{7(-5)}. This should help in understanding (part of) the unitary dual of this group.
    Comment: 25 pages. arXiv admin note: text overlap with arXiv:2204.0790
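    For context, the Dirac cohomology here is the standard notion due to Vogan, not something introduced in this paper: for a (g, K)-module X and a spin module S of the Clifford algebra C(p), the Dirac operator acts on X ⊗ S, and the paper classifies the irreducible unitary X for which the cohomology is non-zero. A sketch of the standard definition, with {Z_i} an orthonormal basis of p:

```latex
% Vogan's Dirac operator and Dirac cohomology (standard definitions):
\[
  D \;=\; \sum_i Z_i \otimes Z_i \;\in\; U(\mathfrak{g}) \otimes C(\mathfrak{p}),
  \qquad
  H_D(X) \;=\; \ker D \,\big/\, \bigl(\ker D \cap \operatorname{im} D\bigr).
\]
```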