
    Third-Party Aligner for Neural Word Alignments

    Word alignment is the task of finding translationally equivalent words between source and target sentences. Previous work has demonstrated that self-training can achieve competitive word alignment results. In this paper, we propose to use word alignments generated by a third-party word aligner to supervise the training of a neural word aligner. Specifically, the source word and target word of each word pair aligned by the third-party aligner are trained to be close neighbors of each other in the contextualized embedding space when fine-tuning a pre-trained cross-lingual language model. Experiments on benchmarks for various language pairs show that our approach can, surprisingly, self-correct the third-party supervision by finding more accurate word alignments and deleting wrong ones, leading to better performance than various third-party word aligners, including the current best one. When we integrate the supervision from all third-party aligners, we achieve state-of-the-art word alignment performance, with alignment error rates that are on average more than two points lower than those of the best third-party aligner. We release our code at https://github.com/sdongchuanqi/Third-Party-Supervised-Aligner.
    Comment: 12 pages, 4 figures, Findings of EMNLP 2022
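    As an illustration only (not taken from the paper), the following minimal PyTorch sketch shows one way third-party alignments could supervise fine-tuning of a pre-trained cross-lingual encoder: embeddings of aligned source/target words are pulled together with a cosine-distance loss. The backbone model, the loss function, and subword-level alignment indices are all assumptions.

        # Hedged sketch: third-party word alignments supervising a cross-lingual encoder.
        # The exact training objective is assumed; the abstract only states that aligned
        # source and target words are trained to be close neighbors in embedding space.
        import torch
        import torch.nn.functional as F
        from transformers import AutoModel, AutoTokenizer

        model_name = "bert-base-multilingual-cased"  # assumed backbone cross-lingual LM
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        encoder = AutoModel.from_pretrained(model_name)

        def alignment_loss(src_sent, tgt_sent, aligned_pairs):
            """aligned_pairs: (src_token_idx, tgt_token_idx) pairs from a third-party
            aligner, indexed in the encoder's subword tokenization (an assumption)."""
            src = tokenizer(src_sent, return_tensors="pt")
            tgt = tokenizer(tgt_sent, return_tensors="pt")
            src_hidden = encoder(**src).last_hidden_state[0]  # (src_len, hidden)
            tgt_hidden = encoder(**tgt).last_hidden_state[0]  # (tgt_len, hidden)
            losses = []
            for i, j in aligned_pairs:
                # pull each aligned pair together via negative cosine similarity
                losses.append(1.0 - F.cosine_similarity(src_hidden[i], tgt_hidden[j], dim=0))
            return torch.stack(losses).mean()

        # usage: back-propagate this loss while fine-tuning the encoder on parallel text
        loss = alignment_loss("Das ist ein Test .", "This is a test .",
                              aligned_pairs=[(1, 1), (2, 2), (3, 3), (4, 4)])
        loss.backward()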

    Pairing fluctuations and gauge symmetry restoration in rotating superfluid nuclei

    Rapidly rotating nuclei provide good testing grounds for studying pairing correlations; in fact, the transition from the superfluid to the normal phase is realized at high-spin states. The role played by the pairing correlations is quite different in these two phases: the static (BCS-like mean-field) contribution is dominant in the superfluid phase, while the dynamic fluctuations beyond the mean-field approximation are important in the normal phase. The influence of the pairing fluctuations on the high-spin rotational spectra and moments of inertia is discussed.
    Comment: 14 pages, 5 figures, a contribution to the book "50 Years of Nuclear BCS", edited by R. A. Broglia and V. Zelevinsky
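    For context (standard definitions, not taken from the paper), the moments of inertia mentioned in the abstract are extracted from rotational spectra roughly as follows, with I_x the angular momentum component along the rotation axis in units of hbar:

        % rotational frequency from level energies within a Delta I = 2 band
        \hbar\omega \simeq \frac{E(I) - E(I-2)}{2}
        % kinematic and dynamic moments of inertia
        \mathcal{J}^{(1)} = \frac{\hbar^{2}\, I_x}{\hbar\omega},
        \qquad
        \mathcal{J}^{(2)} = \hbar^{2}\, \frac{d I_x}{d(\hbar\omega)}

    Pairing correlations reduce these moments of inertia below the rigid-body value, which is why their weakening at high spin shows up in the rotational spectra discussed above.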

    Parsing Speech: A Neural Approach to Integrating Lexical and Acoustic-Prosodic Information

    In conversational speech, the acoustic signal provides cues that help listeners disambiguate difficult parses. For automatically parsing spoken utterances, we introduce a model that integrates transcribed text and acoustic-prosodic features, using a convolutional neural network over energy and pitch trajectories coupled with an attention-based recurrent neural network that accepts text and prosodic features. We find that different types of acoustic-prosodic features are individually helpful and together give statistically significant improvements in parse and disfluency detection F1 scores over a strong text-only baseline. For this study with known sentence boundaries, error analyses show that the main benefit of acoustic-prosodic features is in sentences with disfluencies, that attachment decisions improve the most, and that transcription errors obscure gains from prosody.
    Comment: Accepted at NAACL HLT 2018
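    As an illustration only (not the authors' exact configuration), the sketch below shows one plausible PyTorch layout of the described architecture: a CNN over frame-level energy/pitch trajectories, max-pooled per word, concatenated with word embeddings, and fed to a bidirectional recurrent encoder with attention. Layer sizes, pooling, and the fusion scheme are assumptions.

        # Hedged sketch: fusing lexical and acoustic-prosodic features for parsing speech.
        import torch
        import torch.nn as nn

        class LexicalProsodicEncoder(nn.Module):
            def __init__(self, vocab_size, emb_dim=100, prosody_channels=2,
                         conv_dim=32, hidden=256):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, emb_dim)
                # CNN over energy and pitch trajectories (channels x frames per word)
                self.conv = nn.Conv1d(prosody_channels, conv_dim, kernel_size=5, padding=2)
                self.rnn = nn.LSTM(emb_dim + conv_dim, hidden,
                                   batch_first=True, bidirectional=True)
                self.attn = nn.MultiheadAttention(2 * hidden, num_heads=1, batch_first=True)

            def forward(self, word_ids, prosody_frames):
                # word_ids: (batch, words); prosody_frames: (batch, words, channels, frames)
                b, w, c, f = prosody_frames.shape
                words = self.embed(word_ids)                             # (b, w, emb_dim)
                conv_out = self.conv(prosody_frames.view(b * w, c, f))   # (b*w, conv_dim, f)
                prosody = conv_out.max(dim=-1).values.view(b, w, -1)     # pool over frames
                enc, _ = self.rnn(torch.cat([words, prosody], dim=-1))   # (b, w, 2*hidden)
                ctx, _ = self.attn(enc, enc, enc)                        # attention over words
                return ctx  # per-word representations for a downstream parser

        # usage on toy shapes: 2 utterances, 7 words, 2 prosodic channels, 50 frames per word
        model = LexicalProsodicEncoder(vocab_size=1000)
        out = model(torch.randint(0, 1000, (2, 7)), torch.randn(2, 7, 2, 50))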