
    Measurements of KL Branching Fractions and the CP Violation Parameter |eta+-|

    We present new measurements of the six largest branching fractions of the KL using data collected in 1997 by the KTeV experiment (E832) at Fermilab. The results are B(KL -> pi e nu) = 0.4067 +- 0.0011, B(KL -> pi mu nu) = 0.2701 +- 0.0009, B(KL -> pi+ pi- pi0) = 0.1252 +- 0.0007, B(KL -> pi0 pi0 pi0) = 0.1945 +- 0.0018, B(KL -> pi+ pi-) = (1.975 +- 0.012)E-3, and B(KL -> pi0 pi0) = (0.865 +- 0.010)E-3, where statistical and systematic errors have been summed in quadrature. We also determine the CP violation parameter |eta+-| to be (2.228 +- 0.010)E-3. Several of these results are not in good agreement with averages of previous measurements. Comment: Submitted to Phys. Rev. D; 20 pages, 22 figures.
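    As a brief aside (an addition for orientation, not text from the abstract): "summed in quadrature" means the total uncertainty quoted for each branching fraction is

    \[ \sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{stat}}^{2} + \sigma_{\mathrm{syst}}^{2}} \,, \]

    so with purely illustrative components \(\sigma_{\mathrm{stat}} = 0.0006\) and \(\sigma_{\mathrm{syst}} = 0.0009\) one would quote \(\sigma_{\mathrm{tot}} = \sqrt{0.0006^{2} + 0.0009^{2}} \approx 0.0011\). As a simple consistency check on the central values above, the six quoted fractions sum to roughly 0.999, as expected if these modes account for nearly all KL decays.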

    Detailed Study of the KL -> 3pi0 Dalitz Plot

    Using a sample of 68 million KL -> 3pi0 decays collected in 1996-1999 by the KTeV (E832) experiment at Fermilab, we present a detailed study of the KL -> 3pi0 Dalitz plot density. We report the first observation of interference from KL -> pi+ pi- pi0 decays in which the pi+ pi- pair rescatters to 2pi0 in a final-state interaction. This rescattering effect is described by the Cabibbo-Isidori model, and it depends on the difference in pion scattering lengths between the isospin I=0 and I=2 states, a0 - a2. Using the Cabibbo-Isidori model, we present the first measurement of the KL -> 3pi0 quadratic slope parameter that accounts for the rescattering effect. Comment: Accepted by Phys. Rev. D.
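    For orientation (an addition, not text from the abstract), the K -> 3pi Dalitz plot density is conventionally expanded in PDG-style variables as

    \[ |M(s_1,s_2,s_3)|^{2} \propto 1 + gY + hY^{2} + kX^{2}, \qquad X = \frac{s_1 - s_2}{m_{\pi^+}^{2}}, \quad Y = \frac{s_3 - s_0}{m_{\pi^+}^{2}}, \quad s_0 = \tfrac{1}{3}(s_1 + s_2 + s_3), \]

    where \(s_i = (P_K - p_i)^{2}\). For three identical neutral pions, Bose symmetry removes the odd (linear) terms, so the leading departure from a uniform Dalitz plot is the quadratic slope parameter h referred to above; the exact variables and conventions used in the KTeV analysis may differ from this generic form.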

    Globally Normalized Reader

    Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bi-directional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer's sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks. Comment: Presented at EMNLP 2017.
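    To make the decision process concrete, the following is a toy sketch (an addition, not the authors' code; all names, scores, and sizes are made up) of casting extractive QA as an iterative search: pick a sentence, then a start word, then an end word, pruning with a beam at each step and applying a single softmax over the complete paths left on the beam rather than a per-step softmax.

    # Toy sketch of a GNR-style iterative answer search; not the paper's implementation.
    import math

    def softmax(scores):
        """Normalize a list of scores into probabilities."""
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        return [e / z for e in exps]

    def gnr_style_search(sentence_scores, start_scores, end_scores, beam_size=2):
        """Three-step answer search over made-up scores.
        sentence_scores[i]  : score for choosing sentence i
        start_scores[i][j]  : score for word j as the span start in sentence i
        end_scores[i][j][k] : score for word k as the span end, given start j
        The final distribution is one softmax over all surviving complete
        paths (global normalization), not a softmax at every step."""
        # Step 1: keep the top-scoring sentences on the beam.
        beam = sorted(range(len(sentence_scores)),
                      key=lambda i: sentence_scores[i], reverse=True)[:beam_size]
        # Step 2: extend each surviving sentence with a start word.
        starts = [(sentence_scores[i] + s, i, j)
                  for i in beam for j, s in enumerate(start_scores[i])]
        starts = sorted(starts, reverse=True)[:beam_size]
        # Step 3: extend with an end word (end must not precede start).
        paths, scores = [], []
        for score, i, j in starts:
            for k, e in enumerate(end_scores[i][j]):
                if k >= j:
                    paths.append((i, j, k))
                    scores.append(score + e)
        probs = softmax(scores)
        best = max(range(len(paths)), key=lambda n: probs[n])
        return paths[best], probs[best]

    # Made-up scores: 2 sentences, 3 words each.
    sent = [1.0, 0.2]
    start = [[0.5, 2.0, 0.1], [0.3, 0.3, 0.3]]
    end = [[[0.1, 0.4, 1.5]] * 3, [[0.2, 0.2, 0.2]] * 3]
    print(gnr_style_search(sent, start, end))  # best path here is sentence 0, words 1-2

    In the real model the sentence, start, and end scores come from learned representations, and training back-propagates through the beam so that the globally normalized objective is what gets optimized; the sketch above only illustrates the search and normalization structure described in the abstract.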