
    Lattice calculation of hadronic tensor of the nucleon

    We report an attempt to calculate the deep inelastic scattering structure functions from the hadronic tensor calculated on the lattice. We use the Backus-Gilbert reconstruction method to address the inverse Laplace transformation needed for the analytic continuation from Euclidean to Minkowski space. Comment: 8 pages, 5 figures; Proceedings of the 35th International Symposium on Lattice Field Theory, 18-24 June 2017, Granada, Spain.
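    As a rough illustration of the Backus-Gilbert step, the sketch below reconstructs a point estimate of a spectral function from Euclidean-time data, assuming an exponential kernel exp(-ντ), a finite frequency grid, and a small regularization term; the function name, grid, and regularization choice are illustrative assumptions and not details taken from the proceedings.

```python
import numpy as np

def backus_gilbert_estimate(taus, W_tau, nu_bar, nu_max=10.0, n_nu=2000, lam=1e-6):
    """Backus-Gilbert point estimate of a spectral function at frequency nu_bar.

    Illustrative assumption: the Euclidean data obey
        W_tau[i] = int_0^inf dnu exp(-nu * taus[i]) W(nu),
    i.e. an inverse Laplace problem, and the coefficients q minimize the spread
    of the resolution function around nu_bar subject to unit normalization.
    """
    taus = np.asarray(taus, dtype=float)
    nu = np.linspace(0.0, nu_max, n_nu)            # discretized frequency grid
    dnu = nu[1] - nu[0]
    K = np.exp(-np.outer(taus, nu))                # kernel K(tau_i, nu)

    # Spread matrix A_ij = int dnu (nu - nu_bar)^2 K_i(nu) K_j(nu)
    w = (nu - nu_bar) ** 2
    A = (K[:, None, :] * K[None, :, :] * w).sum(axis=2) * dnu
    A += lam * np.eye(len(taus))                   # crude regularization of an ill-conditioned matrix

    # Normalization vector R_i = int dnu K_i(nu)
    R = K.sum(axis=1) * dnu

    # q = A^{-1} R / (R^T A^{-1} R) enforces int dnu sum_i q_i K_i(nu) = 1
    Ainv_R = np.linalg.solve(A, R)
    q = Ainv_R / (R @ Ainv_R)

    return q @ np.asarray(W_tau, dtype=float)      # estimate of W(nu_bar)
```

    In realistic applications the spread matrix is badly conditioned and the reconstruction is typically stabilized by mixing in the data covariance matrix; the simple lam term above only stands in for such a regularization.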

    Variance Reduction and Cluster Decomposition

    It is a common problem in lattice QCD calculations of the mass of a hadron with an annihilation channel that the signal falls off in time while the noise remains constant. In addition, the disconnected-insertion calculation of three-point functions and the calculation of the neutron electric dipole moment with the θ term suffer from a noise problem due to the √V fluctuation. We identify these problems as having the same origin and show that the √V problem can be overcome by utilizing the cluster decomposition principle. We demonstrate this by considering the calculations of the glueball mass, the strangeness content of the nucleon, and the CP-violation angle in the nucleon due to the θ term. It is found that for lattices with physical sizes of 4.5 - 5.5 fm, the statistical errors of these quantities can be reduced by a factor of 3 to 4. The systematic errors can be estimated from the Akaike information criterion. For the strangeness content, we find that the systematic error is of the same size as the statistical one when the cluster decomposition principle is utilized. This results in a 2 to 3 times reduction in the overall error. Comment: 7 pages, 5 figures, appendix added to address the systematic error.
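    The cluster-decomposition cutoff itself amounts to restricting the double spatial sum over the two operators to separations below a cutoff radius. The sketch below illustrates this for two scalar operator fields on a periodic L^3 lattice; the field shapes, the FFT-based correlation, and the function name are illustrative assumptions rather than the paper's actual implementation.

```python
import numpy as np

def truncated_correlation(O1, O2, r_max):
    """Correlate two operator fields on a periodic L^3 lattice, keeping only
    relative separations |r| <= r_max (cluster-decomposition cutoff).

    Illustrative assumption: O1 and O2 are real arrays of shape (L, L, L)
    measured on one gauge configuration. The full sum over all pairs (x, y)
    carries the sqrt(V) fluctuation, while restricting |x - y| <= r_max keeps
    the short-distance region where the signal lives.
    """
    L = O1.shape[0]

    # FFT cross-correlation: C(r) = sum_x O1(x) O2(x + r) for every displacement r
    C_r = np.fft.ifftn(np.conj(np.fft.fftn(O1)) * np.fft.fftn(O2)).real

    # Periodic distance of each displacement component from the origin
    d = np.minimum(np.arange(L), L - np.arange(L))
    r2 = d[:, None, None] ** 2 + d[None, :, None] ** 2 + d[None, None, :] ** 2

    return C_r[r2 <= r_max ** 2].sum()   # truncated two-point sum for this configuration
```

    Averaging the truncated correlator over configurations while varying r_max is then one way to see, in this toy setting, that the short-distance region saturates the signal while the large-separation region mostly adds noise.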

    Latent Class Model with Application to Speaker Diarization

    In this paper, we apply a latent class model (LCM) to the task of speaker diarization. LCM is similar to Patrick Kenny's variational Bayes (VB) method in that it uses soft information and avoids premature hard decisions in its iterations. In contrast to the VB method, which is based on a generative model, LCM provides a framework allowing both generative and discriminative models. The discriminative property is realized in this work through the use of i-vectors (Ivec), probabilistic linear discriminant analysis (PLDA), and a support vector machine (SVM). Systems denoted LCM-Ivec-PLDA, LCM-Ivec-SVM, and LCM-Ivec-Hybrid are introduced. In addition, three further improvements are applied to enhance performance: 1) adding neighbor windows to extract more speaker information for each short segment; 2) using a hidden Markov model to avoid frequent speaker change points; and 3) using agglomerative hierarchical clustering for initialization and to provide hard and soft priors, in order to overcome the problem of initialization sensitivity. Experiments on the National Institute of Standards and Technology Rich Transcription 2009 speaker diarization database, under the single-distant-microphone condition, show that the diarization error rate (DER) of the proposed methods has substantial relative improvements compared with mainstream systems. Compared to the VB method, the relative improvements of the LCM-Ivec-PLDA, LCM-Ivec-SVM, and LCM-Ivec-Hybrid systems are 23.5%, 27.1%, and 43.0%, respectively. Experiments on our collected database, CALLHOME97, CALLHOME00, and SRE08 short2-summed trial conditions also show that the proposed LCM-Ivec-Hybrid system has the best overall performance.
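    To make the soft-assignment idea concrete, the sketch below runs a generic latent-class iteration in which segment-to-speaker posteriors stay soft throughout; the score callback, initialization, and update rule are illustrative placeholders and not the specific LCM-Ivec-PLDA/SVM pipelines evaluated in the paper.

```python
import numpy as np

def lcm_soft_assign(scores_fn, segments, n_speakers, n_iter=10, seed=0):
    """Generic latent-class iteration with soft segment-to-speaker posteriors.

    Illustrative assumption: scores_fn(segments, posteriors) returns an
    (n_segments, n_speakers) matrix of log-domain similarity scores, e.g. from
    speaker models re-estimated using the current soft posteriors.
    """
    rng = np.random.default_rng(seed)
    n_seg = len(segments)

    # Near-uniform soft initialization instead of a premature hard clustering
    post = rng.dirichlet(np.full(n_speakers, 50.0), size=n_seg)
    prior = np.full(n_speakers, 1.0 / n_speakers)

    for _ in range(n_iter):
        S = scores_fn(segments, post)                # (n_seg, n_speakers) scores
        log_p = np.log(prior) + S                    # unnormalized log posteriors
        log_p -= log_p.max(axis=1, keepdims=True)    # stabilize the softmax
        post = np.exp(log_p)
        post /= post.sum(axis=1, keepdims=True)      # keep assignments soft
        prior = post.mean(axis=0)                    # update class priors

    return post.argmax(axis=1), post                 # final labels and soft posteriors
```

    A real system would re-estimate the speaker i-vector/PLDA or SVM models inside the score callback from the current posteriors and add the HMM smoothing and AHC-based initialization described in the abstract.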