
    Finite size effects in neural network algorithms


    Neural network parametrization of spectral functions from hadronic tau decays and determination of QCD vacuum condensates

    The spectral function ρ_{V-A}(s) is determined from ALEPH and OPAL data on hadronic tau decays using a neural network parametrization trained to retain the full experimental information on errors and their correlations, together with the chiral sum rules: the DMO sum rule, the first and second Weinberg sum rules, and the sum rule for the electromagnetic mass splitting of the pion. Nonperturbative QCD vacuum condensates can then be determined from finite energy sum rules. Our method minimizes all sources of theoretical uncertainty and bias, producing an estimate of the condensates that is independent of the specific finite energy sum rule used. The results for the central values of the condensates O_6 and O_8 are both negative. Comment: 29 pages, 18 ps figures
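    To illustrate how a finite energy sum rule turns a parametrized spectral function into condensate estimates, here is a minimal numerical sketch in Python. The resonance-dominance shape standing in for ρ_{V-A}(s), the choice of s0, and the moment-to-condensate normalizations are illustrative assumptions, not the paper's trained network or its conventions.

```python
import numpy as np

# Toy stand-in for the neural-network parametrization of rho_{V-A}(s);
# the trained network from the paper is NOT reproduced here.
def rho_v_minus_a(s):
    # crude shape: a rho(770)-like vector peak minus an a1(1260)-like axial peak
    def bw(s, m2, g2, norm):
        return norm * m2 * g2 / ((s - m2) ** 2 + m2 * g2)
    return bw(s, 0.77 ** 2, 0.02, 0.05) - bw(s, 1.26 ** 2, 0.15, 0.04)

def trapezoid(y, x):
    # simple trapezoidal integration, avoiding NumPy version differences
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def fesr_moment(n, s0, n_points=2000):
    """Finite-energy-sum-rule moment  M_n = int_0^{s0} ds s^n rho_{V-A}(s)."""
    s = np.linspace(1e-6, s0, n_points)
    return trapezoid(s ** n * rho_v_minus_a(s), s)

s0 = 3.0  # GeV^2, roughly the tau kinematic endpoint m_tau^2 (assumed)
# In the OPE these moments are tied, up to signs and normalization
# conventions, to the dimension-6 and dimension-8 condensates.
print("M_2 (~ O_6 up to sign/normalization):", fesr_moment(2, s0))
print("M_3 (~ O_8 up to sign/normalization):", fesr_moment(3, s0))
```

    A real analysis would replace the toy shape with the trained parametrization and propagate the experimental error correlations through the moments.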

    A geometrical analysis of global stability in trained feedback networks

    Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been made in recent years, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown. Here we focus on a simple yet underexplored computational setup: a feedback architecture trained to associate a stationary output with a stationary input. As a starting point, we derive an approximate analytical description of the global dynamics in trained networks, which assumes uncorrelated connectivity weights in the feedback and in the random bulk. The resulting mean-field theory suggests that the task admits several classes of solutions with different stability properties. The classes are characterized by the geometrical arrangement of the readout with respect to the input vectors, defined in the high-dimensional space spanned by the network population. We find that this approximate theoretical approach can be used to understand how standard training techniques implement the input-output task in finite-size feedback networks. In particular, our simplified description captures the local and global stability properties of the target solution, and thus predicts training performance.
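    A small simulation can make the setup concrete: a random recurrent bulk closed by a rank-one feedback loop, driven by a stationary input, whose readout should settle to a stationary value. This is a minimal sketch under assumed parameter values with a random (untrained) readout; it illustrates the architecture and a local fixed-point stability check, not the paper's mean-field theory.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
g = 0.8                                          # bulk gain, below the chaotic transition
J = g * rng.normal(0, 1 / np.sqrt(N), (N, N))    # random bulk connectivity
I = rng.normal(0, 1, N)                          # input vector
w = rng.normal(0, 1 / N, N)                      # readout (random here; training would set it)
m = rng.normal(0, 1, N)                          # feedback vector, assumed uncorrelated with J

def simulate(u, T=200.0, dt=0.1):
    """Euler-integrate  dx/dt = -x + J phi(x) + m z + I u,  z = w . phi(x)."""
    x = np.zeros(N)
    for _ in range(int(T / dt)):
        z = w @ np.tanh(x)
        x += dt * (-x + J @ np.tanh(x) + m * z + I * u)
    return x, w @ np.tanh(x)

x_star, z_star = simulate(u=1.0)
# Local stability of the reached fixed point: largest real part of the
# Jacobian of the right-hand side,  -Id + (J + m w^T) diag(phi'(x*)).
phi_prime = 1.0 - np.tanh(x_star) ** 2
jac = -np.eye(N) + (J + np.outer(m, w)) * phi_prime
print("stationary readout z*:", z_star)
print("max Re(eigenvalue) of Jacobian:", np.linalg.eigvals(jac).real.max())
```

    The geometry the abstract refers to enters through the alignment of w and m with I: rotating the readout relative to the input vector changes which fixed point is reached and whether the Jacobian spectrum stays in the left half-plane.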

    Deep Neural Networks for Energy and Position Reconstruction in EXO-200

    We apply deep neural networks (DNNs) to data from the EXO-200 experiment. In the studied cases, the DNN is able to reconstruct the relevant parameters - total energy and position - directly from raw digitized waveforms, with minimal exceptions. For the first time, the developed algorithms are evaluated on real detector calibration data. The accuracy of the reconstruction matches or exceeds that of the conventional approaches developed by EXO-200 over the course of the experiment. Most existing DNN approaches to event reconstruction and classification in particle physics are trained on Monte Carlo simulated events; such algorithms are inherently limited by the accuracy of the simulation. We describe a unique approach that, in an experiment such as EXO-200, makes it possible to perform certain reconstruction and analysis tasks by training the network on waveforms from experimental data, reducing or eliminating the reliance on Monte Carlo simulation. Comment: Accepted version. 33 pages, 28 figures
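    To make the waveform-regression setup concrete, here is a hedged PyTorch sketch of a 1-D convolutional network mapping raw digitized waveforms to energy and position. The channel count, waveform length, layer sizes, and the WaveformRegressor name are all hypothetical, not the EXO-200 architecture; in the approach described above the training labels would come from detector calibration data, whereas random tensors stand in here.

```python
import torch
import torch.nn as nn

class WaveformRegressor(nn.Module):
    """Illustrative 1-D CNN: raw waveforms -> [energy, x, y, z]."""
    def __init__(self, n_channels=76, n_outputs=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis
        )
        self.head = nn.Linear(64, n_outputs)

    def forward(self, waveforms):
        # waveforms: (batch, channels, samples)
        return self.head(self.features(waveforms).squeeze(-1))

model = WaveformRegressor()
batch = torch.randn(8, 76, 2048)   # placeholder digitized waveforms
targets = torch.randn(8, 4)        # placeholder calibration-derived labels
loss = nn.MSELoss()(model(batch), targets)
loss.backward()
print("training loss:", loss.item())
```

    The point of the data-driven training described in the abstract is that `targets` come from well-understood calibration sources rather than simulation, so the network never inherits Monte Carlo mismodeling.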