89 research outputs found

    Revealing Fundamental Physics from the Daya Bay Neutrino Experiment using Deep Neural Networks

    Experiments in particle physics produce enormous quantities of data that must be analyzed and interpreted by teams of physicists. This analysis is often exploratory: scientists are unable to enumerate the possible types of signal prior to performing the experiment. Tools for summarizing, clustering, visualizing and classifying high-dimensional data are therefore essential. In this work, we show that meaningful physical content can be revealed by transforming the raw data into a learned high-level representation using deep neural networks, with measurements taken at the Daya Bay Neutrino Experiment as a case study. We further show how convolutional deep neural networks can provide an effective classification filter with greater than 97% accuracy across different classes of physics events, significantly better than other machine learning approaches.
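    The classification pipeline described above (convolutional filters over detector data, followed by a per-class score) can be sketched as follows. This is a minimal illustration only, not the paper's actual network: the input shape, filter count, class count, and random weights are all assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernels, weights):
    """Toy convolutional filter: conv -> ReLU -> global average pool -> linear class scores."""
    features = np.array([conv2d(image, k).clip(min=0).mean() for k in kernels])
    return weights @ features  # one score per hypothetical event class

rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))       # stand-in for a detector hit map
kernels = rng.normal(size=(4, 3, 3))  # 4 "learned" 3x3 filters (random here)
weights = rng.normal(size=(5, 4))     # 5 hypothetical event classes
scores = classify(image, kernels, weights)
print(scores.shape)  # (5,)
```

    In a trained network the kernels and weights would be fit to labeled events; here they only show the shape of the computation.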

    The Athena Data Dictionary and Description Language

    Athena is the ATLAS off-line software framework, based upon the GAUDI architecture from LHCb. As part of ATLAS' continuing efforts to enhance and customise the architecture to meet our needs, we have developed a data object description tool suite and service for Athena. The aim is to provide a set of tools to describe, manage, integrate and use the Event Data Model at a design level according to the concepts of the Athena framework (use of patterns, relationships, ...). Moreover, to ensure stability and reusability, this must be fully independent of the implementation details. After an extensive investigation into the many options, we have developed a language grammar based upon a description language (IDL, ODL) to provide support for object integration in Athena. We then developed a compiler front end based upon this language grammar, JavaCC, and a Java Reflection API-like interface, and used these tools to develop several compiler back ends which meet specific needs in ATLAS, such as automatic generation of object converters and data object scripting interfaces. We present here details of our work and experience to date on the Athena Definition Language and Athena Data Dictionary. Comment: 4 pages, 2 figures.
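    The front-end/back-end split described above can be illustrated with a toy parser: a description-language source is turned into a neutral model that back ends (converter generators, scripting bindings) could consume. The class syntax below is hypothetical and does not reproduce the actual Athena Definition Language grammar.

```python
import re

# Hypothetical ADL-like class description (illustrative syntax only).
source = """
class Track {
    float pt;
    int charge;
}
"""

def parse(text):
    """Parse 'class Name { type field; ... }' blocks into a simple,
    implementation-independent model for compiler back ends."""
    classes = {}
    for name, body in re.findall(r"class\s+(\w+)\s*\{([^}]*)\}", text):
        fields = re.findall(r"(\w+)\s+(\w+)\s*;", body)
        classes[name] = [{"type": t, "name": n} for t, n in fields]
    return classes

model = parse(source)
print(model)
# {'Track': [{'type': 'float', 'name': 'pt'}, {'type': 'int', 'name': 'charge'}]}
```

    A real front end (the JavaCC-generated one in the paper) would build a full syntax tree with relationships and patterns; the point here is only that back ends see a model, never the source text.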

    GMA Instrumentation of the Athena Framework using NetLogger

    Grid applications are, by their nature, wide-area distributed applications. This WAN aspect makes conventional monitoring and instrumentation tools (such as top, gprof, LSF Monitor, etc.) impractical for verifying that the application is running correctly and efficiently. To be effective, monitoring data must be "end-to-end", meaning that all components between the Grid application endpoints must be monitored. Instrumented applications can generate a large amount of monitoring data, so instrumentation is typically off by default. For jobs running on a Grid, there needs to be a general mechanism to remotely activate the instrumentation in running jobs; the NetLogger Toolkit Activation Service provides this mechanism. To demonstrate this, we have instrumented the ATLAS Athena Framework with NetLogger to generate monitoring events. We then use a GMA-based activation service to control NetLogger's trigger mechanism, which allows one to easily start, stop, or change the logging level of a running program by modifying a trigger file. We present here details of the design of the NetLogger implementation of the GMA-based activation service and the instrumentation service for Athena. We also describe how this activation service allows us to non-intrusively collect and visualize the ATLAS Athena Framework monitoring data.
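    The trigger-file idea, changing the logging level of an already-running program by editing a file it watches, can be sketched with the standard library. The file name and one-word format below are assumptions for illustration, not NetLogger's actual trigger format or API.

```python
import logging
import os
import tempfile

logger = logging.getLogger("athena.monitor")
logger.setLevel(logging.WARNING)   # instrumentation quiet by default

def apply_trigger(path):
    """Re-read the trigger file and adjust the logging level accordingly.

    In a long-running job this would be polled periodically or invoked on
    an activation message; the one-word file format is assumed here."""
    if not os.path.exists(path):
        return
    with open(path) as f:
        level = getattr(logging, f.read().strip().upper(), None)
    if isinstance(level, int):
        logger.setLevel(level)

trigger = os.path.join(tempfile.mkdtemp(), "trigger")
with open(trigger, "w") as f:
    f.write("DEBUG")               # an operator remotely turns logging up
apply_trigger(trigger)
print(logger.isEnabledFor(logging.DEBUG))  # True
```

    The activation service in the paper plays the role of the "operator" here, delivering the trigger change to jobs across the Grid instead of a local file edit.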

    Optimization of Software on High Performance Computing Platforms for the LUX-ZEPLIN Dark Matter Experiment

    High Energy Physics experiments like the LUX-ZEPLIN dark matter experiment face unique challenges when running their computation on High Performance Computing resources. In this paper, we describe some strategies to optimize the memory usage of simulation codes with the help of profiling tools. We employed this approach and achieved memory reductions of 10-30%. While this work was performed in the context of the LZ experiment, it has wider applicability to other HEP experimental codes that face these challenges on modern computer architectures. Comment: Contribution to Proceedings of CHEP 2019, Nov 4-8, Adelaide, Australia.
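    The profile-then-restructure workflow generalizes beyond the (C++) LZ codes. As an analogous sketch in Python, assuming nothing about the paper's actual tools, the standard-library tracer can quantify the gain from a more compact data layout:

```python
import tracemalloc

def build_hits(n):
    # Wasteful representation: one dict per simulated hit.
    return [{"x": float(i), "y": 0.0, "e": 1.0} for i in range(n)]

def build_hits_compact(n):
    # Compact representation: parallel tuples instead of per-hit dicts.
    return (tuple(float(i) for i in range(n)), (0.0,) * n, (1.0,) * n)

def peak_bytes(fn, n):
    """Measure peak allocation of fn(n) with the stdlib tracer."""
    tracemalloc.start()
    fn(n)
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

wasteful = peak_bytes(build_hits, 50_000)
compact = peak_bytes(build_hits_compact, 50_000)
print(compact < wasteful)  # the compact layout allocates noticeably less
```

    The same loop, profile peak usage, change the representation, re-measure, is what the profiling-driven 10-30% reductions amount to, whatever the language.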

    Biases in probabilistic category learning in relation to social anxiety.

    Instrumental learning paradigms are rarely employed to investigate the mechanisms underlying acquired fear responses in social anxiety. Here, we adapted a probabilistic category learning paradigm to assess information processing biases as a function of the degree of social anxiety traits in a sample of healthy individuals without a diagnosis of social phobia. Participants were presented with three pairs of neutral faces with differing probabilistic accuracy contingencies (A/B: 80/20, C/D: 70/30, E/F: 60/40). Upon making their choice, negative and positive feedback was conveyed using angry and happy faces, respectively. The highly socially anxious group showed a strong tendency to be more accurate at learning the probability contingency associated with the most ambiguous stimulus pair (E/F: 60/40). Moreover, when pairing the most positively reinforced stimulus or the most negatively reinforced stimulus with all the other stimuli in a test phase, the highly socially anxious group avoided the most negatively reinforced stimulus significantly more than the control group. The results are discussed with reference to avoidance learning and hypersensitivity to negative socially evaluative information associated with social anxiety.
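    The feedback contingency of the paradigm can be made concrete with a short simulation. The 80/20, 70/30 and 60/40 probabilities come from the abstract; the feedback labels and trial loop are illustrative assumptions.

```python
import random

# Choosing the "better" member of each pair yields positive feedback
# (a happy face) with the listed probability; otherwise an angry face.
CONTINGENCIES = {("A", "B"): 0.8, ("C", "D"): 0.7, ("E", "F"): 0.6}

def feedback(pair, choice, rng):
    """Return 'happy' or 'angry' face feedback for a choice within a pair."""
    better, _ = pair
    p_positive = CONTINGENCIES[pair]
    if choice != better:
        p_positive = 1.0 - p_positive  # e.g. B is rewarded only 20% of the time
    return "happy" if rng.random() < p_positive else "angry"

rng = random.Random(1)
trials = [feedback(("E", "F"), "E", rng) for _ in range(10_000)]
rate = trials.count("happy") / len(trials)
print(rate)  # close to 0.6 for the most ambiguous pair
```

    The 60/40 pair is "most ambiguous" precisely because its reward rate sits closest to chance, so the contingency takes the most trials to learn.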

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty to thirty year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. Comment: Major update of previous version. This is the reference document for the LBNE science program and current status. Chapters 1, 3, and 9 provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. 288 pages, 116 figures.

    Dual Electron Spectrometer for Magnetospheric Multiscale Mission: Results of the Comprehensive Tests of the Engineering Test Unit

    The Magnetospheric Multiscale mission (MMS) is designed to study fundamental phenomena in space plasma physics such as magnetic reconnection. The mission consists of four spacecraft, equipped with identical scientific payloads, allowing for the first measurements of fast dynamics in the critical electron diffusion region where magnetic reconnection occurs and charged particles are demagnetized. The MMS orbit is optimized to ensure the spacecraft spend extended periods of time in locations where reconnection is known to occur: at the dayside magnetopause and in the magnetotail. In order to resolve fine structures of the three-dimensional electron distributions in the diffusion region (reconnection site), the Fast Plasma Investigation's (FPI) Dual Electron Spectrometer (DES) is designed to measure three-dimensional electron velocity distributions with an extremely high time resolution of 30 ms. In order to achieve this unprecedented sampling rate, four dual spectrometers, each sampling 180 x 45 degree sections of the sky, are installed on each spacecraft. We present results of the comprehensive tests performed on the DES Engineering Test Unit (ETU). These include the main parameters of the spectrometer, such as energy resolution, angular acceptance, and geometric factor, along with their variations over the 16 pixels spanning the 180-degree tophat Electrostatic Analyzer (ESA) field of view and over the energy of the test beam. A newly developed method for precisely defining the operational space of the instrument is presented as well. This allows optimization of the trade-off between pixel-to-pixel crosstalk and uniformity of the main spectrometer parameters.

    Examining the Link Between Domestic Violence Victimization and Loneliness in a Dutch Community Sample: A Comparison Between Victims and Nonvictims by Type D Personality

    The current study investigated whether differences in loneliness scores between individuals with a distressed personality type (type D personality) and subjects without such a personality varied by domestic violence victimization. Participants (N = 625) were recruited by random sampling from the Municipal Basic Administration of the Dutch city of ‘s-Hertogenbosch and were invited to fill out a set of questionnaires on health status. For this study, only ratings for domestic violence victimization, type D personality, feelings of loneliness, and demographics were used. Statistical analyses yielded main effects on loneliness for both type D personality and history of domestic violence victimization. Above and beyond these main effects, their interaction was significantly associated with loneliness as well. However, this result seemed to apply to emotional loneliness in particular. Findings are discussed in light of previous research and study limitations.

    HEP Science Network Requirements--Final Report

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the US Department of Energy Office of Science, the single largest supporter of basic research in the physical sciences in the United States. In support of the Office of Science programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet to be a highly successful enabler of scientific discovery for over 20 years. In August 2009, ESnet and the Office of High Energy Physics (HEP), of the DOE Office of Science, organized a workshop to characterize the networking requirements of the programs funded by HEP. The international HEP community has been a leader in data-intensive science from the beginning. HEP data sets have historically been the largest of all scientific data sets, and the community of interest the most distributed. The HEP community was also the first to embrace Grid technologies. The requirements identified at the workshop are summarized below, and described in more detail in the case studies and the Findings section: (1) There will be more LHC Tier-3 sites than originally thought, and likely more Tier-2 to Tier-2 traffic than was envisioned. It is not yet known what the impact of this will be on ESnet, but we will need to keep an eye on this traffic. (2) The LHC Tier-1 sites (BNL and FNAL) predict the need for 40-50 Gbps of data movement capacity in 2-5 years, and 100-200 Gbps in 5-10 years for HEP program related traffic. Other key HEP sites include LHC Tier-2 and Tier-3 sites, many of which are located at universities. To support the LHC, ESnet must continue its collaborations with university and international networks. 
    (3) While in all cases the deployed 'raw' network bandwidth must exceed the user requirements in order to meet the data transfer and reliability requirements, network engineering for trans-Atlantic connectivity is more complex than network engineering for intra-US connectivity. This is because transoceanic circuits have lower reliability and longer repair times when compared with land-based circuits. Therefore, trans-Atlantic connectivity requires greater deployed bandwidth and diversity to ensure reliability and service continuity of the user-level required data transfer rates. (4) Trans-Atlantic traffic load and patterns must be monitored, and projections adjusted if necessary. There is currently a shutdown planned for the LHC in 2012 that may affect projections of trans-Atlantic bandwidth requirements. (5) There is a significant need for network tuning and troubleshooting during the establishment of new LHC Tier-2 and Tier-3 facilities. ESnet will work with the HEP community to help new sites effectively use the network. (6) SLAC is building the CCD camera for the LSST. This project will require significant bandwidth (up to 30 Gbps) to NCSA over the next few years. (7) The accelerator modeling program at SLAC could require the movement of 1 PB simulation data sets from the Leadership Computing Facilities at Argonne and Oak Ridge to SLAC. The data sets would need to be moved overnight, and moving 1 PB in eight hours requires more than 300 Gbps of throughput. This requirement is dependent on the deployment of analysis capabilities at SLAC, and is about five years away. (8) It is difficult to achieve high data transfer throughput to sites in China. Projects that need to transfer data in or out of China are encouraged to deploy test and measurement infrastructure (e.g. perfSONAR) and allow time for performance tuning.
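    The 300 Gbps figure in item (7) follows directly from the stated volume and window; a quick check (assuming a decimal petabyte):

```python
# Moving 1 PB overnight, worked through.
petabyte_bits = 1e15 * 8      # 1 PB = 1e15 bytes = 8e15 bits
window_s = 8 * 3600           # an eight-hour overnight window
required_gbps = petabyte_bits / window_s / 1e9
print(round(required_gbps))   # ~278 Gbps of sustained goodput
```

    About 278 Gbps of sustained application-level throughput is needed before protocol overhead and imperfect link utilization, which is why the report rounds the provisioning requirement up to "more than 300 Gbps".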