
    New science on the Open Science Grid

    The Open Science Grid (OSG) includes work to enable new science, new scientists, and new modalities in support of computationally based research. The transition from existing approaches to new ones frequently requires significant sociological and organizational changes. OSG leverages the deliverables developed for its large-scale physics experiment member communities to benefit new communities at all scales through activities in education, engagement, and the distributed facility. This paper gives both a brief general description and specific examples of new science enabled on the OSG. More information is available at the OSG web site: www.opensciencegrid.org.

    High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

    Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers -- 1) software effectiveness, and 2) infrastructure and expertise advancement. The HEP-FCE formed three working groups -- 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software) -- to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. The final versions of the reports are combined in this document and are presented along with introductory material. (72 pages)

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life, and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty to thirty year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. (Major update of the previous version; this is the reference document for the LBNE science program and current status. 288 pages, 116 figures.)
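    As a rough illustration of why the 1,300 km baseline matters (a simplified sketch, not taken from the document), the two-flavor approximation to the oscillation probability shows how the baseline L and the neutrino energy E enter together:

        \[
          P(\nu_\mu \to \nu_e) \;\approx\; \sin^2(2\theta)\,
          \sin^2\!\left(\frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\;L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]}\right)
        \]

    For Δm² ≈ 2.5 × 10⁻³ eV² and L = 1,300 km, the first oscillation maximum falls near E ≈ 2.5 GeV, well matched to a Fermilab-based beam; the full three-flavor, matter-affected probability that actually drives the CP-violation and mass-ordering sensitivity is considerably more involved.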

    Research and Development for Near Detector Systems Towards Long Term Evolution of Ultra-precise Long-baseline Neutrino Experiments

    With the discovery of a non-zero value of the θ₁₃ mixing angle, the next generation of long-baseline neutrino (LBN) experiments offers the possibility of obtaining statistically significant samples of muon and electron neutrinos and anti-neutrinos with large oscillation effects. In this document we intend to highlight the importance of Near Detector facilities in LBN experiments, both to constrain the systematic uncertainties affecting oscillation analyses and to perform, thanks to their close location, measurements of broad benefit for LBN physics goals. A strong European contribution to these efforts is possible.
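    Schematically (an illustrative relation, not the collaborations' actual analysis chains), the near detector enters the oscillation measurement through a far/near extrapolation of the form

        \[
          N^{\mathrm{far}}_x(E) \;\approx\; N^{\mathrm{near}}_x(E)\,
          \frac{\Phi^{\mathrm{far}}(E)\,\epsilon^{\mathrm{far}}(E)}
               {\Phi^{\mathrm{near}}(E)\,\epsilon^{\mathrm{near}}(E)}\,
          P_{\mathrm{osc}}(E),
        \]

    where N_x are event rates in channel x, Φ the neutrino fluxes and ε the selection efficiencies. Flux and interaction cross-section uncertainties largely cancel in the near-to-far ratio, which is why precise near-detector measurements directly tighten the systematic error budget of the oscillation fit.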

    The Prompt Processing System and Data Quality Monitoring in the ProtoDUNE-SP Experiment

    The DUNE Collaboration is conducting an experimental program (named protoDUNE) which involves a beam test of two large-scale prototypes of the DUNE Far Detector at CERN, operating in 2018-2019. The volume of data to be collected by protoDUNE-SP (the single-phase detector) will amount to a few petabytes, and the sustained rate of data sent to mass storage will be in the range of a few hundred MB per second. After collection the data is committed to storage at CERN and transmitted to Fermi National Accelerator Laboratory in the US for processing, analysis and long-term preservation. The protoDUNE experiment requires substantial Data Quality Monitoring capabilities in order to ascertain the condition of the detector and its various subsystems. We present the design of the protoDUNE Prompt Processing System, its deployment at CERN, and its performance during the data challenges and actual data taking.

    Evolution of the Data Quality Monitoring and Prompt Processing System in the protoDUNE-SP experiment

    The DUNE Collaboration currently operates an experimental program based at CERN which includes a beam test and an extended cosmic ray run of two large-scale prototypes of the massive Liquid Argon Time Projection Chamber (LArTPC) for the DUNE Far Detector. The volume of data collected by the single-phase prototype (protoDUNE-SP) amounts to 3 PB, and the sustained rate of data sent to mass storage is O(100) MB/s. Data Quality Monitoring was implemented by directing a fraction of the data stream to the protoDUNE prompt processing system (p3s), which is optimized for continuous low-latency calculation of the vital detector metrics and various graphics, including event displays. It served a crucial role throughout the life cycle of the experiment. We present our experience in leveraging the CERN computing environment and operating the system over an extended period of time, while adapting to evolving requirements and a changing computing platform.
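    A minimal sketch of the "fraction of the data stream" idea (hypothetical code, not the actual p3s implementation; the staging path and the submit_prompt_job helper are invented for illustration):

        import random
        import time
        from pathlib import Path

        SAMPLE_FRACTION = 0.02                          # assumption: only a few percent of files feed prompt processing
        INCOMING = Path("/data/protodune/incoming")     # hypothetical raw-data staging area
        seen = set()

        def submit_prompt_job(path):
            """Placeholder: a real system would enqueue a DQM payload (metrics, event displays, ...)."""
            print(f"queued {path} for prompt processing")

        while True:
            for f in sorted(INCOMING.glob("*.hdf5")):
                if f in seen:
                    continue
                seen.add(f)
                # Route only a small random fraction of new files to the low-latency DQM stream.
                if random.random() < SAMPLE_FRACTION:
                    submit_prompt_job(f)
            time.sleep(10)                              # polling is cheap relative to a minutes-scale latency budget

    The point of sampling is that the prompt stream needs only enough events to judge detector health, so its bandwidth can stay far below that of the main data path.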

    The protoDUNE-SP experiment and its prompt processing system

    The Deep Underground Neutrino Experiment (DUNE) will employ a uniquely large Liquid Argon Time Projection Chamber with four separate modules of 10 kt fiducial mass each. Different modules will utilize single- and dual-phase Liquid Argon technologies. An experimental program ("protoDUNE") has been initiated which includes a beam and cosmic ray test of large-scale DUNE prototypes at CERN in 2018. The volume of data to be collected by the protoDUNE single-phase detector will amount to a few petabytes, and the sustained rate of data sent to mass storage will be in the range of a few hundred megabytes per second. The protoDUNE experiment requires substantial Data Quality Monitoring capabilities in order to ascertain the condition of the detector and its various subsystems. To this end, a Prompt Processing system has been developed which is complementary to Online Monitoring and is characterized by a lower bandwidth, scalable CPU resources, and end-to-end latency on the scale of a few minutes. We present the design of the protoDUNE Prompt Processing system, the current status of its development and testing, and issues related to its interfaces and deployment.
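    A toy illustration of the "scalable CPU resources" and "end-to-end latency of a few minutes" characteristics (hypothetical code, not the protoDUNE system; run_dqm_payload is an invented stand-in):

        import time
        from concurrent.futures import ProcessPoolExecutor

        def run_dqm_payload(task):
            """Stand-in for a prompt-processing payload: detector metrics, event displays, etc."""
            name, t_arrival = task
            time.sleep(1.0)                               # placeholder for the real monitoring work
            return name, time.time() - t_arrival          # report end-to-end latency for this file

        def process_batch(tasks, workers=8):
            # "Scalable CPU": fan payloads out over a pool of worker processes sized to the backlog.
            with ProcessPoolExecutor(max_workers=workers) as pool:
                for name, latency in pool.map(run_dqm_payload, tasks):
                    # The design goal is that this number stays at the scale of a few minutes.
                    print(f"{name}: end-to-end latency {latency:.1f} s")

        if __name__ == "__main__":
            now = time.time()
            process_batch([(f"run005_evt{i:04d}.raw", now) for i in range(16)])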

    Two-proton correlations in p + Pb and S + Pb collisions at the CERN Super Proton Synchrotron

    Two-proton correlations in ³²S + ²⁰⁸Pb collisions at 200 GeV/c per nucleon and p + ²⁰⁸Pb collisions at 450 GeV/c are presented, as measured by the focusing spectrometer of the NA44 experiment at CERN. The technique of intensity interferometry of pairs with low relative momentum has been used. A dependence of the correlation function on the event multiplicity is found. A detailed Monte Carlo calculation has been performed for the purpose of comparing the data with the RQMD model, taking into account effects relating to the acceptance, momentum resolution and track reconstruction in the apparatus, as well as contamination of the proton sample by Λ decays. The centrality of the events studied was estimated using data from a silicon multiplicity detector and a GEANT-based simulation of that detector. We find that the RQMD predictions do not agree with the data. If the strange baryon contribution is estimated from the available experimental data rather than from RQMD, an additional systematic uncertainty margin is introduced, making the discrepancy between Monte Carlo and the data less significant. The difference could be interpreted as an indication that the measured proton source, in S + Pb reactions, is somewhat larger than predicted by RQMD (this is less certain for p + Pb). An alternative interpretation is that dynamical effects in RQMD are different from those in the actual source.
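    For context, the correlation function in intensity interferometry is conventionally constructed as a same-event to mixed-event ratio (a generic form; the exact NA44 variables and normalization may differ):

        \[
          C_2(Q_{\mathrm{inv}}) \;=\; \mathcal{N}\,
          \frac{A(Q_{\mathrm{inv}})}{B(Q_{\mathrm{inv}})},
          \qquad
          Q_{\mathrm{inv}} \;=\; \sqrt{-(p_1 - p_2)_\mu (p_1 - p_2)^\mu},
        \]

    where A is the relative-momentum distribution of proton pairs from the same event, B is the background distribution built by pairing protons from different events, and N is a normalization. The behaviour of C_2 at low Q_inv encodes the space-time extent of the emitting source, which is the quantity being compared with RQMD.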

    Development of noSQL data storage for the ATLAS PanDA Monitoring System

    For several years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, typically exceeding 500k completed jobs/day by June 2011. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. These challenges are being met with an R&D effort aimed at implementing scalable and efficient monitoring data storage based on a noSQL solution (Cassandra). We present our motivations for using this technology, as well as the data design and the techniques used for efficient indexing of the data. We also discuss the hardware requirements as they were determined by testing with actual data and realistic loads.
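    As an illustration of the kind of indexing such a system relies on (a hypothetical schema, not the actual PanDA data design; keyspace, table and column names are invented), a common Cassandra technique is to bucket time-ordered monitoring rows by day, so that the "recent jobs" queries a monitor issues most often touch only a few partitions and come back already sorted:

        from cassandra.cluster import Cluster  # DataStax Python driver

        cluster = Cluster(["127.0.0.1"])
        session = cluster.connect()

        session.execute("""
            CREATE KEYSPACE IF NOT EXISTS panda_mon
            WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
        """)
        session.set_keyspace("panda_mon")

        # Partition key = day bucket; clustering keys = (modification time, job id).
        # One partition per day keeps time-window queries local and time-ordered.
        session.execute("""
            CREATE TABLE IF NOT EXISTS jobs_by_day (
                day      date,
                modtime  timestamp,
                pandaid  bigint,
                site     text,
                status   text,
                PRIMARY KEY ((day), modtime, pandaid)
            ) WITH CLUSTERING ORDER BY (modtime DESC, pandaid ASC)
        """)

        rows = session.execute(
            "SELECT pandaid, site, status FROM jobs_by_day "
            "WHERE day = '2011-06-15' AND modtime > '2011-06-15 12:00:00'"
        )
        for row in rows:
            print(row.pandaid, row.site, row.status)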

    The next generation of ATLAS PanDA Monitoring

    For many years the PanDA Workload Management System has been the basis for distributed production and analysis for the ATLAS experiment at the LHC. Since the start of data taking, PanDA usage has ramped up steadily, with up to 1M completed jobs/day in 2013. The associated monitoring data volume has been rising as well, to levels that present a new set of challenges in the areas of database scalability and monitoring system performance and efficiency. Outside of ATLAS, the PanDA system is also being used in projects such as AMS, LSST and a few others. It is currently undergoing a significant redesign, both of the core server components responsible for workload management, brokerage and data access, and of the monitoring part, which is critically important for executing the workflow efficiently, in a way that is transparent to the user, and for providing an effective set of tools for operational support. The next generation of the PanDA Monitoring System is designed on top of a proven, scalable, industry-standard web framework, Django. This gives us significant versatility and room for customization, which is important to cover the needs of the growing community of PanDA users in a variety of science and technology areas. We describe the design principles of the core Web application, the UI layout of the presentation layer, and the challenges that must be met in order to continue the necessary support of the ATLAS experiment while expanding the scope of applications handled by PanDA.
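    A minimal sketch of what a Django-based monitoring endpoint can look like (hypothetical model and field names, not the actual PanDA monitor code; in a real project the model would live in an application's models.py):

        from django.db import models
        from django.db.models import Count
        from django.http import JsonResponse

        class Job(models.Model):
            """Hypothetical stand-in for a PanDA job record."""
            pandaid = models.BigIntegerField(primary_key=True)
            site = models.CharField(max_length=128)
            status = models.CharField(max_length=32)
            modificationtime = models.DateTimeField(db_index=True)

        def job_summary(request):
            """Per-site job counts for a given status, e.g. /jobs/summary/?status=running."""
            status = request.GET.get("status", "running")
            counts = (Job.objects.filter(status=status)
                                 .values("site")
                                 .annotate(njobs=Count("pandaid"))
                                 .order_by("-njobs"))
            return JsonResponse({"status": status, "sites": list(counts)})

        # urls.py (hypothetical): urlpatterns = [path("jobs/summary/", job_summary)]

    Building on the framework's ORM, URL routing and template layers is what makes it practical to serve both the ATLAS views and the newer communities from the same code base.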