
    The Joint COntrols Project Framework

    The Framework is one of the subprojects of the Joint COntrols Project (JCOP), which is a collaboration between the four LHC experiments and CERN. By sharing development, the collaboration reduces the overall effort required to build and maintain the experiment control systems. As such, the main aim of the Framework is to deliver a common set of software components, tools and guidelines that can be used by the four LHC experiments to build their control systems. Although commercial components are used wherever possible, further added value is obtained by customisation for HEP-specific applications. The supervisory layer of the Framework is based on the SCADA tool PVSS, which was selected after a detailed evaluation. It is integrated with the front-end layer via both OPC (OLE for Process Control), an industrial standard, and the CERN-developed DIM (Distributed Information Management System) protocol. Several components are already in production and are being used by running fixed-target experiments at CERN as well as for the LHC experiment test beams. The paper gives an overview of the key concepts behind the project as well as the state of current development and future plans.
    Comment: Paper from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 4 pages, PDF. PSN THGT00
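
    To make the DIM-style publish/subscribe pattern between the front-end and supervisory layers concrete, here is a minimal, self-contained Python sketch of a named-service registry in which a front-end publishes values and supervisory clients receive callbacks. It is illustrative only: the real DIM library distributes updates over the network with its own C/C++ APIs, and the service name used here is hypothetical.

```python
# DIM-style named-service publish/subscribe, sketched in-process.
# Illustrative only: real DIM distributes updates over the network.

from collections import defaultdict
from typing import Any, Callable

class ServiceRegistry:
    """Maps service names (e.g. the hypothetical 'HV/CH01/VMON') to callbacks."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)
        self._last_value: dict[str, Any] = {}

    def subscribe(self, service: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[service].append(callback)
        if service in self._last_value:        # deliver the current value on connect
            callback(self._last_value[service])

    def publish(self, service: str, value: Any) -> None:
        self._last_value[service] = value      # the front-end updates the service...
        for cb in self._subscribers[service]:  # ...and subscribed clients are notified
            cb(value)

registry = ServiceRegistry()
# The supervisory layer (PVSS in the Framework) subscribes to a monitored value:
registry.subscribe("HV/CH01/VMON", lambda v: print(f"VMON update: {v} V"))
# The front-end layer publishes a new reading:
registry.publish("HV/CH01/VMON", 1498.7)
```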

    Gravity Spy: Integrating Advanced LIGO Detector Characterization, Machine Learning, and Citizen Science

    (abridged for arXiv) With the first direct detection of gravitational waves, the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) has initiated a new field of astronomy by providing an alternate means of sensing the universe. The extreme sensitivity required to make such detections is achieved through exquisite isolation of all sensitive components of LIGO from non-gravitational-wave disturbances. Nonetheless, LIGO is still susceptible to a variety of instrumental and environmental sources of noise that contaminate the data. Of particular concern are noise features known as glitches, which are transient and non-Gaussian in nature and occur at a high enough rate that accidental coincidence between the two LIGO detectors is non-negligible. In this paper we describe an innovative project that combines crowdsourcing with machine learning to aid in the challenging task of categorizing all of the glitches recorded by the LIGO detectors. Through the Zooniverse platform, we engage and recruit volunteers from the public to categorize images of glitches into pre-identified morphological classes and to discover new classes that appear as the detectors evolve. In addition, machine learning algorithms are used to categorize images after being trained on human-classified examples of the morphological classes. Leveraging the strengths of both classification methods, we create a combined method with the aim of improving the efficiency and accuracy of each individual classifier. The resulting classification and characterization should help LIGO scientists to identify causes of glitches and subsequently eliminate them from the data or the detector entirely, thereby improving the rate and accuracy of gravitational-wave observations. We demonstrate these methods using a small subset of data from LIGO's first observing run.
    Comment: 27 pages, 8 figures, 1 table
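
    The abstract does not specify how the two classifiers are combined, but a simple multiplicative fusion illustrates the idea. The Python sketch below treats volunteer vote fractions as an independent likelihood alongside a CNN's softmax posterior; the classes and numbers are made up, and this is not necessarily the scheme Gravity Spy actually uses.

```python
# Sketch of one way to fuse an ML posterior with volunteer votes for a single
# glitch image (illustrative; not necessarily Gravity Spy's actual scheme).

import numpy as np

classes = ["Blip", "Whistle", "Scattered_Light"]     # hypothetical class names

ml_posterior = np.array([0.70, 0.20, 0.10])  # CNN softmax output for one image
volunteer_votes = np.array([8, 1, 1])        # raw label counts from volunteers

# Treat vote fractions as an independent likelihood and combine multiplicatively,
# then renormalize (a naive-Bayes-style fusion under an independence assumption).
vote_fraction = volunteer_votes / volunteer_votes.sum()
combined = ml_posterior * vote_fraction
combined /= combined.sum()

for name, p in zip(classes, combined):
    print(f"{name}: {p:.3f}")   # the combined posterior sharpens the top class
```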

    Prototyping Virtual Data Technologies in ATLAS Data Challenge 1 Production

    For efficient execution of large production tasks distributed worldwide, it is essential to provide shared production management tools composed of integratable and interoperable services. To enhance the ATLAS DC1 production toolkit, we introduced and tested a Virtual Data services component. For each major data transformation step identified in the ATLAS data processing pipeline (event generation, detector simulation, background pile-up and digitization, etc.), the Virtual Data Cookbook (VDC) catalogue encapsulates the specific data transformation knowledge and the validated parameter settings that must be provided before the data transformation is invoked. To provide local-remote transparency during DC1 production, the VDC database server delivered, in a controlled way, both the validated production parameters and the templated production recipes for thousands of event generation and detector simulation jobs around the world, simplifying the production management solutions.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003; 5 pages, 3 figures, PDF. PSN TUCP01
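
    The pattern of templated recipes plus validated defaults can be sketched in a few lines of Python. The step name, parameter names, and executable below are hypothetical placeholders, not the actual VDC schema; the point is that a remote site supplies only the job-specific fields.

```python
# Sketch of a virtual-data "cookbook" lookup: each transformation step stores
# a validated parameter set and a command template, and a job is rendered by
# filling in only the job-specific fields. Names here are hypothetical.

COOKBOOK = {
    "detector_simulation": {
        "template": "atlsim -geometry {geometry} -events {n_events} -seed {seed}",
        "validated_params": {"geometry": "DC1-layout", "n_events": 1000},
    },
}

def render_job(step: str, **job_params: object) -> str:
    """Merge validated defaults with job-specific overrides, render the command."""
    recipe = COOKBOOK[step]
    params = {**recipe["validated_params"], **job_params}
    return recipe["template"].format(**params)

# A remote production site needs to supply only the job-specific seed:
print(render_job("detector_simulation", seed=42))
# -> atlsim -geometry DC1-layout -events 1000 -seed 42
```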

    Off-line computing for experimental high-energy physics

    The needs of experimental high-energy physics for large-scale computing and data handling are explained in terms of the complexity of individual collisions and the need for high statistics to study quantum mechanical processes. The prevalence of university-dominated collaborations adds a requirement for high-performance wide-area networks. The data handling and computational needs of the different types of large experiment, now running or under construction, are evaluated. Software for experimental high-energy physics is reviewed briefly, with particular attention to the success of packages written within the discipline. It is argued that workstations and graphics are important in ensuring that analysis codes are correct, and the worldwide networks which support the involvement of remote physicists are described. Computing and data handling are reviewed, showing how workstations and RISC processors are rising in importance but have not supplanted traditional mainframe processing. Examples of computing systems constructed within high-energy physics are examined and evaluated.
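
    The scale of the "high statistics" requirement follows from Poisson counting: the relative statistical uncertainty of an event count N falls only as 1/sqrt(N), so the events needed grow quadratically with the desired precision. A short Python sketch under that standard assumption, with illustrative target precisions:

```python
# Why "high statistics": for a simple counting measurement the relative
# statistical uncertainty scales as 1/sqrt(N), so the events required grow
# quadratically with the desired precision. Targets below are illustrative.

import math

for precision in (0.10, 0.01, 0.001):          # 10%, 1%, 0.1% relative error
    n_events = math.ceil(1.0 / precision**2)   # N such that 1/sqrt(N) <= precision
    print(f"{precision:.1%} precision needs ~{n_events:,} events")
```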

    First results from the LUCID-Timepix spacecraft payload onboard the TechDemoSat-1 satellite in Low Earth Orbit

    The Langton Ultimate Cosmic ray Intensity Detector (LUCID) is a payload onboard the satellite TechDemoSat-1, used to study the radiation environment in Low Earth Orbit (~635 km). LUCID operated from 2014 to 2017, collecting over 2.1 million frames of radiation data from its five Timepix detectors on board. LUCID is one of the first uses of the Timepix detector technology in open space, and the data provide useful insight into the performance of this technology in new environments. It provides high-sensitivity imaging measurements of the mixed radiation field, with a wide dynamic range in terms of spectral response, particle type and direction. The data have been analysed using computing resources provided by GridPP, with a new machine learning algorithm that uses the TensorFlow framework. This algorithm provides a new approach to processing Medipix data: using a training set of human-labelled tracks, it achieves greater particle classification accuracy than other algorithms. For managing the LUCID data, we have developed an online platform called Timepix Analysis Platform at School (TAPAS), which provides a swift and simple way for users to analyse data that they collect using Timepix detectors, from both LUCID and other experiments. We also present some possible future uses of the LUCID data and Medipix detectors in space.
    Comment: Accepted for publication in Advances in Space Research
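
    A track classifier of this kind can be outlined with standard TensorFlow/Keras building blocks. The sketch below is a minimal convolutional network over Timepix frames (which are 256x256 pixels); the number of classes, class meanings, and architecture are assumptions for illustration, not the actual LUCID algorithm.

```python
# Minimal TensorFlow/Keras CNN for classifying Timepix track images into
# particle classes (illustrative; not the actual LUCID architecture).

import tensorflow as tf

NUM_CLASSES = 4              # hypothetical, e.g. alpha / beta / gamma / muon tracks
IMAGE_SHAPE = (256, 256, 1)  # Timepix frames are 256x256 pixel counts

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMAGE_SHAPE),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use frames paired with human-labelled track classes, e.g.:
# model.fit(train_frames, human_labels, epochs=10, validation_split=0.1)
```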

    Technology for the Future: In-Space Technology Experiments Program, part 2

    The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiments Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and to review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of the critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is part two of two and contains the critical technology presentations for the eight theme elements and a summary listing of critical space technology needs for each theme.