46 research outputs found

    Key points detection algorithm for noised data

    This work introduces a new algorithm for feature detection in noisy data, independent of the dimension of the given data. The algorithm is based on the detection and isolation of large features, and its operability is demonstrated in this thesis through the development of two techniques built on it. The first uses the algorithm for feature detection on images, with image stitching as the metric for comparison against existing techniques. It demonstrates excellent performance on the test datasets, registering a success rate almost three times higher than existing techniques, while being fast and presenting a unique characteristic in the set of points it detects for homography: far fewer in number but higher in quality than those of other techniques. The second technique demonstrates the performance achievable by the algorithm for feature detection on time series; it was developed in the framework of the SmartLINAC project at CERN. It showed excellent performance, consistently detecting all anomalous regions and labelling them correctly, where existing techniques produced large numbers of false positive and false negative labels due to the noise present in the data. The algorithm's core concept is to ignore ambient noise in the data through a series of pre-processing steps involving normalization, smoothing and thresholding, based on attributes of the noise's statistical distribution. Large areas are then isolated into blocks whose characteristics can be used for comparison. The two techniques showed excellent performance in their range of application, demonstrating that the algorithm proposed in the thesis is relevant and performant in its domain of application.
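
    A minimal one-dimensional sketch of the pre-processing idea described in the abstract: normalize, smooth, threshold against a robust estimate of the noise scale, then group the surviving samples into contiguous blocks. The function and parameter names are illustrative assumptions, not taken from the thesis.

        # Illustrative sketch only; not the thesis implementation.
        import numpy as np

        def detect_blocks(signal, smooth_window=11, k_sigma=4.0):
            x = np.asarray(signal, dtype=float)
            x = (x - x.mean()) / (x.std() + 1e-12)              # normalization
            kernel = np.ones(smooth_window) / smooth_window
            s = np.convolve(x, kernel, mode="same")             # smoothing
            dev = np.abs(s - np.median(s))                      # deviation from the quiet baseline
            noise_sigma = np.median(dev) / 0.6745               # robust (MAD-based) noise-scale estimate
            mask = dev > k_sigma * noise_sigma                  # thresholding against the noise distribution
            # isolate contiguous large regions as (start, end) blocks
            blocks, start = [], None
            for i, m in enumerate(mask):
                if m and start is None:
                    start = i
                elif not m and start is not None:
                    blocks.append((start, i))
                    start = None
            if start is not None:
                blocks.append((start, len(mask)))
            return blocks

        # A noisy baseline with one broad bump typically yields a single block covering the bump.
        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 500)
        y = np.exp(-((t - 0.5) ** 2) / 0.002) + 0.1 * rng.standard_normal(t.size)
        print(detect_blocks(y))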

    CERN openlab Technical Workshop


    Innovative Methodology Dedicated to the CERN LHC Cryogenic Valves Based on Modern Algorithm for Fault Detection and Predictive Diagnostics

    The European Organization for Nuclear Research (CERN) cryogenic infrastructure comprises many pieces of equipment, among them the cryogenic valves widely used in the Large Hadron Collider (LHC) cryogenic facility. At present, diagnostic solutions that can be integrated into the process control systems and are capable of identifying leak failures in valve bellows are not available. The authors' goal has been the development of a system that allows the detection of helium-leaking valves during normal operation using available data extracted from the control system. The design constraints have driven the development towards a solution integrated into the monitoring systems in use, requiring no manual intervention. The methodology presented in this article is based on the extraction of distinctive features (analyzing the data in the time and frequency domains), which are then exploited in a machine learning phase. The aim is to identify a list of candidate valves with a high probability of helium leakage. The proposed methodology, which is at a very early stage, aims, through the evolution of the data set and an iterative approach, toward targeted maintenance of the cryogenic valves.
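
    A hedged sketch of the general approach described above: extract simple time- and frequency-domain features from a valve signal and feed them to a classifier that flags likely leaks. The feature choices, the classifier, the toy labels and all names are assumptions for illustration, not the authors' method.

        # Illustrative sketch only; not the article's implementation.
        import numpy as np
        from scipy import signal as sp_signal
        from sklearn.ensemble import RandomForestClassifier

        def extract_features(x, fs=1.0):
            x = np.asarray(x, dtype=float)
            freqs, psd = sp_signal.welch(x, fs=fs, nperseg=min(256, len(x)))
            return np.array([
                x.mean(), x.std(),                   # time-domain statistics
                np.abs(np.diff(x)).mean(),           # mean absolute increment
                freqs[np.argmax(psd)],               # dominant frequency
                psd.sum(),                           # total spectral power
            ])

        # Hypothetical training data: raw valve signals with leak / no-leak labels.
        rng = np.random.default_rng(1)
        signals = [rng.standard_normal(1024) * (1.0 + 0.5 * (i % 2)) for i in range(40)]
        labels = np.array([i % 2 for i in range(40)])        # 1 = leaking (toy labels)

        X = np.vstack([extract_features(s) for s in signals])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
        print(clf.predict(X[:4]))                            # in practice: ranked list of candidate valves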

    Extended anomaly detection and breakdown prediction in LINAC 4’s RF power source output

    Linear accelerators (LINACs) are complex machines that can face significant periods of downtime due to anomalies and the subsequent failure of one or more components. The need for reliable LINAC operation is critical to the spread of this method in the medical environment. At CERN, where LINACs are used for fundamental research, similar problems are encountered, such as the appearance of jitter in plasma sources (2 MHz RF generators), which can have a significant effect on subsequent beam quality in the accelerator. The SmartLINAC project was created to increase LINAC reliability through early detection and prediction of anomalies in their operation, down to the component level. This article shows how the anomalies were first discovered and examines the nature of the data in depth. The research adds new elements to the anomaly detection approaches used to record jitter periods on 2 MHz RF generators.
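
    An illustrative sketch (not the SmartLINAC implementation) of one simple way to flag jitter periods in a generator signal: track a rolling estimate of short-term variability and mark windows that exceed a baseline-derived threshold. All names and parameters are assumptions.

        # Illustrative sketch only.
        import numpy as np

        def flag_jitter(samples, window=100, k=4.0):
            x = np.asarray(samples, dtype=float)
            diffs = np.abs(np.diff(x))                           # sample-to-sample variability
            kernel = np.ones(window) / window
            rolling = np.convolve(diffs, kernel, mode="same")    # rolling mean absolute change
            baseline = np.median(rolling)                        # typical quiet-period level
            return rolling > k * baseline                        # boolean mask of jittery regions

        rng = np.random.default_rng(2)
        wave = np.sin(np.linspace(0, 200 * np.pi, 20000))
        wave[8000:9000] += 0.5 * rng.standard_normal(1000)       # injected jitter burst
        mask = flag_jitter(wave)
        print(mask[8000:9000].mean() > mask[:8000].mean())       # True: burst region is flagged more often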

    Pandemic Drugs at Pandemic Speed: Infrastructure for Accelerating COVID-19 Drug Discovery with Hybrid Machine Learning- and Physics-based Simulations on High Performance Computers

    The race to meet the challenges of the global pandemic has served as a reminder that the existing drug discovery process is expensive, inefficient and slow. There is a major bottleneck in screening the vast number of potential small molecules to shortlist lead compounds for antiviral drug development. New opportunities to accelerate drug discovery lie at the interface between machine learning methods, in this case developed for linear accelerators, and physics-based methods. The two in silico methods each have their own advantages and limitations which, interestingly, complement each other. Here, we present an innovative infrastructural development that combines both approaches to accelerate drug discovery. The scale of the potential resulting workflow is such that it depends on supercomputing to achieve extremely high throughput. We have demonstrated the viability of this workflow for the study of inhibitors for four COVID-19 target proteins and our ability to perform, on a variety of supercomputers, the large-scale calculations required to identify lead antiviral compounds through repurposing.
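
    A conceptual sketch of the kind of hybrid screening loop described above: a fast machine learning surrogate ranks a large compound library, and only a small top fraction is passed to an expensive physics-based scorer. Both scorers here are placeholders for illustration; a real workflow would use trained models and docking or free-energy codes.

        # Conceptual sketch only; both scoring functions are stand-ins.
        import numpy as np

        def ml_surrogate_score(features):
            # stand-in for a trained model: cheap, approximate affinity prediction
            return features @ np.array([0.6, -0.3, 0.1])

        def physics_based_score(candidate_features):
            # stand-in for an expensive simulation (docking, free-energy calculation)
            return float(np.sum(candidate_features ** 2))

        rng = np.random.default_rng(3)
        library = rng.standard_normal((10_000, 3))               # toy compound descriptors

        ml_scores = ml_surrogate_score(library)
        top = np.argsort(ml_scores)[-100:]                       # shortlist ~1% for refinement
        refined = sorted(top, key=lambda i: physics_based_score(library[i]), reverse=True)
        print("lead candidates:", refined[:5])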

    DUNE Offline Computing Conceptual Design Report

    This document describes the Offline Software and Computing for the Deep Underground Neutrino Experiment (DUNE), in particular the conceptual design of the offline computing needed to accomplish its physics goals. Our emphasis in this document is the development of the computing infrastructure needed to acquire, catalog, reconstruct, simulate and analyze the data from the DUNE experiment and its prototypes. In this effort, we concentrate on developing the tools and systems that facilitate the development and deployment of advanced algorithms. Rather than prescribing particular algorithms, our goal is to provide resources that are flexible and accessible enough to support creative software solutions as HEP computing evolves and to provide computing that achieves the physics goals of the DUNE experiment.

    Reconstruction of interactions in the ProtoDUNE-SP detector with Pandora

    The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1 GeV/c charged pions and protons are correctly reconstructed and identified with efficiencies of 86.1 ± 0.6% and 84.1 ± 0.6%, respectively. The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.
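
    A small worked example, not taken from the paper: a reconstruction or identification efficiency is a pass fraction, and its binomial uncertainty is sqrt(eff * (1 - eff) / N). With a few thousand particles, an efficiency near 86% carries an uncertainty of roughly 0.6%, consistent in scale with the values quoted above. The counts used here are hypothetical.

        # Worked example with hypothetical counts.
        import math

        def efficiency(n_pass, n_total):
            eff = n_pass / n_total
            err = math.sqrt(eff * (1.0 - eff) / n_total)     # binomial uncertainty
            return eff, err

        eff, err = efficiency(2583, 3000)                    # hypothetical counts
        print(f"{100 * eff:.1f} +/- {100 * err:.1f} %")      # -> 86.1 +/- 0.6 %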
