27 research outputs found

    CERN openlab Technical Workshop


    Key points detection algorithm for noised data

    This work introduces a new algorithm for feature detection in noisy data, independent of the dimensionality of the input. The algorithm is based on the detection and isolation of large features, and its operability is demonstrated in this thesis through two techniques built on it. The first applies the algorithm to feature detection in images, using image stitching as the metric for comparison with existing techniques. It performs excellently on the test datasets, registering a success rate almost three times higher than existing techniques while remaining fast, and it has a distinctive property in the points it detects for homography estimation: far fewer in number but higher in quality than those of other techniques. The second technique demonstrates the algorithm's performance for feature detection on time series and was developed in the framework of the SmartLINAC project at CERN. It consistently detected all anomalous regions and labelled them correctly, where existing techniques produced large numbers of false positive and false negative labels due to the noise in the data. The algorithm's core idea is to suppress ambient noise through a series of pre-processing steps involving normalization, smoothing and thresholding, exploiting the statistical distribution of the noise. Large regions are then isolated as blocks whose characteristics can be used for comparison. Both techniques performed excellently in their respective domains, showing that the proposed algorithm is relevant and effective in its field of application.
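
    A minimal sketch of the pre-processing and block-isolation chain the abstract describes, for the one-dimensional (time series) case; the window size, threshold and function names are illustrative assumptions, not the thesis implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d, label

def detect_large_features(signal, smooth_window=25, n_sigma=3.0):
    # Normalize to zero mean and unit variance.
    x = (signal - np.mean(signal)) / np.std(signal)
    # Smooth to suppress ambient noise.
    x = uniform_filter1d(x, size=smooth_window)
    # Robust noise scale from the median absolute deviation (assumed choice).
    noise_sigma = np.median(np.abs(x - np.median(x))) / 0.6745
    # Threshold: keep only samples far outside the noise distribution.
    mask = np.abs(x) > n_sigma * noise_sigma
    # Isolate contiguous blocks; their extents and amplitudes become the features.
    blocks, n_blocks = label(mask)
    return [np.flatnonzero(blocks == i) for i in range(1, n_blocks + 1)]
```

    Each returned block can then be summarized (position, width, peak height) and compared across datasets, which is the comparison step the abstract refers to.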

    Innovative Methodology Dedicated to the CERN LHC Cryogenic Valves Based on Modern Algorithm for Fault Detection and Predictive Diagnostics

    The European Organization for Nuclear Research (CERN) cryogenic infrastructure comprises many pieces of equipment, among them the cryogenic valves widely used in the Large Hadron Collider (LHC) cryogenic facility. At present, no diagnostic solution that can be integrated into the process control systems is available to identify leak failures in valve bellows. The authors' goal has been the development of a system that detects helium-leaking valves during normal operation using data already available from the control system. The design constraints have driven the development towards a solution integrated into the monitoring systems in use, requiring no manual intervention. The methodology presented in this article is based on the extraction of distinctive features (analysing the data in the time and frequency domains), which are then exploited in a machine learning phase. The aim is to identify a list of candidate valves with a high probability of helium leakage. The proposed methodology, still at a very early stage, aims through the evolution of the dataset and an iterative approach towards targeted maintenance of the cryogenic valves.
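
    The feature-extraction-plus-classification flow can be sketched as below; the specific features, signal handling and classifier are assumptions for illustration, not the paper's exact choices:

```python
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

def valve_features(signal, fs=1.0):
    # Time-domain summaries plus a coarse frequency-domain description.
    freqs, psd = welch(signal, fs=fs)
    return [np.mean(signal), np.std(signal), np.ptp(signal),
            freqs[np.argmax(psd)], np.sum(psd)]

# Train on valves with known outcomes, then rank the rest by leak probability.
clf = RandomForestClassifier(n_estimators=200)
# clf.fit([valve_features(s) for s in labelled_signals], labels)
# leak_prob = clf.predict_proba([valve_features(s) for s in candidates])[:, 1]
```

    Ranking by predicted probability rather than by hard classification matches the stated aim of producing a shortlist of candidate valves for inspection.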

    Extended anomaly detection and breakdown prediction in LINAC 4’s RF power source output

    Linear accelerators (LINACs) are complex machines that can face significant periods of downtime due to anomalies and the subsequent failure of one or more components. Reliable LINAC operation is critical to the spread of this technology in the medical environment. At CERN, where LINACs are used for fundamental research, similar problems are encountered, such as the appearance of jitter in plasma sources (2 MHz RF generators), which can significantly affect downstream beam quality in the accelerator. The SmartLINAC project was created to increase LINAC reliability through early detection and prediction of anomalies in operations, down to the component level. This article shows how the anomalies were first discovered and examines the nature of the underlying data in depth. The research adds new elements to the anomaly detection approaches used to record jitter periods on the 2 MHz RF generators.
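
    One simple way to flag jitter periods in a generator's output trace is a rolling-spread test, sketched below; the window length and threshold are illustrative assumptions, not the article's method:

```python
import numpy as np
import pandas as pd

def flag_jitter(trace, window=200, n_sigma=4.0):
    # Rolling standard deviation of the RF output signal.
    rolling_std = pd.Series(trace).rolling(window, center=True).std()
    # Use the median rolling spread as the quiet-operation baseline.
    baseline = rolling_std.median()
    # True where the local spread far exceeds the baseline, i.e. jitter.
    return (rolling_std > n_sigma * baseline).to_numpy()
```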

    Pandemic Drugs at Pandemic Speed: Infrastructure for Accelerating COVID-19 Drug Discovery with Hybrid Machine Learning- and Physics-based Simulations on High Performance Computers

    The race to meet the challenges of the global pandemic has served as a reminder that the existing drug discovery process is expensive, inefficient and slow. A major bottleneck is screening the vast number of potential small molecules to shortlist lead compounds for antiviral drug development. New opportunities to accelerate drug discovery lie at the interface between machine learning methods, in this case developed for linear accelerators, and physics-based methods. The two in silico approaches each have their own advantages and limitations which, interestingly, complement each other. Here, we present an innovative infrastructural development that combines both approaches to accelerate drug discovery. The scale of the potential resulting workflow is such that it depends on supercomputing to achieve extremely high throughput. We have demonstrated the viability of this workflow for the study of inhibitors of four COVID-19 target proteins and our ability to perform the required large-scale calculations to identify lead antiviral compounds through repurposing on a variety of supercomputers.
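
    In outline, such a hybrid workflow reduces to a two-stage funnel: a cheap ML surrogate scores the whole library, and the expensive physics-based calculation is reserved for the survivors. A schematic sketch, with `ml_score` and `binding_free_energy` standing in for the actual models and simulation codes:

```python
def hybrid_screen(library, ml_score, binding_free_energy, keep_fraction=0.01):
    # Stage 1: fast ML scoring of every compound (high throughput).
    shortlist = sorted(library, key=ml_score, reverse=True)
    shortlist = shortlist[: max(1, int(len(shortlist) * keep_fraction))]
    # Stage 2: costly physics-based free-energy estimates on the shortlist only.
    return sorted(shortlist, key=binding_free_energy)  # most negative binds strongest
```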

    Reconstruction of interactions in the ProtoDUNE-SP detector with Pandora

    The Pandora Software Development Kit and algorithm libraries provide pattern-recognition logic essential to the reconstruction of particle interactions in liquid argon time projection chamber detectors. Pandora is the primary event reconstruction software used at ProtoDUNE-SP, a prototype for the Deep Underground Neutrino Experiment far detector. ProtoDUNE-SP, located at CERN, is exposed to a charged-particle test beam. This paper gives an overview of the Pandora reconstruction algorithms and how they have been tailored for use at ProtoDUNE-SP. In complex events with numerous cosmic-ray and beam background particles, the simulated reconstruction and identification efficiency for triggered test-beam particles is above 80% for the majority of particle type and beam momentum combinations. Specifically, simulated 1 GeV/c charged pions and protons are correctly reconstructed and identified with efficiencies of 86.1 ± 0.6% and 84.1 ± 0.6%, respectively. The efficiencies measured for test-beam data are shown to be within 5% of those predicted by the simulation.
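
    The quoted efficiencies are consistent with a simple binomial treatment; the counts below are invented purely to reproduce the stated precision:

```python
import math

def efficiency(n_selected, n_total):
    eff = n_selected / n_total
    # Simple binomial uncertainty on the efficiency.
    err = math.sqrt(eff * (1 - eff) / n_total)
    return eff, err

# Hypothetical counts: ~3300 triggered pions give the quoted 0.6% precision.
eff, err = efficiency(2841, 3300)
print(f"{100 * eff:.1f} +/- {100 * err:.1f} %")  # -> 86.1 +/- 0.6 %
```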

    Separation of track- and shower-like energy deposits in ProtoDUNE-SP using a convolutional neural network

    Liquid argon time projection chamber detector technology provides high spatial and calorimetric resolutions on the charged particles traversing liquid argon. As a result, the technology has been used in a number of recent neutrino experiments, and is the technology of choice for the Deep Underground Neutrino Experiment (DUNE). In order to perform high precision measurements of neutrinos in the detector, final state particles need to be effectively identified, and their energy accurately reconstructed. This article proposes an algorithm based on a convolutional neural network to perform the classification of energy deposits and reconstructed particles as track-like or arising from electromagnetic cascades. Results from testing the algorithm on experimental data from ProtoDUNE-SP, a prototype of the DUNE far detector, are presented. The network identifies track- and shower-like particles, as well as Michel electrons, with high efficiency. The performance of the algorithm is consistent between experimental data and simulation.
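
    A minimal patch-based CNN of the kind the abstract describes might look like the following; the architecture, patch size and class arrangement are assumptions for illustration, not the paper's network:

```python
import torch
import torch.nn as nn

class HitClassifier(nn.Module):
    """Classifies a small image patch around an energy deposit as
    track-like, shower-like or Michel electron (three classes)."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 8 * 8, n_classes)  # assumes 32x32 input patches

    def forward(self, patch):          # patch: (batch, 1, 32, 32)
        return self.head(self.features(patch).flatten(1))

# logits = HitClassifier()(torch.zeros(1, 1, 32, 32))  # one per-class score each
```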

    Highly-parallelized simulation of a pixelated LArTPC on a GPU

    The rapid development of general-purpose computing on graphics processing units (GPGPU) is allowing the implementation of highly-parallelized Monte Carlo simulation chains for particle physics experiments. This technique is particularly suitable for the simulation of a pixelated charge readout for time projection chambers, given the large number of channels that this technology employs. Here we present the first implementation of a full microphysical simulator of a liquid argon time projection chamber (LArTPC) equipped with light readout and pixelated charge readout, developed for the DUNE Near Detector. The software is implemented with an end-to-end set of GPU-optimized algorithms. The algorithms have been written in Python and translated into CUDA kernels using Numba, a just-in-time compiler for a subset of Python and NumPy instructions. The GPU implementation achieves a speed-up of four orders of magnitude compared with the equivalent CPU version. The simulation of the current induced on 10³ pixels takes around 1 ms on the GPU, compared with approximately 10 s on the CPU. The results of the simulation are compared against data from a pixel-readout LArTPC prototype.
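
    The Python-to-CUDA pattern the abstract describes looks, in miniature, like the sketch below; the kernel itself is a toy stand-in, not one of the simulator's actual algorithms:

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_charge(charge, gain, out):
    # One CUDA thread per readout channel, as in a pixelated detector.
    i = cuda.grid(1)
    if i < charge.size:
        out[i] = charge[i] * gain

charge = np.random.rand(1000).astype(np.float32)   # ~10^3 pixel channels
out = np.zeros_like(charge)
threads = 256
blocks = (charge.size + threads - 1) // threads
scale_charge[blocks, threads](charge, np.float32(2.0), out)  # Numba handles transfers
```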