523 research outputs found

    Knowledge Distillation for Anomaly Detection

    Unsupervised deep learning techniques are widely used to identify anomalous behaviour. The performance of such methods is a product of the amount of training data and the model size. However, model size is often a limiting factor for deployment on resource-constrained devices. We present a novel procedure based on knowledge distillation for compressing an unsupervised anomaly detection model into a supervised, deployable one, and we suggest a set of techniques to improve the detection sensitivity. The compressed models perform comparably to their larger counterparts while significantly reducing the size and memory footprint.
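
    A minimal sketch of the compression scheme described above (not the paper's actual architecture): a large unsupervised autoencoder acts as the teacher, with its reconstruction error serving as the anomaly score, and a much smaller supervised network is trained to regress that score. The layer sizes, optimiser settings, and random data below are placeholder assumptions.

    import torch
    import torch.nn as nn

    class AutoEncoder(nn.Module):
        """Teacher: large unsupervised model; anomaly score = reconstruction error."""
        def __init__(self, dim=64, hidden=512, latent=8):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, latent))
            self.dec = nn.Sequential(nn.Linear(latent, hidden), nn.ReLU(),
                                     nn.Linear(hidden, dim))
        def forward(self, x):
            return self.dec(self.enc(x))

    def teacher_score(teacher, x):
        # anomaly score = per-sample reconstruction error
        with torch.no_grad():
            return ((teacher(x) - x) ** 2).mean(dim=1)

    teacher = AutoEncoder()                     # assume already trained, unsupervised
    student = nn.Sequential(nn.Linear(64, 32),  # student: far fewer parameters
                            nn.ReLU(), nn.Linear(32, 1))

    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    for _ in range(100):                        # distillation loop on unlabeled data
        x = torch.randn(256, 64)                # stand-in for a real data batch
        target = teacher_score(teacher, x)      # soft labels from the teacher
        loss = nn.functional.mse_loss(student(x).squeeze(1), target)
        opt.zero_grad(); loss.backward(); opt.step()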

    A machine-learning pipeline for real-time detection of gravitational waves from compact binary coalescences

    The promise of multi-messenger astronomy relies on the rapid detection of gravitational waves at very low latencies ($\mathcal{O}(1\,\mathrm{s})$) in order to maximize the amount of time available for follow-up observations. In recent years, neural networks have demonstrated robust non-linear modeling capabilities and millisecond-scale inference at a comparatively small computational footprint, making them an attractive family of algorithms in this context. However, integrating these algorithms into the gravitational-wave astrophysics research ecosystem has proven non-trivial. Here, we present the first fully machine-learning-based pipeline for the detection of gravitational waves from compact binary coalescences (CBCs) running at low latency. We demonstrate that this pipeline has a fraction of the latency of traditional matched-filtering search pipelines while achieving state-of-the-art sensitivity to higher-mass stellar binary black holes.
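
    For concreteness, a minimal sketch of the core operation such a pipeline performs (not the authors' actual architecture): a small 1D convolutional network maps short segments of two-detector strain to a single detection statistic, which is what makes millisecond-scale inference feasible. The sample rate, segment length, and layer sizes are illustrative assumptions.

    import torch
    import torch.nn as nn

    class CBCDetector(nn.Module):
        def __init__(self, channels=2):          # e.g. two detectors, 1 s at 2048 Hz
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(channels, 32, kernel_size=16, stride=4), nn.ReLU(),
                nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, 1),                # logit: CBC signal present or not
            )
        def forward(self, strain):
            return self.net(strain)

    model = CBCDetector().eval()
    batch = torch.randn(8, 2, 2048)              # stand-in whitened strain segments
    with torch.no_grad():
        stats = model(batch)                     # one detection statistic per segment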

    Applications and Techniques for Fast Machine Learning in Science

    In this community review report, we discuss applications and techniques for fast machine learning (ML) in science: the concept of integrating powerful ML methods into the real-time experimental data-processing loop to accelerate scientific discovery. The material for the report builds on two workshops held by the Fast ML for Science community and covers three main areas: applications for fast ML across a number of scientific domains; techniques for training and implementing performant and resource-efficient ML algorithms; and computing architectures, platforms, and technologies for deploying these algorithms. We also present overlapping challenges across the multiple scientific domains where common solutions can be found. This community report is intended to give plenty of examples of, and inspiration for, scientific discovery through integrated and accelerated ML solutions. This is followed by a high-level overview and organization of technical advances, including an abundance of pointers to source material that can enable these breakthroughs.
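
    One concrete instance of the resource-efficiency techniques such a report covers is post-training dynamic quantization, which converts a trained network's weights from 32-bit floats to 8-bit integers to cut memory use and CPU inference cost. The model below is a generic stand-in, not an example taken from the report.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8    # int8 weights for Linear layers
    )
    x = torch.randn(1, 128)
    err = (model(x) - quantized(x)).abs().max()
    print(f"max output deviation after quantization: {err:.4f}")  # small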

    LHCb upgrade software and computing : technical design report

    This document reports on the research and development activities carried out in the software and computing domains in view of the upgrade of the LHCb experiment. The implementation of a full software trigger implies major changes in the core software framework, in the event data model, and in the reconstruction algorithms. The increase in data volumes for both real and simulated datasets requires a corresponding scaling of the distributed computing infrastructure. An implementation plan for both domains is presented, together with a risk assessment analysis.

    Physics case for an LHCb Upgrade II - Opportunities in flavour physics, and beyond, in the HL-LHC era

    The LHCb Upgrade II will fully exploit the flavour-physics opportunities of the HL-LHC and study additional physics topics that take advantage of the forward acceptance of the LHCb spectrometer. The LHCb Upgrade I will begin operation in 2020. Consolidation will occur, and modest enhancements of the Upgrade I detector will be installed, in Long Shutdown 3 of the LHC (2025); these are discussed here. The main Upgrade II detector will be installed in Long Shutdown 4 of the LHC (2030) and will build on the strengths of the current LHCb experiment and the Upgrade I. It will operate at a luminosity up to $2\times10^{34}~\mathrm{cm}^{-2}\mathrm{s}^{-1}$, ten times that of the Upgrade I detector. New detector components will improve the intrinsic performance of the experiment in certain key areas. An Expression of Interest proposing Upgrade II was submitted in February 2017; the physics case for the Upgrade II is presented here in more depth. CP-violating phases will be measured with precisions unattainable at any other envisaged facility. The experiment will probe $b \to s\ell^+\ell^-$ and $b \to d\ell^+\ell^-$ transitions in both muon and electron decays, in modes not accessible at Upgrade I. Minimal flavour violation will be tested with a precision measurement of the ratio $\mathcal{B}(B^0\to\mu^+\mu^-)/\mathcal{B}(B^0_s\to\mu^+\mu^-)$. Probing charm CP violation at the $10^{-5}$ level may result in its long-sought discovery. Major advances in hadron spectroscopy will be possible, providing powerful probes of low-energy QCD. Upgrade II will potentially have the highest sensitivity of all the LHC experiments to the Higgs boson's couplings to charm quarks. Generically, the new-physics mass scale probed, for fixed couplings, will almost double compared with the pre-HL-LHC era; this extended reach for flavour physics is similar to that which would be achieved by the HE-LHC proposal for the energy frontier.
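
    The minimal-flavour-violation test mentioned above rests on the fact that in the Standard Model this ratio of branching fractions is fixed by independently known quantities. Schematically (with phase-space factors, which are close to unity, omitted):

    \frac{\mathcal{B}(B^0\to\mu^+\mu^-)}{\mathcal{B}(B^0_s\to\mu^+\mu^-)}
      \simeq
      \frac{\tau_{B^0}}{\tau_{B^0_s}}\,
      \frac{m_{B^0}}{m_{B^0_s}}\,
      \frac{f_{B^0}^2}{f_{B^0_s}^2}\,
      \left|\frac{V_{td}}{V_{ts}}\right|^2

    A measured deviation from this prediction would therefore point directly to new sources of flavour violation.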

    Measurement of the $B^0_s\to\mu^+\mu^-$ Branching Fraction and Effective Lifetime and Search for $B^0\to\mu^+\mu^-$ Decays

    A search for the rare decays $B^0_s\to\mu^+\mu^-$ and $B^0\to\mu^+\mu^-$ is performed at the LHCb experiment using data collected in $pp$ collisions corresponding to a total integrated luminosity of $4.4~\mathrm{fb}^{-1}$. An excess of $B^0_s\to\mu^+\mu^-$ decays is observed with a significance of 7.8 standard deviations, representing the first observation of this decay in a single experiment. The branching fraction is measured to be $\mathcal{B}(B^0_s\to\mu^+\mu^-)=\left(3.0\pm0.6^{+0.3}_{-0.2}\right)\times10^{-9}$, where the first uncertainty is statistical and the second systematic. The first measurement of the $B^0_s\to\mu^+\mu^-$ effective lifetime, $\tau(B^0_s\to\mu^+\mu^-)=2.04\pm0.44\pm0.05~\mathrm{ps}$, is reported. No significant excess of $B^0\to\mu^+\mu^-$ decays is found, and a 95% confidence level upper limit, $\mathcal{B}(B^0\to\mu^+\mu^-)<3.4\times10^{-10}$, is determined. All results are in agreement with the Standard Model expectations.
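
    As a quick illustration of what the quoted significance means, the one-liner below converts it to a background-only p-value under the usual one-sided Gaussian convention (an illustrative conversion, not part of the analysis):

    from scipy.stats import norm

    # 7.8 standard deviations corresponds to a one-sided Gaussian tail
    # probability of roughly 3e-15 under the background-only hypothesis
    p_value = norm.sf(7.8)
    print(f"p-value for 7.8 sigma: {p_value:.1e}")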

    Study of $\pi^0/\gamma$ efficiency using $B$ meson decays in the LHCb experiment

    The reconstruction efficiency of photons and neutral pions is measured using the relative yields of reconstructed $B^+\to J/\psi K^{*+}(\to K^+\pi^0)$ and $B^+\to J/\psi K^+$ decays. The efficiency is studied using a data set, corresponding to an integrated luminosity of $3~\mathrm{fb}^{-1}$, collected by the LHCb experiment in proton-proton collisions at centre-of-mass energies of 7 and 8 TeV.
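
    Schematically, the ratio method works because the $\pi^0$ efficiency is the only factor that does not cancel between the two modes, so it can be extracted using known branching fractions (a sketch of the logic, not the analysis' exact formula):

    \frac{N(B^+\to J/\psi K^{*+}(\to K^+\pi^0))}{N(B^+\to J/\psi K^+)}
      = \frac{\mathcal{B}(B^+\to J/\psi K^{*+})\,\mathcal{B}(K^{*+}\to K^+\pi^0)}
             {\mathcal{B}(B^+\to J/\psi K^+)}\,
        \frac{\varepsilon_{K^{*+}}}{\varepsilon_{K^+}}\,
        \varepsilon_{\pi^0}

    Here $\varepsilon_{K^{*+}}/\varepsilon_{K^+}$ is the relative efficiency for the charged final-state particles, leaving $\varepsilon_{\pi^0}$ as the unknown.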

    LHC physics dataset for unsupervised New Physics detection at 40 MHz

    In the particle detectors at the Large Hadron Collider, hundreds of millions of proton-proton collisions are produced every second. If one could store the whole data stream produced in these collisions, tens of terabytes of data would be written to disk every second. The general-purpose experiments ATLAS and CMS reduce this overwhelming data volume to a sustainable level by deciding in real time whether each collision event should be kept for further analysis or discarded. We introduce a dataset of proton collision events that emulates a typical data stream collected by such a real-time processing system, pre-filtered by requiring the presence of at least one electron or muon. This dataset could be used to develop novel event selection strategies and assess their sensitivity to new phenomena. In particular, we intend to stimulate a community-based effort towards the design of novel algorithms for performing unsupervised new physics detection, customized to fit the bandwidth, latency, and computational resource constraints of the real-time event selection system of a typical particle detector.
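
    A minimal sketch of the kind of unsupervised selector such a dataset is meant to benchmark, here using PCA reconstruction error as the anomaly score (a lightweight stand-in for the autoencoders typically studied in this context; the feature count, latent dimension, and 1% selection budget are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    background = rng.normal(size=(50_000, 57))   # stand-in for ordinary events

    # "train" on background only: keep the top principal components
    mean = background.mean(axis=0)
    _, _, vt = np.linalg.svd(background - mean, full_matrices=False)
    basis = vt[:8]                               # 8-dimensional latent space

    def anomaly_score(events):
        """Reconstruction error after projecting onto the background basis."""
        centered = events - mean
        recon = centered @ basis.T @ basis
        return ((centered - recon) ** 2).sum(axis=1)

    stream = rng.normal(size=(1_000, 57))        # incoming pre-filtered events
    scores = anomaly_score(stream)
    kept = stream[scores > np.quantile(scores, 0.99)]   # fit the trigger budget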