
    Nanosecond machine learning event classification with boosted decision trees in FPGA for high energy physics

    We present a novel implementation of classification using the machine learning / artificial intelligence method called boosted decision trees (BDT) on field programmable gate arrays (FPGA). The firmware implementation of binary classification requiring 100 training trees with a maximum depth of 4 using four input variables gives a latency value of about 10 ns, independent of the clock speed from 100 to 320 MHz in our setup. The low timing values are achieved by restructuring the BDT layout and reconfiguring its parameters. The FPGA resource utilization is also kept low, in a range from 0.01% to 0.2% in our setup. A software package called fwXmachina achieves this implementation. Our intended user is an expert of custom electronics-based trigger systems in high energy physics experiments, or anyone who needs decisions at the lowest latency values for real-time event classification. Two problems from high energy physics are considered: the separation of electrons vs. photons, and the selection of vector boson fusion-produced Higgs bosons vs. the rejection of multijet processes. Comment: 66 pages, 27 figures, 13 tables, JINST version
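    The core operation the abstract describes, evaluating a forest of fixed-depth decision trees and summing their outputs, can be sketched in a few lines. This is a minimal software illustration only; the function names and the toy trees are hypothetical, and fwXmachina's actual firmware restructuring is not shown here.

    ```python
    def eval_tree(tree, x):
        """Walk a binary tree of (feature, cut, left, right) nodes to a leaf score."""
        node = tree
        while isinstance(node, tuple):            # internal node
            feat, cut, left, right = node
            node = left if x[feat] < cut else right
        return node                               # leaf: a float score

    def bdt_score(forest, x):
        """BDT inference: sum the leaf scores of every tree in the ensemble."""
        return sum(eval_tree(t, x) for t in forest)

    # Two toy depth-1 trees over a four-variable input, as in the paper's setup.
    forest = [
        (0, 0.5, -1.0, +1.0),   # cut on x[0]
        (2, 1.5, -0.5, +0.5),   # cut on x[2]
    ]
    print(bdt_score(forest, [0.7, 0.0, 2.0, 0.0]))  # -> 1.5
    ```

    Because every tree has a fixed maximum depth, the number of comparisons per event is bounded, which is what makes a constant-latency hardware realization possible.
    
    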

    Nanosecond anomaly detection with decision trees for high energy physics and real-time application to exotic Higgs decays

    We present a novel implementation of the artificial intelligence autoencoding algorithm, used as an ultrafast and ultraefficient anomaly detector, built with a forest of deep decision trees on FPGA, field programmable gate arrays. Scenarios at the Large Hadron Collider at CERN are considered, for which the autoencoder is trained using known physical processes of the Standard Model. The design is then deployed in real-time trigger systems for anomaly detection of new unknown physical processes, such as the detection of exotic Higgs decays, on events that fail conventional threshold-based algorithms. The inference is made within a latency value of 25 ns, the time between successive collisions at the Large Hadron Collider, at percent-level resource usage. Our method offers anomaly detection at the lowest latency values for edge AI users with tight resource constraints.Comment: 26 pages, 9 figures, 1 tabl
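    The anomaly-detection idea the abstract outlines is: learn a reconstruction of known (Standard Model-like) events, then flag events whose reconstruction error is large. A per-feature median is used below as a toy stand-in for the paper's forest of deep decision trees; all names and numbers are illustrative.

    ```python
    def fit_reference(train):
        """Per-feature medians of the training sample (toy 'autoencoder')."""
        cols = list(zip(*train))
        return [sorted(c)[len(c) // 2] for c in cols]

    def anomaly_score(ref, x):
        """L1 reconstruction error against the learned reference."""
        return sum(abs(a - b) for a, b in zip(x, ref))

    train = [[1.0, 2.0], [1.1, 1.9], [0.9, 2.1]]   # background-like events
    ref = fit_reference(train)
    print(anomaly_score(ref, [1.0, 2.0]))    # small: background-like
    print(anomaly_score(ref, [9.0, -5.0]))   # large: anomalous candidate
    ```

    In the trigger, an event would be kept when its score exceeds a threshold tuned to the available output bandwidth, without any assumption about what the new physics looks like.
    
    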

    The ATLAS Trigger Simulation with Legacy Software

    Physics analyses at the LHC require accurate simulations of the detector response and the event selection processes, generally done with the most recent software releases. The trigger response simulation is crucial for the determination of overall selection efficiencies and signal sensitivities, and should be done with the same software release with which the data were recorded. This requires potentially running with software dating many years back, the so-called legacy software. A strategy for running legacy software in a modern environment therefore becomes essential when data simulated for past years start to represent a sizeable fraction of the total. The requirements and possibilities for such a simulation scheme within the ATLAS software framework were examined, and a proof-of-concept simulation chain has been successfully implemented. One of the greatest challenges was the choice of a data format which promises long-term compatibility with old and new software releases. Over the time periods envisaged, data format incompatibilities are also likely to emerge in databases and other external support services. Software availability may become an issue, e.g. when support for the underlying operating system stops. The encountered problems and developed solutions will be presented, and proposals for future development will be discussed. Some ideas reach beyond the retrospective trigger simulation scheme in ATLAS, as they also touch more general aspects of data preservation.

    Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter

    The global fit of the Standard Model to electroweak precision data, routinely performed by the LEP electroweak working group and others, demonstrated impressively the predictive power of electroweak unification and quantum loop corrections. We have revisited this fit in view of (i) the development of the new generic fitting package Gfitter, allowing flexible and efficient model testing in high-energy physics, (ii) the insertion of constraints from direct Higgs searches at LEP and the Tevatron, and (iii) a more thorough statistical interpretation of the results. Gfitter is a modular fitting toolkit, which features predictive theoretical models as independent plugins, and a statistical analysis of the fit results using toy Monte Carlo techniques. The state-of-the-art electroweak Standard Model is fully implemented, as well as generic extensions to it. Theoretical uncertainties are explicitly included in the fit through scale parameters varying within given error ranges. This paper introduces the Gfitter project, and presents state-of-the-art results for the global electroweak fit in the Standard Model, and for a model with an extended Higgs sector (2HDM). Numerical and graphical results for fits with and without including the constraints from the direct Higgs searches at LEP and the Tevatron are given. Perspectives for future colliders are analysed and discussed. Including the direct Higgs searches, we find M_H = (116.4 +18.3 -1.3) GeV, and the 2sigma and 3sigma allowed regions [114,145] GeV and [[113,168] and [180,225]] GeV, respectively. For the strong coupling strength at fourth perturbative order we obtain alpha_S(M_Z) = 0.1193 +0.0028 -0.0027 (exp) +- 0.0001 (theo). Finally, for the mass of the top quark, excluding the direct measurements, we find m_t = (178.2 +9.8 -4.2) GeV. In the 2HDM we exclude a charged-Higgs mass below 240 GeV at 95% confidence level. This limit increases towards larger tan(beta), where, e.g., M_H± < 780 GeV is excluded for tan(beta)=70.
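    The statistical machinery underlying such a global fit is, at its simplest, a chi-square combination of measurements with Gaussian errors. The toy below illustrates that idea for a single fit parameter; the numbers are invented, and Gfitter's real fit spans many correlated observables and theory parameters.

    ```python
    def chi2(mu, data):
        """Gaussian chi-square of hypothesis mu against (value, sigma) pairs."""
        return sum(((x - mu) / s) ** 2 for x, s in data)

    def best_fit(data):
        """Analytic chi-square minimum: the inverse-variance weighted average."""
        w = [1.0 / s ** 2 for _, s in data]
        return sum(x * wi for (x, _), wi in zip(data, w)) / sum(w)

    # Hypothetical (value, sigma) measurements of one quantity:
    data = [(170.0, 3.0), (175.0, 2.0)]
    mu = best_fit(data)   # pulled towards the more precise measurement
    ```

    Confidence intervals like the ones quoted in the abstract come from scanning chi2(mu) - chi2(best_fit) and, in Gfitter's case, calibrating it with toy Monte Carlo pseudo-experiments.
    
    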

    Comparison of nine different selective agars for the detection of carbapenemase-producing Enterobacterales (CPE)

    The rapid identification of patients colonized with carbapenem-resistant Enterobacterales (CRE) is important for infection control purposes. Here, we compared and evaluated nine different agars for the detection of carbapenemase-producing Enterobacterales (CPE) from clinical samples. In the study, 69 CPE and 40 carbapenemase-negative isolates were included. Overall, seven commercially available screening agars were assessed: Brilliance CRE (Oxoid), Chromatic CRE (Liofilchem), chromID CARBA and chromID OXA-48 (both bioMerieux), three ESBL agars (Chromatic ESBL [Liofilchem], chromID ESBL [bioMerieux], Brilliance ESBL [Oxoid]), and two agars produced in-house (McCARB and McCARB-T). The sensitivity of CRE agars for CPE detection ranged from 34.8 to 98.6%. Brilliance CRE and McCARB/McCARB-T showed the overall highest sensitivity (98.6 and 97.1%, respectively). OXA-48 producers were the most difficult to detect; only 4/9 agars detected all isolates (McCARB/McCARB-T, Chromatic CRE, chromID OXA-48). Additionally, all ESBL-negative OXA-48 isolates failed to grow on ESBL screening agars. Specificity ranged from 30% (Brilliance ESBL) to 100% (chromID OXA-48). The limit of detection for different CPE in spiked stool samples ranged from 1.5 x 10^1 to 1.5 x 10^3 CFU/ml. Overall, Brilliance CRE and the McCARB in-house agars showed the best performance and were able to detect most CPE, including almost all OXA-48. ESBL agars were not suitable for detection of CPE alone, as OXA-48 isolates negative for ESBL were suppressed. The highest sensitivity was achieved by a combination of a CRE agar and an ESBL agar.
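    The performance figures quoted above follow from the standard sensitivity and specificity definitions. A minimal sketch, with invented counts chosen only to reproduce one of the quoted percentages (the study's actual per-agar counts are not given here):

    ```python
    def sensitivity(tp, fn):
        """Fraction of true positives detected: TP / (TP + FN)."""
        return tp / (tp + fn)

    def specificity(tn, fp):
        """Fraction of true negatives correctly suppressed: TN / (TN + FP)."""
        return tn / (tn + fp)

    # e.g. an agar detecting 68 of 69 CPE and growing 4 of 40 negative isolates:
    print(round(100 * sensitivity(68, 1), 1))   # -> 98.6
    print(round(100 * specificity(36, 4), 1))   # -> 90.0
    ```
    
    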

    The Database Driven ATLAS Trigger Configuration System

    The ATLAS trigger configuration system uses a centrally provided relational database to store the configurations for all levels of the ATLAS trigger system. The configuration used at any point in data taking is also maintained. A graphical user interface to this database is provided by the TriggerTool, a Java-based application designed to work as both a convenient browser and an editor of the configurations in the database, for general users and experts alike. The updates to this system necessitated by the upgrades and changes in both hardware and software during the first long shutdown of the LHC will be explored.