
    Likelihoods for the LHC Searches


    2013 CERN-Fermilab HCP Summer School


    Data Science @ LHC 2015 Workshop

    Most physics results at the LHC end in a likelihood ratio test. This includes discovery and exclusion for searches as well as mass, cross-section, and coupling measurements. The use of Machine Learning (multivariate) algorithms in HEP is mainly restricted to searches, which can be reduced to classification between two fixed distributions: signal vs. background. I will show how we can extend the use of ML classifiers to distributions parameterized by physical quantities like masses and couplings, as well as nuisance parameters associated with systematic uncertainties. This allows one to approximate the likelihood ratio while still using a high-dimensional feature vector for the data. Both the Matrix Element Method (MEM) and Approximate Bayesian Computation (ABC) aim to provide inference on model parameters (like cross-sections, masses, couplings, etc.). ABC is fundamentally tied to Bayesian inference and focuses on the “likelihood free” setting, where only a simulator is available and one cannot directly compute the likelihood for the data. The MEM approach tries to compute the likelihood directly by approximating the detector. The proposed approach is similar to ABC in that it provides parameter inference in the “likelihood free” setting by using a simulator, but it does not require Bayesian inference, and it cleanly separates issues of statistical calibration from the approximations being made. The method is much faster to evaluate than the MEM and does not require a simplified detector description. Furthermore, it is a generalization of the LHC experiments' current use of multivariate classifiers for searches and integrates well into our existing statistical procedures.
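
    The core idea above is the parameterized-classifier trick: a classifier trained to separate samples drawn at parameter theta from samples drawn at a fixed reference, with theta appended as an input feature, has odds s/(1-s) that approximate the likelihood ratio p(x|theta)/p(x|theta_ref). Below is a minimal sketch under assumed toys: the one-dimensional Gaussian "simulator", network size, and function names are illustrative, not the experiments' actual pipeline.

```python
# Parameterized classifier approximating a likelihood ratio (a minimal
# sketch; the toy simulator and all hyperparameters are assumptions).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def simulate(theta, n):
    """Toy simulator: p(x | theta) = Normal(theta, 1)."""
    return rng.normal(loc=theta, scale=1.0, size=(n, 1))

# Draw training data at many parameter points so the classifier learns
# s(x, theta) rather than a single fixed signal-vs-background boundary.
thetas = rng.uniform(-2.0, 2.0, size=5000)
x_num = np.vstack([simulate(t, 1) for t in thetas])  # from p(x|theta)
x_den = simulate(0.0, 5000)                          # reference p(x|0)

# Features are (x, theta); theta is paired with reference samples too,
# so it carries no class information on its own.
X = np.hstack([np.vstack([x_num, x_den]),
               np.concatenate([thetas, thetas])[:, None]])
y = np.concatenate([np.ones(5000), np.zeros(5000)])  # numerator vs reference

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(X, y)

def likelihood_ratio(x, theta):
    """For a well-calibrated classifier, s/(1-s) ~ p(x|theta)/p(x|0)."""
    s = clf.predict_proba(np.array([[x, theta]]))[0, 1]
    return s / (1.0 - s)

print(likelihood_ratio(x=0.5, theta=1.0))
```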

    6th General Meeting of the LHC EFT Working Group

    Recently there has been a rapid increase in the number of full statistical models (or "likelihoods") published by the experiments. Most are based on the HistFactory (pyhf) format and published in HEPData. This allows theorists and others to reproduce and combine measurements with the same gold standard as the internal experimental results. However, these are mainly from SUSY and exotics searches, and working with EFTs is more complicated because quantum interference effects lead to changes in the signal template (via the dependence of the differential cross-sections and the phase-space-dependent selection efficiency on the EFT parameters). In this talk I will propose a simple, lightweight framework that would extend current likelihood publishing to overcome these challenges and enable 'exact' EFT fits (i.e. with the same level of detail as the internal experimental fits and combinations).
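
    To make the interference point concrete, the sketch below morphs per-bin signal yields quadratically in a single Wilson coefficient c, following the generic decomposition s(c) = s_SM + c*s_int + c^2*s_quad; the template numbers are invented for illustration, and the commented pyhf call only indicates how such a template could feed a HistFactory model.

```python
# Quadratic EFT dependence of a signal template (illustrative numbers only):
# interference makes both the normalization and the shape depend on c, so a
# published likelihood must carry all three component templates rather than
# one fixed signal histogram.
import numpy as np

s_sm   = np.array([12.0, 8.0, 3.0])   # SM signal template (per-bin yields)
s_int  = np.array([ 4.0, 1.0, -0.5])  # interference (linear-in-c) template
s_quad = np.array([ 2.0, 0.7,  0.3])  # pure-EFT (quadratic-in-c) template

def signal_template(c):
    """Per-bin signal yields as a function of one Wilson coefficient c."""
    return s_sm + c * s_int + c**2 * s_quad

for c in (0.0, 0.5, 1.0):
    print(c, signal_template(c))

# A pyhf model could then be rebuilt at each value of c, e.g. (sketch):
#   import pyhf
#   model = pyhf.simplemodels.uncorrelated_background(
#       signal=signal_template(c).tolist(),
#       bkg=[50.0, 52.0, 30.0], bkg_uncertainty=[3.0, 3.0, 2.0])
```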

    QCD-Aware Neural Networks for Jet Physics

    Recent progress in applying machine learning to jet physics has been built upon an analogy between calorimeters and images. In this work, we present a novel class of recursive neural networks built instead upon an analogy between QCD and natural languages. In the analogy, four-momenta are like words and the clustering history of sequential recombination jet algorithms is like the parsing of a sentence. Our approach works directly with the four-momenta of a variable-length set of particles, and the jet-based neural network topology varies on an event-by-event basis. Our experiments highlight the flexibility of our method for building task-specific jet embeddings and show that recursive architectures are significantly more accurate and data efficient than previous image-based networks. We extend the analogy from individual jets (sentences) to full events (paragraphs), and show for the first time an event-level classifier operating on all the stable particles produced in an LHC event. I will discuss future directions for this style of hybrid physics-aware machine learning algorithms.
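
    A minimal PyTorch sketch of the recursive construction follows; the toy clustering history, embedding dimension, and module names are assumptions for illustration, not the exact architecture used in the work.

```python
# Recursive jet embedding sketch: four-momenta are leaves, and the embedding
# recurses over the binary tree given by a jet algorithm's clustering history,
# so the network topology varies jet by jet.
import torch
import torch.nn as nn

class RecursiveJetEmbedding(nn.Module):
    def __init__(self, dim=16):
        super().__init__()
        self.leaf = nn.Linear(4, dim)            # embed one four-momentum
        self.combine = nn.Linear(2 * dim, dim)   # merge two subjet embeddings

    def embed(self, node):
        # A node is either a four-momentum tensor (leaf) or a (left, right)
        # pair mirroring one recombination step of the clustering history.
        if isinstance(node, tuple):
            left, right = (self.embed(child) for child in node)
            return torch.tanh(self.combine(torch.cat([left, right], dim=-1)))
        return torch.tanh(self.leaf(node))

# Toy clustering history ((p1, p2), p3), with (E, px, py, pz) leaves.
p1 = torch.tensor([50.0, 10.0, 5.0, 48.0])
p2 = torch.tensor([30.0, -8.0, 2.0, 28.0])
p3 = torch.tensor([20.0, 1.0, -4.0, 19.0])

model = RecursiveJetEmbedding()
h_jet = model.embed(((p1, p2), p3))  # fixed-size embedding of a variable-length jet
print(h_jet.shape)                   # torch.Size([16])
```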

    Efficient Search for New Physics using Active Learning in the ATLAS Experiment with RECAST

    Searches for new physics and their reinterpretations constrain the parameter space of models with exclusion limits, typically in no more than 2 dimensions. However, the relevant theory parameter space often extends into higher dimensions. Limited computing resources for signal-process simulations impede coverage of the full parameter space. We present an Active Learning approach to address this limitation. Compared to the usual grid sampling, it reduces the number of parameter space points for which exclusion limits need to be determined. Consequently, it makes it possible to extend interpretations of searches to higher-dimensional parameter spaces and therefore to raise their value, e.g. via the identification of barely excluded subspaces that motivate dedicated new searches. The procedure is demonstrated by reinterpreting a Dark Matter search performed by the ATLAS experiment, extending its interpretation from a 2- to a 4-dimensional parameter space while keeping the computational effort at a low level.

    Efficient search for new physics using Active Learning in the ATLAS Experiment

    Searches for new physics at the LHC set exclusion limits in multi-dimensional parameter spaces of various theories. Typically, these are presented as 1- or 2-dimensional parameter scans; however, the relevant theory's parameter space is usually of higher dimension. As a result, only a subspace is covered, owing to the computing-time requirements of simulating the signal process. An Active Learning approach is presented to address this limitation. Compared to the usual grid scan, it reduces the number of points in parameter space for which exclusion limits need to be determined. Hence it enables richer interpretations of searches in higher-dimensional parameter spaces, which increases the value of the search. For example, this may reveal regions of parameter space that are not excluded and motivate new, dedicated searches. Our Active Learning approach is an iterative procedure. First, a Gaussian Process is fit to the excluded signal cross-sections. Within the region close to the exclusion contour predicted by the Gaussian Process, Poisson disc sampling is used to sample additional points in parameter space for which the cross-section limits should be evaluated. The procedure is aided by a warm-start phase based on computationally inexpensive, approximate limit estimates. A Python package, excursion, provides the Gaussian Process routine. The procedure is applied to a dark matter search performed by the ATLAS experiment, extending its interpretation from a 2- to a 4-dimensional parameter space while keeping the computational effort at a low level.
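
    The loop can be sketched as below, with some substitutions to keep the example self-contained: scikit-learn's GaussianProcessRegressor stands in for the excursion package, a random candidate pool stands in for Poisson disc sampling, and a cheap toy function stands in for the expensive per-point limit computation. The acquisition rule (prefer uncertain points near the predicted exclusion threshold) is likewise an illustrative stand-in for the exact recipe.

```python
# Active-learning sketch for mapping an exclusion contour with few
# expensive evaluations (all functions and settings here are toy assumptions).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_limit(theta):
    """Toy stand-in for the costly per-point cross-section limit."""
    return np.sin(3 * theta[:, 0]) + theta[:, 1]  # "excluded" where value > 0

threshold = 0.0
X = rng.uniform(-1, 1, size=(10, 2))  # warm-start points (cheap estimates)
y = expensive_limit(X)

for iteration in range(5):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)
    # Score a dense candidate pool: a small |mean - threshold| / std means
    # the point sits near the predicted contour and is still uncertain.
    cand = rng.uniform(-1, 1, size=(500, 2))
    mean, std = gp.predict(cand, return_std=True)
    score = np.abs(mean - threshold) / (std + 1e-9)
    best = cand[np.argsort(score)[:5]]  # evaluate 5 new points per iteration
    X = np.vstack([X, best])
    y = np.concatenate([y, expensive_limit(best)])

print(f"evaluated {len(X)} points instead of a dense grid")
```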