
    Spey: Smooth inference for reinterpretation studies

    Statistical models serve as the cornerstone for hypothesis testing in empirical studies. This paper introduces a new cross-platform Python-based package designed to utilise different likelihood prescriptions via a flexible plug-in system. This framework empowers users to propose, examine, and publish new likelihood prescriptions without developing software infrastructure, unifying and generalising the ways likelihoods are constructed and employed for hypothesis testing within a single platform. We propose a new simplified likelihood prescription, surpassing previous approximation accuracies by incorporating asymmetric uncertainties. Moreover, our package facilitates the integration of various likelihood combination routines, thereby broadening the scope of independent studies through a meta-analysis. By remaining agnostic to the source of the likelihood prescription and the signal hypothesis generator, our platform allows for the seamless implementation of packages with different likelihood prescriptions, fostering compatibility and interoperability.
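    To illustrate the kind of hypothesis test such likelihood prescriptions ultimately feed into, here is a minimal single-bin Poisson counting sketch in plain Python. This is a generic textbook construction, not Spey's actual API; the function name and toy yields are illustrative assumptions.

```python
import math

def discovery_significance(n_obs: float, background: float) -> float:
    """Asymptotic discovery significance Z = sqrt(q0) for a single-bin
    Poisson counting experiment with known background expectation b.

    q0 = -2 ln[L(mu=0)/L(mu=mu_hat)] with L(mu) = Pois(n | mu*s + b);
    for n_obs > b this reduces to 2*(n*ln(n/b) - (n - b)).
    """
    if n_obs <= background:
        return 0.0  # no upward fluctuation: significance defined as zero
    q0 = 2.0 * (n_obs * math.log(n_obs / background) - (n_obs - background))
    return math.sqrt(q0)

# Toy example: observe 25 events over an expected background of 15.
z = discovery_significance(25, 15.0)
print(f"Z = {z:.3f} sigma")  # roughly 2.35 sigma
```

The plug-in idea in the abstract amounts to swapping this likelihood for a different prescription (correlated bins, asymmetric uncertainties) while the test statistic machinery stays the same.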

    Differentiating U(1)′ supersymmetric models with right sneutrino and neutralino dark matter

    We perform a detailed analysis of dark matter signals in supersymmetric models containing an extra U(1)′ gauge group. We investigate scenarios in which either the right sneutrino or the lightest neutralino is a phenomenologically acceptable dark matter candidate, and we explore the parameter spaces of different supersymmetric realisations featuring an extra U(1)′. We impose consistency with low-energy observables, with known mass limits for the superpartners and Z′ bosons, and with Higgs boson signal strengths. We moreover verify that predictions for the anomalous magnetic moment of the muon agree with the experimental value, and require that the dark matter candidate satisfies the observed relic density as well as direct and indirect dark matter detection constraints. For the case where the sneutrino is the dark matter candidate, we find distinguishing characteristics among different U(1)′ mixing angles. If the neutralino is the lightest supersymmetric particle, its mass is heavier than that of the light sneutrino in scenarios where the latter is a dark matter candidate; the parameter space is less restricted and differentiation between models is more difficult. We finally comment on possible collider tests of these models. Comment: 21 pages, 11 figures, version accepted by PR

    Loopholes in Z′ searches at the LHC: exploring supersymmetric and leptophobic scenarios

    Searching for heavy vector bosons Z′, predicted in models inspired by Grand Unified Theories, is among the challenging objectives of the LHC. The ATLAS and CMS collaborations have looked for Z′ bosons assuming that they can decay only into Standard Model channels, and have set exclusion limits by investigating dilepton, dijet and, to a lesser extent, top-antitop final states. In this work we explore possible loopholes in these Z′ searches by studying supersymmetric as well as leptophobic scenarios. We demonstrate the existence of realizations in which the Z′ boson automatically evades the typical bounds derived from analyses of the Drell-Yan invariant-mass spectrum. Dileptonic final states can in contrast only originate from supersymmetric Z′ decays and are thus accompanied by additional effects. This feature is analyzed in the context of judiciously chosen benchmark configurations, for which visible signals could be expected in future LHC data with a 4σ-7σ significance. Our results should hence motivate an extension of the current Z′ search program to account for supersymmetric and leptophobic models. Comment: 32 pages, 15 figures. After JHEP revision. Published on 15 February 201

    Quantum-probabilistic Hamiltonian learning for generative modelling & anomaly detection

    The Hamiltonian of an isolated quantum mechanical system determines its dynamics and physical behaviour. This study investigates the possibility of learning a system's Hamiltonian, together with a variational estimate of its thermal state, and utilising them for data analysis techniques. For this purpose, we employ the method of Quantum Hamiltonian-Based Models for the generative modelling of simulated Large Hadron Collider data and demonstrate the representability of such data as a mixed state. In a further step, we use the learned Hamiltonian for anomaly detection, showing that different sample types can form distinct dynamical behaviours once treated as a quantum many-body system. We exploit these characteristics to quantify the difference between sample types. Our findings show that methodologies designed for field-theory computations can be utilised in machine learning applications to bring theoretical approaches into data analysis. Comment: 10 pages, 4 figures. Comments are welcome
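    The central object in such models is the thermal (Gibbs) state ρ = e^{-βH}/Tr[e^{-βH}] of a learned Hamiltonian. A minimal NumPy sketch of that construction, using a hypothetical two-level Hamiltonian rather than anything learned from LHC data:

```python
import numpy as np

def thermal_state(hamiltonian: np.ndarray, beta: float) -> np.ndarray:
    """Gibbs state rho = exp(-beta*H) / Tr[exp(-beta*H)], built from the
    eigendecomposition of a Hermitian Hamiltonian."""
    vals, vecs = np.linalg.eigh(hamiltonian)
    boltzmann = np.exp(-beta * vals)          # Boltzmann weights per eigenstate
    rho = (vecs * boltzmann) @ vecs.conj().T  # sum_j w_j |v_j><v_j|
    return rho / np.trace(rho)

# Toy two-level Hamiltonian (Pauli-Z plus a small off-diagonal coupling).
H = np.array([[1.0, 0.3],
              [0.3, -1.0]])
rho = thermal_state(H, beta=2.0)
energy = float(np.trace(rho @ H))  # thermal expectation value of H
```

A valid density matrix has unit trace and is Hermitian and positive semi-definite; at finite β the thermal expectation value of H sits below zero here because the low-energy eigenstate dominates.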

    Classical versus Quantum: comparing Tensor Network-based Quantum Circuits on LHC data

    Tensor Networks (TN) are approximations of high-dimensional tensors designed to represent locally entangled quantum many-body systems efficiently. This study provides a comprehensive comparison between classical TNs and TN-inspired quantum circuits in the context of machine learning on highly complex, simulated LHC data. We show that classical TNs require exponentially large bond dimensions and mappings to higher-dimensional Hilbert spaces to perform comparably to their quantum counterparts. While such an expansion in dimensionality allows better performance, we observe that, as the dimensionality grows, classical TNs develop a highly flat loss landscape, rendering gradient-based optimisation methods highly challenging. Furthermore, by employing quantitative metrics such as the Fisher information and effective dimensions, we show that classical TNs require a more extensive training sample to represent the data as efficiently as TN-inspired quantum circuits. We also engage with the idea of hybrid classical-quantum TNs and show possible architectures to exploit a larger phase space of the data. We present our results using three main TN ansätze: Tree Tensor Networks, Matrix Product States, and the Multi-scale Entanglement Renormalisation Ansatz. Comment: 18 pages, 15 figures, 1 table. Accepted version for publication in PR
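    Of the ansätze above, the Matrix Product State is the simplest to sketch: a chain of rank-3 tensors whose shared "bond" indices control how much entanglement the network can represent. The following standalone NumPy example (textbook construction, not the paper's architecture) contracts a bond-dimension-2 MPS into the full state vector of the 4-qubit GHZ state:

```python
import numpy as np

def mps_to_vector(tensors):
    """Contract an open-boundary MPS - a list of rank-3 tensors with shape
    (bond_left, physical, bond_right) - into the full state vector."""
    state = tensors[0]  # shape (1, d, chi)
    for t in tensors[1:]:
        # contract the shared bond index; physical legs accumulate in order
        state = np.tensordot(state, t, axes=([-1], [0]))
    return state.reshape(-1)

# 4-site GHZ state (|0000> + |1111>)/sqrt(2) as a bond-dimension-2 MPS.
d, chi = 2, 2
first = np.zeros((1, d, chi)); first[0, 0, 0] = first[0, 1, 1] = 1.0
mid = np.zeros((chi, d, chi));  mid[0, 0, 0] = mid[1, 1, 1] = 1.0
last = np.zeros((chi, d, 1));   last[0, 0, 0] = last[1, 1, 0] = 2 ** -0.5
psi = mps_to_vector([first, mid, mid, last])
```

The exponential cost the abstract refers to shows up here directly: the contracted vector has 2^N entries, while the MPS stores only O(N·d·χ²) numbers as long as the bond dimension χ stays small.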

    Searches for new physics with boosted top quarks in the MadAnalysis 5 and Rivet frameworks

    High-momentum top quarks are a natural physical system in collider experiments for testing models of new physics, and jet substructure methods are key both to exploiting their largest decay mode and to mitigating resolution difficulties as the boosted system becomes increasingly collimated in the detector. For use in new-physics interpretation studies, it is crucial that the related methods be implemented in analysis frameworks that allow for the reinterpretation of LHC results, such as MadAnalysis 5 and Rivet. We describe the implementation of the HEPTopTagger algorithm in these two frameworks, and we exemplify the usage of the resulting functionalities to explore the sensitivity of boosted top reconstruction performance to new physics contributions from the Standard Model Effective Field Theory. The results of this study lead to important conclusions about the implicit assumption of Standard-Model-like top-quark decays in associated collider analyses, and about the prospects for constraining the Standard Model Effective Field Theory via kinematic observables built from boosted semi-leptonic tt̄ events selected using HEPTopTagger. Comment: 26 pages, 5 figures
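    At the end of its subjet-finding and filtering steps, a top tagger of this kind accepts a candidate when the combined subjet invariant mass lands near the top mass. The toy sketch below shows only that final mass-window criterion in plain Python; the four-momenta, window width, and function names are illustrative assumptions, not HEPTopTagger's actual implementation or cuts:

```python
import math

def invariant_mass(momenta):
    """Invariant mass of summed four-momenta given as (E, px, py, pz) in GeV."""
    E = sum(p[0] for p in momenta)
    px = sum(p[1] for p in momenta)
    py = sum(p[2] for p in momenta)
    pz = sum(p[3] for p in momenta)
    m2 = E * E - px * px - py * py - pz * pz
    return math.sqrt(max(m2, 0.0))  # guard against rounding below zero

def passes_top_window(subjets, m_top=172.5, window=25.0):
    """Toy top-candidate criterion: three-subjet invariant mass within
    a window around the top mass. Illustrative numbers only."""
    return abs(invariant_mass(subjets) - m_top) < window

# Three massless toy subjets whose combined mass is near m_top ...
top_like = [(80.0, 80.0, 0.0, 0.0), (80.0, 0.0, 80.0, 0.0), (60.0, 0.0, 0.0, 60.0)]
# ... and a softer combination that fails the window.
qcd_like = [(50.0, 50.0, 0.0, 0.0), (50.0, 0.0, 50.0, 0.0), (10.0, 0.0, 0.0, 10.0)]
```

The SMEFT sensitivity studied in the paper enters precisely through how new-physics contributions distort the kinematics feeding observables like this one.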

    Signal region combination with full and simplified likelihoods in MadAnalysis 5

    The statistical combination of disjoint signal regions in reinterpretation studies uses more of the data of an analysis and gives more robust results than the single-signal-region approach. We present the implementation and usage of signal region combination in MadAnalysis 5 through two methods: an interface to the pyhf package making use of statistical models in JSON-serialised format provided by the ATLAS collaboration, and a simplified likelihood calculation making use of covariance matrices provided by the CMS collaboration. The gain in physics reach is demonstrated (1) by comparison with official mass limits for 4 ATLAS and 5 CMS analyses from the Public Analysis Database of MadAnalysis 5 for which signal region combination is currently available, and (2) by a case study of an MSSM scenario in which both stops and sbottoms can be produced and have a variety of decays into charginos and neutralinos. Comment: 29 pages, 12 figures; matches journal version
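    In the Gaussian limit, a covariance-matrix-based simplified likelihood combines signal regions through χ²(μ) = (n − b − μs)ᵀ V⁻¹ (n − b − μs), and the best-fit signal strength has a closed form. A minimal NumPy sketch with toy two-bin yields (illustrative numbers, not a real analysis and not MadAnalysis 5's implementation):

```python
import numpy as np

def best_fit_mu(n_obs, bkg, sig, cov):
    """Best-fit signal strength for the Gaussian simplified likelihood
    chi2(mu) = (n - b - mu*s)^T V^{-1} (n - b - mu*s):
        mu_hat = s^T V^{-1} (n - b) / (s^T V^{-1} s)
    """
    vinv = np.linalg.inv(np.asarray(cov, float))
    resid = np.asarray(n_obs, float) - np.asarray(bkg, float)
    s = np.asarray(sig, float)
    return float(s @ vinv @ resid) / float(s @ vinv @ s)

# Two correlated signal regions: observed counts, background, signal,
# and the background covariance matrix (off-diagonal = correlation).
n = [52.0, 30.0]
b = [48.0, 27.0]
s = [8.0, 6.0]
V = [[9.0, 3.0],
     [3.0, 4.0]]
mu_hat = best_fit_mu(n, b, s, V)
print(f"mu_hat = {mu_hat:.3f}")
```

The off-diagonal covariance entries are exactly the information a single-signal-region fit discards; the pyhf route replaces this Gaussian form with the full probability model serialised in the ATLAS JSON workspaces.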