    Signal mixture estimation for degenerate heavy Higgses using a deep neural network

    If a new signal is established in future LHC data, a natural next question will be to determine the signal composition, in particular whether the signal is due to multiple near-degenerate states. We investigate the performance of a deep learning approach to signal mixture estimation for the challenging scenario of a ditau signal coming from a pair of degenerate Higgs bosons of opposite CP charge. This constitutes a parameter estimation problem for a mixture model with highly overlapping features. We use an unbinned maximum-likelihood fit to a neural network output and compare the results to mixture estimation via a fit to a single kinematic variable. For our benchmark scenarios we find a ~20% improvement in the uncertainty of the mixture estimate.
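    A minimal sketch of the fitting step described above (not the paper's actual code): per-event network scores are modelled as a two-component mixture, and the mixture fraction is obtained from an unbinned maximum-likelihood fit. The score distributions, template densities, and the true fraction of 0.3 are toy stand-ins.

```python
# Toy sketch: unbinned maximum-likelihood fit of a signal mixture
# fraction to per-event neural-network scores. All distributions are
# illustrative stand-ins, not the paper's trained network or data.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Overlapping score distributions for the two pure hypotheses.
scores_A = np.clip(rng.normal(0.45, 0.15, 20_000), 0, 1)  # e.g. CP-odd
scores_B = np.clip(rng.normal(0.55, 0.15, 20_000), 0, 1)  # e.g. CP-even
p_A = gaussian_kde(scores_A)  # template density for hypothesis A
p_B = gaussian_kde(scores_B)  # template density for hypothesis B

# "Data": a 30/70 mixture of the two hypotheses.
data = np.concatenate([scores_A[:3_000], scores_B[:7_000]])

def nll(alpha):
    """Unbinned negative log-likelihood of the two-component mixture."""
    dens = alpha * p_A(data) + (1 - alpha) * p_B(data)
    return -np.sum(np.log(np.clip(dens, 1e-300, None)))

fit = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print(f"estimated mixture fraction: {fit.x:.3f}")  # close to 0.3
```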

    Concept backpropagation: An Explainable AI approach for visualising learned concepts in neural network models

    Neural network models are widely used in a variety of domains, often as black-box solutions, since they are not directly interpretable for humans. The field of explainable artificial intelligence aims to develop explanation methods that address this challenge, and several approaches have emerged in recent years, including methods for investigating what kind of knowledge these models internalise during training. Among these, concept detection investigates which concepts neural network models learn to represent in order to complete their tasks. In this work, we present an extension of concept detection, named concept backpropagation, which provides a way of analysing how the information representing a given concept is internalised in a given neural network model. In this approach, the model input is perturbed in a manner guided by a trained concept probe for the described model, such that the concept of interest is maximised. This allows the detected concept to be visualised directly in the input space of the model, which in turn makes it possible to see what information the model depends on for representing the concept. We present results for this method applied to a variety of input modalities, and discuss how it can be used to visualise what information trained concept probes use, and the degree to which the representation of the probed concept is entangled within the neural network model itself.
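    A minimal sketch of the perturbation loop (model, probe, and layer choice are illustrative stand-ins, not the paper's setup): the input is updated by gradient ascent so that the output of a concept probe attached to a hidden layer is maximised, and the resulting input can then be inspected directly.

```python
# Toy sketch of concept backpropagation: perturb the input by gradient
# ascent on a concept probe's output. The model and probe below are
# illustrative stand-ins; in practice the probe would be pre-trained.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in "black-box" model
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 10),
)
hidden = nn.Sequential(*list(model.children())[:2])  # layer the probe reads
probe = nn.Linear(32, 1)                    # stand-in for a trained probe

x = torch.randn(1, 64, requires_grad=True)  # input to be perturbed
opt = torch.optim.Adam([x], lr=0.05)        # optimiser updates only x

for _ in range(200):
    opt.zero_grad()
    concept_score = probe(hidden(x))        # probe's evidence for the concept
    (-concept_score.mean()).backward()      # ascend on the concept score
    opt.step()

# x now shows, in input space, what information drives the probed concept.
print(x.detach())
```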

    Vacuum free energy, quark condensate shifts and magnetization in three-flavor chiral perturbation theory to O(p^6) in a uniform magnetic field

    We study three-flavor QCD in a uniform magnetic field using chiral perturbation theory (χPT). We construct the vacuum free energy density, the quark condensate shifts induced by the magnetic field, and the renormalized magnetization to O(p^6) in the chiral expansion. We find that the calculation of the free energy is greatly simplified by cancellations among two-loop diagrams involving charged mesons. In comparing our results with recent 2+1-flavor lattice QCD data, we find that the light quark condensate shift at O(p^6) is in better agreement than the shift at O(p^4). We also find that the renormalized magnetization, due to its smallness, possesses large uncertainties at O(p^6) stemming from the uncertainties in the low-energy constants.
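    For orientation, the quantities in question are standard derivatives of the free energy density; a sketch of the definitions (sign and normalization conventions vary between papers and are assumed here, not taken from the paper) is:

```latex
% Schematic definitions; conventions assumed, not taken from the paper.
\begin{align}
  \langle \bar{q} q \rangle(B)
    &= \frac{\partial \mathcal{F}(B)}{\partial m_q}, &
  \Delta \langle \bar{q} q \rangle
    &= \langle \bar{q} q \rangle(B) - \langle \bar{q} q \rangle(0), &
  \mathcal{M}
    &= -\frac{\partial \mathcal{F}(B)}{\partial B}.
\end{align}
```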

    Trilinear-Augmented Gaugino Mediation

    We consider a gaugino-mediated supersymmetry breaking scenario where, in addition to the gauginos, the Higgs fields couple directly to the field that breaks supersymmetry. This yields non-vanishing trilinear scalar couplings in general, which can lead to large mixing in the stop sector and thus a sufficiently large Higgs mass. Using the most recent release of FeynHiggs, we show the implications for the parameter space. Assuming a gravitino LSP, we find allowed points with a neutralino, sneutrino or stau NLSP. We test these points against the results of Run 1 of the LHC, considering in particular searches for heavy stable charged particles.
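    The link between the trilinear couplings and the Higgs mass can be made explicit with the standard one-loop leading-log approximation (a textbook expression, not taken from the paper): the trilinear coupling A_t enters through the stop mixing parameter X_t, and the correction is maximised for |X_t| ≈ √6 M_S.

```latex
% Standard one-loop leading-log correction to the Higgs mass from the
% stop sector; M_S is the average stop mass scale and v ~ 246 GeV.
\begin{equation}
  \Delta m_h^2 \simeq \frac{3\, m_t^4}{4 \pi^2 v^2}
  \left[ \ln\frac{M_S^2}{m_t^2}
       + \frac{X_t^2}{M_S^2}
         \left( 1 - \frac{X_t^2}{12\, M_S^2} \right) \right],
  \qquad X_t = A_t - \mu \cot\beta .
\end{equation}
```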

    To Explain or Not to Explain?—Artificial Intelligence Explainability in Clinical Decision Support Systems

    Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments for and against explainability for AI-powered clinical decision support systems (CDSSs), applied to a concrete use case: an AI-powered CDSS currently used in the emergency-call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated role of the system in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs, and we provide an example of what such an assessment could look like in practice.