
    Pseudoscalar top-Higgs coupling: Exploration of $\mathrm{CP}$-odd observables to resolve the sign ambiguity

    We present a collection of $\mathrm{CP}$-odd observables for the process $pp\to t\,(\to b\,\ell^+\nu_\ell)\;\bar{t}\,(\to \bar{b}\,\ell^-\bar{\nu}_\ell)\,H$ that are linearly dependent on the scalar ($\kappa_t$) and pseudoscalar ($\tilde{\kappa}_t$) top-Higgs couplings and hence sensitive to their relative sign. The proposed observables are based on triple-product (TP) correlations that we extract from the expression for the differential cross section in terms of the spin vectors of the top and antitop quarks. To explore other possibilities, we progressively modify these TPs, first by combining them, and then by replacing the spin vectors with the lepton momenta or the $t$ and $\bar{t}$ momenta with their visible parts. We generate Monte Carlo data sets for several benchmark scenarios, including the Standard Model ($\kappa_t=1$, $\tilde{\kappa}_t=0$) and two scenarios with mixed $\mathrm{CP}$ properties ($\kappa_t=1$, $\tilde{\kappa}_t=\pm 1$). Assuming an integrated luminosity consistent with that envisioned for the High-Luminosity Large Hadron Collider, using Monte Carlo truth, and taking into account only statistical uncertainties, we find that the most promising observable can disentangle the "$\mathrm{CP}$-mixed" scenarios with an effective separation of $\sim 19\sigma$. For observables that do not require the reconstruction of the $t$ and $\bar{t}$ momenta, the discriminating power is up to $\sim 13\sigma$ for the same number of events. We also show that the most promising observables can still disentangle the $\mathrm{CP}$-mixed scenarios when the number of events is reduced to values consistent with expectations for the Large Hadron Collider in the near term.
    Comment: 28 pages, 7 figures. Published version.
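As a rough illustration of the kind of observable the abstract describes (this sketch is not code from the paper), a scalar triple product $\vec{p}_1\cdot(\vec{p}_2\times\vec{p}_3)$ built from final-state momenta is odd under parity, and a counting asymmetry in its sign can expose a $\mathrm{CP}$-odd admixture; the toy momenta below are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def triple_product(p1, p2, p3):
    """Scalar triple product p1 . (p2 x p3) per event; it flips sign
    under parity, so its asymmetry probes CP-odd contributions."""
    return np.einsum("ij,ij->i", p1, np.cross(p2, p3))

def tp_asymmetry(tp):
    """Counting asymmetry A = (N(tp > 0) - N(tp < 0)) / N."""
    return (np.sum(tp > 0) - np.sum(tp < 0)) / tp.size

# toy 3-momenta standing in for, e.g., the lepton and b momenta
p1, p2, p3 = (rng.normal(size=(10_000, 3)) for _ in range(3))
tp = triple_product(p1, p2, p3)
A = tp_asymmetry(tp)  # compatible with 0 for this CP-symmetric toy sample
```

In a real analysis the momenta would come from the reconstructed $t\bar{t}H$ final state and a nonzero $\tilde{\kappa}_t$ would shift $A$ away from zero.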

    Probing sensitivity to charged scalars through partial differential widths: $\tau\rightarrow K\pi\pi\nu_\tau$ decays

    We define and test $\mathrm{CP}$-even and $\mathrm{CP}$-odd partial differential widths for the process $\tau\rightarrow K\pi\pi\nu_\tau$ assuming that an intermediate heavy charged scalar contributes to the decay amplitude. Adopting a model-independent approach, we use a Monte Carlo simulation to study the number of events needed to recover information on the new physics from these observables. Our analysis of the $\mathrm{CP}$-odd observables indicates that the magnitude of $f_H\eta_P$, which is related to the new-physics contribution, can be recovered with an uncertainty smaller than 3% for $3\times 10^6$ events. This number of events would also allow one to retrieve certain parameters appearing in the SM amplitude at the percent level. In addition, we discuss the possibility of using the proposed observables to study specific models involving two Higgs doublets, such as the aligned two-Higgs-doublet model (A2HDM). This analysis is undertaken within the context of the upcoming Super B factories, which are expected to provide a considerably larger number of events than that supplied by the B factories. Moreover, a similar set of observables could be employed to study other decay modes such as $\tau\rightarrow\pi\pi\pi\nu_\tau$, $\tau\rightarrow KK\pi\nu_\tau$, and $\tau\rightarrow KKK\nu_\tau$.
    Comment: 29 pages, 4 figures. Published version.
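The event counts quoted above follow the usual statistical scaling: for a counting asymmetry $A=(N_+-N_-)/N$, the binomial uncertainty is $\sigma_A=\sqrt{(1-A^2)/N}$, so precision improves as $1/\sqrt{N}$. A minimal sketch (standard statistics, not code from the paper):

```python
import math

def asymmetry_uncertainty(A, N):
    """Binomial statistical uncertainty on a counting asymmetry
    A = (N_plus - N_minus) / N for N total events."""
    return math.sqrt((1.0 - A * A) / N)

# For a small asymmetry, ~3e6 events give sub-percent absolute
# precision, in line with the percent-level extractions quoted above.
sigma_A = asymmetry_uncertainty(0.0, 3e6)  # ~ 5.8e-4
```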

    Machine-Learning Performance on Higgs-Pair Production associated with Dark Matter at the LHC

    Di-Higgs production at the LHC associated with missing transverse energy is explored in the context of simplified models that generically parameterize a large class of models with heavy scalars and dark matter candidates. Our aim is to quantify the improvement that machine-learning tools offer over traditional cut-based analyses. In particular, boosted decision trees and neural networks are implemented to determine the parameter space that can be tested at the LHC by demanding four $b$-jets and large missing energy in the final state. We present a performance comparison between the two machine-learning algorithms, based on the maximum significance reached, by feeding them different sets of kinematic features corresponding to the LHC at a center-of-mass energy of 14 TeV. Both algorithms perform very similarly and substantially improve on traditional analyses, being sensitive to most of the parameter space considered for a total integrated luminosity of 1 ab$^{-1}$, with significances at the evidence level, and even at the discovery level, depending on the masses of the new heavy scalars. A more conservative approach with a 30% systematic uncertainty on the background has also been contemplated, again yielding very promising significances.
    Comment: 33 pages, 8 figures, 7 tables, 2 appendices.
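To show how a background systematic enters a significance estimate, a commonly used approximation (illustrative only; the paper's exact significance definition is not specified here) adds the relative background uncertainty in quadrature: $Z = S/\sqrt{B + (\epsilon_B B)^2}$. The toy numbers below are not from the paper:

```python
import math

def significance(s, b, sys_frac=0.0):
    """Approximate signal significance S / sqrt(B + (sys_frac * B)^2),
    where sys_frac is the relative systematic uncertainty on the
    background yield B."""
    return s / math.sqrt(b + (sys_frac * b) ** 2)

# toy yields: a 30% background systematic quickly dominates
# over the statistical term once B is sizeable
z_stat = significance(30.0, 100.0)       # = 3.0
z_sys = significance(30.0, 100.0, 0.30)  # ~ 0.95
```

This illustrates why the 30% systematic scenario mentioned above is the conservative one: the same signal yield delivers a much smaller significance once the background uncertainty is included.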

    Discovery and exclusion prospects for staus produced by heavy Higgs bosons decays at the LHC

    In a previous work we developed a search strategy for staus produced in decays of the heavy CP-even Higgs boson $H$ within the large-$\tan\beta$ regime of the minimal supersymmetric standard model (MSSM), in a scenario of large stau mixing. Here we study the performance of this search strategy by confronting it with the complementary mixing pattern, in which decays of both the CP-even and CP-odd heavy Higgs bosons contribute to the production of $\widetilde{\tau}_1\widetilde{\tau}_2^{*}+\mathrm{c.c.}$ pairs. Again, we focus on final states with two opposite-sign tau leptons and large missing transverse energy. We find that our proposed search strategy, although optimized for the large stau-mixing scenario, is still quite sensitive to the complementary mixing pattern. For instance, with a total integrated luminosity of only 100 fb$^{-1}$ we are able to exclude heavy Higgs masses above 850 GeV for average stau masses higher than 290 GeV. We also extend the results reported in the preceding work for the large-mixing scenario by including the exclusion limits at 100 fb$^{-1}$ and the prospects for both exclusion and discovery in a potential high-luminosity phase of the LHC (1000 fb$^{-1}$). Finally, we discuss the possibility of distinguishing the two mixing scenarios when they share the same relevant mass spectrum and both reach the discovery level with our search strategy.

    Potential discovery of staus through heavy Higgs boson decays at the LHC

    In this work we present a new search strategy for the discovery of staus at the LHC in the context of the minimal supersymmetric standard model. The search profits from the large $s$-channel $b$-quark-annihilation production of the heavy CP-even and CP-odd Higgs bosons ($H/A$), which can be attained in regions with $\tan\beta \gg 1$ that avoid the stringent $H/A \to \tau^+\tau^-$ searches via decays into stau pairs. We also focus on regions where the stau branching ratios are dominated by decays into a tau lepton and the lightest neutralino. Thus the experimental signature consists of final states made up of a tau-lepton pair plus large missing transverse energy. We take advantage of the large stau-pair production cross sections via heavy Higgs boson decays, which are one to two orders of magnitude larger than the usual electroweak production cross sections for staus. A set of basic cuts allows us to obtain significances of the signal over the SM backgrounds at the discovery level (5 standard deviations) in the next LHC run with a center-of-mass energy of 14 TeV and a total integrated luminosity of only 100 fb$^{-1}$.