16 research outputs found

    Machine-enhanced CP-asymmetries in the electroweak sector

    The violation of charge conjugation (C) and parity (P) symmetries is a requirement for the observed dominance of matter over antimatter in the Universe. As an established effect of beyond-the-Standard-Model physics, this could point towards additional CP violation in the Higgs-gauge sector. The phenomenological footprint of the associated anomalous couplings can be small, and designing measurement strategies with the highest sensitivity is therefore of the utmost importance in order to maximize the discovery potential of the Large Hadron Collider. There are, however, very few measurements of CP-sensitive observables in processes that probe the weak-boson self-interactions. In this article, we study the sensitivity to new sources of CP violation for a range of experimentally accessible electroweak processes, including Wγ production, WW production via photon fusion, electroweak Zjj production, electroweak ZZjj production, and electroweak W±W±jj production. We study simple angular observables as well as CP-sensitive observables constructed using the outputs of machine-learning algorithms. We find that the machine-learning-constructed CP-sensitive observables improve the sensitivity to CP-violating effects by up to a factor of five, depending on the process. We also find that inclusive Wγ and electroweak Zjj production have the potential to set the best possible constraints on certain CP-odd operators in the Higgs-gauge sector of dimension-six effective field theories.
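The asymmetries built from such CP-odd observables are typically counting asymmetries in the sign of the observable. A minimal sketch of that idea (the observable here is a generic placeholder for, e.g., a signed angle or a machine-learning output, not a specific quantity from the article):

```python
import numpy as np

def cp_asymmetry(obs, weights):
    """A = (N(+) - N(-)) / (N(+) + N(-)) for a CP-odd observable.
    A nonzero value signals CP violation (up to detector asymmetries)."""
    plus = weights[obs > 0].sum()
    minus = weights[obs < 0].sum()
    return (plus - minus) / (plus + minus)
```

For example, three unit-weight events with observable values (1, -1, 2) give A = 1/3.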

    Jet calibration, cross-section measurements and New Physics searches with the ATLAS experiment within the Run 2 data

    The Standard Model (SM) is the current theory used to describe the elementary particles and their fundamental interactions (except gravity). It successfully describes many observables that are measured precisely. Nevertheless, the SM cannot be the final theory of nature. My PhD within the ATLAS experiment put this model to the test using objects called jets to study the strong interaction (QCD). First, I contributed to an in-situ jet calibration method, the eta-intercalibration, which calibrates the energy scale of jets in the forward region of the detector relative to jets in the central region. The calibration is done as a function of pT and η. For each pT bin, the jet responses in the different η bins are found simultaneously through the minimization of a single function that includes all the dijet-event combinations in the different η regions. Fast variations in the jet response were seen in the previous calibration, which meant that the η binning was not fine enough. The problem with the old numerical minimization method is that it becomes slow and sometimes fails to converge when the number of bins/variables is large. I developed and implemented a new analytic minimization technique which is 1000 times faster and always converges. This improves the description of the peaks in the jet response as well as the closure of the method. Next, I compared the modeling of the third jet in different MC generators with data using the distributions of pT(j3)/pT(avg). The generators were found to describe the data increasingly poorly as the third jet becomes harder. Hence, we now use a tighter selection cut on the pT of the third jet. The Powheg+Pythia8 generator was found to give softer third jets, and this information was passed to the PMG group.
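The key property behind an analytic minimization of this kind is that a loss which is quadratic in the per-bin response factors has its minimum at the solution of a linear (normal-equations) system, which can be solved directly instead of iteratively. A minimal numpy sketch under that assumption (the loss form, weights, and scale-fixing convention here are illustrative, not the exact ATLAS eta-intercalibration function):

```python
import numpy as np

def solve_calibration(pairs, n_bins):
    """Minimize sum over dijet pairs (i, j) of w * (c_i*R_ij - c_j*R_ji)^2
    analytically. pairs: list of (i, j, R_ij, R_ji, w). The loss is
    quadratic in the c_k, so setting its gradient to zero gives A c = b."""
    A = np.zeros((n_bins, n_bins))
    b = np.zeros(n_bins)
    for i, j, rij, rji, w in pairs:
        A[i, i] += 2.0 * w * rij**2
        A[j, j] += 2.0 * w * rji**2
        A[i, j] -= 2.0 * w * rij * rji
        A[j, i] -= 2.0 * w * rij * rji
    # the loss is invariant under a global rescaling of all c_k, so fix
    # the reference (central) bin to c_0 = 1 to make the system regular
    A[0, :] = 0.0
    A[0, 0] = 1.0
    b[0] = 1.0
    return np.linalg.solve(A, b)
```

For instance, a single pair with responses R_01 = 0.9 and R_10 = 1.0 yields c = [1.0, 0.9], i.e. the forward bin is calibrated to a 10% lower response than the central reference.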
On another point, the pile-up profile μ in the different physics analyses is not necessarily the same as the one in the eta-intercalibration method. Hence, I verified the robustness of the calibration with respect to the pile-up conditions: by splitting the events into two groups on the basis of a reference value μ_ref, I verified that the two calibrations are compatible within the statistical and the μ- and NPV-dependent systematic uncertainties. Last, to maximize the available statistics, we use a combination of central and forward triggers. Using a simulation of the trigger efficiencies and prescales, I showed that the combination method used previously was biased, due to an improper calculation of the event's prescale weight. I replaced it with the inclusion method, which I verified to be unbiased in the weight calculation, although a residual bias remains when fitting distributions that have asymmetric error bars. After implementing all those improvements, I derived the nominal values of the eta-intercalibration correction and the full uncertainties for EMTopo and PFlow jets, which are used as part of the final Run II jet calibration. Second, I worked on a search for new physics using events with two jets. The SM predicts a smooth distribution of the dijet invariant mass, hence we search for a bump which could come from a new resonance. Since no significant bump is found, we set limits on signals predicted by Beyond-the-Standard-Model theories and on model-independent signals. For the latter, the limits were previously set on signals at the reconstructed level, which includes the detector effects (resolution and acceptance). Such limits are hard for theorists to reinterpret. I developed and implemented a new method to set limits on model-independent signals at truth level, using a folding technique to factorize the physics and detector effects.
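The inclusion method for combining prescaled triggers, mentioned above, can be sketched as follows, assuming its standard formulation: an accepted event is weighted by the inverse probability that at least one of the triggers that could have accepted it actually fired (the exact ATLAS implementation may differ in detail):

```python
def inclusion_weight(prescales):
    """Event weight for a combination of prescaled triggers.
    prescales: live prescale factors PS_k of every trigger whose
    thresholds this event satisfies. Each trigger fires with probability
    1/PS_k, so p(any fires) = 1 - prod(1 - 1/PS_k); weighting accepted
    events by 1/p recovers the unprescaled spectrum without bias."""
    p_none = 1.0
    for ps in prescales:
        p_none *= 1.0 - 1.0 / ps
    return 1.0 / (1.0 - p_none)
```

For example, two triggers each prescaled by 2 accept an event with probability 0.75, so each accepted event carries weight 4/3; an unprescaled trigger gives weight 1 exactly.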
The detector effects are described through an MC-based transfer matrix relating the truth and reconstructed observables. The truth distribution of the studied signal is convolved with the folding matrix to obtain the reconstructed distribution, which is then compared with data to set limits on the signal whose parameters are defined at truth level. Several checks were done to validate the new method. I checked the closure using signals for which the full simulation exists, by comparing the reconstructed distribution from the simulation with the reconstructed distribution from the folding method and verifying that the two are compatible within fluctuations. I also checked the effect on the limit values of changing the physics model used to evaluate the transfer matrix. The new folding technique was first included in the analysis using 2015+16 data. The analysis using the full Run II data also uses this technique, which is now being propagated to studies of other final states. Third, I developed a new physics analysis measuring the leading-jet differential cross-section as a function of transverse momentum and rapidity. Jet cross-section measurements are very important analyses used to test the SM and for indirect searches of BSM contributions. I performed all the development and implementation aspects of this analysis, from the data measurement to the evaluation of the theoretical predictions. This new observable was proposed by theorists claiming it provides benefits over the inclusive-jets observable: no physics correlations are lost and the choice of scales is more natural. On the other hand, this analysis is much more complex and challenging on both the measurement and the prediction sides. On the measurement side, the jet transverse momenta are measured at the reconstructed level, which includes the resolution effects.
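Numerically, the folding step described above is just a matrix product between the truth-level spectrum and the transfer matrix. A minimal sketch (the binning and matrix values are illustrative assumptions, not numbers from the analysis):

```python
import numpy as np

def fold(truth, transfer):
    """Fold a truth-level spectrum into the reconstructed level.
    transfer[r, t] = P(reco bin r | truth bin t); column sums below 1
    encode reconstruction inefficiency. Returns expected reco counts."""
    return transfer @ np.asarray(truth, float)

# toy example: two truth bins, mild migrations and some inefficiency
M = np.array([[0.90, 0.10],
              [0.05, 0.85]])
reco = fold([1000.0, 500.0], M)  # -> [950.0, 475.0]; compare to data
```

Because folding only ever multiplies forward, it avoids the regularization issues of inverting the transfer matrix, and the signal parameters stay defined at truth level.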
When unfolding to the truth level, we need to take into account jet-pT order flips: what is the leading jet at reconstructed level can become a sub-leading jet at truth level, and so on. To take this effect into account, I include the jet orders in the transfer matrix used in the unfolding, in addition to the jet pT and rapidity. I verified that at least the first two leading jets should be considered to reduce the bias from missing order flips to negligible values. In addition to the central values of the leading truth-jet distributions, I evaluated the systematic uncertainties related to the JES and JER uncertainties, the inefficiencies of the jet cleaning and the jet-time cut (used to veto out-of-time pile-up jets), the luminosity uncertainties and the unfolding bias. The statistical uncertainties are evaluated using the bootstrap method to properly evaluate the correlations. On the prediction side, the challenge is that the observable is IR sensitive. For 2-to-2 (tree and one-loop) diagrams, the event contains two partons of equal pT reconstructed as two jets of equal pT, and hence the leading-jet observable is degenerate. In contrast, for real-emission diagrams, the degeneracy is broken even for very soft radiation. This difference between diagrams leads to an IR sensitivity which gives large statistical uncertainties and fluctuations of the cross-section. To counter that, I implemented a regularization which considers the observable as degenerate if the following condition is met: pT(j1) − pT(j2) < … 3 (which is our selection). Comparing the LO to NLO precision predictions, we find the following. First, the cross-section values change by more than a factor of two in some bins when going from LO to NLO precision, which is large compared to the tensions that are observed.
In fact, the LO predictions are smaller than the data in the majority of bins, whereas the NLO ones are larger in most bins. It would be interesting, once the NNLO predictions are available, to see in which direction the variation goes. At the same time, the LO systematic uncertainties from the scale variations do not cover the difference between the LO and NLO predictions. This means that the uncertainties related to the missing higher orders are underestimated, which increases the tensions with data. I also evaluated the sub-leading-jet cross-sections at NLO precision and found that they become negative in the forward bins. This effect was seen previously by theorists in another context, but inclusive in rapidity; they also observed that the NNLO predictions do not have this problem. Last, I compared the data to truth-level MC distributions. The Sherpa generator gives the distributions closest to data, whereas Pythia and Powheg+Pythia are significantly above the data. To conclude, the data measurement results look consistent, while the theoretical predictions still need to be improved; most importantly, as per my checks, the NNLO predictions should be produced (to be done by the theorist who developed the code, since it is not yet public). Effectively, this measurement opens a series of interesting questions that are still to be addressed on the theory side.
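The bootstrap evaluation of statistical uncertainties mentioned in this analysis can be sketched as follows, assuming the common Poisson-bootstrap variant in which every event is reweighted by an independent Poisson(1) count per replica; because the same event enters all bins it contributes to with the same replica weight, bin-to-bin correlations are propagated automatically (this is a generic sketch, not the ATLAS implementation):

```python
import numpy as np

def bootstrap_cov(values, bins, n_replicas=500, seed=1):
    """Poisson-bootstrap covariance of a histogram: each replica draws
    an independent Poisson(1) weight per event, refills the histogram,
    and the covariance is taken across replicas."""
    rng = np.random.default_rng(seed)
    hists = []
    for _ in range(n_replicas):
        w = rng.poisson(1.0, size=len(values)).astype(float)
        hists.append(np.histogram(values, bins=bins, weights=w)[0])
    return np.cov(np.asarray(hists), rowvar=False)
```

For unweighted events the diagonal reproduces the Poisson variance (roughly the bin count), while the off-diagonal terms capture any correlations induced by shared events.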

    Jet calibration, cross-section measurements and searches for new physics with the Run 2 data of the ATLAS experiment

    The Standard Model is the current theory used to describe the elementary particles and their fundamental interactions (except gravity). My PhD within the ATLAS experiment put this model to the test using objects called jets, to study final-state particles that interact through the strong force. First, I contributed to a jet calibration method aiming at calibrating the energy scale of jets in the forward region of the detector with respect to the central region. I improved the calibration by making it faster and more precise. Next, I worked on a search for new physics using events with two jets. The Standard Model predicts a smooth distribution of the dijet invariant mass, hence we search for a bump which could come from a new particle. Since no significant bump is found, we set limits on signals predicted by Beyond-the-Standard-Model theories and on model-independent signals. Last, I developed a new physics analysis measuring the leading (highest transverse momentum) jet differential cross-section as a function of transverse momentum and rapidity. The challenge was to factorize the detector effects (resolution and acceptance) from the observable, which I did using a new unfolding technique. I also worked on the calculation of the theoretical predictions, which was very challenging and required the implementation of special regularizations. The measurement and the predictions are then compared, and tensions are observed due to the difficulties of the theoretical prediction calculation.

    Machine-enhanced CP-asymmetries in the Higgs sector

    Improving the sensitivity to CP-violation in the Higgs sector is one of the pillars of the precision Higgs programme at the Large Hadron Collider. We present a simple method that allows CP-sensitive observables to be directly constructed from the output of neural networks. We show that these observables have improved sensitivity to CP-violating effects in the production and decay of the Higgs boson, when compared to the use of traditional angular observables alone. The kinematic correlations identified by the neural networks can be used to design new analyses based on angular observables, with a similar improvement in sensitivity.
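One simple way to obtain a CP-odd observable from a generic network output, in the spirit described above, is to antisymmetrise the output under CP conjugation of the inputs. This is an illustrative construction under the assumption that CP conjugation acts by flipping the sign of the CP-odd (signed angular) input features; it is not necessarily the paper's exact recipe:

```python
import numpy as np

def cp_odd_score(net, features, signed_cols):
    """O(x) = f(x) - f(x_CP): antisymmetrise any network output f under
    CP conjugation, implemented here as a sign flip of the CP-odd
    input columns. O is CP-odd by construction: O(x_CP) = -O(x)."""
    flipped = features.copy()
    flipped[:, signed_cols] *= -1.0
    return net(features) - net(flipped)

# toy "network": any function of the inputs works
net = lambda X: X[:, 0] + X[:, 1] ** 2
X = np.array([[0.3, 1.0], [-0.2, 2.0]])
scores = cp_odd_score(net, X, [0])  # -> approximately [0.6, -0.4]
```

Any CP-even dependence (here the x1² term) cancels in the difference, so the score isolates the CP-odd kinematic correlations the network has learned.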

    datasets

    Simulated events used in the paper “Machine-enhanced CP-asymmetries in the Higgs sector” (https://arxiv.org/abs/2112.05052). Datasets are available for the 4l and VBF Higgs processes, for the Standard Model prediction and for the interference between the SM amplitude and the CP-odd dimension-six amplitude. A description of the data available in each dataset is given in the README.md file.

    Intramolecular Charge-Transfer Dynamics in Benzodifuran-Based Triads

    A facile and efficient approach for the synthesis of new conjugated donor-π-acceptor (D-π-A) chromophores has been developed, in which benzodifuran (BDF) and/or triphenylamine (TPA) units are the donor moieties, linked by ethylenic bridges to electron-deficient anthraquinone (AQ) and 11,11,12,12-tetracyano-9,10-anthraquinodimethane (TCAQ) as the acceptor moieties. The resultant triads, either with a symmetric A-D-A or an asymmetric D’-D-A structure, show intense absorption bands in the visible spectral region due to efficient intramolecular charge transfer (ICT) from the HOMO localized on the BDF core to the LUMO localized on the AQ or the TCAQ unit. Electronic interactions between these redox-active components were studied by a combination of cyclic voltammetry, spectroelectrochemistry, UV-visible and ultrafast transient absorption spectroscopy. Analysis of the femtosecond excited-state dynamics reveals that all triads undergo a rapid charge recombination process which occurs within a few picoseconds, indicating that ethylenic linkers can facilitate electron delocalization among BDF and AQ/TCAQ units and thus impart effective electronic interactions between them.

    Measurements of tt̄ differential cross-sections of highly boosted top quarks decaying to all-hadronic final states in pp collisions at √s = 13 TeV using the ATLAS detector

    Measurements are made of differential cross-sections of highly boosted pair-produced top quarks as a function of top-quark and tt̄-system kinematic observables using proton-proton collisions at a center-of-mass energy of √s = 13 TeV. The data set corresponds to an integrated luminosity of 36.1 fb⁻¹, recorded in 2015 and 2016 with the ATLAS detector at the CERN Large Hadron Collider. Events with two large-radius jets in the final state, one with transverse momentum pT > 500 GeV and a second with pT > 350 GeV, are used for the measurement. The top-quark candidates are separated from the multijet background using jet substructure information and association with a b-tagged jet. The measured spectra are corrected for detector effects to a particle-level fiducial phase space and a parton-level limited phase space, and are compared to several Monte Carlo simulations by means of calculated χ² values. The cross-section for tt̄ production in the fiducial phase-space region is 292 ± 7 (stat) ± 76 (syst) fb, to be compared to the theoretical prediction of 384 ± 36 fb.
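The χ² comparison of a measured spectrum to a prediction is, in its usual correlated-uncertainty form, a quadratic form with the measurement covariance matrix. A minimal sketch (the spectra and covariance used in the example are illustrative, not values from the measurement):

```python
import numpy as np

def chi2(data, prediction, cov):
    """chi^2 = (d - p)^T C^{-1} (d - p), with C the covariance of the
    measured spectrum (statistical + systematic, including correlations)."""
    r = np.asarray(data, float) - np.asarray(prediction, float)
    return float(r @ np.linalg.solve(cov, r))
```

With uncorrelated unit uncertainties this reduces to the familiar sum of squared pulls; off-diagonal covariance terms tighten or loosen the comparison depending on the sign of the correlations.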

    Observation of WWW Production in pp Collisions at √s = 13 TeV with the ATLAS Detector

    This Letter reports the observation of WWW production and a measurement of its cross section using 139 fb⁻¹ of proton-proton collision data recorded at a center-of-mass energy of 13 TeV by the ATLAS detector at the Large Hadron Collider. Events with two same-sign leptons (electrons or muons) and at least two jets, as well as events with three charged leptons, are selected. A multivariate technique is then used to discriminate between signal and background events. Events from WWW production are observed with a significance of 8.0 standard deviations, where the expectation is 5.4 standard deviations. The inclusive WWW production cross section is measured to be 820 ± 100 (stat) ± 80 (syst) fb, approximately 2.6 standard deviations from the predicted cross section of 511 ± 18 fb calculated at next-to-leading-order QCD and leading-order electroweak accuracy.
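For context, the expected significance of a counting observation like this is often estimated with the asymptotic Asimov formula; this is a standard approximation that ignores systematic uncertainties, not necessarily the exact statistical procedure used in the Letter:

```python
import math

def asimov_significance(s, b):
    """Median discovery significance Z for s expected signal events over
    b expected background events, using the asymptotic (Asimov)
    counting-experiment formula without systematic uncertainties."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))
```

For s much smaller than b this reduces to the familiar s/√b estimate; for example, s = 10 over b = 100 gives Z ≈ 0.98, close to the naive value of 1.0.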