Jet calibration, cross-section measurements and New Physics searches with the ATLAS experiment within the Run 2 data

Abstract

The Standard Model (SM) is the current theory describing the elementary particles and their fundamental interactions (except gravity). It successfully describes many precisely measured observables. Nevertheless, the SM cannot be the final theory of nature. My PhD work within the ATLAS experiment put this model under test using objects called jets, in order to study the strong interaction (QCD).

First, I contributed to an in-situ jet calibration method, the eta-intercalibration, which calibrates the energy scale of jets in the forward region of the detector relative to jets in the central region. The calibration is derived as a function of $p_\text{T}$ and $\eta$. For each $p_\text{T}$ bin, the jet responses in the different $\eta$ bins are determined simultaneously by minimizing a single function that includes all the dijet event combinations across the $\eta$ regions. Rapid variations in the jet response were observed with the previous calibration, indicating that the $\eta$ binning was not fine enough. The problem with the old numerical minimization method is that it becomes slow, and sometimes fails to converge, when the number of bins/variables is large. I developed and implemented a new analytic minimization technique which is 1000 times faster and always converges (a schematic sketch is given at the end of this part). This improves the description of the peaks in the jet response as well as the closure of the method.

Next, I compared the modelling of the third jet in different MC generators with data, using distributions of $p_\text{T}^{j3}/p_\text{T}^\text{avg}$. The generators were found to describe the data less and less well the harder the third jet is; hence, we now use a tighter selection cut on the $p_\text{T}$ of the third jet. The Powheg+Pythia8 generator is found to produce softer third jets, and this information was passed to the PMG group.

In addition, the pile-up profile $\mu$ in a given physics analysis is not necessarily the same as in the eta-intercalibration method, so I verified the robustness of the calibration with respect to the pile-up conditions. Splitting the events into two groups on the basis of a reference value $\mu_\text{ref}$, I verified that the two calibrations are compatible within the statistical uncertainties and the $\mu$- and $N_\text{PV}$-dependent systematic uncertainties.

Last, to maximize the available statistics, we use a combination of central and forward triggers. Using a simulation of the trigger efficiencies and prescales, I showed that the combination method used previously was biased, due to an improper calculation of the prescale weight of the event. I replaced it with the inclusion method, which I verified to have no bias in the weight calculation, although a residual bias remains when fitting distributions with asymmetric error bars. After implementing all these improvements, I derived the nominal values of the eta-intercalibration correction and the full uncertainties for EMTopo and PFlow jets, which are used as part of the final Run 2 jet calibration.
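The analytic minimization exploits the fact that the intercalibration loss is quadratic in the per-bin calibration factors, so its gradient is linear and the minimum follows from one linear solve instead of an iterative search. Below is a minimal Python sketch, assuming a loss of the form $\sum_{i<j} w_{ij}\,(c_i R_{ij} - c_j)^2$ plus a penalty term fixing the overall scale; the exact loss and constraint used in the analysis may differ.

```python
import numpy as np

def intercalibrate(R, w):
    """Analytic eta-intercalibration for one pT bin (illustrative sketch).

    R[i, j]: average relative jet response between eta bins i and j,
             measured from dijet events with one jet in each bin.
    w[i, j]: weight of that measurement (e.g. 1 / sigma_ij**2).

    The loss L(c) = sum_{i<j} w_ij (c_i R_ij - c_j)**2 + K (mean(c) - 1)**2
    is quadratic in the calibration factors c, so grad L = 0 is a linear
    system A c = b, solved in one step with no iterative minimizer.
    """
    n = R.shape[0]
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if w[i, j] == 0.0:
                continue
            # contribution of the (i, j) dijet combination to the gradient
            A[i, i] += w[i, j] * R[i, j] ** 2
            A[i, j] -= w[i, j] * R[i, j]
            A[j, i] -= w[i, j] * R[i, j]
            A[j, j] += w[i, j]
    K = float(n)           # strength of the scale-fixing penalty
    A += K / n**2          # the penalty adds K/n^2 to every matrix entry
    b = np.full(n, K / n)  # ...and K/n to the right-hand side
    return np.linalg.solve(A, b)
```

Since building and solving this $n \times n$ system stays cheap even for many bins, a finer $\eta$ binning becomes affordable, which is what allows the rapid variations of the jet response to be resolved.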
Second, I worked on a search for new physics using events with two jets. The SM predicts a smooth distribution of the dijet invariant mass, so we search for a bump that could come from a new resonance. Since no significant bump is found, we set limits on signals predicted by Beyond the Standard Model (BSM) theories, or on model-independent signals. For the latter, the limits were previously set on signals at reconstructed level, which includes the detector effects (resolution and acceptance); such limits are hard for theorists to reinterpret. I developed and implemented a new method to set limits on model-independent signals at truth level, using a folding technique to factorize physics and detector effects. The detector effects are described through an MC-based transfer matrix relating the truth and reconstructed observables. The truth distribution of the studied signal is convoluted with this folding matrix to obtain the reconstructed distribution, which is then compared with data to set limits on a signal whose parameters are defined at truth level. Several checks were done to validate the new method. I checked the closure using signals for which the full simulation exists, comparing the reconstructed distribution from the simulation with the one obtained from the folding method and verifying that the two are compatible within fluctuations. I also checked the effect on the limit values of changing the physics model used to evaluate the transfer matrix. The folding technique was first included in the analysis using the 2015+2016 data; the analysis using the full Run 2 data also uses it, and it is now being propagated to studies of other final states.
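At its core, the folding step is a single matrix product between the truth-level signal spectrum and the transfer matrix. A minimal Python sketch follows; the matrix convention used here (columns giving the probability for a truth bin to be reconstructed in each reco bin, with the reconstruction efficiency absorbed in the column sums) is an assumption for illustration.

```python
import numpy as np

def fold(truth_signal, transfer):
    """Convolute a truth-level signal with the MC-based transfer matrix.

    transfer[r, t]: probability for an event generated in truth bin t to
    be reconstructed in reco bin r (columns may sum to less than 1 when
    the reconstruction efficiency is folded in -- assumed convention).
    """
    return transfer @ truth_signal

def closure_chi2(reco_from_simulation, reco_from_folding, sigma):
    """Closure check on a fully simulated signal: the folded truth
    spectrum should agree with the directly reconstructed spectrum
    within statistical fluctuations."""
    pulls = (reco_from_simulation - reco_from_folding) / sigma
    return float(np.sum(pulls ** 2))
```

Because only the truth-level signal changes from one hypothesis to the next, the same transfer matrix serves any signal shape, which is what makes the resulting limits straightforward for theorists to reinterpret.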
Third, I developed a new physics analysis measuring the leading-jet differential cross-section as a function of transverse momentum and rapidity. Jet cross-section measurements are important tests of the SM and serve as indirect searches for BSM contributions. I carried out all the development and implementation aspects of this analysis, from the data measurement to the evaluation of the theoretical predictions. This new observable was proposed by theorists as providing benefits over the inclusive-jet observable: no physics correlations are lost and the choice of scales is more natural. On the other hand, the analysis is much more complex and challenging, on both the measurement and the prediction sides.

On the measurement side, the jet transverse momenta are measured at reconstructed level, which includes resolution effects. When unfolding to truth level, we need to take into account flips of the jet $p_\text{T}$ ordering: the leading jet at reconstructed level can become a sub-leading jet at truth level, and so on. To account for this effect, I include the jet order in the transfer matrix used in the unfolding, in addition to the jet $p_\text{T}$ and rapidity. I verified that at least the two leading jets must be considered to reduce the bias from missing order flips to a negligible level. In addition to the central values of the leading truth-jet distributions, I evaluated the systematic uncertainties related to the JES and JER uncertainties, the inefficiencies of the jet cleaning and of the jet-time cut (used to veto out-of-time pile-up jets), the luminosity uncertainty and the unfolding bias. The statistical uncertainties are evaluated with the bootstrap method, in order to properly account for correlations.

On the prediction side, the challenge is that the observable is infrared (IR) sensitive. For 2-to-2 (tree-level and one-loop) diagrams, the event contains two partons of equal $p_\text{T}$, reconstructed as two jets of equal $p_\text{T}$, so the leading-jet observable is degenerate; for real-emission diagrams, in contrast, the degeneracy is broken even by very soft radiation. This difference between diagrams leads to an IR sensitivity which produces large statistical uncertainties and fluctuations in the cross-section. To counter it, I implemented a regularization which considers the observable as degenerate when $p_\text{T}^{j1} - p_\text{T}^{j2} < \Delta$, with $\Delta$ a small threshold, for jets within $|y| < 3$ (which is our selection); a short illustrative sketch is given below. Comparing the LO and NLO predictions, we find the following. First, the cross-section values change by more than a factor of two in some bins when going from LO to NLO, which is large compared to the tensions observed with data. In fact, the LO predictions are smaller than the data in the majority of bins, whereas the NLO ones are larger in most bins; it will be interesting, once the NNLO predictions are available, to see in which direction they move. At the same time, the LO systematic uncertainties from the scale variations do not cover the difference between the LO and NLO predictions, which means that the uncertainties related to the missing higher orders are under-evaluated and increase the tensions with data. I also evaluated the sub-leading jet cross-sections at NLO precision and found that they become negative in the forward bins. This effect was seen previously by theorists in another context, although inclusive in rapidity; they also observed that the NNLO predictions do not have this problem. Last, I compared the data to truth-level MC distributions: the Sherpa generator gives the distributions closest to the data, whereas Pythia and Powheg+Pythia are significantly above it.

To conclude, the data measurement results look consistent, while the theoretical predictions still need to be improved, most importantly, as per my checks, by producing the NNLO predictions (to be done by the theorist who developed the code, since it is not yet public). Effectively, this measurement opens a series of interesting questions that are still to be addressed on the theory side.
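As an illustration of the regularization mentioned above, here is a minimal Python sketch. The prescription of letting the two degenerate jets each fill the observable with half the event weight is an assumption about how the averaging is performed, and `delta` stands for the threshold $\Delta$, whose value is not specified here.

```python
import numpy as np

def fill_leading_jet(hist, edges, jets_pt, weight, delta):
    """Fill the leading-jet pT histogram for one fixed-order event.

    jets_pt: pT-ordered jet transverse momenta of the event.
    delta:   threshold below which jets 1 and 2 are treated as degenerate.

    If pT(j1) - pT(j2) < delta, the two hardest jets are considered
    indistinguishable, so each fills the observable with half the event
    weight (assumed averaging prescription); otherwise only the hardest
    jet fills it. This prevents arbitrarily soft real emissions from
    flipping the leading jet between neighbouring bins, which is the
    source of the large fluctuations at fixed order.
    """
    if len(jets_pt) >= 2 and jets_pt[0] - jets_pt[1] < delta:
        fills = [(jets_pt[0], 0.5 * weight), (jets_pt[1], 0.5 * weight)]
    else:
        fills = [(jets_pt[0], weight)]
    for pt, wgt in fills:
        i = np.searchsorted(edges, pt, side="right") - 1
        if 0 <= i < len(hist):
            hist[i] += wgt
```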
