Probing the neutrino mass hierarchy with CMB weak lensing
We forecast constraints on cosmological parameters with primary CMB
anisotropy information and weak lensing reconstruction with a future
post-Planck CMB experiment, the Cosmic Origins Explorer (COrE), using
oscillation data on the neutrino mass splittings as prior information. Our MCMC
simulations in flat models with a non-evolving equation-of-state of dark energy
w give typical 68% upper bounds on the total neutrino mass of 0.136 eV and
0.098 eV for the inverted and normal hierarchies respectively, assuming the
total summed mass is close to the minimum allowed by the oscillation data for
the respective hierarchies (0.10 eV and 0.06 eV). Including information from
future baryon acoustic oscillation measurements with the complete BOSS, Type Ia
supernovae distance moduli from WFIRST, and a realistic prior on the Hubble
constant, these upper limits shrink to 0.118 eV and 0.080 eV for the inverted
and normal hierarchies, respectively. Addition of these distance priors also
yields percent-level constraints on w. We find tension between our MCMC results
and the results of a Fisher matrix analysis, most likely due to a strong
geometric degeneracy between the total neutrino mass, the Hubble constant, and
w in the unlensed CMB power spectra. If the minimal-mass, normal hierarchy were
realised in nature, the inverted hierarchy should be disfavoured by the full
data combination at typically greater than the 2-sigma level. For the
minimal-mass inverted hierarchy, we compute the Bayes factor between the two
hierarchies for various combinations of our forecast datasets, and find that
the future probes considered here should be able to provide 'strong' evidence
(odds ratio 12:1) for the inverted hierarchy. Finally, we consider potential
biases of the other cosmological parameters from assuming the wrong hierarchy
and find that all biases on the parameters are below their 1-sigma marginalised
errors.
Comment: 16 pages, 13 figures; minor changes to match the published version, references added.
Introduction to QCD
These lectures were originally given at TASI and are directed at a level
suitable for graduate students in High Energy Physics. They are intended to
give an introduction to the theory and phenomenology of quantum chromodynamics
(QCD), focusing on collider physics applications. The aim is to bring the
reader to a level where informed decisions can be made concerning different
approaches and their uncertainties. The material is divided into five main
areas: 1) fundamentals, 2) fixed-order perturbative QCD, 3) Monte Carlo event
generators and parton showers, 4) Matching at Leading and Next-to-Leading
Order, and 5) Soft QCD physics.
Comment: Lecture notes from a course given at TASI 2012. Last update: July 2017. 85 pages, including index at the back. arXiv admin note: text overlap with arXiv:1104.286
Event generation on quantum computers
The synthesis of high quality simulated data from event generators is essential in the search for new physics at collider experiments. Modern event generator algorithms use Monte Carlo processes to simulate the evolution of an event from the collision of high energy particles to the formation of long-lived particles. One of the major building blocks of the event generation process is the QCD parton shower. However, despite being a key aspect of modern event generation, the core algorithms which simulate the showering process have remained unchanged since the 1980s, and will become a limiting factor as we move to an era of higher energy and higher luminosity experiments.
With the rapid development of quantum computation, dedicated algorithms are required which exploit the potential that quantum computing provides to address problems in high energy physics. In this thesis, we present three novel quantum algorithms for the simulation of a QCD parton shower. The first algorithm provides a proof-of-principle, classical Monte Carlo inspired approach with the ability to simulate two shower steps of a collinear QCD model. By exploiting the compact circuit architecture of the quantum walk, one can drastically reduce the quantum resources required to simulate a shower step. The second algorithm shows that, in this framework, the quantum parton shower can be extended to simulate realistic shower depths whilst using fewer quantum resources. Finally, the third algorithm utilises a discrete QCD approach to parton showering to include kinematics in the shower, simulating a dipole cascade. In this construction, the algorithm has achieved the first data comparison between synthetic data produced using a Noisy Intermediate-Scale Quantum (NISQ) device and "real-life" archival collider data from the Large Electron-Positron (LEP) collider. The three algorithms represent the development of quantum algorithms for the simulation of parton showers, acting as a first step towards a fully quantum simulation of a high energy collision event.
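As background to the Monte Carlo parton-shower evolution described in the abstract above, here is a minimal classical sketch of how emission scales are typically generated: one inverts a Sudakov-style no-emission probability to draw successively lower scales until a cutoff is reached. The toy form Delta(t'; t) = (t'/t)**alpha, the function name, and all parameter values are illustrative assumptions, not the algorithms developed in the thesis.

```python
import random

def toy_shower(t_start, t_cut, alpha=0.2, seed=1):
    """Generate a decreasing sequence of emission scales for a toy shower.

    Uses the standard Monte Carlo inversion trick: if the no-emission
    (Sudakov-like) probability between scales is Delta(t'; t) = (t'/t)**alpha,
    then setting Delta = r for uniform r in (0, 1) gives t' = t * r**(1/alpha).
    alpha plays the role of an effective coupling (illustrative value).
    """
    random.seed(seed)
    t = t_start
    emissions = []
    while True:
        r = random.random()
        t = t * r ** (1.0 / alpha)  # invert the Sudakov factor for the next scale
        if t < t_cut:               # evolution stops below the shower cutoff
            break
        emissions.append(t)
    return emissions
```

Each call yields a strictly decreasing chain of scales, mimicking the ordered emissions of a parton shower; a quantum algorithm replaces this sequential sampling with amplitude evolution on a register of qubits.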
Les Houches Guidebook to Monte Carlo Generators for Hadron Collider Physics
Recently the collider physics community has seen significant advances in the
formalisms and implementations of event generators. This review is a primer of
the methods commonly used for the simulation of high energy physics events at
particle colliders. We provide brief descriptions, references, and links to the
specific computer codes which implement the methods. The aim is to provide an
overview of the available tools, allowing the reader to ascertain which tool is
best for a particular application, but also making clear the limitations of
each tool.
Comment: 49 pages, LaTeX. Compiled by the Working Group on Quantum ChromoDynamics and the Standard Model for the Workshop "Physics at TeV Colliders", Les Houches, France, May 2003. To appear in the proceedings.
A Data-Analysis and Sensitivity-Optimization Framework for the KATRIN Experiment
Presently under construction, the Karlsruhe TRitium Neutrino (KATRIN) experiment is the next-generation tritium beta-decay experiment to perform a direct kinematical measurement of the electron neutrino mass with an unprecedented sensitivity of 200 meV (90% C.L.). This thesis describes the implementation of a consistent data analysis framework, addressing technical aspects of the data-taking process and statistical challenges of a neutrino mass estimation from the beta-decay electron spectrum.