Topological Phases: An Expedition off Lattice
Motivated by the goal to give the simplest possible microscopic foundation
for a broad class of topological phases, we study quantum mechanical lattice
models where the topology of the lattice is one of the dynamical variables.
However, a fluctuating geometry can remove the separation between the system
size and the range of local interactions, which is important for topological
protection and ultimately the stability of a topological phase. In particular,
it can open the door to a pathology, which has been studied in the context of
quantum gravity and goes by the name of `baby universe'. Here we discuss three
distinct approaches to suppressing these pathological fluctuations. We
complement this discussion by applying Cheeger's theory relating the geometry
of manifolds to their vibrational modes to study the spectra of Hamiltonians.
In particular, we present a detailed study of the statistical properties of
loop gas and string net models on fluctuating lattices, both analytically and
numerically. Comment: 38 pages, 22 figures
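The Cheeger bound invoked in this abstract can be stated compactly. The inequality below is the standard statement for a closed Riemannian manifold (Cheeger, 1970), given here for reference rather than as a result specific to the models above:

```latex
% Cheeger's inequality: the first nonzero eigenvalue \lambda_1 of the
% Laplace--Beltrami operator on a closed Riemannian manifold M is bounded
% below by an isoperimetric (Cheeger) constant h(M):
\lambda_1(M) \;\ge\; \frac{h(M)^2}{4},
\qquad
h(M) \;=\; \inf_{S}\,
  \frac{\operatorname{Area}(S)}{\min\bigl(\operatorname{Vol}(M_1),\,\operatorname{Vol}(M_2)\bigr)},
```

where the infimum runs over hypersurfaces $S$ that cut $M$ into two pieces $M_1$ and $M_2$; a geometry with a narrow neck (small $h$) therefore forces low-lying vibrational modes.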
Top-quark mass effects in double and triple Higgs production in gluon-gluon fusion at NLO
The observation of double and triple scalar boson production at hadron
colliders could provide key information on the Higgs self-couplings and the
potential. As for single Higgs production, the largest rates for multiple Higgs
production come from gluon-gluon fusion processes mediated by a top-quark loop.
However, at variance with single Higgs production, top-quark mass and width
effects from the loops cannot be neglected. Computations including the exact
top-quark mass dependence are only available at the leading order, and
currently predictions at higher orders are obtained by means of approximations
based on the Higgs-gluon effective field theory (HEFT). In this work we present
a reweighting technique that, starting from events obtained via the MC@NLO
method in the HEFT, allows one to include exactly the top-quark mass and width
effects coming from one- and two-loop amplitudes. We describe our approach and
apply it to double Higgs production at NLO in QCD, computing the needed
one-loop amplitudes and using approximations for the unknown two-loop ones. The
results are compared with other approaches used in the literature, and we argue
that our procedure provides more accurate predictions both for distributions
and for total rates. As a novel application of our procedure, we present
predictions at NLO in QCD for triple Higgs production at hadron colliders.
Comment: 24 pages, 8 figures
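The reweighting idea described in this abstract can be sketched generically: each HEFT event keeps its phase-space point but has its weight rescaled by the ratio of full-theory to effective-theory squared amplitudes. This is a minimal illustration, not the authors' implementation; `me_full` and `me_heft` are hypothetical callables standing in for the exact and HEFT squared matrix elements.

```python
def reweight(events, me_full, me_heft):
    """Rescale each event weight by the full/HEFT squared-amplitude ratio.

    events   -- list of (phase_space_point, weight) pairs from a HEFT sample
    me_full  -- callable: phase-space point -> |M_full|^2 (exact top mass/width)
    me_heft  -- callable: phase-space point -> |M_HEFT|^2
    """
    reweighted = []
    for point, w_heft in events:
        # Event-by-event ratio of squared amplitudes at the same kinematics.
        ratio = me_full(point) / me_heft(point)
        reweighted.append((point, w_heft * ratio))
    return reweighted
```

Because the ratio is evaluated at fixed kinematics, the procedure preserves the phase-space distribution of the input sample while correcting its normalization point by point.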
Evaluating kernels on Xeon Phi to accelerate Gysela application
This work describes the challenges presented by porting parts of the Gysela
code to the Intel Xeon Phi coprocessor, as well as techniques used for
optimization, vectorization and tuning that can be applied to other
applications. We evaluate the performance of some generic micro-benchmarks on
the Phi versus Intel Sandy Bridge. Several interpolation kernels useful for the
Gysela application are analyzed and their performance is reported. Some
memory-bound and compute-bound kernels are accelerated by a factor of 2 on the
Phi device compared to the Sandy Bridge architecture. Nevertheless, it is hard,
if not impossible, to reach a large fraction of the peak performance on the Phi
device, especially for real-life applications such as Gysela. A collateral
benefit of this optimization and tuning work is that the execution time of
Gysela (using 4D advections) has
decreased on a standard architecture such as Intel Sandy Bridge. Comment:
submitted to ESAIM proceedings for the CEMRACS 2014 summer school, reviewed
version
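The interpolation kernels mentioned in this abstract are the kind of code where vectorization matters: the same arithmetic applied across a whole array of evaluation points, with no data-dependent branches. The Gysela kernels themselves are cubic splines in compiled code; the sketch below is only a simplified linear-interpolation analogue in NumPy, illustrating the branch-free, whole-array style that vector units (on Xeon Phi or Sandy Bridge alike) can exploit.

```python
import numpy as np

def interp_linear(grid_x, grid_f, x):
    """Vectorized linear interpolation of f on a uniform grid.

    grid_x -- uniformly spaced abscissas (1-D array)
    grid_f -- function values at grid_x
    x      -- array of evaluation points inside [grid_x[0], grid_x[-1]]
    """
    dx = grid_x[1] - grid_x[0]
    # Left cell index for every point at once; clip keeps indices in range.
    idx = np.clip(((x - grid_x[0]) / dx).astype(int), 0, len(grid_x) - 2)
    # Fractional position inside each cell, again computed array-wide.
    t = (x - grid_x[idx]) / dx
    return (1.0 - t) * grid_f[idx] + t * grid_f[idx + 1]
```

Every operation maps to elementwise array arithmetic plus a gather, which is the pattern that auto-vectorizers and wide SIMD units handle well; the memory-bound gather is also why such kernels rarely reach peak floating-point throughput.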
AutoParallel: A Python module for automatic parallelization and distributed execution of affine loop nests
Recent improvements in programming languages, programming models, and
frameworks have focused on abstracting users away from many programming issues.
Among other features, recent programming frameworks include simpler syntax,
automatic memory management and garbage collection, simplified code reuse
through library packages, and easily configurable tools for deployment. For
instance, Python has risen to the top of the list of programming languages due
to the simplicity of its syntax, while still achieving good performance even
though it is an interpreted language. Moreover, the community has helped to
develop a large number of libraries and modules, tuning them to obtain great
performance.
However, there is still room for improvement in shielding users from dealing
directly with distributed and parallel computing issues. This paper proposes
and evaluates AutoParallel, a Python module that automatically finds an
appropriate task-based parallelization of affine loop nests and executes them
in parallel on a distributed computing infrastructure. This parallelization can
also include the building of data blocks to increase task granularity and thus
achieve good execution performance. Moreover, AutoParallel is based on
sequential programming and requires only a small annotation in the form of a
Python decorator, so that users with little parallel-programming experience can
scale an application up to hundreds of cores. Comment: Accepted to the 8th
Workshop on Python for High-Performance and Scientific Computing (PyHPC 2018)
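The annotation style this abstract describes can be sketched as follows. The decorator name and its no-op body are assumptions for illustration, not the actual AutoParallel API; the point is that the user writes an ordinary sequential affine loop nest (loop bounds and array subscripts affine in the loop indices) and adds one decorator line.

```python
def parallel(**options):
    """Hypothetical stand-in for an AutoParallel-style decorator.

    A real implementation would analyze the decorated function's affine
    loop nest and generate a task-based parallel version; this no-op
    version simply returns the function unchanged.
    """
    def wrap(fn):
        return fn
    return wrap

@parallel()  # single annotation on otherwise sequential code
def matmul(a, b, c, n):
    # Affine loop nest: bounds and subscripts are affine in i, j, k,
    # so the iteration space and dependences can be analyzed statically.
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
```

Because the decorated code remains valid sequential Python, it still runs unmodified without the framework, which is what makes the approach accessible to users without parallel-programming experience.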