Automation of the matrix element reweighting method
Matrix element reweighting is a powerful experimental technique widely
employed to maximize the amount of information that can be extracted from a
collider data set. We present a procedure that automatically evaluates
the weights for any process of interest in the standard model and beyond. Given
the initial, intermediate, and final-state particles, and the transfer functions
for the final physics objects (such as leptons, jets, and missing transverse
energy), our algorithm creates a phase-space mapping designed to efficiently
perform the integration of the squared matrix element and the transfer
functions. The implementation builds on MadGraph; it is fully automated and
publicly available. A few sample applications are presented
that show the capabilities of the code and illustrate the possibilities for new
studies that such an approach opens up.
Comment: 41 pages, 21 figures
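As a hedged illustration of the idea only (not the MadGraph-based implementation described above), the weight of an observed event can be sketched as a Monte Carlo integral of a squared matrix element against a transfer function over the parton-level phase space. The Breit-Wigner resonance, Gaussian smearing width, and one-dimensional integration range below are invented for the example:

```python
import math
import random

def weight(x_obs, msq, transfer, n_samples=200_000, y_range=(0.0, 200.0), seed=1):
    """Toy matrix-element weight: flat-measure Monte Carlo estimate of
    w(x) = integral of |M(y)|^2 * W(x|y) dy over one parton-level variable y."""
    rng = random.Random(seed)
    lo, hi = y_range
    total = 0.0
    for _ in range(n_samples):
        y = rng.uniform(lo, hi)              # flat phase-space sampling
        total += msq(y) * transfer(x_obs, y)
    return (hi - lo) * total / n_samples

def msq(y, m=91.0, gamma=2.5):
    """Invented squared matrix element: a Breit-Wigner resonance."""
    return 1.0 / ((y * y - m * m) ** 2 + (m * gamma) ** 2)

def transfer(x, y, sigma=10.0):
    """Gaussian transfer function modelling detector smearing of y into x."""
    return math.exp(-0.5 * ((x - y) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

w_on = weight(91.0, msq, transfer)    # observation on the resonance peak
w_off = weight(150.0, msq, transfer)  # observation far off the peak
```

A signal-like observation receives a much larger weight than an off-resonance one; a real implementation builds an adaptive phase-space mapping so that the integral converges efficiently in many dimensions, which is exactly the hard part the paper automates.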
Necessary conditions for variational regularization schemes
We study variational regularization methods in a general framework, more
precisely those methods that use a discrepancy and a regularization functional.
While several sets of sufficient conditions for obtaining a regularization
method are known, we start with an investigation of the converse question:
what might necessary conditions for a variational method to provide a
regularization method look like? To this end, we formalize the notion of a
variational scheme and begin by comparing three different instances of
variational methods. Then we focus on the data space model and investigate the
role and interplay of the topological structure, the convergence notion and the
discrepancy functional. In particular, we deduce necessary conditions for the
discrepancy functional to fulfill usual continuity assumptions. The results are
applied to discrepancy functionals given by Bregman distances, and in
particular to the Kullback-Leibler divergence.
Comment: To appear in Inverse Problems
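A minimal sketch of the kind of variational scheme discussed above, a Kullback-Leibler discrepancy plus a quadratic regularization functional, minimized here by brute-force grid search, may make the setup concrete. The toy forward operator and parameter values are assumptions of the example, not taken from the paper:

```python
import math

def kl(u, v):
    """Generalized Kullback-Leibler divergence between nonnegative vectors."""
    total = 0.0
    for ui, vi in zip(u, v):
        if ui > 0.0:
            total += ui * math.log(ui / vi) - ui + vi
        else:
            total += vi
    return total

def apply_A(x):
    """Invented forward operator: endpoint samples plus pairwise averages."""
    return [x[0], 0.5 * (x[0] + x[1]), 0.5 * (x[1] + x[2]), x[2]]

def scheme_value(x, y, alpha):
    """Variational scheme: KL discrepancy plus a quadratic regularizer."""
    return kl(apply_A(x), y) + alpha * sum(xi * xi for xi in x)

y = apply_A([1.0, 2.0, 1.0])            # noise-free data from a known signal
grid = [i * 0.1 for i in range(31)]     # coarse search over symmetric x = (a, b, a)
best_value, best_x = min(
    (scheme_value([a, b, a], y, alpha=0.01), (a, b))
    for a in grid for b in grid)
```

With exact data the minimizer sits close to the true signal; the paper's question is which structural properties of the discrepancy (here KL) are actually necessary for such a scheme to regularize under perturbed data.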
The Residual Method for Regularizing Ill-Posed Problems
Although the \emph{residual method}, or \emph{constrained regularization}, is
frequently used in applications, a detailed study of its properties is still
missing. This sharply contrasts with the progress of the theory of Tikhonov
regularization, where a series of new results for regularization in Banach
spaces has been published in recent years. The present paper intends to
bridge the gap between the existing theories as far as possible. We develop a
stability and convergence theory for the residual method in general topological
spaces. In addition, we prove convergence rates in terms of (generalized)
Bregman distances, which can also be applied to non-convex regularization
functionals. We provide three examples that show the applicability of our
theory. The first example is the regularized solution of linear operator
equations on $L^p$-spaces, where we show that the results of Tikhonov
regularization generalize unchanged to the residual method. As a second
example, we consider the problem of density estimation from a finite number of
sampling points, using the Wasserstein distance as a fidelity term and an
entropy measure as regularization term. It is shown that the densities obtained
in this way depend continuously on the location of the sampled points and that
the underlying density can be recovered as the number of sampling points tends
to infinity. Finally, we apply our theory to compressed sensing. Here, we show
the well-posedness of the method and derive convergence rates both for convex
and non-convex regularization under rather weak conditions.
Comment: 29 pages, one figure
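The residual method itself can be sketched in a few lines: minimize the regularization functional (here the l1-norm, echoing the compressed sensing example) over all x whose data misfit stays within the noise level delta. The tiny ill-conditioned system and the grid search below are illustrative assumptions only:

```python
import math

def residual_norm(A, x, y):
    """Euclidean norm of the data misfit Ax - y for a small dense system."""
    r = [sum(a * xj for a, xj in zip(row, x)) - yi for row, yi in zip(A, y)]
    return math.sqrt(sum(ri * ri for ri in r))

def residual_method(A, y, delta, grid):
    """Residual method: minimize the l1 regularizer over the feasible set
    { x : ||Ax - y|| <= delta }, here by brute-force grid search."""
    best = None
    for x0 in grid:
        for x1 in grid:
            if residual_norm(A, (x0, x1), y) <= delta:
                l1 = abs(x0) + abs(x1)
                if best is None or l1 < best[0]:
                    best = (l1, (x0, x1))
    return best

A = [[1.0, 1.0], [1.0, 1.01]]     # nearly collinear columns (ill-conditioned)
y = [2.0, 2.0]                    # data consistent with the sparse x = (2, 0)
grid = [i * 0.05 for i in range(-20, 61)]
sol = residual_method(A, y, delta=0.05, grid=grid)
```

The defining difference from Tikhonov regularization is visible here: the noise level enters as a hard constraint on the residual rather than through a penalty parameter that must be tuned.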
Lyman-alpha Emitters During the Early Stages of Reionization
We investigate the potential of exploiting Lya Emitters (LAEs) to constrain
the volume-weighted mean neutral hydrogen fraction of the IGM, x_H, at high
redshifts (specifically z~9). We use "semi-numerical" simulations to
efficiently generate density, velocity, and halo fields at z=9 in a 250 Mpc
box, resolving halos with masses M>2.2e8 solar masses. We construct ionization
fields corresponding to various values of x_H. With these, we generate LAE
luminosity functions and "counts-in-cell" statistics. As in previous studies,
we find that LAEs begin to disappear rapidly when x_H > 0.5. Constraining
x_H(z=9) with luminosity functions is difficult due to the many uncertainties
inherent in the mapping between host halo mass and Lya luminosity. However, using a
very conservative mapping, we show that the number densities derived using the
six z~9 LAEs recently discovered by Stark et al. (2007) imply x_H < 0.7. On a
more fundamental level, these LAE number densities, if genuine, require
substantial star formation in halos with M < 10^9 solar masses, making them
unique among the current sample of observed high-z objects. Furthermore,
reionization increases the apparent clustering of the observed LAEs. We show
that a "counts-in-cell" statistic is a powerful probe of this effect,
especially in the early stages of reionization. Specifically, we show that a
field of view (typical of upcoming IR instruments) containing LAEs has a >10%
higher probability of containing more than one LAE in an x_H > 0.5 universe
than in an x_H = 0 universe with the same overall number density. With this statistic, a
fully ionized universe can be robustly distinguished from one with x_H > 0.5
using a survey containing only ~20--100 galaxies.
Comment: 14 pages, 13 figures; moderate changes to match the version accepted
for publication in MNRAS
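The counts-in-cell idea can be sketched with a toy mock catalogue: compare the conditional probability that an occupied cell holds more than one galaxy for a clustered versus an unclustered distribution of equal mean density. All numbers below (box size, cell size, group count, scatter) are invented for illustration and are not the paper's simulation parameters:

```python
import random

def frac_multi(points, box, cell, n_cells, rng):
    """Fraction of occupied, randomly placed square cells that contain
    more than one point (a simple counts-in-cell statistic)."""
    multi = occupied = 0
    for _ in range(n_cells):
        cx = rng.uniform(0.0, box - cell)
        cy = rng.uniform(0.0, box - cell)
        n = sum(1 for x, y in points
                if cx <= x < cx + cell and cy <= y < cy + cell)
        if n >= 1:
            occupied += 1
            if n > 1:
                multi += 1
    return multi / occupied if occupied else 0.0

rng = random.Random(42)
box, cell, n_gal = 250.0, 25.0, 120

# Unclustered mock: Poisson-distributed positions.
poisson = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(n_gal)]

# Clustered mock with the same mean density: the same number of galaxies
# scattered around a handful of group centres.
centres = [(rng.uniform(0, box), rng.uniform(0, box)) for _ in range(12)]
clustered = [(min(box, max(0.0, centres[i % 12][0] + rng.gauss(0, 5))),
              min(box, max(0.0, centres[i % 12][1] + rng.gauss(0, 5))))
             for i in range(n_gal)]

p_poisson = frac_multi(poisson, box, cell, 4000, random.Random(7))
p_clustered = frac_multi(clustered, box, cell, 4000, random.Random(7))
```

The clustered catalogue yields a markedly higher multi-occupancy fraction at fixed number density, which is the signature the statistic exploits to separate a patchy, partially neutral IGM from a fully ionized one.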
Nexus between quantum criticality and the chemical potential pinning in high-$T_c$ cuprates
For strongly correlated electrons the relation between the total number of
charge carriers $n$ and the chemical potential $\mu$ reveals, for large Coulomb
energy, the apparently paradoxical pinning of $\mu$ within the Mott gap, as
observed in high-$T_c$ cuprates. By unravelling consequences of the non-trivial
topology of the charge gauge U(1) group and the associated ground state
degeneracy, we find a close kinship between the pinning of $\mu$ and the
zero-temperature divergence of the charge compressibility $\kappa$, which marks
a novel quantum criticality governed by topological charges rather than the
Landau principle of symmetry breaking.
Comment: 4+ pages, 2 figures; typos corrected, version as published
MGOS: A library for molecular geometry and its operating system
The geometry of atomic arrangement underpins the structural understanding of molecules in many fields. However, no general framework of mathematical/computational theory for the geometry of atomic arrangement exists. Here we present "Molecular Geometry (MG)" as a theoretical framework, accompanied by the "MG Operating System (MGOS)", which consists of callable functions implementing the MG theory. MG allows researchers to model complicated molecular structure problems in terms of elementary yet standard notions of volume, area, etc., and MGOS frees them from the hard and tedious task of developing and implementing geometric algorithms so that they can focus on their primary research issues. MG facilitates simpler modeling of molecular structure problems; MGOS functions can be conveniently embedded in application programs for the efficient and accurate solution of geometric queries involving atomic arrangements. The use of MGOS in problems involving spherical entities is akin to the use of math libraries in general-purpose programming languages in science and engineering.
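As a hedged sketch of the kind of geometric query such a library answers (this is not the MGOS API), the union volume of two overlapping spheres follows from inclusion-exclusion together with the standard two-sphere lens formula; the radii and centre distance below are arbitrary example values:

```python
import math

def sphere_volume(r):
    """Volume of a ball of radius r."""
    return 4.0 / 3.0 * math.pi * r ** 3

def lens_volume(r1, r2, d):
    """Volume of the intersection of two spheres with centre distance d."""
    if d >= r1 + r2:
        return 0.0                         # disjoint spheres
    if d <= abs(r1 - r2):
        return sphere_volume(min(r1, r2))  # one sphere inside the other
    return (math.pi * (r1 + r2 - d) ** 2
            * (d * d + 2.0 * d * (r1 + r2) - 3.0 * (r1 - r2) ** 2)
            / (12.0 * d))

def union_volume(r1, r2, d):
    """Inclusion-exclusion: vol(A or B) = vol(A) + vol(B) - vol(A and B)."""
    return sphere_volume(r1) + sphere_volume(r2) - lens_volume(r1, r2, d)

# Two overlapping spheres of equal radius.
v = union_volume(1.7, 1.7, 1.5)
```

Real molecular queries involve thousands of mutually overlapping spheres, where pairwise inclusion-exclusion no longer suffices; that combinatorial blow-up is precisely what dedicated geometric algorithms of the kind MGOS packages are designed to handle.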