Inclusive Jet Cross Sections in pbarp Collisions at 630 and 1800 GeV
We have made a precise measurement of the inclusive jet cross section at 1800
GeV. The result is based on an integrated luminosity of 92 pb**-1 collected at
the Fermilab Tevatron pbarp Collider with the D0 detector. The measurement is
reported as a function of jet transverse energy (60 GeV < ET < 500 GeV), and in
the pseudorapidity intervals |eta|<0.5 and 0.1<|eta|<0.7. A preliminary
measurement of the pseudorapidity dependence of inclusive jet production
(|eta|<1.5) is also discussed. The results are in good agreement with predictions
from next-to-leading order (NLO) quantum chromodynamics (QCD). D0 has also
determined the ratio of the jet cross sections at sqrt(s) = 630 GeV and
sqrt(s) = 1800 GeV. This preliminary measurement differs from NLO QCD
predictions.
Comment: 3 pages, 3 figures, conference (DIS99, Zeuthen, Germany)
QCD at the Tevatron: Jets and Fragmentation
At Fermilab Tevatron energies (sqrt(s) = 1800 GeV and sqrt(s) = 630 GeV), jet
production is the dominant process. During the period 1992-1996, the
D0 and CDF experiments accumulated almost 100 pb**-1 of data and performed the
most accurate jet production measurements to date. These measurements, together
with the NLO QCD theoretical predictions calculated during the last decade, have
improved our understanding of QCD and our knowledge of the proton structure, and
have pushed the lower limit on the scale associated with quark compositeness to 2.4-2.7
TeV. In this paper, we present the most recent published and preliminary
measurements on jet production and fragmentation by the D0 and CDF
collaborations.
Comment: 11 pages, 16 figures, Physics in Collisions Conference 2000 (Lisbon,
Portugal)
Effective Sample Size for Importance Sampling based on discrepancy measures
The Effective Sample Size (ESS) is an important measure of efficiency of
Monte Carlo methods such as Markov Chain Monte Carlo (MCMC) and Importance
Sampling (IS) techniques. In the IS context, an approximation
of the theoretical ESS definition is widely applied, involving the inverse of
the sum of the squares of the normalized importance weights. This formula,
ESS = 1/sum_n wbar_n^2 (where wbar_n denotes the n-th normalized weight), has
become an essential piece within Sequential Monte Carlo (SMC) methods, used to
assess the convenience of a resampling step. From another perspective, this
expression is related to the Euclidean
distance between the probability mass described by the normalized weights and
the discrete uniform probability mass function (pmf). In this work, we derive
other possible ESS functions based on different discrepancy measures between
these two pmfs. Several examples are provided involving, for instance, the
geometric mean of the weights, the discrete entropy (including the perplexity
measure, already proposed in the literature) and the Gini coefficient, among others.
We list five theoretical requirements which a generic ESS function should
satisfy, allowing us to classify different ESS measures. We also compare the
most promising ones by means of numerical simulations.
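To make the two quantities above concrete, here is a minimal Python sketch (not
taken from the paper) of the classical ESS approximation and of a
perplexity-based alternative; the function names, the log-weight stabilization
trick and the N/2 resampling threshold are illustrative choices.

```python
import numpy as np

def ess_inverse_squares(log_w):
    """Classical ESS proxy: inverse of the sum of squared normalized weights."""
    w = np.exp(log_w - np.max(log_w))   # stabilize before normalizing
    w_bar = w / np.sum(w)               # normalized importance weights
    return 1.0 / np.sum(w_bar ** 2)

def ess_perplexity(log_w):
    """Alternative ESS based on the discrete entropy (perplexity) of the weights."""
    w = np.exp(log_w - np.max(log_w))
    w_bar = w / np.sum(w)
    entropy = -np.sum(w_bar * np.log(w_bar + 1e-300))
    return np.exp(entropy)              # ranges from 1 (degenerate) to N (uniform)

# Toy usage: trigger resampling when the ESS proxy drops below half the sample size.
rng = np.random.default_rng(0)
log_w = rng.normal(size=1000)
N = len(log_w)
if ess_inverse_squares(log_w) < N / 2:
    print("resampling step advisable")
```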
Group Importance Sampling for Particle Filtering and MCMC
Bayesian methods and their implementations by means of sophisticated Monte
Carlo techniques have become very popular in signal processing over the last
years. Importance Sampling (IS) is a well-known Monte Carlo technique that
approximates integrals involving a posterior distribution by means of weighted
samples. In this work, we study the assignment of a single weighted sample
which compresses the information contained in a population of weighted samples.
Part of the theory that we present as Group Importance Sampling (GIS) has been
employed implicitly in different works in the literature. The provided analysis
yields several theoretical and practical consequences. For instance, we discuss
the application of GIS into the Sequential Importance Resampling framework and
show that Independent Multiple Try Metropolis schemes can be interpreted as a
standard Metropolis-Hastings algorithm, following the GIS approach. We also
introduce two novel Markov Chain Monte Carlo (MCMC) techniques based on GIS.
The first one, the Group Metropolis Sampling method, produces a Markov chain
of sets of weighted samples. All these sets are then employed for obtaining a
unique global estimator. The second one is the Distributed Particle
Metropolis-Hastings technique, where different parallel particle filters are
jointly used to drive an MCMC algorithm. Different resampled trajectories are
compared and then tested with a proper acceptance probability. The novel
schemes are tested in different numerical experiments such as learning the
hyperparameters of Gaussian Processes, two localization problems in a wireless
sensor network (with synthetic and real data) and the tracking of vegetation
parameters given satellite observations, where they are compared with several
benchmark Monte Carlo techniques. Three illustrative Matlab demos are also
provided.
Comment: To appear in Digital Signal Processing. Related Matlab demos are
provided at https://github.com/lukafree/GIS.gi
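As a rough illustration of the compression step described above, the following
Python sketch summarizes a population of weighted samples by a single resampled
representative carrying the group's summed weight. This is a plausible reading
of the idea with illustrative names, not the paper's reference implementation
(the authors' demos are in Matlab).

```python
import numpy as np

def compress_group(samples, log_w, rng):
    """Summarize a population of weighted samples by one weighted sample.

    Sketch of the group-compression idea: the representative is drawn by
    resampling within the group, and it carries a single group weight built
    from the unnormalized weights (here, their sum).
    """
    w = np.exp(log_w - np.max(log_w))
    w_bar = w / np.sum(w)
    idx = rng.choice(len(samples), p=w_bar)                # resample one representative
    group_log_weight = np.max(log_w) + np.log(np.sum(w))   # log of the summed weights
    return samples[idx], group_log_weight

rng = np.random.default_rng(1)
samples = rng.normal(size=(500, 2))              # e.g. particles from one filter
log_w = -0.5 * np.sum(samples ** 2, axis=1)      # toy unnormalized log-weights
x_rep, logW = compress_group(samples, log_w, rng)
```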
Parallel Metropolis chains with cooperative adaptation
Monte Carlo methods, such as Markov chain Monte Carlo (MCMC) algorithms, have
become very popular in signal processing over the last years. In this work, we
introduce a novel MCMC scheme where parallel MCMC chains interact, adapting
cooperatively the parameters of their proposal functions. Furthermore, the
novel algorithm distributes the computational effort adaptively, rewarding the
chains which are providing better performance and possibly even stopping other
ones. These stopped chains can be reactivated if the algorithm considers it
necessary. Numerical simulations show the benefits of the novel scheme.
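A minimal Python sketch of the flavor of such a scheme (not the paper's
algorithm) is given below: parallel random-walk Metropolis chains periodically
pool their samples to adapt a shared proposal scale, and their per-epoch
iteration budgets are reallocated according to acceptance rates, so a poorly
performing chain can be effectively stopped and later revived. The toy target,
the 0.25 acceptance heuristic and the budget rule are assumptions made purely
for illustration.

```python
import numpy as np

def log_target(x):
    # Toy target: standard bivariate Gaussian (stand-in for the true posterior).
    return -0.5 * np.sum(x ** 2)

rng = np.random.default_rng(2)
n_chains, dim, n_epochs, iters_per_epoch = 4, 2, 50, 100
states = rng.normal(size=(n_chains, dim))
scales = np.full(n_chains, 1.0)                  # per-chain random-walk scales
budgets = np.full(n_chains, iters_per_epoch)     # per-chain effort, adapted over time
pooled = []

for epoch in range(n_epochs):
    acc_rates = np.zeros(n_chains)
    for c in range(n_chains):
        accepted = 0
        for _ in range(int(budgets[c])):
            prop = states[c] + scales[c] * rng.normal(size=dim)
            if np.log(rng.uniform()) < log_target(prop) - log_target(states[c]):
                states[c] = prop
                accepted += 1
            pooled.append(states[c].copy())
        acc_rates[c] = accepted / max(int(budgets[c]), 1)
    # Cooperative adaptation: blend each chain's acceptance-driven adjustment
    # with a shared scale estimated from the pooled recent samples.
    shared = np.std(np.asarray(pooled[-1000:]), axis=0).mean()
    scales = 0.5 * scales * np.exp(acc_rates - 0.25) + 0.5 * shared
    # Redistribute effort: better-accepting chains get more iterations; a
    # chain's budget can drop to zero (stopped) and grow back in later epochs.
    weights = acc_rates + 1e-3
    budgets = np.round(n_chains * iters_per_epoch * weights / weights.sum())
```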
Orthogonal parallel MCMC methods for sampling and optimization
Monte Carlo (MC) methods are widely used for Bayesian inference and
optimization in statistics, signal processing and machine learning. A
well-known class of MC methods are Markov Chain Monte Carlo (MCMC) algorithms.
In order to foster better exploration of the state space, especially in
high-dimensional applications, several schemes employing multiple parallel MCMC
chains have been recently introduced. In this work, we describe a novel
parallel interacting MCMC scheme, called {\it orthogonal MCMC} (O-MCMC), where
a set of "vertical" parallel MCMC chains share information using some
"horizontal" MCMC techniques working on the entire population of current
states. More specifically, the vertical chains are led by random-walk
proposals, whereas the horizontal MCMC techniques employ independent proposals,
thus allowing an efficient combination of global exploration and local
approximation. The interaction is contained in these horizontal iterations.
Within the analysis of different implementations of O-MCMC, novel schemes in
order to reduce the overall computational cost of parallel multiple try
Metropolis (MTM) chains are also presented. Furthermore, a modified version of
O-MCMC for optimization is provided by considering parallel simulated annealing
(SA) algorithms. Numerical results show the advantages of the proposed sampling
scheme in terms of estimation efficiency, as well as robustness with respect to
the initial values and the choice of the parameters.
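For intuition only, the sketch below reproduces the vertical/horizontal
structure in Python with the simplest possible ingredients: local random-walk
moves for each chain, plus a periodic independent Metropolis-Hastings refresh
applied to every member of the population. It is not the O-MCMC algorithm
itself (whose horizontal steps use multiple-try and other population moves),
and the bimodal target, proposal scales and schedule are illustrative
assumptions.

```python
import numpy as np

def log_target(x):
    # Toy bimodal target (stand-in for the posterior of interest).
    return np.logaddexp(-0.5 * np.sum((x - 2.0) ** 2),
                        -0.5 * np.sum((x + 2.0) ** 2))

rng = np.random.default_rng(3)
n_chains, dim, n_iters, horiz_every = 5, 2, 2000, 10
states = rng.normal(size=(n_chains, dim))
chain_out = []

for t in range(n_iters):
    # "Vertical" moves: one local random-walk Metropolis step per chain.
    for c in range(n_chains):
        prop = states[c] + 0.5 * rng.normal(size=dim)
        if np.log(rng.uniform()) < log_target(prop) - log_target(states[c]):
            states[c] = prop
    # "Horizontal" move: every few iterations, refresh each chain with an
    # independent (global) proposal, here a fixed wide Gaussian N(0, 9 I),
    # accepted with the standard independent Metropolis-Hastings ratio.
    if t % horiz_every == 0:
        for c in range(n_chains):
            prop = 3.0 * rng.normal(size=dim)
            log_q_ratio = (-0.5 * np.sum(states[c] ** 2) / 9.0) \
                          - (-0.5 * np.sum(prop ** 2) / 9.0)
            if np.log(rng.uniform()) < (log_target(prop) - log_target(states[c])
                                        + log_q_ratio):
                states[c] = prop
    chain_out.append(states.copy())
```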
A generic framework for the analysis and specialization of logic programs
The relationship between abstract interpretation and partial deduction has
received considerable attention and (partial) integrations have been proposed
starting from both the partial deduction and abstract interpretation
perspectives. In this work we present what we argue is the first fully
described generic algorithm for efficient and precise integration of abstract
interpretation and partial deduction. Taking as starting point
state-of-the-art algorithms for context-sensitive, polyvariant abstract
interpretation and (abstract) partial deduction, we present an algorithm which
combines the best of both worlds. Key ingredients include the accurate success
propagation inherent to abstract interpretation and the powerful program
transformations achievable by partial deduction. In our algorithm, the calls
which appear in the analysis graph are not analyzed w.r.t. the original
definition of the procedure but w.r.t. specialized definitions of these
procedures. Such specialized definitions are obtained by applying both
unfolding and abstract executability. Our framework is parametric w.r.t.
different control strategies and abstract domains. Different combinations of
such parameters correspond to existing algorithms for program analysis and
specialization. Simultaneously, our approach opens the door to the efficient
computation of strictly more precise results than those achievable by each of
the individual techniques. The algorithm is now one of the key components of
the CiaoPP analysis and specialization system.
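To give a feel for the unfolding ingredient alone (and only that ingredient),
here is a toy partial-evaluation sketch in Python rather than in a logic
language: a recursive definition is unfolded with respect to a statically
known argument, yielding a specialized residual definition. It does not model
success propagation, abstract domains, or the paper's control strategies.

```python
def power(x, n):
    # Original (unspecialized) definition.
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Unfold power(x, n) for a statically known n, producing residual code."""
    expr = "1"
    for _ in range(n):
        expr = f"x * ({expr})"                  # unfold one recursive call
    code = f"def power_{n}(x):\n    return {expr}\n"
    namespace = {}
    exec(code, namespace)                       # compile the residual definition
    return namespace[f"power_{n}"], code

power_3, residual = specialize_power(3)
assert power_3(2) == power(2, 3) == 8
print(residual)                                 # the specialized definition
```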
Abstract Interpretation-based verification/certification in the ciaoPP system
CiaoPP is the abstract interpretation-based preprocessor of the Ciao
multi-paradigm (Constraint) Logic Programming system. It uses modular,
incremental abstract interpretation as a fundamental tool to obtain
information about programs. In CiaoPP, the semantic approximations thus
produced have been applied to perform high- and low-level optimizations during
program compilation, including transformations such as multiple abstract
specialization, parallelization, partial evaluation, resource usage control,
and program verification. More recently, novel and promising applications of
such semantic approximations are being explored in the more general context of
program development, such as program verification. In this work, we describe
our extension of the system to incorporate Abstraction-Carrying Code (ACC), a
novel approach to mobile code safety. ACC follows the standard strategy of
associating safety certificates to programs, originally proposed in
Proof-Carrying Code. A distinguishing feature of ACC is that we use an
abstraction (or abstract model) of the program, computed by standard static
analyzers, as a certificate. The validity of the abstraction on the consumer
side is checked in a single pass by a very efficient and specialized abstract
interpreter. We have implemented and benchmarked ACC within CiaoPP. The
experimental results show that the checking phase is indeed faster than the
proof generation phase, and that the sizes of certificates are reasonable.
Moreover, the preprocessor is based on compile-time (and run-time) tools for
the certification of CLP programs with resource consumption assurances.
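As a rough illustration of the producer/consumer split behind ACC (a toy
construction in Python over a tiny sign domain, not CiaoPP's implementation or
its abstract domains), the producer iterates an abstract transfer function to
a fixpoint and ships that fixpoint as the certificate, while the consumer
merely applies the transfer function once and checks the fixpoint and policy
conditions.

```python
SIGNS = {"bot", "zero", "pos", "nonneg", "top"}          # tiny sign lattice
ORDER = {"bot": set(), "zero": {"bot"}, "pos": {"bot"},
         "nonneg": {"bot", "zero", "pos"}, "top": SIGNS - {"top"}}

def leq(a, b):
    return a == b or a in ORDER[b]

def join(a, b):
    if leq(a, b): return b
    if leq(b, a): return a
    if {a, b} <= {"zero", "pos", "nonneg"}: return "nonneg"
    return "top"

def add_pos(a):
    """Abstract effect of `x = x + 1` on the sign of x."""
    return "bot" if a == "bot" else ("pos" if a in {"zero", "pos", "nonneg"} else "top")

def transfer(x):
    """Abstract semantics of: x = 0; while cond: x = x + 1  (sign of x at loop head)."""
    return join("zero", add_pos(x))

def producer():
    """Analyzer: iterate to a fixpoint; the fixpoint is the certificate."""
    cert = "bot"
    while True:
        nxt = transfer(cert)
        if nxt == cert:
            return cert
        cert = nxt

def consumer_check(cert, policy="nonneg"):
    """Checker: one application of the transfer function, no fixpoint iteration."""
    return leq(transfer(cert), cert) and leq(cert, policy)

cert = producer()                      # e.g. "nonneg"
print(cert, consumer_check(cert))      # certificate accepted by the consumer
```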