28,465 research outputs found
Differential cross sections for high energy elastic hadron-hadron scattering in nonperturbative QCD
Total and differential cross sections for high energy and small momentum
transfer elastic hadron-hadron scattering are studied in QCD using a functional
integral approach. The hadronic amplitudes are governed by vacuum expectation
values of lightlike Wegner-Wilson loops, for which a matrix cumulant expansion
is derived. The cumulants are evaluated within the framework of the Minkowskian
version of the model of the stochastic vacuum. Using the second cumulant, we
calculate elastic differential cross sections for hadron-hadron scattering. The
agreement with experimental data is good.
Comment: 30 pages, 14 figures
Training samples in objective Bayesian model selection
Central to several objective approaches to Bayesian model selection is the
use of training samples (subsets of the data), so as to allow utilization of
improper objective priors. The most common prescription for choosing training
samples is to choose them to be as small as possible, subject to yielding
proper posteriors; these are called minimal training samples.
When data can vary widely in terms of either information content or impact on
the improper priors, use of minimal training samples can be inadequate.
Important examples include certain cases of discrete data, the presence of
censored observations, and certain situations involving linear models and
explanatory variables. Such situations require more sophisticated methods of
choosing training samples. A variety of such methods are developed in this
paper, and successfully applied in challenging situations.
Optimal predictive model selection
Often the goal of model selection is to choose a model for future prediction,
and it is natural to measure the accuracy of a future prediction by squared
error loss. Under the Bayesian approach, it is commonly perceived that the
optimal predictive model is the model with highest posterior probability, but
this is not necessarily the case. In this paper we show that, for selection
among normal linear models, the optimal predictive model is often the median
probability model, which is defined as the model consisting of those variables
which have overall posterior probability greater than or equal to 1/2 of being
in a model. The median probability model often differs from the highest
probability model.
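As a concrete illustration, the median probability model can be computed directly from posterior model probabilities. The numbers and variable names (x1, x2, x3) below are purely illustrative, not taken from the paper:

```python
# Hypothetical posterior model probabilities over subsets of three
# candidate variables (illustrative numbers only).
post = {
    frozenset(): 0.02,
    frozenset({"x1"}): 0.30,
    frozenset({"x2"}): 0.22,
    frozenset({"x1", "x2"}): 0.18,
    frozenset({"x1", "x3"}): 0.16,
    frozenset({"x2", "x3"}): 0.12,
}

variables = ("x1", "x2", "x3")

# Posterior inclusion probability of each variable: the total posterior
# mass of the models that contain it.
incl = {v: sum(p for m, p in post.items() if v in m) for v in variables}

# Median probability model: variables with inclusion probability >= 1/2.
mpm = {v for v, p in incl.items() if p >= 0.5}

# Highest posterior probability model, for comparison.
hpm = max(post, key=post.get)

print(sorted(mpm))   # ['x1', 'x2']
print(sorted(hpm))   # ['x1']
```

Here the highest probability model keeps only x1, while the median probability model also includes x2, because its overall inclusion probability (0.52) exceeds 1/2; this is exactly the kind of disagreement the abstract describes.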
Bayes and empirical-Bayes multiplicity adjustment in the variable-selection problem
This paper studies the multiplicity-correction effect of standard Bayesian
variable-selection priors in linear regression. Our first goal is to clarify
when, and how, multiplicity correction happens automatically in Bayesian
analysis, and to distinguish this correction from the Bayesian Ockham's-razor
effect. Our second goal is to contrast empirical-Bayes and fully Bayesian
approaches to variable selection through examples, theoretical results and
simulations. Considerable differences between the two approaches are found. In
particular, we prove a theorem that characterizes a surprising asymptotic
discrepancy between fully Bayes and empirical Bayes. This discrepancy arises
from a different source than the failure to account for hyperparameter
uncertainty in the empirical-Bayes estimate. Indeed, even at the extreme, when
the empirical-Bayes estimate converges asymptotically to the true
variable-inclusion probability, the potential for a serious difference remains.
Comment: Published at http://dx.doi.org/10.1214/10-AOS792 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
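The automatic multiplicity correction that the abstract refers to can be made concrete with a standard identity: if the variables share a common prior inclusion probability p and p is given a uniform Beta(1,1) prior, integrating p out assigns each particular model with k of m variables the prior mass k!(m-k)!/(m+1)!. The sketch below (function names are mine, not the paper's) contrasts this fully Bayes prior with a fixed-p plug-in of the empirical-Bayes flavor:

```python
from math import factorial

def fully_bayes_model_prior(k, m):
    # Prior probability of one particular model with k of m variables
    # included, after integrating out a uniform Beta(1,1) prior on the
    # common inclusion probability p:
    #   integral_0^1 p^k (1-p)^(m-k) dp = k! (m-k)! / (m+1)!
    return factorial(k) * factorial(m - k) / factorial(m + 1)

def fixed_p_model_prior(k, m, p=0.5):
    # Plug-in alternative: a fixed inclusion probability p gives every
    # model of size k the same prior mass, independent of how it is chosen.
    return p**k * (1 - p)**(m - k)

# As the number of candidate variables m grows, the fully Bayes prior
# increasingly penalizes adding one more variable to a small model
# (the ratio below is 2/(m-1)), while a fixed p = 0.5 applies the same
# factor 1 regardless of m: the multiplicity penalty is automatic only
# in the fully Bayes case.
for m in (5, 20, 100):
    ratio = fully_bayes_model_prior(2, m) / fully_bayes_model_prior(1, m)
    print(m, round(ratio, 4))
```

Note that this only illustrates the prior-mass mechanism; the paper's theorem about the fully-Bayes versus empirical-Bayes discrepancy concerns the posterior behavior and is not captured by this sketch.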
Empirical likelihood confidence intervals for complex sampling designs
We define an empirical likelihood approach that gives consistent design-based
confidence intervals which can be computed without variance estimates, design
effects, resampling, joint inclusion probabilities or linearization, even when
the point estimator is not linear. It can be used to construct confidence
intervals for a large class of sampling designs and for estimators that are
solutions of estimating equations: means, regression coefficients, quantiles,
totals or counts, even when the population size is unknown. It can be used
with large sampling fractions and naturally includes calibration constraints.
It can be viewed as an extension of the empirical likelihood approach to
complex survey data, and it is computationally simpler than the
pseudoempirical likelihood and bootstrap approaches. Our simulation study
shows that the proposed confidence interval may give better coverage than
intervals based on linearization, bootstrap and pseudoempirical likelihood,
and that, under complex sampling designs, standard confidence intervals based
on normality may have poor coverage, because point estimators may not follow a
normal sampling distribution and their variance estimators may be biased.
Posterior propriety and admissibility of hyperpriors in normal hierarchical models
Hierarchical modeling is wonderful and here to stay, but hyperparameter
priors are often chosen in a casual fashion. Unfortunately, as the number of
hyperparameters grows, the effects of casual choices can multiply, leading to
considerably inferior performance. As an extreme, but not uncommon, example,
of the wrong hyperparameter priors can even lead to impropriety of the
posterior. For exchangeable hierarchical multivariate normal models, we first
determine when a standard class of hierarchical priors results in proper or
improper posteriors. We next determine which elements of this class lead to
admissible estimators of the mean under quadratic loss; such considerations
provide one useful guideline for choice among hierarchical priors. Finally,
computational issues with the resulting posterior distributions are addressed.
Comment: Published at http://dx.doi.org/10.1214/009053605000000075 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)