Towards Machine Wald
The past century has seen a steady increase in the need to estimate and
predict complex systems and to make (possibly critical) decisions with
limited information. Although computers have made possible the numerical
evaluation of sophisticated statistical models, these models are still designed
\emph{by humans} because there is currently no known recipe or algorithm for
dividing the design of a statistical model into a sequence of arithmetic
operations. Indeed, enabling computers to \emph{think} as \emph{humans} do
when faced with uncertainty is challenging in several major ways:
(1) Finding optimal statistical models has yet to be formulated as a well-posed
problem when information on the system of interest is incomplete and comes in
the form of a complex combination of sample data, partial knowledge of
constitutive relations and a limited description of the distribution of input
random variables. (2) The space of admissible scenarios along with the space of
relevant information, assumptions, and/or beliefs, tend to be infinite
dimensional, whereas calculus on a computer is necessarily discrete and finite.
To this end, this paper explores the foundations of a rigorous framework
for the scientific computation of optimal statistical estimators/models and
reviews their connections with Decision Theory, Machine Learning, Bayesian
Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty
Quantification, and Information-Based Complexity.
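The Wald-style program sketched above, selecting an estimator that minimizes worst-case risk over the space of admissible scenarios, can be illustrated in a finite toy setting. Everything below (the risk numbers, the three scenarios) is an illustrative assumption, not taken from the paper; a real application replaces the finite matrix with an optimization over infinite-dimensional spaces of models and scenarios.

```python
import numpy as np

# Toy Wald-style minimax selection: rows are candidate estimators/decisions,
# columns are admissible scenarios, entries are risks R(decision, scenario).
# All numbers are illustrative, not drawn from the paper.
risk = np.array([
    [0.2, 0.9, 0.4],   # decision 0
    [0.5, 0.5, 0.5],   # decision 1 (constant risk across scenarios)
    [0.1, 0.8, 0.7],   # decision 2
])

worst_case = risk.max(axis=1)           # worst risk of each decision over scenarios
minimax_decision = int(worst_case.argmin())
print(minimax_decision, worst_case[minimax_decision])  # decision 1, worst-case risk 0.5
```

Here the constant-risk rule (decision 1) is the minimax choice, the classical "equalizer" phenomenon of Wald's decision theory.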
Likelihood Inference for Models with Unobservables: Another View
There have been controversies among statisticians on (i) what to model and
(ii) how to make inferences from models with unobservables. One such
controversy concerns the difference between estimation methods for marginal
means, which need not have a probabilistic basis, and statistical models with
unobservables, which do. Another concerns likelihood-based inference for
statistical models with unobservables. The latter requires an
extended-likelihood framework, and we show how one such extension,
hierarchical likelihood, allows this to be done. Modeling of unobservables
leads to rich classes of new probabilistic models from which likelihood-type
inferences can be made naturally with hierarchical likelihood.
Comment: This paper is discussed in [arXiv:1010.0804], [arXiv:1010.0807], and
[arXiv:1010.0810]; rejoinder at [arXiv:1010.0814]. Published in Statistical
Science (http://www.imstat.org/sts/) by the Institute of Mathematical
Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/09-STS277.
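As a minimal sketch of the extended-likelihood idea, assuming a toy one-cluster normal model (an assumption for illustration, not a model from the paper): the hierarchical (h-)likelihood adds the log-density of the unobservable u to the ordinary log-likelihood, and maximizing it in u yields the familiar shrinkage estimate.

```python
import numpy as np

# Hedged toy model (illustrative, not from the paper):
#   y_i = u + e_i,  u ~ N(0, tau2),  e_i ~ N(0, sigma2).
# The h-likelihood is h(u) = log f(y | u) + log f(u).
def h_loglik(u, y, sigma2, tau2):
    return (-0.5 * np.sum((y - u) ** 2) / sigma2  # log f(y | u), up to a constant
            - 0.5 * u ** 2 / tau2)                # log f(u), up to a constant

y = np.array([1.0, 2.0, 3.0])
sigma2, tau2 = 1.0, 4.0
n = len(y)

# Closed-form maximizer of h over u: shrinkage of the sample mean toward 0.
u_hat = (n / sigma2) / (n / sigma2 + 1 / tau2) * y.mean()

# Check against brute-force grid maximization of h.
grid = np.linspace(-5.0, 5.0, 20001)
u_grid = grid[np.argmax([h_loglik(u, y, sigma2, tau2) for u in grid])]
print(u_hat, u_grid)  # the two agree to grid precision
```

Treating u as a parameter of the joint density in this way is what lets likelihood-type inferences be made for models with unobservables.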
Generalized Stochastic Gradient Learning
We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement and we show that there is a transformation of variables for which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms of parameter drift, robustness and risk sensitivity.
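A constant-gain stochastic-gradient recursion of the kind analyzed here can be sketched as an LMS-style update of a linear forecast rule. The data-generating process, gain value, and parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

# Hedged sketch: constant-gain stochastic-gradient (LMS-style) learning,
#   phi <- phi + gain * x * (y - phi @ x),
# for a linear forecast rule.  All values below are illustrative.
rng = np.random.default_rng(0)
true_phi = np.array([0.5, -0.3])
gain = 0.02
phi = np.zeros(2)
for _ in range(20000):
    x = rng.normal(size=2)                  # regressors observed this period
    y = true_phi @ x + 0.1 * rng.normal()   # outcome with noise
    phi += gain * x * (y - phi @ x)         # update in the direction of the forecast error
print(phi)  # hovers near true_phi with constant-gain fluctuation
```

With a constant gain the estimate never converges exactly but fluctuates around the true parameter; that perpetual tracking is what connects such rules to parameter drift and robustness.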
On the Brittleness of Bayesian Inference
With the advent of high-performance computing, Bayesian methods are
increasingly popular tools for the quantification of uncertainty throughout
science and industry. Since these methods impact the making of sometimes
critical decisions in increasingly complicated contexts, the sensitivity of
their posterior conclusions with respect to the underlying models and prior
beliefs is a pressing question for which there currently exist positive and
negative results. We report new results suggesting that, although Bayesian
methods are robust when the number of possible outcomes is finite or when only
a finite number of marginals of the data-generating distribution are unknown,
they could be generically brittle when applied to continuous systems (and their
discretizations) with finite information on the data-generating distribution.
If closeness is defined in terms of the total variation metric or the matching
of a finite system of generalized moments, then (1) two practitioners who use
arbitrarily close models and observe the same (possibly arbitrarily large
amount of) data may reach opposite conclusions; and (2) any given prior and
model can be slightly perturbed to achieve any desired posterior conclusions.
The mechanism causing brittleness/robustness suggests that learning and
robustness are antagonistic requirements and raises the question of a missing
stability condition for using Bayesian Inference in a continuous world under
finite information.
Comment: 20 pages, 2 figures. To appear in SIAM Review (Research Spotlights).
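For concreteness, closeness "in the total variation metric" can be illustrated for discrete distributions, where TV(p, q) = 0.5 * sum |p_i - q_i|. The distributions below are arbitrary illustrative examples, not drawn from the paper:

```python
import numpy as np

# Total variation (TV) distance between two discrete distributions:
# TV(p, q) = 0.5 * sum_i |p_i - q_i|.  The brittleness results concern
# models that are close in exactly this metric.
def tv_distance(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 0.5 * np.abs(p - q).sum()

p = [0.5, 0.3, 0.2]
q = [0.48, 0.32, 0.2]    # a small perturbation of p
print(tv_distance(p, q))  # 0.02
```

The paper's negative results say that even a perturbation this small, when made in the right direction in a continuous model, can move the posterior conclusions arbitrarily far.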
A review of R-packages for random-intercept probit regression in small clusters
Generalized Linear Mixed Models (GLMMs) are widely used to model clustered categorical outcomes. To tackle the intractable integration over the random-effects distribution, several approximation approaches have been developed for likelihood-based inference. As these seldom yield satisfactory results when analyzing binary outcomes from small clusters, estimation within the Structural Equation Modeling (SEM) framework is proposed as an alternative. We compare the performance of R-packages for random-intercept probit regression relying on the Laplace approximation, adaptive Gaussian quadrature (AGQ), Penalized Quasi-Likelihood (PQL), an MCMC implementation, and integrated nested Laplace approximation within the GLMM-framework, and on robust diagonally weighted least squares estimation within the SEM-framework. In terms of bias for the fixed- and random-effect estimators, SEM usually performs best for cluster size two, while AGQ prevails in terms of precision (mainly because of SEM's robust standard errors). As the cluster size increases, however, AGQ becomes the best choice for both bias and precision.
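A hedged sketch of the data-generating process these packages fit, a random-intercept probit model with clusters of size two as in the small-cluster setting above, can be simulated as follows (parameter values are illustrative assumptions; the reviewed packages themselves are R software):

```python
import numpy as np
from scipy.stats import norm

# Hedged simulation of a random-intercept probit model with tiny clusters:
#   P(y_ij = 1 | u_i) = Phi(beta0 + beta1 * x_ij + u_i),  u_i ~ N(0, sigma_u^2).
# All parameter values are illustrative, not taken from the review.
rng = np.random.default_rng(1)
n_clusters, cluster_size = 500, 2
beta0, beta1, sigma_u = -0.5, 1.0, 0.8    # fixed effects and random-intercept SD

u = rng.normal(0.0, sigma_u, size=n_clusters)     # cluster random intercepts
x = rng.normal(size=(n_clusters, cluster_size))   # within-cluster covariate
eta = beta0 + beta1 * x + u[:, None]              # linear predictor per observation
y = (rng.random((n_clusters, cluster_size)) < norm.cdf(eta)).astype(int)

print(y.shape, y.mean())  # 500 clusters of binary pairs
```

The shared intercept u_i is exactly the term whose integral the Laplace, AGQ, PQL, and MCMC approaches approximate differently; with only two observations per cluster there is very little information per intercept, which is why the approximations diverge in quality.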