On the Computational Complexity of MCMC-based Estimators in Large Samples
In this paper we examine the implications of the statistical large sample
theory for the computational complexity of Bayesian and quasi-Bayesian
estimation carried out using Metropolis random walks. Our analysis is motivated
by the Laplace-Bernstein-von Mises central limit theorem, which states that in
large samples the posterior or quasi-posterior approaches a normal density.
Using the conditions required for the central limit theorem to hold, we
establish polynomial bounds on the computational complexity of general
Metropolis random-walk methods in large samples. Our analysis covers cases
where the underlying log-likelihood or extremum criterion function is possibly
non-concave, discontinuous, and of increasing parameter dimension. However,
the central limit theorem restricts the deviations from continuity and
log-concavity of the log-likelihood or extremum criterion function in a very
specific manner.
Under minimal assumptions required for the central limit theorem to hold
under increasing parameter dimension, we show that the Metropolis algorithm
is theoretically efficient even for the canonical Gaussian walk, which is
studied in detail. Specifically, we show that the running time of the algorithm
in large samples is bounded in probability by a polynomial in the parameter
dimension d, and, in particular, is of stochastic order d^2 in the leading
cases after the burn-in period. We then give applications to exponential
families, curved exponential families, and Z-estimation of increasing
dimension.
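To make the object of analysis concrete, below is a minimal sketch of the canonical Gaussian random-walk Metropolis chain studied in the paper. The proposal scale and the toy normal target are illustrative assumptions rather than the paper's tuning; the sketch only shows the accept/reject mechanics whose mixing behavior the complexity bounds control.

```python
import numpy as np

def gaussian_walk_metropolis(log_post, theta0, n_steps, step_scale, seed=0):
    """Canonical Gaussian random-walk Metropolis sampler.

    log_post returns the log posterior (or quasi-posterior) density up to
    an additive constant; theta0 should ideally start near the mode.
    """
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    lp = log_post(theta)
    chain = np.empty((n_steps, d))
    for t in range(n_steps):
        proposal = theta + step_scale * rng.standard_normal(d)
        lp_prop = log_post(proposal)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = proposal, lp_prop
        chain[t] = theta
    return chain

# Toy target: an approximately normal (quasi-)posterior, the regime that
# the Bernstein-von Mises theorem predicts in large samples.
draws = gaussian_walk_metropolis(lambda th: -0.5 * th @ th, np.zeros(5),
                                 n_steps=10_000, step_scale=0.5)
```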
Least squares after model selection in high-dimensional sparse models
In this article we study post-model selection estimators that apply ordinary
least squares (OLS) to the model selected by first-step penalized estimators,
typically Lasso. It is well known that Lasso can estimate the nonparametric
regression function at nearly the oracle rate, and is thus hard to improve
upon. We show that the OLS post-Lasso estimator performs at least as well as
Lasso in terms of the rate of convergence, and has the advantage of a smaller
bias. Remarkably, this performance occurs even if the Lasso-based model
selection "fails" in the sense of missing some components of the "true"
regression model. By the "true" model, we mean the best s-dimensional
approximation to the nonparametric regression function chosen by the oracle.
Furthermore, the OLS post-Lasso estimator can perform strictly better than Lasso,
in the sense of a strictly faster rate of convergence, if the Lasso-based model
selection correctly includes all components of the "true" model as a subset and
also achieves sufficient sparsity. In the extreme case, when Lasso perfectly
selects the "true" model, the OLS post-Lasso estimator becomes the oracle
estimator. An important ingredient in our analysis is a new sparsity bound on
the dimension of the model selected by Lasso, which guarantees that this
dimension is at most of the same order as the dimension of the "true" model.
Our rate results are nonasymptotic and hold in both parametric and
nonparametric models. Moreover, our analysis is not limited to the Lasso
estimator acting as a selector in the first step, but also applies to any other
estimator, for example, various forms of thresholded Lasso, with good rates and
good sparsity properties. Our analysis covers both traditional thresholding and
a new practical, data-driven thresholding scheme that induces additional
sparsity subject to maintaining a certain goodness of fit. The latter scheme
has theoretical guarantees similar to those of Lasso or OLS post-Lasso, but it
dominates those procedures as well as traditional thresholding in a wide
variety of experiments.
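As a rough illustration of the two-step procedure, the sketch below runs Lasso as a model selector and refits OLS on the selected support. The penalty level alpha is a hypothetical user choice; the article's theory prescribes specific penalty levels and thresholding schemes that this sketch does not implement.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def ols_post_lasso(X, y, alpha):
    """OLS refit on the support selected by a first-step Lasso."""
    lasso = Lasso(alpha=alpha).fit(X, y)
    support = np.flatnonzero(lasso.coef_)       # model selected by Lasso
    if support.size == 0:                       # nothing selected: keep Lasso
        return lasso.coef_
    ols = LinearRegression().fit(X[:, support], y)
    beta = np.zeros(X.shape[1])
    beta[support] = ols.coef_                   # unpenalized coefficients
    return beta
```

Because the refit is unpenalized, the selected coefficients are free of the Lasso shrinkage, which is the smaller-bias advantage described above.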
An l1, l2, l∞-Regularization Approach to High-Dimensional Errors-in-variables Models
Several new estimation methods have been recently proposed for the linear
regression model with observation error in the design. Different assumptions on
the data generating process have motivated different estimators and analyses.
In particular, the literature considered (1) observation errors in the design
uniformly bounded by some known constant, and (2) zero mean independent
observation errors. Under the first assumption, the rates of convergence of the
proposed estimators depend explicitly on this uniform bound, while the second
assumption has been applied when an estimator for the second moment of the
observational error is available. This work proposes and studies two new
estimators which, compared to other procedures for regression models with
errors in the design, exploit an additional l2-norm regularization.
The first estimator is applicable when both (1) and (2) hold but does not
require an estimator for the second moment of the observational error. The
second estimator is applicable under (2) and requires an estimator for the
second moment of the observation error. Importantly, we impose no assumption on
the accuracy of this pilot estimator, in contrast to the previously known
procedures. As in the recent proposals, we allow the number of covariates to be
much larger than the sample size. We establish the rates of convergence of the
estimators and compare them with the bounds obtained for related estimators in
the literature. These comparisons reveal interesting insights into the interplay
between the assumptions and the achievable rates of convergence.
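The paper's exact convex programs are not reproduced here; the cvxpy sketch below is only meant to convey the general shape of such estimators: an l1 criterion with an additional norm penalty, subject to a MU-selector-type constraint on the correlation between the noisy design and the residual. The constraint form, the extra l2 term, and the tuning constants tau and lam are assumptions made for illustration.

```python
import cvxpy as cp

def eiv_estimator_sketch(Z, y, tau, lam):
    """Schematic estimator for y = X @ theta + noise, observed with a
    noisy design Z = X + measurement error."""
    n, p = Z.shape
    theta = cp.Variable(p)
    # MU-selector-type constraint: small correlation between the (noisy)
    # design and the residual, measured in the l-infinity norm.
    constraints = [cp.norm_inf(Z.T @ (y - Z @ theta)) / n <= tau]
    # l1 criterion plus an additional l2-norm regularization term.
    objective = cp.Minimize(cp.norm1(theta) + lam * cp.norm2(theta))
    cp.Problem(objective, constraints).solve()
    return theta.value
```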
On multivariate quantiles under partial orders
This paper focuses on generalizing quantiles from the ordering point of view.
We propose the concept of partial quantiles, which are based on a given partial
order. We establish that partial quantiles are equivariant under
order-preserving transformations of the data, robust to outliers, characterize
the probability distribution if the partial order is sufficiently rich,
generalize the concept of efficient frontier, and can measure dispersion from
the partial order perspective. We also study several statistical aspects of
partial quantiles. We provide estimators, associated rates of convergence, and
asymptotic distributions that hold uniformly over a continuum of quantile
indices. Furthermore, we provide procedures that can restore monotonicity
properties that might have been disturbed by estimation error, establish
computational complexity bounds, and point out a concentration of measure
phenomenon (the latter under independence and the componentwise natural order).
Finally, we illustrate the concepts by discussing several theoretical examples
and simulations. Empirical applications to compare nutrient intake within
diets, to evaluate the performance of investment funds, and to study the impact
of policies on tobacco awareness are also presented to illustrate the concepts
and their use.
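Under the componentwise natural order singled out above, the basic empirical object is the probability that a draw is dominated by a given point. The sketch below computes these dominance probabilities and screens a grid for candidate tau-partial-quantile points; it is a simplification of the paper's formal definitions, and the grid and tolerance are illustrative choices.

```python
import numpy as np

def dominance_probabilities(sample, points):
    """P(X <= u componentwise), estimated empirically for each point u.

    sample: (n, d) array of observations; points: (m, d) candidate points.
    """
    below = (sample[None, :, :] <= points[:, None, :]).all(axis=2)
    return below.mean(axis=1)  # one probability per candidate point

rng = np.random.default_rng(0)
sample = rng.normal(size=(1000, 2))
grid = rng.normal(size=(500, 2))
probs = dominance_probabilities(sample, grid)
# Grid points whose dominance probability is close to tau = 0.5 trace out
# a frontier, the two-dimensional analogue of the median.
frontier = grid[np.isclose(probs, 0.5, atol=0.02)]
```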
L1-Penalized quantile regression in high-dimensional sparse models
We consider median regression and, more generally, quantile regression in high-dimensional sparse models. In these models the overall number of regressors p is very large, possibly larger than the sample size n, but only s of these regressors have non-zero impact on the conditional quantile of the response variable, where s grows slower than n. Since in this case the ordinary quantile regression is not consistent, we consider quantile regression penalized by the L1-norm of coefficients (L1-QR). First, we show that L1-QR is consistent at the rate of the square root of (s/n) log p, which is close to the oracle rate of the square root of (s/n), achievable when the minimal true model is known. The overall number of regressors p affects the rate only through the log p factor, thus allowing nearly exponential growth in the number of zero-impact regressors. The rate result holds under relatively weak conditions, requiring that s/n converges to zero at a super-logarithmic speed and that the regularization parameter satisfies certain theoretical constraints. Second, we propose a pivotal, data-driven choice of the regularization parameter and show that it satisfies these theoretical constraints. Third, we show that L1-QR correctly selects the true minimal model as a valid submodel when the non-zero coefficients of the true model are well separated from zero. We also show that the number of non-zero coefficients in L1-QR is of the same stochastic order as s, the number of non-zero coefficients in the minimal true model. Fourth, we analyze the rate of convergence of a two-step estimator that applies ordinary quantile regression to the selected model. Fifth, we evaluate the performance of L1-QR in a Monte Carlo experiment and provide an application to the analysis of international economic growth.
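Since the quantile-regression check function plus an L1 penalty is piecewise linear, L1-QR can be computed exactly as a linear program. The sketch below uses the standard LP reformulation with scipy; the penalty level lam is a placeholder and not the pivotal data-driven choice proposed in the paper.

```python
import numpy as np
from scipy.optimize import linprog

def l1_quantile_regression(X, y, tau, lam):
    """L1-penalized quantile regression via linear programming.

    Variables: beta = b_pos - b_neg and residual parts u_pos, u_neg >= 0,
    with X @ beta + u_pos - u_neg = y.
    """
    n, p = X.shape
    c = np.concatenate([
        lam * np.ones(2 * p),         # penalty on |beta| = b_pos + b_neg
        tau / n * np.ones(n),         # check-function weight on u_pos
        (1 - tau) / n * np.ones(n),   # check-function weight on u_neg
    ])
    A_eq = np.hstack([X, -X, np.eye(n), -np.eye(n)])
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:2 * p]         # recover beta
```

For tau = 0.5 this gives penalized median regression; the two-step estimator mentioned above would rerun unpenalized quantile regression on the nonzero coordinates of the returned beta.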
Post-l1-penalized estimators in high-dimensional linear regression models
In this paper we study post-penalized estimators which apply ordinary, unpenalized linear regression to the model selected by first-step penalized estimators, typically LASSO. It is well known that LASSO can estimate the regression function at nearly the oracle rate, and is thus hard to improve upon. We show that post-LASSO performs at least as well as LASSO in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the LASSO-based model selection 'fails' in the sense of missing some components of the 'true' regression model. By the 'true' model we mean here the best s-dimensional approximation to the regression function chosen by the oracle. Furthermore, post-LASSO can perform strictly better than LASSO, in the sense of a strictly faster rate of convergence, if the LASSO-based model selection correctly includes all components of the 'true' model as a subset and also achieves sufficient sparsity. In the extreme case, when LASSO perfectly selects the 'true' model, the post-LASSO estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by LASSO, which guarantees that this dimension is at most of the same order as the dimension of the 'true' model. Our rate results are non-asymptotic and hold in both parametric and nonparametric models. Moreover, our analysis is not limited to the LASSO estimator in the first step, but also applies to other estimators, for example, the trimmed LASSO, the Dantzig selector, or any other estimator with good rates and good sparsity properties. Our analysis covers both traditional trimming and a new practical, completely data-driven trimming scheme that induces maximal sparsity subject to maintaining a certain goodness-of-fit. The latter scheme has theoretical guarantees similar to those of LASSO or post-LASSO, but it dominates these procedures, as well as traditional trimming, in a wide variety of experiments.
Inference for High-Dimensional Sparse Econometric Models
This article is about estimation and inference methods for high dimensional
sparse (HDS) regression models in econometrics. High dimensional sparse models
arise in situations where many regressors (or series terms) are available and
the regression function is well-approximated by a parsimonious, yet unknown set
of regressors. The latter condition makes it possible to estimate the entire
regression function effectively by searching for approximately the right set of
regressors. We discuss methods for identifying this set of regressors and
estimating their coefficients based on l1-penalization and describe key
theoretical results. In order to capture realistic practical situations, we
expressly allow for imperfect selection of regressors and study the impact of
this imperfect selection on estimation and inference results. We focus the main
part of the article on the use of HDS models and methods in the instrumental
variables model and the partially linear model. We present a set of novel
inference results for these models and illustrate their use with applications
to returns to schooling and growth regression.
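One concrete instance of inference that tolerates imperfect selection in the partially linear model is the double-selection idea associated with this literature: select controls that predict the outcome, select controls that predict the treatment, and refit by OLS on the union. The sketch below is illustrative only; cross-validated penalties stand in for the theory-driven penalty choices developed in the article, and no standard errors are computed.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

def double_selection_effect(y, d, X):
    """Estimate alpha in y = d * alpha + g(X) + noise, with many controls X."""
    sel_y = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)  # controls for y
    sel_d = np.flatnonzero(LassoCV(cv=5).fit(X, d).coef_)  # controls for d
    union = np.union1d(sel_y, sel_d)
    cols = [d.reshape(-1, 1)] + ([X[:, union]] if union.size else [])
    fit = LinearRegression().fit(np.hstack(cols), y)
    return fit.coef_[0]  # coefficient on the treatment d
```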