19,799 research outputs found
Bayesian nonparametric multivariate convex regression
In many applications, such as economics, operations research and
reinforcement learning, one often needs to estimate a multivariate regression
function f subject to a convexity constraint. For example, in sequential
decision processes the value of a state under optimal subsequent decisions may
be known to be convex or concave. We propose a new Bayesian nonparametric
multivariate approach based on characterizing the unknown regression function
as the max of a random collection of unknown hyperplanes. This specification
induces a prior with large support in a Kullback-Leibler sense on the space of
convex functions, while also leading to strong posterior consistency. Although
we assume that f is defined over R^p, we show that this model has a convergence
rate of log(n)^{-1} n^{-1/(d+2)} under the empirical L2 norm when f actually
maps a d-dimensional linear subspace to R. We design an efficient reversible-jump
MCMC algorithm for posterior computation and demonstrate the method through an
application to value function approximation.
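The core modeling idea above, representing an unknown convex surface as the pointwise maximum of a collection of affine functions, can be illustrated directly. Below is a minimal Python/NumPy sketch of that max-of-hyperplanes representation under an assumed Gaussian draw for the intercepts and slopes; it is an illustration only, not the paper's reversible-jump MCMC sampler, and names such as draw_hyperplanes and n_planes are ours.

    # Minimal sketch of the max-of-hyperplanes representation of a convex
    # surface (illustrative only; not the paper's reversible-jump MCMC).
    import numpy as np

    rng = np.random.default_rng(0)

    def draw_hyperplanes(n_planes, p, scale=1.0):
        """Draw a random collection of hyperplanes (intercepts a_k, slopes b_k)."""
        a = rng.normal(0.0, scale, size=n_planes)       # intercepts
        b = rng.normal(0.0, scale, size=(n_planes, p))  # slope vectors
        return a, b

    def f_max_affine(X, a, b):
        """Evaluate f(x) = max_k (a_k + b_k' x); a max of affine maps is convex."""
        return np.max(a[None, :] + X @ b.T, axis=1)

    # toy use: a convex surface on R^2 built from 10 random hyperplanes
    a, b = draw_hyperplanes(n_planes=10, p=2)
    X = rng.uniform(-1.0, 1.0, size=(5, 2))
    print(f_max_affine(X, a, b))

Because the maximum of affine functions is always convex, a prior placed on random collections of hyperplanes automatically concentrates on the space of convex functions, which is what gives the prior its large Kullback-Leibler support on that space.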
On Stein's Identity and Near-Optimal Estimation in High-dimensional Index Models
We consider estimating the parametric components of semi-parametric multiple
index models in a high-dimensional and non-Gaussian setting. Such models form a
rich class of non-linear models with applications to signal processing, machine
learning and statistics. Our estimators leverage score-function-based first- and
second-order Stein's identities and do not require the covariates to satisfy the
Gaussian or elliptical symmetry assumptions common in the literature. Moreover,
to handle score functions and responses that are heavy-tailed, our estimators
are constructed by carefully thresholding their empirical counterparts. We show
that our estimator achieves a near-optimal statistical rate of convergence in
several settings. We supplement our theoretical results with simulation
experiments that confirm the theory.
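To make the estimating-equation idea concrete, here is a minimal sketch of a first-order Stein's-identity estimator for a single-index model with standard Gaussian covariates, where the score function is simply S1(x) = x. The element-wise clipping level tau and the helper name stein_first_order are simplifying assumptions for illustration, not the paper's exact thresholded construction or its non-Gaussian score estimation.

    # Sketch: first-order Stein estimator for y = g(beta' x) + noise with
    # standard Gaussian covariates, so the score is S1(x) = x (assumption).
    import numpy as np

    def stein_first_order(X, y, tau=None):
        """Estimate the index direction (up to sign/scale) via E[y * S1(x)]."""
        if tau is not None:
            y = np.clip(y, -tau, tau)   # truncate heavy-tailed responses
            X = np.clip(X, -tau, tau)   # truncate heavy-tailed scores
        beta_hat = (X * y[:, None]).mean(axis=0)
        return beta_hat / np.linalg.norm(beta_hat)

    # toy check: recover the direction of beta for a simple monotone link
    rng = np.random.default_rng(1)
    n, p = 5000, 20
    beta = np.zeros(p)
    beta[:3] = 1.0 / np.sqrt(3)
    X = rng.normal(size=(n, p))
    z = X @ beta
    y = z + np.sin(z) + 0.1 * rng.normal(size=n)
    print(stein_first_order(X, y, tau=5.0) @ beta)   # close to +1

By Stein's identity, E[y x] is proportional to E[g'(beta' x)] beta under Gaussian covariates, so averaging y times the score recovers the index direction without knowing the link g; the clipping step is what makes the averages well behaved when y or the scores are heavy-tailed.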
Challenges of Big Data Analysis
Big Data bring new opportunities to modern society and challenges to data
scientists. On one hand, Big Data hold great promise for discovering subtle
population patterns and heterogeneities that are not possible with small-scale
data. On the other hand, the massive sample size and high dimensionality of Big
Data introduce unique computational and statistical challenges, including
scalability and storage bottlenecks, noise accumulation, spurious correlation,
incidental endogeneity, and measurement errors. These challenges are distinctive
and require new computational and statistical paradigms. This article gives an
overview of the salient features of Big Data and how these features drive a
paradigm change in statistical and computational methods as well as computing
architectures. We also provide various new perspectives on Big Data analysis and
computation. In particular, we emphasize the viability of the sparsest solution
in a high-confidence set and point out that the exogeneity assumptions in most
statistical methods for Big Data cannot be validated due to incidental
endogeneity; this can lead to wrong statistical inferences and, consequently,
wrong scientific conclusions.
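As a small illustration of the spurious-correlation issue mentioned above, the following Python sketch (with arbitrary, assumed sample size and dimensions) shows how the largest absolute sample correlation between a response and a growing set of completely independent covariates drifts upward as dimensionality increases while the sample size stays fixed.

    # Simulation of spurious correlation: with n fixed and p growing, the
    # maximum absolute sample correlation with an unrelated response increases.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 50
    for p in (10, 100, 1000, 10000):
        X = rng.normal(size=(n, p))
        y = rng.normal(size=n)                     # independent of every column of X
        Xc = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize columns
        yc = (y - y.mean()) / y.std()
        corr = np.abs(Xc.T @ yc) / n               # sample correlations
        print(p, corr.max())                       # grows with p despite independence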