Asymptotic equivalence for regression under fractional noise
Consider estimation of the regression function based on a model with
equidistant design and measurement errors generated from a fractional Gaussian
noise process. In previous literature, this model has been heuristically linked
to an experiment, where the anti-derivative of the regression function is
continuously observed under additive perturbation by a fractional Brownian
motion. Based on a reformulation of the problem using reproducing kernel
Hilbert spaces, we derive abstract approximation conditions on function spaces
under which asymptotic equivalence between these models can be established and
show that the conditions are satisfied for certain Sobolev balls exceeding some
minimal smoothness. Furthermore, we construct a sequence space representation
and provide necessary conditions for asymptotic equivalence to hold.
Comment: Published at http://dx.doi.org/10.1214/14-AOS1262 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
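To make the heuristic link above concrete, here is a sketch of the standard aggregation argument; the notation (noise level $\sigma$, Hurst index $H$) is assumed for illustration and not quoted from the paper. In the discrete experiment one observes
\[
  Y_{i,n} = f\bigl(\tfrac{i}{n}\bigr) + \sigma\,\varepsilon_i,
  \qquad \varepsilon_i = B^H_i - B^H_{i-1}, \quad i = 1,\dots,n,
\]
where $(B^H_t)_{t \ge 0}$ is a fractional Brownian motion with Hurst index $H$, so that the errors form a fractional Gaussian noise. Summing the first $\lfloor nt\rfloor$ observations, dividing by $n$ and using the self-similarity $B^H_{nt} \stackrel{d}{=} n^H B^H_t$ gives, heuristically,
\[
  \frac{1}{n}\sum_{i \le nt} Y_{i,n}
  \;\approx\; \int_0^t f(s)\,ds + \sigma\, n^{H-1} B^H_t,
\]
that is, the anti-derivative of $f$ observed under additive perturbation by a fractional Brownian motion at noise level $n^{H-1}$.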
Nonparametric regression using deep neural networks with ReLU activation function
Consider the multivariate nonparametric regression model. It is shown that
estimators based on sparsely connected deep neural networks with ReLU
activation function and properly chosen network architecture achieve the
minimax rates of convergence (up to $\log n$-factors) under a general
composition assumption on the regression function. The framework includes many
well-studied structural constraints such as (generalized) additive models.
While there is a lot of flexibility in the network architecture, the tuning
parameter is the sparsity of the network. Specifically, we consider large
networks with number of potential network parameters exceeding the sample size.
The analysis gives some insights into why multilayer feedforward neural
networks perform well in practice. Interestingly, for ReLU activation function
the depth (number of layers) of the neural network architectures plays an
important role and our theory suggests that for nonparametric regression,
scaling the network depth with the logarithm of the sample size is natural. It is also shown
that under the composition assumption wavelet estimators can only achieve
suboptimal rates.
Comment: article, rejoinder and supplementary material
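As a concrete illustration of the estimator class, the following is a minimal numpy sketch of a deep feedforward ReLU network whose parameters are hard-thresholded to a prescribed sparsity level. The width, the depth scaling of order $\log n$, and the sparsity level are illustrative assumptions, not the construction from the paper.

```python
# Minimal sketch: a sparse, deep feedforward ReLU network evaluated with numpy.
# Architecture choices (width 16, depth ~ log n, sparsity = 10% of the
# parameters) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def init_sparse_network(widths, sparsity, rng):
    """Random weights/biases, keeping (approximately) only the `sparsity`
    largest entries in absolute value across all layers."""
    params = [[rng.normal(size=(k, m)), rng.normal(size=k)]
              for m, k in zip(widths[:-1], widths[1:])]
    flat = np.concatenate([np.abs(np.r_[W.ravel(), b]) for W, b in params])
    if sparsity < flat.size:
        threshold = np.sort(flat)[-sparsity]
        for W, b in params:
            W[np.abs(W) < threshold] = 0.0
            b[np.abs(b) < threshold] = 0.0
    return params

def network(x, params):
    """Evaluate the ReLU network at inputs x of shape (n_samples, d)."""
    h = x
    for W, b in params[:-1]:
        h = relu(h @ W.T + b)
    W, b = params[-1]
    return h @ W.T + b          # linear output layer

n, d = 1000, 3
depth = int(np.ceil(np.log(n)))   # depth scaling with log(sample size)
widths = [d] + [16] * depth + [1]
total = sum(m * k + k for m, k in zip(widths[:-1], widths[1:]))
params = init_sparse_network(widths, sparsity=total // 10, rng=rng)

x = rng.uniform(size=(n, d))
print(network(x, params).shape)   # (1000, 1)
```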
Rigid G2-representations and motives of type G2
We prove an effective Hilbert Irreducibility result for residual realizations
of a family of motives with motivic Galois group G2.
Posterior contraction rates for support boundary recovery
Given a sample of a Poisson point process with intensity $\lambda_f(x,y) = n\,\mathbf{1}(f(x) \le y)$, we study recovery of the boundary function $f$ from a
nonparametric Bayes perspective. Because of the irregularity of this model, the
analysis is non-standard. We establish a general result for the posterior
contraction rate with respect to the $L^1$-norm based on entropy and one-sided
small probability bounds. From this, specific posterior contraction results are
derived for Gaussian process priors and priors based on random wavelet series.
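Here, posterior contraction at rate $\varepsilon_n$ means that the posterior mass $\Pi\bigl(f : \|f - f_0\|_1 > M\varepsilon_n \mid \text{data}\bigr)$ vanishes under the true boundary $f_0$. For intuition about the observation scheme, the following Python sketch simulates a point process with intensity $n\,\mathbf{1}(f(x) \le y)$ on the unit square and recovers the boundary with a crude bin-minimum rule; the boundary $f$, the binning, and the estimator are illustrative assumptions, not the Bayesian procedures studied in the paper.

```python
# Sketch: simulate a Poisson point process with intensity n * 1(f(x) <= y)
# on [0,1]^2 (by thinning a homogeneous process) and estimate the support
# boundary f by the minimum of y over each bin. Illustration only.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
f = lambda x: 0.3 + 0.2 * np.sin(2 * np.pi * x)   # true boundary in (0, 1)

N = rng.poisson(n)                 # homogeneous process with intensity n
x, y = rng.uniform(size=N), rng.uniform(size=N)
keep = y >= f(x)                   # thinning realizes intensity n*1(f(x) <= y)
x, y = x[keep], y[keep]

bins = 50
edges = np.linspace(0.0, 1.0, bins + 1)
idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
f_hat = np.full(bins, np.nan)
for j in range(bins):
    if np.any(idx == j):
        f_hat[j] = y[idx == j].min()   # lowest point in bin j

centers = 0.5 * (edges[:-1] + edges[1:])
err = np.nanmean(np.abs(f_hat - f(centers)))   # empirical L1-type error
print(f"mean absolute error of the bin-minimum estimate: {err:.4f}")
```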