Statistical properties of the method of regularization with periodic Gaussian reproducing kernel
The method of regularization with the Gaussian reproducing kernel is popular
in the machine learning literature and successful in many practical
applications.
In this paper we consider the periodic version of the Gaussian kernel
regularization.
We show, in the white noise model setting, that in function spaces of very
smooth functions, such as the infinite-order Sobolev space and the space of
analytic functions, the method under consideration is asymptotically minimax;
in finite-order Sobolev spaces, the method is rate optimal, and its efficiency,
in terms of the constant relative to the minimax estimator, is reasonably
high. The smoothing parameters in the periodic Gaussian regularization can be
chosen adaptively without loss of asymptotic efficiency. The results derived in
this paper give a partial explanation of the success of the
Gaussian reproducing kernel in practice. Simulations are carried out to study
the finite sample properties of the periodic Gaussian regularization.
Comment: Published by the Institute of Mathematical Statistics
(http://www.imstat.org) in the Annals of Statistics
(http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/00905360400000045
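The method described in this abstract is, in essence, kernel ridge regression with a Gaussian kernel made periodic on the unit interval. The following R sketch illustrates the idea under simple assumptions that are not taken from the paper: data on [0, 1), a wrap-around distance as a single-image approximation of the periodized Gaussian kernel, and illustrative values for the bandwidth omega and smoothing parameter lambda.

```r
## Minimal sketch of regularization with a periodic Gaussian kernel.
## The wrap-around distance below keeps only the nearest periodic image;
## omega and lambda are illustrative, not the paper's choices.

periodic_gauss <- function(s, t, omega = 0.1) {
  d <- abs(outer(s, t, "-"))
  d <- pmin(d, 1 - d)               # wrap-around (periodic) distance on [0, 1)
  exp(-d^2 / (2 * omega^2))
}

# Method of regularization (kernel ridge regression):
# minimize (1/n) * sum((y_i - f(x_i))^2) + lambda * ||f||_K^2.
# By the representer theorem, f_hat(t) = sum_i c_i K(t, x_i), where
# (K + n * lambda * I) c = y.
fit_periodic_gauss <- function(x, y, lambda = 1e-3, omega = 0.1) {
  n <- length(x)
  K <- periodic_gauss(x, x, omega)
  c_hat <- solve(K + n * lambda * diag(n), y)
  function(tnew) as.vector(periodic_gauss(tnew, x, omega) %*% c_hat)
}

set.seed(1)
x <- sort(runif(200))
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)  # smooth periodic signal plus noise
f_hat <- fit_periodic_gauss(x, y)
plot(x, y)
lines(x, f_hat(x), lwd = 2)
```

The direct solve is O(n^3); the abstract's point is that lambda and omega can be chosen adaptively (e.g. by cross-validation) without losing asymptotic efficiency.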
Nonparametric regression analysis
Nonparametric regression uses nonparametric and flexible methods to analyze complex data with unknown regression relationships, imposing minimal assumptions on the regression function. This report reviews the theory and applications of nonparametric regression methods, with an emphasis on kernel regression, smoothing splines, and Gaussian process regression. Two datasets are analyzed to demonstrate and compare the three nonparametric regression models in R.
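Since the report compares these three methods in R, a minimal toy comparison might look like the following; the simulated data and tuning parameters are illustrative, and kernlab is just one of several R packages offering Gaussian process regression.

```r
## Toy comparison of kernel regression, a smoothing spline, and GP regression.
## Assumptions: simulated sinusoidal data and ad hoc tuning parameters.
library(kernlab)   # provides gausspr(); install.packages("kernlab") if needed

set.seed(42)
x <- sort(runif(150, 0, 10))
y <- sin(x) + rnorm(150, sd = 0.25)

# 1. Kernel regression (Nadaraya-Watson with a Gaussian kernel).
fit_kern <- ksmooth(x, y, kernel = "normal", bandwidth = 1)

# 2. Smoothing spline; the smoothing parameter is chosen by
#    generalized cross-validation by default.
fit_spline <- smooth.spline(x, y)

# 3. Gaussian process regression with the default RBF covariance.
X <- as.matrix(x)
fit_gp <- gausspr(X, y)

plot(x, y, col = "grey")
lines(fit_kern, col = 1, lwd = 2)
lines(fit_spline, col = 2, lwd = 2)
lines(x, predict(fit_gp, X), col = 3, lwd = 2)
legend("topright", legend = c("kernel", "spline", "GP"), col = 1:3, lwd = 2)
```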
Nonparametric likelihood based estimation of linear filters for point processes
We consider models for multivariate point processes where the intensity is
given nonparametrically in terms of functions in a reproducing kernel Hilbert
space. The likelihood function involves a time integral and is consequently not
given in terms of a finite number of kernel evaluations. The main result is a
representation of the gradient of the log-likelihood, which we use to derive
computable approximations of the log-likelihood and the gradient by time
discretization. These approximations are then used to minimize the approximate
penalized log-likelihood. For time and memory efficiency the implementation
relies crucially on the use of sparse matrices. As an illustration we consider
neuron network modeling, and we use this example to investigate how the
computational costs of the approximations depend on the resolution of the time
discretization. The implementation is available in the R package ppstat.
Comment: 10 pages, 3 figures
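The time-discretized likelihood approximation described in this abstract can be sketched in a few lines of base R. For a point process with intensity lambda(t) on [0, Tend], the log-likelihood is sum_i log lambda(t_i) - integral_0^Tend lambda(t) dt, and the integral is approximated by a Riemann sum on a grid. The exponential-filter intensity below is an illustrative stand-in for the paper's RKHS specification; this sketch does not use the ppstat package itself.

```r
## Minimal sketch of a time-discretized point-process log-likelihood.
## The filter h and the parameters mu, beta, delta are illustrative.

# log-likelihood: sum of log-intensities at the events minus the integral
# of the intensity over [0, Tend], approximated on a grid of step delta.
loglik_discretized <- function(events, lambda, Tend, delta = 1e-3) {
  grid <- seq(0, Tend, by = delta)
  sum(log(lambda(events))) - delta * sum(lambda(grid))
}

# Example linear-filter intensity driven by past events:
# lambda(t) = mu + sum_{t_i < t} h(t - t_i), with h(s) = exp(-beta * s).
make_intensity <- function(events, mu = 0.5, beta = 2) {
  function(t) {
    filt <- vapply(t, function(s) {
      past <- events[events < s]
      sum(exp(-beta * (s - past)))
    }, numeric(1))
    mu + filt
  }
}

events <- c(0.3, 0.9, 1.4, 2.2, 3.1)
lambda <- make_intensity(events)
loglik_discretized(events, lambda, Tend = 4)
```

The cost of evaluating the approximation grows with the grid resolution 1/delta, which is the trade-off the abstract investigates; the paper's sparse-matrix implementation addresses exactly this.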