Optimal estimation of the mean function based on discretely sampled functional data: Phase transition
The problem of estimating the mean of random functions based on discretely
sampled data arises naturally in functional data analysis. In this paper, we
study optimal estimation of the mean function under both common and independent
designs. Minimax rates of convergence are established and easily implementable
rate-optimal estimators are introduced. The analysis reveals interesting and
different phase transition phenomena in the two cases. Under the common design,
the sampling frequency solely determines the optimal rate of convergence when
it is relatively small and the sampling frequency has no effect on the optimal
rate when it is large. On the other hand, under the independent design, the
optimal rate of convergence is determined jointly by the sampling frequency and
the number of curves when the sampling frequency is relatively small. When it
is large, the sampling frequency has no effect on the optimal rate. Another
interesting contrast between the two settings is that smoothing is necessary
under the independent design, while, somewhat surprisingly, it is not essential
under the common design.
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the
Institute of Mathematical Statistics (http://www.imstat.org),
http://dx.doi.org/10.1214/11-AOS898
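The contrast between the two designs can be illustrated with a toy numpy sketch (not the paper's estimators; the mean function, noise level, and bandwidth below are ad hoc choices): under the common design a raw pointwise average works, while under the independent design the pooled observations are smoothed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_curves, m_points = 50, 20            # number of curves, sampling frequency
grid = np.linspace(0.0, 1.0, m_points)

def mu(t):
    return np.sin(np.pi * t)           # toy mean function

# Common design: all curves share the same sampling points, so the raw
# pointwise average is a natural estimator (no smoothing needed).
Y_common = mu(grid) + rng.normal(scale=0.3, size=(n_curves, m_points))
mu_hat_common = Y_common.mean(axis=0)

# Independent design: each curve has its own random design points; pool
# all observations and smooth (a simple Gaussian kernel average here).
T = rng.uniform(0.0, 1.0, size=(n_curves, m_points))
Y_indep = mu(T) + rng.normal(scale=0.3, size=(n_curves, m_points))

def kernel_mean(t0, T, Y, h=0.05):
    w = np.exp(-0.5 * ((T - t0) / h) ** 2)   # Gaussian kernel weights
    return (w * Y).sum() / w.sum()

mu_hat_indep = np.array([kernel_mean(t0, T, Y_indep) for t0 in grid])
```

Both toy estimators recover the mean function; the point of the example is only that the independent-design estimator needs the smoothing step, whereas the common-design one does not.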
Adaptive covariance matrix estimation through block thresholding
Estimation of large covariance matrices has drawn considerable recent
attention, and the theoretical focus so far has mainly been on developing a
minimax theory over a fixed parameter space. In this paper, we consider
adaptive covariance matrix estimation where the goal is to construct a single
procedure which is minimax rate optimal simultaneously over each parameter
space in a large collection. A fully data-driven block thresholding estimator
is proposed. The estimator is constructed by carefully dividing the sample
covariance matrix into blocks and then simultaneously estimating the entries in
a block by thresholding. The estimator is shown to be optimally rate adaptive
over a wide range of bandable covariance matrices. A simulation study is
carried out and shows that the block thresholding estimator performs well
numerically. Some of the technical tools developed in this paper can also be of
independent interest.
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the
Institute of Mathematical Statistics (http://www.imstat.org),
http://dx.doi.org/10.1214/12-AOS999
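The block-by-block construction can be sketched as follows. This is a hedged illustration only: the paper's estimator selects the block size and thresholds in a fully data-driven way, whereas the fixed block size and threshold here are ad hoc.

```python
import numpy as np

def block_threshold(S, block=5, lam=0.1):
    """Zero out off-diagonal blocks of S whose Frobenius norm is small."""
    p = S.shape[0]
    out = np.zeros_like(S)
    for i in range(0, p, block):
        for j in range(0, p, block):
            B = S[i:i + block, j:j + block]
            # always keep diagonal blocks; keep an off-diagonal block
            # only if its Frobenius norm exceeds the threshold
            if i == j or np.linalg.norm(B) > lam * np.sqrt(B.size):
                out[i:i + block, j:j + block] = B
    return out

rng = np.random.default_rng(1)
p, n = 20, 200
idx = np.arange(p)
Sigma = 0.5 ** np.abs(idx[:, None] - idx[None, :])  # bandable: entries decay
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S_hat = block_threshold(np.cov(X, rowvar=False))
```

Because the sample covariance is symmetric, mirror-image blocks have equal Frobenius norms, so the thresholded estimate stays symmetric; distant off-diagonal blocks, which carry mostly noise for a bandable matrix, get zeroed.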
Discussion: "A significance test for the lasso"
Discussion of "A significance test for the lasso" by Richard Lockhart,
Jonathan Taylor, Ryan J. Tibshirani, Robert Tibshirani [arXiv:1301.7161].
Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) by the
Institute of Mathematical Statistics (http://www.imstat.org),
http://dx.doi.org/10.1214/13-AOS1175B
A Constrained L1 Minimization Approach to Sparse Precision Matrix Estimation
A constrained L1 minimization method is proposed for estimating a sparse
inverse covariance matrix based on a sample of iid p-variate random
variables. The resulting estimator is shown to enjoy a number of desirable
properties. In particular, the rate of convergence between the estimator and
the true sparse precision matrix under the spectral norm is established when
the population distribution has either exponential-type tails or
polynomial-type tails. Convergence rates under the elementwise ℓ∞ norm and
Frobenius norm are also presented. In addition, graphical model selection is
considered. The procedure is easily implementable by linear programming.
Numerical performance of the estimator is investigated using both simulated
and real data. In particular, the procedure is applied to analyze a breast
cancer dataset. The procedure performs favorably in comparison to existing
methods.
Comment: To appear in the Journal of the American Statistical Association
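The linear-programming formulation can be sketched column by column: minimize the ℓ1 norm of a candidate column b subject to the elementwise constraint that the sample covariance times b stays within λ of the corresponding unit vector. The scipy-based code below is an illustration of this idea, not the authors' implementation, and all parameter choices are ad hoc.

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """min ||b||_1  s.t.  ||S b - e_j||_inf <= lam, via b = u - v, u, v >= 0."""
    p = S.shape[0]
    e = np.zeros(p)
    e[j] = 1.0
    c = np.ones(2 * p)                    # objective: sum(u) + sum(v) = ||b||_1
    A = np.block([[S, -S], [-S, S]])      # encodes |S(u - v) - e_j| <= lam
    b = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
    u, v = res.x[:p], res.x[p:]
    return u - v

rng = np.random.default_rng(2)
p, n = 5, 2000
Omega = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # sparse precision
Sigma = np.linalg.inv(Omega)
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
S = np.cov(X, rowvar=False)
Omega_hat = np.column_stack([clime_column(S, j, lam=0.1) for j in range(p)])
# (the column estimates would normally be symmetrized; omitted for brevity)
```

The standard trick b = u − v with u, v ≥ 0 turns the ℓ1 objective into a linear one, so each column is a small LP, which is what makes the procedure easy to implement.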
Minimax and Adaptive Prediction for Functional Linear Regression
This article considers minimax and adaptive prediction with functional predictors in the framework of the functional linear model and reproducing kernel Hilbert spaces. The minimax rate of convergence for the excess prediction risk is established. It is shown that the optimal rate is determined jointly by the reproducing kernel and the covariance kernel. In particular, the alignment of these two kernels can significantly affect the difficulty of the prediction problem. In contrast, the existing literature has so far focused only on the setting where the two kernels are nearly perfectly aligned. This motivates us to propose an easily implementable, data-driven roughness regularization predictor that is shown to attain the optimal rate of convergence adaptively without the need to know the covariance kernel. Simulation studies are carried out to illustrate the merits of the adaptive predictor and to demonstrate the theoretical results.
A Reproducing Kernel Hilbert Space Approach to Functional Linear Regression
We study in this paper a smoothness regularization method for functional linear regression and provide a unified treatment for both the prediction and estimation problems. By developing a tool on simultaneous diagonalization of two positive definite kernels, we obtain sharper results on the minimax rates of convergence and show that smoothness regularized estimators achieve the optimal rates of convergence for both prediction and estimation under conditions weaker than those for the functional principal components based methods developed in the literature. Despite the generality of the method of regularization, we show that the procedure is easily implementable. Numerical results are obtained to illustrate the merits of the method and to demonstrate the theoretical developments.
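The smoothness-regularization idea can be sketched numerically by discretizing the slope function on a grid and penalizing its second differences (a stand-in for an RKHS roughness norm). Everything below is illustrative: the grid, the penalty, the noise level, and the regularization parameter are hypothetical choices, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 200, 50
grid = np.linspace(0.0, 1.0, m)
beta_true = np.sin(np.pi * grid)        # smooth slope function (toy choice)

# rough functional predictors (Brownian-motion-like paths on the grid)
X = rng.normal(size=(n, m)).cumsum(axis=1) / np.sqrt(m)
# functional linear model: y_i = integral X_i(t) beta(t) dt + noise,
# with the integral approximated by a Riemann sum
y = X @ beta_true / m + rng.normal(scale=0.02, size=n)

# second-difference matrix: penalizes roughness of the discretized beta
D = np.diff(np.eye(m), n=2, axis=0)

lam = 1e-2
A = X.T @ X / m**2 + lam * D.T @ D      # regularized normal equations
beta_hat = np.linalg.solve(A, X.T @ y / m)
```

The penalty term lam * D.T @ D is what makes the inverse problem well posed: without it, the directions of the predictor covariance with tiny eigenvalues would amplify the noise in the least-squares solution.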