Matrix Completion via Max-Norm Constrained Optimization
Matrix completion has been well studied under the uniform sampling model and
the trace-norm regularized methods perform well both theoretically and
numerically in such a setting. However, the uniform sampling model is
unrealistic for a range of applications and the standard trace-norm relaxation
can behave very poorly when the underlying sampling scheme is non-uniform.
In this paper we propose and analyze a max-norm constrained empirical risk
minimization method for noisy matrix completion under a general sampling model.
The optimal rate of convergence is established under the Frobenius norm loss in
the context of approximately low-rank matrix reconstruction. It is shown that
the max-norm constrained method is minimax rate-optimal and yields a unified and
robust approximate recovery guarantee with respect to the sampling
distribution. The computational effectiveness of this method is also
discussed, based on first-order algorithms for solving convex optimization
problems involving max-norm regularization.
Comment: 33 pages
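The constrained program above lends itself to a simple first-order scheme. The sketch below (plain NumPy, not the authors' implementation) factors the matrix as U V^T and projects the rows of each factor onto a Euclidean ball of radius sqrt(R), which enforces the max-norm bound ||U V^T||_max <= R; the rank, radius, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def max_norm_complete(M_obs, mask, rank=2, R=5.0, lr=0.01, iters=2000, seed=0):
    """Projected gradient sketch of max-norm constrained completion.

    Uses the factorization bound ||M||_max <= max_i ||u_i|| * max_j ||v_j||:
    keeping every row of U and V inside a ball of radius sqrt(R) enforces
    ||U V^T||_max <= R.  (Illustrative sketch, not the paper's algorithm.)
    """
    rng = np.random.default_rng(seed)
    n, m = M_obs.shape
    U = 0.1 * rng.standard_normal((n, rank))
    V = 0.1 * rng.standard_normal((m, rank))
    sqrt_r = np.sqrt(R)
    for _ in range(iters):
        resid = mask * (U @ V.T - M_obs)   # gradient uses observed entries only
        U, V = U - lr * (resid @ V), V - lr * (resid.T @ U)
        for F in (U, V):                   # project each row back into the ball
            norms = np.maximum(np.linalg.norm(F, axis=1, keepdims=True), 1e-12)
            F *= np.minimum(1.0, sqrt_r / norms)
    return U @ V.T
```

The row-norm projection is what distinguishes this from plain factored gradient descent: it is the step that keeps the iterates inside the max-norm ball, regardless of how non-uniform the sampling pattern is.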
Recovering Multiple Nonnegative Time Series From a Few Temporal Aggregates
Motivated by electricity consumption metering, we extend existing nonnegative
matrix factorization (NMF) algorithms to use linear measurements as
observations, instead of matrix entries. The objective is to estimate multiple
time series at a fine temporal scale from temporal aggregates measured on each
individual series. Furthermore, our algorithm is extended to take individual
autocorrelation into account for better estimation, using a recent convex
relaxation of quadratically constrained quadratic programs. Extensive
experiments on synthetic and real-world electricity consumption datasets
illustrate the effectiveness of our matrix recovery algorithms.
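As a deliberately simplified illustration of NMF under linear measurements, the sketch below adapts the classical multiplicative updates to block-sum temporal aggregates Y = S X with X = W H. The block-sum operator, rank, and iteration count are assumptions for the example, not the authors' algorithm, and the autocorrelation penalty is omitted.

```python
import numpy as np

def block_sum_operator(T, b):
    """0/1 matrix S such that S @ x gives sums of consecutive length-b blocks."""
    S = np.zeros((T // b, T))
    for i in range(T // b):
        S[i, i * b:(i + 1) * b] = 1.0
    return S

def nmf_from_aggregates(Y, S, n_series, rank=3, iters=2000, seed=0, eps=1e-10):
    """Multiplicative updates for ||S @ W @ H - Y||_F^2 with W, H >= 0.

    Same form as Lee-Seung NMF, except the data enter only through the
    aggregates Y = S @ X rather than through the entries of X itself.
    """
    rng = np.random.default_rng(seed)
    T = S.shape[1]
    W = rng.random((T, rank)) + 0.1
    H = rng.random((rank, n_series)) + 0.1
    for _ in range(iters):
        SW = S @ W
        H *= (SW.T @ Y) / (SW.T @ SW @ H + eps)                   # H stays >= 0
        W *= (S.T @ Y @ H.T) / (S.T @ (S @ W @ H) @ H.T + eps)    # W stays >= 0
    return W, H
```

Because S and Y are entrywise nonnegative, both numerators and denominators are nonnegative, so the multiplicative form preserves the nonnegativity of W and H at every iteration.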
Structured penalties for functional linear models---partially empirical eigenvectors for regression
One of the challenges with functional data is incorporating spatial
structure, or local correlation, into the analysis. This structure is inherent
in the output from an increasing number of biomedical technologies, and a
functional linear model is often used to estimate the relationship between the
predictor functions and scalar responses. Common approaches to the ill-posed
problem of estimating a coefficient function typically involve two stages:
regularization and estimation. Regularization is usually done via dimension
reduction, projecting onto a predefined span of basis functions or a reduced
set of eigenvectors (principal components). In contrast, we present a unified
approach that directly incorporates spatial structure into the estimation
process by exploiting the joint eigenproperties of the predictors and a linear
penalty operator. In this sense, the components in the regression are
`partially empirical' and the framework is provided by the generalized singular
value decomposition (GSVD). The GSVD clarifies the penalized estimation process
and informs the choice of penalty by making explicit the joint influence of the
penalty and predictors on the bias, variance, and performance of the estimated
coefficient function. Laboratory spectroscopy data and simulations are used to
illustrate the concepts.
Comment: 29 pages, 3 figures, 5 tables; typo/notational errors edited and
intro revised per journal review process
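To make the penalized estimation step concrete, here is a minimal NumPy sketch of the generic estimator the abstract refers to, b = argmin ||y - X b||^2 + lam ||L b||^2, with a second-difference matrix standing in for the linear penalty operator (a hypothetical but common choice for encoding local smoothness). The GSVD of the pair (X, L) is what diagonalizes exactly this problem and makes the joint influence of penalty and predictors explicit; the direct solve below gives the same estimate.

```python
import numpy as np

def second_diff_penalty(p):
    """Second-difference operator L: ||L b||^2 penalizes roughness of b."""
    L = np.zeros((p - 2, p))
    for i in range(p - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def penalized_flm(X, y, lam):
    """Penalized least squares: b = argmin ||y - X b||^2 + lam ||L b||^2.

    Solved via the normal equations; the GSVD of (X, L) decomposes this
    same estimator into components shaped jointly by predictors and penalty.
    """
    p = X.shape[1]
    L = second_diff_penalty(p)
    return np.linalg.solve(X.T @ X + lam * (L.T @ L), X.T @ y)
```

The point of the GSVD analysis is that the bias and variance of this estimator are governed by the generalized singular values of (X, L) jointly, not by the eigenstructure of X alone, which is what dimension-reduction approaches use.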
Fixed effects selection in the linear mixed-effects model using adaptive ridge procedure for L0 penalty performance
This paper is concerned with the selection of fixed effects along with the
estimation of fixed effects, random effects and variance components in the
linear mixed-effects model. We introduce a selection procedure based on an
adaptive ridge (AR) penalty of the profiled likelihood, where the covariance
matrix of the random effects is Cholesky factorized. This selection procedure
is intended for both low- and high-dimensional settings, where the number of
fixed effects is allowed to grow exponentially with the total sample size,
yielding technical difficulties due to the non-convex optimization problem
induced by L0 penalties. Through extensive simulation studies, the procedure
is compared to LASSO selection and appears to enjoy model selection
consistency as well as estimation consistency.
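The adaptive ridge idea itself is easy to state outside the mixed-model setting. The sketch below applies it to a plain linear model (no random effects, Cholesky factorization, or profiled likelihood, which the paper handles): ridge weights w_j = 1/(b_j^2 + eps) are recomputed from the current estimate, so at a fixed point the weighted penalty lam * sum_j w_j b_j^2 behaves like the L0 penalty lam * ||b||_0. The values of lam and eps are illustrative.

```python
import numpy as np

def adaptive_ridge(X, y, lam=1.0, eps=1e-6, iters=30):
    """Adaptive ridge sketch: iteratively reweighted ridge regression.

    Weights w_j = 1 / (b_j^2 + eps) grow without bound for coefficients
    heading to zero (driving them to ~0) and become negligible for large
    coefficients, mimicking an L0 penalty.  Linear-model illustration only.
    """
    p = X.shape[1]
    w = np.ones(p)
    for _ in range(iters):
        b = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        w = 1.0 / (b ** 2 + eps)   # reweighting step approximating L0
    return b
```

Each iteration is a closed-form weighted ridge solve, which is what makes the approach attractive compared with directly attacking the non-convex L0 problem.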