GMRES-Accelerated ADMM for Quadratic Objectives
We consider the sequence acceleration problem for the alternating direction
method of multipliers (ADMM) applied to a class of equality-constrained
problems with strongly convex quadratic objectives, which frequently arise as
the Newton subproblem of interior-point methods. Within this context, the ADMM
update equations are linear, the iterates are confined to a Krylov subspace,
and the Generalized Minimal RESidual (GMRES) algorithm is optimal in its
ability to accelerate convergence. The basic ADMM method solves a
$\kappa$-conditioned problem in $O(\sqrt{\kappa})$ iterations. We give
theoretical justification and numerical evidence that the GMRES-accelerated
variant consistently solves the same problem in $O(\kappa^{1/4})$ iterations
for an order-of-magnitude reduction in iterations, despite a worst-case bound
of $O(\sqrt{\kappa})$ iterations. The method is shown to be competitive
against standard preconditioned Krylov subspace methods for saddle-point
problems. The method is embedded within SeDuMi, a popular open-source solver
for conic optimization written in MATLAB, and used to solve many large-scale
semidefinite programs with error that decreases like $O(1/k^2)$, instead of
$O(1/k)$, where $k$ is the iteration index.
Comment: 31 pages, 7 figures. Accepted for publication in SIAM Journal on
Optimization (SIOPT)
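The key observation above, that quadratic objectives make each ADMM sweep an
affine map whose fixed point GMRES can find matrix-free, can be sketched in a
few lines. The splitting below (min f(x) + g(z) subject to x = z with
quadratic f and g), the instance, and all names (P, R, q, rho) are
illustrative toy choices, not the paper's actual formulation:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy instance: min 0.5 x'Px + q'x + 0.5 z'Rz  subject to  x = z
rng = np.random.default_rng(0)
n, rho = 50, 1.0
G = rng.standard_normal((n, n)); P = G @ G.T / n + np.eye(n)
H = rng.standard_normal((n, n)); R = H @ H.T / n + np.eye(n)
q = rng.standard_normal(n)

Pinv_rho = np.linalg.inv(P + rho * np.eye(n))   # x-update solve
Rinv_rho = np.linalg.inv(R + rho * np.eye(n))   # z-update solve

def admm_sweep(y):
    """One scaled-ADMM sweep y = (z, u) -> (z+, u+); affine because the
    quadratic objectives make the x- and z-updates linear solves."""
    z, u = y[:n], y[n:]
    x = Pinv_rho @ (rho * (z - u) - q)
    z_new = Rinv_rho @ (rho * (x + u))
    u_new = u + x - z_new
    return np.concatenate([z_new, u_new])

# The sweep is y -> M y + d, so its fixed point solves (I - M) y = d;
# GMRES attacks this matrix-free through the sweep itself.
d = admm_sweep(np.zeros(2 * n))
I_minus_M = LinearOperator((2 * n, 2 * n), dtype=np.float64,
                           matvec=lambda y: y - (admm_sweep(y) - d))
y_star, info = gmres(I_minus_M, d)
z_star = y_star[:n]   # at the fixed point z* = x*, and (P + R) x* = -q
```

Plain ADMM would reach the same fixed point by repeatedly applying
`admm_sweep`; the point of the acceleration is that GMRES minimizes the
residual over the whole Krylov subspace those iterates span.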
Online Matrix Completion Through Nuclear Norm Regularisation
The main goal of this paper is to propose a novel method for performing
matrix completion online. Motivated by a wide variety of applications,
ranging from the design of recommender systems to sensor network
localization through seismic data reconstruction, we consider the matrix
completion problem when entries of the matrix of interest are observed
gradually. Precisely, we place ourselves in the situation where the
predictive rule should be refined incrementally, rather than recomputed from
scratch each time the sample of observed entries grows. Extending existing
matrix completion methods to the sequential prediction context is indeed a
major issue in the Big Data era, and yet one that has received little
attention in the literature. The algorithm promoted in this article builds
upon the Soft Impute approach introduced in Mazumder et al. (2010). The
major novelty essentially arises from the use of a randomised technique for
both computing and updating the Singular Value Decomposition (SVD) involved
in the algorithm. Though of disarming simplicity, the proposed method turns
out to be very efficient, while requiring reduced computations. Several
numerical experiments based on real datasets illustrate its performance,
together with preliminary results giving it a theoretical basis.
Comment: Corrected a typo in the affiliation
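The combination described above can be sketched as follows: a Soft Impute
sweep that fills missing entries with the current estimate and
soft-thresholds the singular values, with the SVD replaced by a generic
Halko-style randomized SVD. This is a minimal illustration, not the authors'
specific randomised update; all parameters (rank cap, shrinkage, batch
sizes) are arbitrary toy choices:

```python
import numpy as np

def randomized_svd(A, rank, n_oversample=10, n_iter=4, rng=None):
    """Halko-style randomized SVD: sketch the range of A with a random
    test matrix, refine it with power iterations, then decompose the
    small projected matrix."""
    if rng is None:
        rng = np.random.default_rng(0)
    k = rank + n_oversample
    Q = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k)))[0]
    for _ in range(n_iter):                      # power iterations sharpen the range
        Q = np.linalg.qr(A @ (A.T @ Q))[0]
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return (Q @ U_small)[:, :rank], s[:rank], Vt[:rank]

def soft_impute_step(X_obs, mask, M, lam, rank):
    """One Soft Impute sweep: fill unobserved entries with the current
    estimate, then apply a soft-thresholded (randomized) SVD."""
    Z = np.where(mask, X_obs, M)
    U, s, Vt = randomized_svd(Z, rank)
    s = np.maximum(s - lam, 0.0)                 # nuclear-norm shrinkage
    return (U * s) @ Vt

# Toy online run: entries of a rank-3 matrix are revealed in batches,
# and the estimate is warm-started rather than recomputed from scratch.
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 3)) @ rng.standard_normal((3, 60))
mask = np.zeros(X.shape, bool)
M = np.zeros_like(X)
for _ in range(5):                               # each batch reveals ~15% more entries
    mask |= rng.random(X.shape) < 0.15
    for _ in range(30):
        M = soft_impute_step(X, mask, M, lam=0.1, rank=10)
```

The warm start is what makes the sequential setting cheap: after each new
batch of entries, only a few inner sweeps are needed to refine the previous
estimate.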
Covariance Estimation in High Dimensions via Kronecker Product Expansions
This paper presents a new method for estimating high dimensional covariance
matrices. The method, permuted rank-penalized least-squares (PRLS), is based on
a Kronecker product series expansion of the true covariance matrix. Assuming an
i.i.d. Gaussian random sample, we establish high dimensional rates of
convergence to the true covariance as both the number of samples and the number
of variables go to infinity. For covariance matrices of low separation rank,
our results establish that PRLS has significantly faster convergence than the
standard sample covariance matrix (SCM) estimator. The convergence rate
captures a fundamental tradeoff between estimation error and approximation
error, thus providing a scalable covariance estimation framework in terms of
separation rank, similar to low rank approximation of covariance matrices. The
MSE convergence rates generalize the high dimensional rates recently obtained
for the ML Flip-flop algorithm for Kronecker product covariance estimation. We
show that a class of block Toeplitz covariance matrices is well approximated
by low separation rank and give bounds on the minimal separation rank that
ensures a given level of bias. Simulations are presented to validate the
theoretical bounds. As a real world application, we illustrate the utility of
the proposed Kronecker covariance estimator for spatio-temporal linear least
squares prediction of multivariate wind speed measurements.
Comment: 47 pages, accepted to IEEE Transactions on Signal Processing
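The Kronecker product series expansion underlying this kind of estimator can
be sketched via the classical Van Loan-Pitsianis rearrangement: permuting a
$pq \times pq$ matrix into a $p^2 \times q^2$ matrix turns the best
separation-rank-$r$ Kronecker approximation into a truncated SVD. This is a
minimal sketch of the series expansion only, without the rank penalty that
defines PRLS; the names and dimensions are illustrative:

```python
import numpy as np

def rearrange(Sigma, p, q):
    """Van Loan-Pitsianis rearrangement: maps Sigma (pq x pq) to a
    (p^2 x q^2) matrix whose rank-1 terms correspond to Kronecker
    products A (x) B."""
    blocks = Sigma.reshape(p, q, p, q)        # blocks[i, k, j, l] = Sigma[i*q+k, j*q+l]
    return blocks.transpose(0, 2, 1, 3).reshape(p * p, q * q)

def kron_series(Sigma, p, q, r):
    """Best separation-rank-r approximation sum_i s_i A_i (x) B_i,
    obtained from the truncated SVD of the rearranged matrix."""
    U, s, Vt = np.linalg.svd(rearrange(Sigma, p, q), full_matrices=False)
    return sum(s[i] * np.kron(U[:, i].reshape(p, p), Vt[i].reshape(q, q))
               for i in range(r))

# Separation-rank-2 example: Sigma = A1 (x) B1 + A2 (x) B2 is recovered
# exactly by the rank-2 truncation.
rng = np.random.default_rng(0)
p, q = 4, 5

def spd(k):
    M = rng.standard_normal((k, k))
    return M @ M.T + np.eye(k)

Sigma = np.kron(spd(p), spd(q)) + np.kron(spd(p), spd(q))
Sigma_hat = kron_series(Sigma, p, q, r=2)
```

In the estimation setting, the SVD is applied to the rearranged sample
covariance and the singular values are penalized rather than hard-truncated,
which is where the separation-rank/bias tradeoff in the abstract comes from.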
Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. The recent advances in various topics of modern
optimization have also been revamping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
Comment: 18 pages