Conic Optimization Theory: Convexification Techniques and Numerical Algorithms
Optimization is at the core of control theory and appears in several areas of
this field, such as optimal control, distributed control, system
identification, robust control, state estimation, model predictive control and
dynamic programming. Recent advances in several topics of modern
optimization have also been reshaping the area of machine learning. Motivated
by the crucial role of optimization theory in the design, analysis, control and
operation of real-world systems, this tutorial paper offers a detailed overview
of some major advances in this area, namely conic optimization and its emerging
applications. First, we discuss the importance of conic optimization in
different areas. Then, we explain seminal results on the design of hierarchies
of convex relaxations for a wide range of nonconvex problems. Finally, we study
different numerical algorithms for large-scale conic optimization problems.
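A basic primitive inside many first-order numerical algorithms for large-scale conic optimization is the Euclidean projection onto the cone. As a minimal illustration (not drawn from the paper itself), the projection onto the second-order cone has a well-known closed form:

```python
import numpy as np

def project_soc(t, x):
    """Euclidean projection of the point (t, x) onto the second-order cone
    {(t, x) : ||x||_2 <= t}. Splitting-type conic solvers apply such
    projections at every iteration."""
    nx = np.linalg.norm(x)
    if nx <= t:                       # already inside the cone
        return t, x.copy()
    if nx <= -t:                      # inside the polar cone: project to origin
        return 0.0, np.zeros_like(x)
    alpha = 0.5 * (nx + t)            # boundary case: closed-form projection
    return alpha, (alpha / nx) * x

# Example: the point (0, [3, 4]) lies outside the cone
project_soc(0.0, np.array([3.0, 4.0]))   # -> (2.5, array([1.5, 2.0]))
```

The three branches cover the only possibilities: the point is already feasible, its projection is the apex, or it projects onto the cone's boundary.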
Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework
As the modern world becomes increasingly digitized and interconnected,
distributed signal processing has proven effective at handling the resulting
large volumes of data. However, a major challenge limiting the broad use of
distributed signal processing techniques is the issue of privacy in handling
sensitive data. To address this privacy issue, we propose a novel yet general
subspace perturbation method for privacy-preserving distributed optimization,
which allows each node to obtain the desired solution while protecting its
private data. In particular, we show that the dual variables introduced in each
distributed optimizer will not converge in a certain subspace determined by the
graph topology. Additionally, the optimization variable is ensured to converge
to the desired solution, because it is orthogonal to this non-convergent
subspace. We therefore propose to insert noise in the non-convergent subspace
through the dual variable such that the private data are protected, and the
accuracy of the desired solution is completely unaffected. Moreover, the
proposed method is shown to be secure under two widely-used adversary models:
passive and eavesdropping. Furthermore, we consider several distributed
optimizers such as ADMM and PDMM to demonstrate the general applicability of
the proposed method. Finally, we test the performance through a set of
applications. Numerical tests indicate that the proposed method outperforms
existing methods in terms of estimation accuracy, privacy level,
communication cost, and convergence rate.
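The core mechanism described above can be illustrated with a small linear-algebra sketch (a hypothetical setup, not the paper's actual optimizer): if noise is confined to the subspace orthogonal to the one that determines the solution, the solution-relevant component is untouched no matter how large the noise is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: columns of A span the "convergent" subspace that
# determines the solution; its orthogonal complement plays the role of the
# non-convergent subspace in which the perturbation noise is inserted.
A = rng.standard_normal((6, 2))
P = A @ np.linalg.pinv(A)            # orthogonal projector onto range(A)
P_perp = np.eye(6) - P               # projector onto the orthogonal complement

y = rng.standard_normal(6)           # stand-in for a dual variable
noise = 10.0 * (P_perp @ rng.standard_normal(6))   # large noise, confined to
                                                   # the non-convergent subspace

# The component of y in the convergent subspace -- the part the optimization
# variable depends on -- is unchanged by the perturbation:
assert np.allclose(P @ (y + noise), P @ y)
```

This is why accuracy is unaffected: the update that drives the optimization variable only ever sees the projection onto the convergent subspace, while an observer of the perturbed variable cannot separate the private data from the injected noise.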
Does the ℓ1-norm Learn a Sparse Graph under Laplacian Constrained Graphical Models?
We consider the problem of learning a sparse graph under Laplacian
constrained Gaussian graphical models. This problem can be formulated as a
penalized maximum likelihood estimation of the precision matrix under Laplacian
structural constraints. Like in the classical graphical lasso problem, recent
works made use of the ℓ1-norm regularization with the goal of promoting
sparsity in Laplacian structural precision matrix estimation. However, we find
that the widely used ℓ1-norm is not effective in imposing a sparse
solution in this problem. Through empirical evidence, we observe that the
number of nonzero graph weights grows as the regularization parameter
increases. From a theoretical perspective, we prove that a large regularization
parameter will surprisingly lead to a fully connected graph. To address this
issue, we propose a nonconvex estimation method by solving a sequence of
weighted ℓ1-norm penalized sub-problems and prove that the statistical
error of the proposed estimator matches the minimax lower bound. To solve each
sub-problem, we develop a projected gradient descent algorithm that enjoys a
linear convergence rate. Numerical experiments involving synthetic and
real-world data sets from the recent COVID-19 pandemic and financial stock
markets demonstrate the effectiveness of the proposed method. An open source
package containing the code for all the experiments is available
at https://github.com/mirca/sparseGraph
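A projected gradient step of the kind described above can be sketched on a toy surrogate (this is an illustrative least-squares problem with a weighted ℓ1 penalty and nonnegativity constraint, not the paper's precision-matrix estimator): since graph weights are nonnegative, the weighted ℓ1 penalty is linear on the feasible set, and each iteration reduces to a gradient step followed by projection onto the nonnegative orthant.

```python
import numpy as np

def projected_gradient(A, b, weights, lam=0.1, step=None, iters=500):
    """Projected gradient descent for
        min_{w >= 0}  0.5 * ||A w - b||^2 + lam * sum_i weights_i * w_i.
    On w >= 0 the weighted l1 penalty is the linear term lam * weights @ w,
    so the update is a gradient step plus a clip at zero."""
    if step is None:
        # step = 1/L with L = sigma_max(A)^2, the gradient's Lipschitz constant
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b) + lam * weights
        w = np.maximum(w - step * grad, 0.0)   # projection onto w >= 0
    return w

# Toy example with a sparse nonnegative ground truth
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[[2, 7]] = [1.0, 2.0]
b = A @ w_true
w_hat = projected_gradient(A, b, weights=np.ones(10), lam=0.05)
```

For this well-conditioned least-squares objective the iteration converges linearly, mirroring the linear convergence rate claimed for the paper's algorithm on each sub-problem; in the paper, the weights of the ℓ1 penalty are themselves updated across the sequence of sub-problems to drive the nonconvex estimate.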