A squared smoothing Newton method for nonsmooth matrix equations and its applications in semidefinite optimization problems
DOI: 10.1137/S1052623400379620. SIAM Journal on Optimization 14(3), 783-80
A squared smoothing Newton method for semidefinite programming
This paper proposes a squared smoothing Newton method via the Huber smoothing
function for solving semidefinite programming problems (SDPs). We first study
the fundamental properties of the matrix-valued mapping defined upon the Huber
function. Using these results and existing ones in the literature, we then
conduct rigorous convergence analysis and establish convergence properties for
the proposed algorithm. In particular, we show that the proposed method is
well-defined and admits global convergence. Moreover, under suitable regularity
conditions, i.e., the primal and dual constraint nondegenerate conditions, the
proposed method is shown to have a superlinear convergence rate. To evaluate
the practical performance of the algorithm, we conduct extensive numerical
experiments for solving various classes of SDPs. Comparison with the
state-of-the-art SDP solver SDPNAL+ demonstrates that our method is
also efficient for computing accurate solutions of SDPs.
Comment: 44 pages
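The abstract above smooths a matrix-valued projection via the Huber function. As a minimal sketch (not the paper's exact construction), one common Huber smoothing of the scalar plus function can be lifted to symmetric matrices by applying it to the eigenvalues; the function names and the particular smoothing variant below are illustrative assumptions:

```python
import numpy as np

def huber_plus(x, mu):
    # One common Huber smoothing of max(x, 0): the kink at 0 is
    # replaced by a quadratic on [0, mu], giving a C^1 function.
    # (Illustrative variant; the paper's exact form may differ.)
    x = np.asarray(x, dtype=float)
    return np.where(x <= 0, 0.0,
           np.where(x >= mu, x - mu / 2, x**2 / (2 * mu)))

def smoothed_psd_projection(X, mu):
    # Apply the smoothed plus function to the eigenvalues of a
    # symmetric matrix: a smooth surrogate for the projection onto
    # the PSD cone, of the kind used inside smoothing Newton methods.
    w, Q = np.linalg.eigh(X)
    return (Q * huber_plus(w, mu)) @ Q.T
```

As mu decreases, the surrogate approaches the exact projection onto the PSD cone while staying differentiable, which is what makes a Newton-type method applicable.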
Projection methods in conic optimization
There exist efficient algorithms to project a point onto the intersection of
a convex cone and an affine subspace. Those conic projections are in turn the
work-horse of a range of algorithms in conic optimization, having a variety of
applications in science, finance and engineering. This chapter reviews some of
these algorithms, emphasizing the so-called regularization algorithms for
linear conic optimization, and applications in polynomial optimization. This is
a presentation of the material of several recent research articles; we aim here
at clarifying the ideas, presenting them in a general framework, and pointing
out important techniques.
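As a small illustration of the conic projections this chapter reviews, the sketch below alternates between the projection onto the PSD cone (eigenvalue clipping) and the projection onto a simple affine subspace (here, a fixed trace); the function names and the trace constraint are assumptions for the example:

```python
import numpy as np

def proj_psd(X):
    # Nearest PSD matrix in Frobenius norm: clip negative eigenvalues.
    w, Q = np.linalg.eigh((X + X.T) / 2)
    return (Q * np.maximum(w, 0.0)) @ Q.T

def proj_trace(X, b=1.0):
    # Projection onto the affine subspace {X : trace(X) = b}.
    n = X.shape[0]
    return X - (np.trace(X) - b) / n * np.eye(n)

def alt_proj(X, iters=500):
    # Plain alternating projections: converges to a point in the
    # intersection of the two sets (Dykstra's corrections would be
    # needed to recover the *nearest* point in the intersection).
    for _ in range(iters):
        X = proj_trace(proj_psd(X))
    return X
```

Regularization algorithms for linear conic optimization use such projections as their basic building block, so their per-iteration cost is dominated by one eigendecomposition.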
Optimization and Applications
Proceedings of a workshop devoted to optimization problems, their theory and resolution, and above all their applications. Topics covered: existence and stability of solutions; design, analysis, development and implementation of algorithms; and applications in mechanics, telecommunications, medicine, and operations research.
Linear system identification using stable spline kernels and PLQ penalties
The classical approach to linear system identification is given by parametric
Prediction Error Methods (PEM). In this context, model complexity is often
unknown so that a model order selection step is needed to suitably trade-off
bias and variance. Recently, a different approach to linear system
identification has been introduced, where model order determination is avoided
by using a regularized least squares framework. In particular, the penalty term
on the impulse response is defined by so called stable spline kernels. They
embed information on regularity and BIBO stability, and depend on a small
number of parameters which can be estimated from data. In this paper, we
provide new nonsmooth formulations of the stable spline estimator. In
particular, we consider linear system identification problems in a very broad
context, where regularization functionals and data misfits can come from a rich
set of piecewise linear quadratic functions. Moreover, our analysis includes
polyhedral inequality constraints on the unknown impulse response. For any
formulation in this class, we show that interior point methods can be used to
solve the system identification problem, with complexity O(n^3) + O(mn^2) in each
iteration, where n and m are the number of impulse response coefficients and
measurements, respectively. The usefulness of the framework is illustrated via
a numerical experiment where output measurements are contaminated by outliers.
Comment: 8 pages, 2 figures
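The kernel-based estimator referenced above has a simple closed form in the plain least-squares case (before the nonsmooth PLQ extensions the paper develops). The sketch below uses the discrete first-order stable spline kernel from that literature; the function names and default hyperparameters are assumptions:

```python
import numpy as np

def stable_spline_kernel(n, alpha=0.8):
    # Discrete first-order "stable spline" kernel: K[i, j] = alpha**max(i, j).
    # It encodes exponentially decaying (BIBO-stable) impulse responses.
    idx = np.arange(1, n + 1)
    return alpha ** np.maximum.outer(idx, idx)

def ss_estimate(Phi, y, alpha=0.8, gamma=1.0):
    # Regularized least squares estimate of the impulse response g:
    #   min_g ||y - Phi g||^2 + gamma * g' K^{-1} g
    # with the closed form g = K Phi' (Phi K Phi' + gamma I)^{-1} y.
    n = Phi.shape[1]
    K = stable_spline_kernel(n, alpha)
    G = Phi @ K @ Phi.T + gamma * np.eye(len(y))
    return K @ Phi.T @ np.linalg.solve(G, y)
```

Replacing the squared data misfit with a robust PLQ penalty (e.g. a Huber loss) removes this closed form, which is exactly where the interior point machinery of the paper comes in.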
A Sparse Smoothing Newton Method for Solving Discrete Optimal Transport Problems
The discrete optimal transport (OT) problem, which offers an effective
computational tool for comparing two discrete probability distributions, has
recently attracted much attention and played essential roles in many modern
applications. This paper proposes to solve the discrete OT problem by applying
a squared smoothing Newton method via the Huber smoothing function for solving
the corresponding KKT system directly. The proposed algorithm admits appealing
convergence properties and is able to take advantage of the solution sparsity
to greatly reduce computational costs. Moreover, the algorithm can be extended
to solve problems with similar structures including the Wasserstein barycenter
(WB) problem with fixed supports. To verify the practical performance of the
proposed method, we conduct extensive numerical experiments to solve a large
set of discrete OT and WB benchmark problems. Our numerical results show that
the proposed method is efficient compared to state-of-the-art linear
programming (LP) solvers. Moreover, the proposed method consumes less memory
than existing LP solvers, which demonstrates the potential usage of our
algorithm for solving large-scale OT and WB problems.
Comment: 29 pages, 17 figures
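For reference, the discrete OT problem the abstract describes is the linear program below; this sketch solves it with a generic LP solver (the paper instead applies a squared smoothing Newton method to the KKT system directly, which is what exploits solution sparsity). The function name is an assumption:

```python
import numpy as np
from scipy.optimize import linprog

def discrete_ot(a, b, C):
    # Discrete optimal transport LP:
    #   min <C, P>  s.t.  P 1 = a,  P' 1 = b,  P >= 0,
    # with P flattened row-major into a vector of length m*n.
    m, n = C.shape
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):           # row sums of P equal a
        A_eq[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):           # column sums of P equal b
        A_eq[m + j, j::n] = 1.0
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.x.reshape(m, n), res.fun
```

An optimal transport plan has at most m + n - 1 nonzero entries, which is the sparsity a specialized solver can exploit while a dense formulation like this one cannot scale.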