Worst-Case Linear Discriminant Analysis as Scalable Semidefinite Feasibility Problems
In this paper, we propose an efficient semidefinite programming (SDP)
approach to worst-case linear discriminant analysis (WLDA). Compared with the
traditional LDA, WLDA considers the dimensionality reduction problem from the
worst-case viewpoint, which is in general more robust for classification.
However, the original problem of WLDA is non-convex and difficult to optimize.
In this paper, we reformulate the optimization problem of WLDA into a sequence
of semidefinite feasibility problems. To efficiently solve the semidefinite
feasibility problems, we design a new scalable optimization method whose core
components are quasi-Newton updates and eigen-decomposition. The
proposed method is orders of magnitude faster than standard interior-point
based SDP solvers.
Experiments on a variety of classification problems demonstrate that our
approach achieves better performance than standard LDA. Our method is also much
faster and more scalable than WLDA solved with standard interior-point SDP solvers.
For an SDP with m constraints and matrices of size n by n, the computational
complexity is substantially reduced relative to that of interior-point solvers.
Comment: 14 pages
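The abstract names eigen-decomposition as a core component of the scalable solver. A minimal sketch of the standard eigen-decomposition-based projection onto the positive semidefinite cone, the kind of O(n^3) primitive that first-order SDP methods rely on (an illustration of the primitive, not the authors' exact algorithm):

```python
import numpy as np

def project_psd(A):
    """Project a symmetric matrix onto the PSD cone by clipping
    negative eigenvalues at zero -- one O(n^3) eigen-decomposition,
    far cheaper per step than an interior-point iteration."""
    A = (A + A.T) / 2.0            # symmetrize against round-off
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0.0, None)) @ V.T
```

Checking semidefinite feasibility then reduces to inspecting the smallest eigenvalue produced by the same decomposition.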
Final-State Constrained Optimal Control via a Projection Operator Approach
In this paper we develop a numerical method to solve nonlinear optimal
control problems with final-state constraints. Specifically, we extend the
PRojection Operator based Newton's method for Trajectory Optimization (PRONTO),
which was proposed by Hauser for unconstrained optimal control problems. While
in the standard method final-state constraints can only be handled approximately
by means of a terminal penalty, in this work we propose a methodology
to meet the constraints exactly. Moreover, our method guarantees recursive
feasibility of the final-state constraint. This is an appealing property
especially in real-time applications, in which one may need to stop the
computation before the desired tolerance has been reached while still
satisfying the constraints. Following the same conceptual idea of PRONTO, the
proposed strategy is based on two main steps which (differently from the
standard scheme) preserve the feasibility of the final-state constraints: (i)
solve a quadratic approximation of the nonlinear problem to find a descent
direction, and (ii) get a (feasible) trajectory by means of a feedback law
(which turns out to be a nonlinear projection operator). To find the (feasible)
descent direction we take advantage of final-state constrained Linear Quadratic
optimal control methods, while the second step is performed by suitably
designing a constrained version of the trajectory tracking projection operator.
The effectiveness of the proposed strategy is tested on the optimal state
transfer of an inverted pendulum
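Step (ii), turning an arbitrary state-input curve into a dynamically feasible trajectory via a tracking feedback law, can be illustrated on a toy discrete-time linear system (the matrices A, B and the gain K below are made-up placeholders, not from the paper):

```python
import numpy as np

# Toy discrete-time linear system x_{k+1} = A x_k + B u_k.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
K = np.array([[10.0, 4.0]])   # assumed stabilizing feedback gain

def project_curve(x_bar, u_bar, x0):
    """Map an arbitrary state/input curve (x_bar, u_bar) onto a feasible
    trajectory with the tracking feedback u = u_bar + K (x_bar - x);
    the output satisfies the dynamics by construction."""
    N = u_bar.shape[0]
    x = np.zeros((N + 1, 2))
    x[0] = x0
    u = np.zeros((N, 1))
    for k in range(N):
        u[k] = u_bar[k] + K @ (x_bar[k] - x[k])
        x[k + 1] = A @ x[k] + B @ u[k]
    return x, u
```

Because every iterate returned by such an operator is a trajectory of the system, stopping the outer optimization early still yields a feasible result, which is the property the abstract emphasizes.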
Bounded perturbation resilience of projected scaled gradient methods
We investigate projected scaled gradient (PSG) methods for convex
minimization problems. These methods perform a descent step along a diagonally
scaled gradient direction followed by a feasibility regaining step via
orthogonal projection onto the constraint set. This constitutes a generalized
algorithmic structure that encompasses as special cases the gradient projection
method, the projected Newton method, the projected Landweber-type methods and
the generalized Expectation-Maximization (EM)-type methods. We prove the
convergence of the PSG methods in the presence of bounded perturbations. This
resilience to bounded perturbations is relevant to the ability to apply the
recently developed superiorization methodology to PSG methods, in particular to
the EM algorithm.
Comment: Computational Optimization and Applications, accepted for publication
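The generalized structure described above, a diagonally scaled gradient step followed by an orthogonal projection with a bounded perturbation added to each step, can be sketched as follows (the box constraint and the summable perturbation sequence are illustrative choices, not from the paper):

```python
import numpy as np

def psg(grad, D, project, x0, steps=200, lr=0.1, perturb=None):
    """Projected scaled gradient: descend along a diagonally scaled
    gradient, then regain feasibility by orthogonal projection onto
    the constraint set; an optional bounded perturbation is added."""
    x = x0.copy()
    for k in range(steps):
        step = lr * D @ grad(x)
        if perturb is not None:
            step = step + perturb(k)   # summable, hence bounded, noise
        x = project(x - step)
    return x

# minimize ||x - c||^2 over the box [0, 1]^2, with c outside the box
c = np.array([1.5, -0.5])
grad = lambda x: 2.0 * (x - c)
D = np.diag([1.0, 0.5])                # diagonal scaling matrix
box = lambda x: np.clip(x, 0.0, 1.0)   # orthogonal projection onto box
perturb = lambda k: 1e-3 / (k + 1) ** 2 * np.ones(2)
x_star = psg(grad, D, box, np.zeros(2), perturb=perturb)
```

With D the identity this is the gradient projection method; other choices of the scaling recover the projected Newton and Landweber-type special cases the abstract lists.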
An Efficient Dual Approach to Distance Metric Learning
Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of about O(D^6.5) (with D
the dimension of the input data), and can thus only practically solve problems
with fewer than a few thousand variables. Since the number of variables is
D(D+1)/2, this implies a limit on the size of problem that can practically be
solved of around a few hundred dimensions. The complexity of the
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
O(D^3), which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to approximately solve more general Frobenius-norm regularized SDP
problems.
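As an illustration of the dual idea on the simplest Frobenius-norm regularized case, minimize (rho/2)||M||_F^2 over PSD matrices M subject to <A_i, M> >= b_i: projected gradient ascent on the nonnegative dual variables needs only one O(D^3) eigen-decomposition per iteration for the primal recovery (a hypothetical sketch of this structure, not the authors' exact algorithm):

```python
import numpy as np

def psd_part(A):
    """Positive part of a symmetric matrix: one O(D^3) eigen-decomposition."""
    w, V = np.linalg.eigh((A + A.T) / 2.0)
    return (V * np.clip(w, 0.0, None)) @ V.T

def dual_ascent(As, bs, rho=1.0, steps=500, lr=0.01):
    """Projected gradient ascent on the dual of
        min_{M PSD} (rho/2)||M||_F^2  s.t.  <A_i, M> >= b_i,
    with primal recovery M = psd_part(sum_i u_i A_i / rho)."""
    u = np.zeros(len(As))
    for _ in range(steps):
        M = psd_part(sum(ui * Ai for ui, Ai in zip(u, As)) / rho)
        g = np.array([b - np.sum(A * M) for A, b in zip(As, bs)])
        u = np.clip(u + lr * g, 0.0, None)   # keep dual feasible: u >= 0
    return psd_part(sum(ui * Ai for ui, Ai in zip(u, As)) / rho)
```

The dual has only one variable per constraint rather than D(D+1)/2 primal variables, which is where the claimed simplicity and scalability come from.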
A Nonconvex Projection Method for Robust PCA
Robust principal component analysis (RPCA) is a well-studied problem with the
goal of decomposing a matrix into the sum of low-rank and sparse components. In
this paper, we propose a nonconvex feasibility reformulation of the RPCA problem
and apply an alternating projection method to solve it. To the best of our
knowledge, we are the first to propose a method that solves the RPCA problem
without considering any objective function, convex relaxation, or surrogate
convex constraints. We demonstrate through extensive numerical experiments on a
variety of applications, including shadow removal, background estimation, face
detection, and galaxy evolution, that our approach matches and often
significantly outperforms current state-of-the-art in various ways.Comment: In the proceedings of Thirty-Third AAAI Conference on Artificial
Intelligence (AAAI-19
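The feasibility view, find L of rank at most r and S with at most k nonzeros such that L + S = D, suggests alternating projections between the two nonconvex sets. A generic alternating-projection sketch under those assumptions (not necessarily the authors' exact scheme):

```python
import numpy as np

def prox_rank(M, r):
    """Project onto {rank <= r} via a truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = 0.0
    return (U * s) @ Vt

def prox_sparse(M, k):
    """Project onto {at most k nonzeros} by keeping the k
    largest-magnitude entries."""
    S = np.zeros_like(M)
    idx = np.unravel_index(np.argsort(np.abs(M), axis=None)[-k:], M.shape)
    S[idx] = M[idx]
    return S

def rpca_altproj(D, r, k, iters=50):
    """Alternate projections for the feasibility problem D = L + S,
    rank(L) <= r, ||S||_0 <= k -- no objective function, relaxation,
    or surrogate convex constraint involved."""
    L = np.zeros_like(D)
    for _ in range(iters):
        S = prox_sparse(D - L, k)
        L = prox_rank(D - S, r)
    return L, S
```

Both projections are exact despite the sets being nonconvex, which is what makes the objective-free feasibility formulation computationally attractive.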