A hybrid constraint programming and semidefinite programming approach for the stable set problem
This work presents a hybrid approach to solve the maximum stable set problem,
using constraint and semidefinite programming. The approach consists of two
steps: subproblem generation and subproblem solution. First we rank the
variable domain values, based on the solution of a semidefinite relaxation.
Using this ranking, we generate the most promising subproblems first, by
exploring a search tree using a limited discrepancy strategy. Then the
subproblems are solved using a constraint programming solver. To
strengthen the semidefinite relaxation, we propose to infer additional
constraints from the discrepancy structure. Computational results show that the
semidefinite relaxation is very informative, since solutions of good quality
are found in the first subproblems, or optimality is proven immediately.
Comment: 14 pages
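The limited discrepancy strategy mentioned above can be illustrated with a generic sketch: a standard LDS enumeration of the leaves of a binary search tree in order of increasing discrepancy count. This is not the authors' solver code; `lds_leaves` and the 0/1 branch encoding are hypothetical names for illustration.

```python
from itertools import combinations

def lds_leaves(depth):
    """Enumerate the leaves of a binary search tree in order of
    increasing discrepancy count.

    A leaf is a tuple of branch choices: 0 = follow the heuristic's
    preferred branch, 1 = a discrepancy (deviate from the heuristic).
    Leaves with fewer discrepancies are yielded first.
    """
    for d in range(depth + 1):                  # d = number of discrepancies
        for positions in combinations(range(depth), d):
            leaf = [0] * depth                  # start with all preferred branches
            for p in positions:
                leaf[p] = 1                     # place the d discrepancies
            yield tuple(leaf)
```

Subproblems generated from low-discrepancy leaves deviate least from the SDP-based ranking, which is why the most promising subproblems are explored first.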
On recovery guarantees for angular synchronization
The angular synchronization problem of estimating a set of unknown angles
from their known noisy pairwise differences arises in various applications. It
can be reformulated as an optimization problem on graphs involving the graph
Laplacian matrix. We consider a general, weighted version of this problem,
where the impact of the noise differs between different pairs of entries and
some of the differences are erased completely; this version arises for example
in ptychography. We study two common approaches for solving this problem,
namely eigenvector relaxation and semidefinite convex relaxation. Although some
recovery guarantees are available for both methods, they are either
unsatisfactory or restricted to unweighted graphs. We close this gap,
deriving recovery guarantees for the weighted problem that are completely
analogous to the unweighted version.
Comment: 20 pages, 5 figures
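The eigenvector relaxation discussed above can be sketched in a few lines. This is a minimal synthetic illustration, not the paper's setup: the instance size, the noise model, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
theta = rng.uniform(0, 2 * np.pi, n)          # ground-truth angles
z = np.exp(1j * theta)                        # angles as unit complex numbers

# Observed Hermitian data matrix: H_ij is roughly e^{i(theta_i - theta_j)}
# plus noise. In the weighted version, each entry would additionally carry
# a weight w_ij, with w_ij = 0 for erased pairs.
noise = 0.1 * (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
noise = (noise + noise.conj().T) / 2          # keep H Hermitian
H = np.outer(z, z.conj()) + noise

# Eigenvector relaxation: take the leading eigenvector of H and project
# each entry back onto the unit circle. eigh returns eigenvalues in
# ascending order, so the last column is the leading eigenvector.
vals, vecs = np.linalg.eigh(H)
v = vecs[:, -1]
estimate = v / np.abs(v)

# The angles are only identifiable up to a global shift; align before
# measuring the error against the ground truth.
align = np.vdot(estimate, z)
estimate *= align / abs(align)
err = np.linalg.norm(estimate - z) / np.sqrt(n)
```

At this noise level the leading eigenvector recovers the planted phases closely; the recovery guarantees in question bound this kind of error in terms of the noise and the weighted graph Laplacian.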
Tightness of the maximum likelihood semidefinite relaxation for angular synchronization
Maximum likelihood estimation problems are, in general, intractable
optimization problems. As a result, it is common to approximate the maximum
likelihood estimator (MLE) using convex relaxations. In some cases, the
relaxation is tight: it recovers the true MLE. Most tightness proofs only apply
to situations where the MLE exactly recovers a planted solution (known to the
analyst). It is then sufficient to establish that the optimality conditions
hold at the planted signal. In this paper, we study an estimation problem
(angular synchronization) for which the MLE is not a simple function of the
planted solution, yet for which the convex relaxation is tight. To establish
tightness in this context, the proof is less direct because the point at which
to verify optimality conditions is not known explicitly.
Angular synchronization consists in estimating a collection of phases,
given noisy measurements of the pairwise relative phases. The MLE for angular
synchronization is the solution of a (hard) non-bipartite Grothendieck problem
over the complex numbers. We consider a stochastic model for the data: a
planted signal (that is, a ground truth set of phases) is corrupted with
non-adversarial random noise. Even though the MLE does not coincide with the
planted signal, we show that the classical semidefinite relaxation for it is
tight, with high probability. This holds even for high levels of noise.
Comment: 2 figures
Similarities on Graphs: Kernels versus Proximity Measures
We analytically study proximity and distance properties of various kernels
and similarity measures on graphs. This helps to understand the mathematical
nature of such measures and can potentially be useful for recommending the
adoption of specific similarity measures in data analysis.
Comment: 16 pages
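As a concrete instance of the graph kernels such analyses cover, here is a minimal sketch of the regularized Laplacian kernel on a toy graph. The choice of this particular kernel, the 4-cycle example, and the parameter `alpha` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy undirected graph: a 4-cycle, given by its adjacency matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A                # graph Laplacian L = D - A

# Regularized Laplacian kernel K = (I + alpha * L)^{-1}: a standard
# positive-definite similarity measure on graphs.
alpha = 0.5
K = np.linalg.inv(np.eye(4) + alpha * L)
```

Here `K` is symmetric positive definite, hence a valid kernel, and its entries behave like a proximity measure: adjacent vertices of the cycle (e.g. 0 and 1) are more similar than opposite ones (0 and 2).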
Unsupervised Representation Learning with Minimax Distance Measures
We investigate the use of Minimax distances to extract in a nonparametric way
the features that capture the unknown underlying patterns and structures in the
data. We develop a general-purpose and computationally efficient framework to
employ Minimax distances with many machine learning methods that perform on
numerical data. We study both computing the pairwise Minimax distances for all
pairs of objects and computing the Minimax distances of all the
objects to/from a fixed (test) object.
We first efficiently compute the pairwise Minimax distances between the
objects, using the equivalence of Minimax distances over a graph and over a
minimum spanning tree constructed on it. Then, we perform an embedding of the
pairwise Minimax distances into a new vector space, such that their squared
Euclidean distances in the new space equal the pairwise Minimax distances in
the original space. We also study the case of having multiple pairwise Minimax
matrices, instead of a single one. Thereby, we propose an embedding via first
summing up the centered matrices and then performing an eigenvalue
decomposition to obtain the relevant features.
Next, we study computing Minimax distances from a fixed (test) object,
which can be used, for instance, in K-nearest neighbor search. As in the
all-pairs case, we develop an efficient and general-purpose algorithm that
is applicable with an arbitrary base distance
measure. Moreover, we investigate in detail the edges selected by the Minimax
distances and thereby explore the ability of Minimax distances in detecting
outlier objects.
Finally, for each setting, we perform several experiments to demonstrate the
effectiveness of our framework.
Comment: 32 pages
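The MST equivalence behind the first step can be sketched with a Kruskal-style computation of all pairwise Minimax (path-bottleneck) distances: when Kruskal's algorithm merges two components with an edge of weight w, that w is exactly the Minimax distance between every pair the merge newly connects. This is a naive sketch of the idea, not the authors' optimized framework; `pairwise_minimax` and the edge format are hypothetical.

```python
import math

def pairwise_minimax(n, edges):
    """All pairwise Minimax distances on a graph via Kruskal's MST.

    edges: iterable of (weight, u, v) tuples for an undirected graph
    on vertices 0..n-1. Returns an n x n matrix D where D[i][j] is the
    minimum over i-j paths of the maximum edge weight on the path.
    """
    parent = list(range(n))
    members = [{i} for i in range(n)]         # vertices in each component

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path compression
            x = parent[x]
        return x

    D = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        D[i][i] = 0.0
    for w, u, v in sorted(edges):             # edges in increasing weight
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                          # edge would close a cycle
        # w is the Minimax distance between every newly connected pair.
        for a in members[ru]:
            for b in members[rv]:
                D[a][b] = D[b][a] = w
        parent[rv] = ru
        members[ru] |= members[rv]
        members[rv] = set()
    return D
```

For example, on a triangle with edge weights 1, 3, and 5, the Minimax distance between the endpoints of the weight-5 edge is 3: the two-edge path through the third vertex has a smaller maximum edge than the direct edge.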