2,893 research outputs found
Reconstruction of Support of a Measure From Its Moments
In this paper, we address the problem of reconstructing the support of a
measure from its moments. More precisely, given a finite subset of the moments
of a measure, we develop a semidefinite program for approximating the support
of the measure using level sets of polynomials. To solve this problem, a
sequence of convex relaxations is provided, whose optimal solutions are shown
to converge to the support of the measure of interest. Moreover, the proposed
approach is modified to improve the results for uniform measures. Numerical
examples are presented to illustrate the performance of the proposed approach.
Comment: This has been submitted to the 53rd IEEE Conference on Decision and
Control
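The idea of approximating a support with polynomial level sets built from moments can be illustrated with a minimal sketch that bypasses the paper's semidefinite program entirely: the classical Christoffel function. Given moments m_0, ..., m_{2d} of a one-dimensional measure, the polynomial q(x) = v(x)^T M^{-1} v(x), with M the Hankel moment matrix and v(x) the monomial vector, stays moderate inside the support and grows rapidly outside it, so its sub-level sets approximate the support. All names below are illustrative.

```python
import numpy as np

def christoffel_polynomial(moments, d):
    """Build q(x) = v(x)^T M^{-1} v(x) from moments m_0..m_{2d}, where
    M[i, j] = m_{i+j} is the (d+1)x(d+1) Hankel moment matrix and
    v(x) = (1, x, ..., x^d). Sub-level sets of q approximate the support."""
    M = np.array([[moments[i + j] for j in range(d + 1)]
                  for i in range(d + 1)])
    Minv = np.linalg.inv(M)

    def q(x):
        v = np.array([x ** k for k in range(d + 1)])
        return float(v @ Minv @ v)

    return q

# Moments of the uniform measure on [0, 1]: m_k = 1 / (k + 1).
d = 3
moments = [1.0 / (k + 1) for k in range(2 * d + 1)]
q = christoffel_polynomial(moments, d)

# q is small on the support [0, 1] and blows up outside it, so the
# level set {x : q(x) <= tau} recovers an approximation of [0, 1].
print(q(0.5), q(2.0))
```

This is only a moment-matrix heuristic, not the convex-relaxation hierarchy the abstract describes, but it shows why level sets of moment-derived polynomials are natural support estimators.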
Getting Feasible Variable Estimates From Infeasible Ones: MRF Local Polytope Study
This paper proposes a method for construction of approximate feasible primal
solutions from dual ones for large-scale optimization problems possessing
certain separability properties. Whereas infeasible primal estimates can
typically be produced from (sub-)gradients of the dual function, it is often
not easy to project them to the primal feasible set, since the projection
itself has a complexity comparable to the complexity of the initial problem. We
propose an alternative efficient method to obtain feasibility and show that its
properties influencing the convergence to the optimum are similar to the
properties of the Euclidean projection. We apply our method to the local
polytope relaxation of inference problems for Markov Random Fields and
demonstrate its superiority over existing methods.
Comment: 20 pages, 4 figures
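For context on the baseline the paper compares against, the Euclidean projection of an infeasible estimate onto the simplest primal feasible set, a probability simplex (as arises for a single unary marginal in the local polytope), can be computed with the standard sort-based algorithm. This is an illustrative sketch of that baseline, not the paper's alternative method.

```python
import numpy as np

def project_to_simplex(y):
    """Euclidean projection of y onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the classical sort-based algorithm."""
    n = len(y)
    u = np.sort(y)[::-1]                      # sorted descending
    css = np.cumsum(u)
    # largest k with u_k + (1 - sum_{i<=k} u_i) / k > 0
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1)
    return np.maximum(y - theta, 0.0)

# An infeasible primal estimate (negative entry, mass != 1) ...
y = np.array([0.8, 0.6, -0.1])
x = project_to_simplex(y)
print(x, x.sum())                             # a feasible distribution
```

For the full local polytope, with consistency constraints coupling unary and pairwise marginals, this projection becomes as hard as the original problem, which is exactly the difficulty motivating the paper's alternative.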
Convex Clustering via Optimal Mass Transport
We consider approximating distributions within the framework of optimal mass
transport and specialize to the problem of clustering data sets. Distances
between distributions are measured in the Wasserstein metric. The main problem
we consider is that of approximating sample distributions by ones with sparse
support. This provides a new viewpoint to clustering. We propose different
relaxations of a cardinality function which penalizes the size of the support
set. We establish that a certain relaxation provides the tightest convex lower
approximation to the cardinality penalty. We compare the performance of
alternative relaxations in a numerical study on clustering.
Comment: 12 pages, 12 figures
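The Wasserstein metric underlying this framework is easiest to see in one dimension, where the optimal transport plan between two equal-size empirical distributions simply pairs the sorted samples. A minimal sketch (illustrative only, not the paper's clustering formulation):

```python
import numpy as np

def wasserstein_1d(xs, ys, p=1):
    """p-Wasserstein distance between two equal-size empirical
    distributions on the real line. In 1-D, optimal mass transport
    matches the i-th smallest sample of xs to the i-th smallest of ys."""
    xs, ys = np.sort(xs), np.sort(ys)
    return float(np.mean(np.abs(xs - ys) ** p) ** (1.0 / p))

a = np.array([0.0, 1.0, 2.0])
b = np.array([0.5, 1.5, 2.5])
print(wasserstein_1d(a, b))  # every sample moves by 0.5 -> distance 0.5
```

Clustering in this view means finding a sparsely supported distribution (the cluster centers, with weights) that is close to the sample distribution in this metric; the paper's contribution is the convex relaxation of the support-cardinality penalty.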
Reduced-Dimension Linear Transform Coding of Correlated Signals in Networks
A model, called the linear transform network (LTN), is proposed to analyze
the compression and estimation of correlated signals transmitted over directed
acyclic graphs (DAGs). An LTN is a DAG network with multiple source and
receiver nodes. Source nodes transmit subspace projections of random correlated
signals by applying reduced-dimension linear transforms. The subspace
projections are linearly processed by multiple relays and routed to intended
receivers. Each receiver applies a linear estimator to approximate a subset of
the sources with minimum mean squared error (MSE) distortion. The model is
extended to include noisy networks with power constraints on transmitters. A
key task is to compute all local compression matrices and linear estimators in
the network to minimize end-to-end distortion. The non-convex problem is solved
iteratively within an optimization framework using constrained quadratic
programs (QPs). The proposed algorithm recovers as special cases the regular
and distributed Karhunen-Loeve transforms (KLTs). Cut-set lower bounds on the
distortion region of multi-source, multi-receiver networks are given for linear
coding based on convex relaxations. Cut-set lower bounds are also given for any
coding strategy based on information theory. The distortion region and
compression-estimation tradeoffs are illustrated for different communication
demands (e.g., multiple unicast) and graph structures.
Comment: 33 pages, 7 figures. To appear in IEEE Transactions on Signal
Processing
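The regular Karhunen-Loeve transform that the algorithm recovers as a special case can be sketched directly: for a zero-mean signal, projecting onto the top-k eigenvectors of the covariance is the rank-k linear transform that minimizes MSE. A minimal sketch with illustrative names, assuming data arranged as dimensions x samples:

```python
import numpy as np

def klt(X, k):
    """Rank-k Karhunen-Loeve transform of zero-mean data X (dims x samples).
    Returns the k x dims encoder and dims x k decoder minimizing MSE
    among all rank-k linear compress/reconstruct pairs."""
    C = (X @ X.T) / X.shape[1]            # sample covariance
    w, U = np.linalg.eigh(C)              # eigenvalues in ascending order
    Uk = U[:, np.argsort(w)[::-1][:k]]    # top-k eigenvectors as columns
    return Uk.T, Uk                       # encoder, decoder

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 1000))
X[2] = 0.9 * X[0]                         # correlated coordinates compress well
X -= X.mean(axis=1, keepdims=True)

enc, dec = klt(X, 3)                      # compress 4 dims -> 3
Xhat = dec @ (enc @ X)
mse = np.mean((X - Xhat) ** 2)
print(mse)                                # near zero: data has rank 3
```

The LTN setting distributes this computation: each node only sees part of the signal and the compression matrices must be optimized jointly across the graph, which is what makes the problem non-convex.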
Reconstructing binary images from discrete X-rays
We present a new algorithm for reconstructing binary images from their projections along a small number of directions. Our algorithm performs a sequence of related reconstructions, each using only two projections. The algorithm makes extensive use of network flow algorithms for solving the two-projection subproblems. Our experimental results demonstrate that the algorithm can compute reconstructions which resemble the original images very closely from a small number of projections, even in the presence of noise. Although the effectiveness of the algorithm is based on certain smoothness assumptions about the image, even tiny, non-smooth details are reconstructed exactly. The class of images for which the algorithm is most effective includes images of convex objects, but images of objects that contain holes or consist of multiple components can also be reconstructed with great accuracy.
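The two-projection subproblem at the heart of this approach, finding a binary matrix with prescribed row and column sums, can be sketched with the classical greedy (Ryser-style) construction; the paper solves it with network flows, which additionally accommodate smoothness costs, but the greedy version shows the combinatorial core. Function names below are illustrative.

```python
def binary_from_projections(row_sums, col_sums):
    """Greedy (Ryser-style) reconstruction of a binary matrix with the
    given horizontal and vertical projections; returns None on failure."""
    if sum(row_sums) != sum(col_sums):
        return None
    m, n = len(row_sums), len(col_sums)
    remaining = list(col_sums)
    A = [[0] * n for _ in range(m)]
    # Fill rows with the largest sums first, into the fullest columns.
    for i in sorted(range(m), key=lambda i: -row_sums[i]):
        cols = sorted(range(n), key=lambda j: -remaining[j])[:row_sums[i]]
        if any(remaining[j] == 0 for j in cols):
            return None                  # projections are inconsistent
        for j in cols:
            A[i][j] = 1
            remaining[j] -= 1
    return A if all(v == 0 for v in remaining) else None

A = binary_from_projections([2, 1, 1], [2, 1, 1])
print(A)
```

Two projections rarely determine an image uniquely; the paper's contribution is to iterate over pairs of directions while steering each two-projection solve toward the solutions of the previous ones.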