564 research outputs found
PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming
Suppose we wish to recover a signal x in C^n from m intensity measurements of
the form |<z_i, x>|^2, i = 1, 2, ..., m; that is, from data in which phase
information is missing. We prove that if the vectors z_i are sampled
independently and uniformly at random on the unit sphere, then the signal x can
be recovered exactly (up to a global phase factor) by solving a convenient
semidefinite program---a trace-norm minimization problem; this holds with large
probability provided that m is on the order of n log n, and without any
assumption about the signal whatsoever. This novel result demonstrates that in
some instances, the combinatorial phase retrieval problem can be solved by
convex programming techniques. Finally, we also prove that our methodology is
robust vis-à-vis additive noise.
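The lifting at the heart of PhaseLift is easy to demonstrate numerically: with X = xx^*, each phaseless measurement |<z_i, x>|^2 becomes linear in X, since |<z_i, x>|^2 = Tr(z_i z_i^* X). A minimal NumPy sketch of this identity, with illustrative problem sizes; the trace-norm semidefinite program itself requires an SDP solver and is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 12

# Unknown signal x in C^n and complex random measurement vectors z_i.
x = rng.normal(size=n) + 1j * rng.normal(size=n)
Z = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))

# Phaseless data: b_i = |<z_i, x>|^2 (the phase of <z_i, x> is discarded).
b = np.abs(Z.conj() @ x) ** 2

# Lifting: with X = x x^*, each measurement is LINEAR in X,
# because |<z_i, x>|^2 = Tr(z_i z_i^* X) = z_i^* X z_i.
X = np.outer(x, x.conj())
b_lifted = np.real(np.einsum("ij,jk,ik->i", Z.conj(), X, Z))

assert np.allclose(b, b_lifted)
```

Because the data are linear in X, recovering x reduces to finding a rank-one positive semidefinite matrix consistent with the measurements, which the trace-norm relaxation makes convex.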
Automatic alignment for three-dimensional tomographic reconstruction
In tomographic reconstruction, the goal is to reconstruct an unknown object
from a collection of line integrals. Given a complete sampling of such line
integrals for various angles and directions, explicit inverse formulas exist to
reconstruct the object. Given noisy and incomplete measurements, the inverse
problem is typically solved through a regularized least-squares approach. A
challenge for both approaches is that in practice the exact directions and
offsets of the x-rays are only known approximately due to, e.g., calibration
errors. Such errors lead to artifacts in the reconstructed image. In the case
of sufficient sampling and geometrically simple misalignment, the measurements
can be corrected by exploiting so-called consistency conditions. In other
cases, such conditions may not apply and we have to solve an additional inverse
problem to retrieve the angles and shifts. In this paper we propose a general
algorithmic framework for retrieving these parameters in conjunction with an
algebraic reconstruction technique. The proposed approach is illustrated by
numerical examples for both simulated data and an electron tomography dataset.
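The algebraic reconstruction technique (ART) mentioned above is, at its core, the Kaczmarz method: cyclic projection onto the hyperplanes defined by the individual line-integral equations. A minimal sketch on a toy dense system, assuming NumPy; the tomography geometry and the alignment-parameter retrieval of the paper are omitted:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=500):
    """ART core: cyclically project onto each hyperplane <a_i, x> = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))   # rows stand in for discretized line integrals
x_true = rng.normal(size=10)
b = A @ x_true                  # noiseless, consistent "measurements"

x_rec = kaczmarz(A, b)
assert np.allclose(x_rec, x_true, atol=1e-6)
```

For a consistent system the iterates converge to a solution; with noisy, incomplete data one typically stops early or adds regularization, as in the regularized least-squares approach described above.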
Addressing Integration Error for Polygonal Finite Elements Through Polynomial Projections: A Patch Test Connection
Polygonal finite elements generally do not pass the patch test as a result of
quadrature error in the evaluation of weak form integrals. In this work, we
examine the consequences of a lack of polynomial consistency and show that it can
lead to a deterioration of convergence of the finite element solutions. We
propose a general remedy, inspired by techniques in the recent literature of
mimetic finite differences, for restoring consistency and thereby ensuring the
satisfaction of the patch test and recovering optimal rates of convergence. The
proposed approach, based on polynomial projections of the basis functions,
allows for the use of a moderate number of integration points and brings the
computational cost of polygonal finite elements closer to that of the commonly
used linear triangles and bilinear quadrilaterals. Numerical studies of a
two-dimensional scalar diffusion problem accompany the theoretical
considerations.
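The idea of restoring consistency through polynomial projection can be caricatured in one dimension. The rational "shape function" phi and the quadrature rules below are illustrative stand-ins, not the paper's polygonal basis: once phi is L2-projected onto linear polynomials, a cheap two-point rule integrates the projected integrand exactly against any linear field, which is the consistency a patch test checks.

```python
import numpy as np

def gauss_01(n):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1]."""
    t, w = np.polynomial.legendre.leggauss(n)
    return 0.5 * (t + 1.0), 0.5 * w

phi = lambda x: 1.0 / (1.0 + x)   # a non-polynomial "shape function" (illustrative)
g = lambda x: x                   # a linear field, as in a patch test

# L2-project phi onto P1 = span{1, x}, using an accurate 20-point rule.
xq, wq = gauss_01(20)
M = np.array([[np.sum(wq * xq ** (i + j)) for j in range(2)] for i in range(2)])
rhs = np.array([np.sum(wq * xq ** i * phi(xq)) for i in range(2)])
c = np.linalg.solve(M, rhs)       # coefficients of the projection
proj = lambda x: c[0] + c[1] * x

# A cheap 2-point rule integrates the projected integrand exactly, and by
# the projection property it reproduces the integral of phi * g for every
# linear g -- the consistency that lets the patch test pass.
x2, w2 = gauss_01(2)
cheap = np.sum(w2 * proj(x2) * g(x2))
exact = np.sum(wq * phi(xq) * g(xq))
assert abs(cheap - exact) < 1e-12
```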
Maximum Likelihood for Matrices with Rank Constraints
Maximum likelihood estimation is a fundamental optimization problem in
statistics. We study this problem on manifolds of matrices with bounded rank.
These represent mixtures of distributions of two independent discrete random
variables. We determine the maximum likelihood degree for a range of
determinantal varieties, and we apply numerical algebraic geometry to compute
all critical points of their likelihood functions. This led to the discovery of
maximum likelihood duality between matrices of complementary ranks, a result
proved subsequently by Draisma and Rodriguez.
Comment: 22 pages, 1 figure
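For rank 1, the bounded-rank model is the classical independence model, whose unique critical point (the MLE) has a closed form: the outer product of the normalized row and column margins. A small NumPy illustration of this base case, with made-up counts; the paper's higher-rank computations use numerical algebraic geometry and are not reproduced here:

```python
import numpy as np

# A 3x3 table of counts for two discrete random variables.
U = np.array([[4., 2., 3.],
              [1., 6., 2.],
              [5., 1., 6.]])
N = U.sum()

# Rank-1 MLE (independence model): outer product of the normalized margins.
p_hat = np.outer(U.sum(axis=1), U.sum(axis=0)) / N ** 2

assert np.isclose(p_hat.sum(), 1.0)                       # a probability matrix
assert np.linalg.matrix_rank(p_hat) == 1                  # lies on the rank-1 variety
assert np.allclose(p_hat.sum(axis=1), U.sum(axis=1) / N)  # margins are matched
```

At higher ranks this uniqueness is lost, and counting and computing all critical points of the likelihood function is exactly the maximum likelihood degree problem studied in the paper.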
Reduced Order Modeling based Inexact FETI-DP solver for lattice structures
This paper addresses the overwhelming computational resources needed with
standard numerical approaches to simulate architected materials. These
multiscale heterogeneous lattice structures have attracted intense interest
alongside advances in additive manufacturing, as they offer, among many other
properties, excellent stiffness-to-weight ratios. We develop here a dedicated
HPC solver that benefits from the specific nature of the underlying problem in
order to drastically reduce the computational costs (memory and time) for the
full fine-scale analysis of lattice structures. Our purpose is to take
advantage of the natural domain decomposition into cells and, even more
importantly, of the geometrical and mechanical similarities among cells. Our
solver consists of a so-called inexact FETI-DP method in which the local,
cell-wise operators and solutions are approximated with reduced order modeling
techniques. Instead of considering every cell independently, we end up with
only a few principal local problems to solve and use the corresponding
principal cell-wise operators to approximate all the others. This results in a
scalable algorithm that saves numerous local factorizations. Our solver is
applied for the isogeometric analysis of lattices built by spline composition,
which offers the opportunity to compute the reduced basis with macro-scale
data, thereby making our method also multiscale and matrix-free. The solver is
tested against various 2D and 3D analyses. It shows major gains with respect to
black-box solvers; in particular, problems of several millions of degrees of
freedom can be solved with a simple computer within a few minutes.
Comment: 30 pages, 12 figures, 2 tables
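The central saving, reusing a few principal local operators across many similar cells, can be caricatured in a few lines: when the cell-wise systems share one operator, a single factorization (here a single explicit inverse, for brevity) serves every local solve. This toy sketch assumes NumPy and exactly identical cells; it is not the actual inexact FETI-DP implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_dof = 100, 50

# One "principal" cell operator, shared here exactly by every cell.
K = rng.normal(size=(n_dof, n_dof)) + n_dof * np.eye(n_dof)  # well conditioned
rhs = rng.normal(size=(n_cells, n_dof))                      # one RHS per cell

# Pay the O(n^3) cost once, then each cell-wise solve is a cheap O(n^2)
# application; a production solver would reuse an LU factorization instead.
K_inv = np.linalg.inv(K)
sols = rhs @ K_inv.T

assert np.allclose(sols @ K.T, rhs)
```

In the paper's setting the cells are only similar rather than identical, which is why the shared factorizations are combined with reduced order modeling of the cell-wise operators.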
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop for understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of l_2-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.