Low-Complexity Modeling for Visual Data: Representations and Algorithms
With the increasing availability and diversity of visual data generated in research labs and everyday life, it is becoming critical to develop disciplined and practical computational tools for such data. This thesis focuses on low-complexity representations and algorithms for visual data, in light of recent theoretical and algorithmic developments in high-dimensional data analysis.
We first consider the problem of modeling a given dataset as a superposition of basic motifs. This model arises in several important applications, including microscopy image analysis, neural spike sorting, and image deblurring. The motif-finding problem can be phrased as "short-and-sparse" blind deconvolution, in which the goal is to recover a short convolution kernel from its convolution with a sparse and random spike train. We normalize the convolution kernel to have unit Frobenius norm and then cast the blind deconvolution problem as a nonconvex optimization problem over the kernel sphere. We demonstrate that (i) in a certain region of the sphere, every local optimum is close to some shift truncation of the ground truth when the activation spike train is sufficiently sparse and long, and (ii) there exist efficient algorithms that recover some shift truncation of the ground truth under the same conditions. In addition, the geometric characterization of the local solutions, as well as the proposed algorithm, extends naturally to more complicated sparse blind deconvolution problems, including image deblurring and convolutional dictionary learning.
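The sphere-constrained formulation can be illustrated with a small numerical sketch. The code below assumes a circular-convolution model and uses a simple alternating heuristic (soft-thresholding to estimate the spikes, then a Riemannian gradient step on the kernel sphere); it is an illustration of the geometry, not the algorithm analyzed in the thesis, and all sizes and thresholds are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth short kernel (unit norm) and a long, sparse spike train.
k, n, theta = 16, 4096, 0.01
a0 = rng.standard_normal(k)
a0 /= np.linalg.norm(a0)
x0 = rng.standard_normal(n) * (rng.random(n) < theta)

# Observation: circular convolution of the kernel with the spike train.
y = np.real(np.fft.ifft(np.fft.fft(a0, n) * np.fft.fft(x0)))

def loss_grad(a):
    """For a fixed kernel a, estimate the spikes by soft-thresholding the
    correlation with y, then return the data-fit loss, its gradient in a,
    and a safe step size. Illustrative surrogate for the marginal loss."""
    A = np.fft.fft(a, n)
    corr = np.real(np.fft.ifft(np.conj(A) * np.fft.fft(y)))
    lam = 0.5 * np.abs(corr).max()
    x = np.sign(corr) * np.maximum(np.abs(corr) - lam, 0.0)
    X = np.fft.fft(x)
    r = np.real(np.fft.ifft(A * X)) - y          # residual a * x - y
    g = np.real(np.fft.ifft(np.fft.fft(r) * np.conj(X)))[:k]
    step = 1.0 / (np.abs(X) ** 2).max()          # Lipschitz-based step size
    return 0.5 * np.sum(r ** 2), g, step

# Riemannian gradient descent over the kernel sphere.
a = rng.standard_normal(k)
a /= np.linalg.norm(a)
for _ in range(100):
    _, g, step = loss_grad(a)
    g -= (g @ a) * a              # project gradient onto the tangent space
    a -= step * g
    a /= np.linalg.norm(a)        # retract back onto the sphere

# Quality up to sign and circular shift: best correlation with the truth.
shift_corr = np.abs(np.real(np.fft.ifft(
    np.fft.fft(a, n) * np.conj(np.fft.fft(a0, n)))))
best = shift_corr.max()           # near 1 means a signed shift of a0 was found
```

The tangent-space projection followed by renormalization is the standard retraction for optimization over the sphere; the soft-thresholding step stands in for the sparsity-promoting marginalization.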
We next consider the problem of modeling physical nuisances across a collection of images, in the context of illumination-invariant object detection and recognition. Illumination variation remains a central challenge in object detection and recognition. Existing analyses of illumination variation typically pertain to convex, Lambertian objects and guarantee quality of approximation only in an average-case sense. We show that it is possible to build vertex-description convex cone models with worst-case performance guarantees for nonconvex Lambertian objects. Namely, a natural detection test based on the angle to the constructed cone is guaranteed to accept any image that is sufficiently well approximated by an image of the object under some admissible lighting condition, and to reject any image that is not. The cone models are generated by sampling point illuminations with sufficient density, which follows from a new perturbation bound for point images in the Lambertian model. As the number of point images required for guaranteed detection may be large, we introduce a new formulation for cone-preserving dimensionality reduction, which leverages tools from sparse and low-rank decomposition to reduce the complexity while controlling the approximation error with respect to the original cone. Preliminary numerical experiments suggest that this approach can significantly reduce the complexity of the resulting model.
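As a toy version of the angle-based detection test (not the construction or guarantees developed in the thesis), one can represent a vertex-description cone by its sampled generators and compute the angle from a test image to the cone via nonnegative least squares. All dimensions, the generator matrix, and the threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Hypothetical setup: columns of B are images of one object under sampled
# point illuminations; their nonnegative combinations form the convex cone.
d, m = 64, 10                      # pixels, number of sampled illuminations
B = np.abs(rng.standard_normal((d, m)))

def angle_to_cone(y, B):
    """Angle between a test image y and the cone {B c : c >= 0}."""
    c, _ = nnls(B, y)              # nearest cone point in least squares
    p = B @ c
    cosang = (y @ p) / (np.linalg.norm(y) * np.linalg.norm(p) + 1e-12)
    return np.arccos(np.clip(cosang, -1.0, 1.0))

# An image generated inside the cone passes the test; a random image
# far from the cone is rejected.
inside = B @ rng.random(m)
outside = rng.standard_normal(d)
tau = 0.1                          # detection threshold (radians), illustrative
accept_inside = angle_to_cone(inside, B) < tau
accept_outside = angle_to_cone(outside, B) < tau
```

Nonnegative least squares computes the Euclidean projection onto the cone, so the angle to that projection is exactly the angle to the cone.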
Sparse approximations of protein structure from noisy random projections
Single-particle electron microscopy is a modern technique that biophysicists
employ to learn the structure of proteins. It yields data that consist of noisy
random projections of the protein structure in random directions, with the
added complication that the projection angles cannot be observed. In order to
reconstruct a three-dimensional model, the projection directions need to be
estimated by use of an ad-hoc starting estimate of the unknown particle. In
this paper we propose a methodology that does not rely on knowledge of the
projection angles, to construct an objective data-dependent low-resolution
approximation of the unknown structure that can serve as such a starting
estimate. The approach assumes that the protein admits a suitable sparse
representation, and employs discrete ℓ1-regularization (LASSO) as well as
notions from shape theory to tackle the peculiar challenges involved in the
associated inverse problem. We illustrate the approach by application to the
reconstruction of an E. coli protein component called the Klenow fragment.
Comment: Published at http://dx.doi.org/10.1214/11-AOAS479 in the Annals of
Applied Statistics (http://www.imstat.org/aoas/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
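The ℓ1-regularization (LASSO) ingredient can be sketched in isolation. The toy below assumes, unlike the paper, that the projection directions are known, and solves the LASSO by plain ISTA (proximal gradient descent); the problem sizes and regularization weight are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse ground truth observed through noisy random projections.
n, p, s = 80, 200, 5
A = rng.standard_normal((n, p)) / np.sqrt(n)   # random projection matrix
idx = rng.choice(p, size=s, replace=False)
beta0 = np.zeros(p)
beta0[idx] = 3.0
y = A @ beta0 + 0.01 * rng.standard_normal(n)

# ISTA for min_b 0.5 ||A b - y||^2 + lam ||b||_1.
lam = 0.2
L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
b = np.zeros(p)
for _ in range(500):
    z = b - (A.T @ (A @ b - y)) / L                        # gradient step
    b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

recovered = set(int(i) for i in np.nonzero(np.abs(b) > 0.5)[0])
```

With far more unknowns than measurements (p > n), the ℓ1 penalty is what makes the inverse problem well posed; the recovered support matches the true sparse support here, up to the usual LASSO shrinkage bias in the amplitudes.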
From Symmetry to Geometry: Tractable Nonconvex Problems
As science and engineering have become increasingly data-driven, the role of
optimization has expanded to touch almost every stage of the data analysis
pipeline, from the signal and data acquisition to modeling and prediction. The
optimization problems encountered in practice are often nonconvex. While
challenges vary from problem to problem, one common source of nonconvexity is
nonlinearity in the data or measurement model. Nonlinear models often exhibit
symmetries, creating complicated, nonconvex objective landscapes, with multiple
equivalent solutions. Nevertheless, simple methods (e.g., gradient descent)
often perform surprisingly well in practice.
The goal of this survey is to highlight a class of tractable nonconvex
problems, which can be understood through the lens of symmetries. These
problems exhibit a characteristic geometric structure: local minimizers are
symmetric copies of a single "ground truth" solution, while other critical
points occur at balanced superpositions of symmetric copies of the ground
truth, and exhibit negative curvature in directions that break the symmetry.
This structure enables efficient methods to obtain global minimizers. We
discuss examples of this phenomenon arising from a wide range of problems in
imaging, signal processing, and data analysis. We highlight the key role of
symmetry in shaping the objective landscape and discuss the different roles of
rotational and discrete symmetries. This area is rich with observed phenomena
and open problems; we close by highlighting directions for future research.
Comment: review paper submitted to SIAM Review, 34 pages, 10 figures.
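The characteristic landscape can be seen already in one dimension. The function f(x) = (x² - 1)²/4 is a caricature chosen for this sketch (not an example drawn from the survey): its global minimizers ±1 are sign-symmetric copies of one another, the balanced point x = 0 is a critical point with negative curvature in the symmetry-breaking direction, and plain gradient descent from random initialization finds one of the two copies.

```python
import numpy as np

# f(x) = (x^2 - 1)^2 / 4: symmetric global minima at x = +1 and x = -1,
# with a critical point at the balanced superposition x = 0.
f = lambda x: 0.25 * (x ** 2 - 1) ** 2
grad = lambda x: x ** 3 - x
hess = lambda x: 3 * x ** 2 - 1

rng = np.random.default_rng(3)
finals = []
for _ in range(20):
    x = rng.standard_normal()      # random (almost surely nonzero) init
    for _ in range(200):
        x -= 0.1 * grad(x)         # plain gradient descent
    finals.append(x)               # each run lands on one symmetric copy

curvature_at_balance = hess(0.0)   # -1.0: negative curvature at x = 0
```

Because the only non-minimizing critical point is a strict saddle (here a local maximum), gradient descent escapes it from almost every initialization, which is the mechanism the survey describes in much greater generality.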
Image formation in synthetic aperture radio telescopes
Next generation radio telescopes will be much larger, more sensitive, have
much larger observation bandwidth and will be capable of pointing multiple
beams simultaneously. Obtaining the sensitivity, resolution and dynamic range
supported by the receivers requires the development of new signal processing
techniques for array and atmospheric calibration as well as new imaging
techniques that are both more accurate and computationally efficient since data
volumes will be much larger. This paper provides a tutorial overview of
existing image formation techniques and outlines some of the future directions
needed for information extraction from future radio telescopes. We describe the
imaging process from measurement equation until deconvolution, both as a
Fourier inversion problem and as an array processing estimation problem. The
latter formulation enables the development of more advanced techniques based on
state of the art array processing. We demonstrate the techniques on simulated
and measured radio telescope data.
Comment: 12 pages.
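A minimal numerical sketch of the Fourier-inversion view, with invented grid sizes and baselines (real instruments differ substantially): the interferometer samples the sky's Fourier transform (visibilities) at baseline coordinates (u, v), and the adjoint transform of those incomplete samples yields the dirty image, whose peak coincides with the strongest source.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sky: two point sources on a small image grid.
npix = 32
sky = np.zeros((npix, npix))
sky[8, 8] = 1.0
sky[20, 24] = 0.5

l = np.arange(npix) / npix         # direction cosines on the grid
m = np.arange(npix) / npix
L, M = np.meshgrid(l, m, indexing="ij")

# Randomly sampled integer baselines (u, v).
nvis = 400
u = rng.integers(-npix // 2, npix // 2, nvis)
v = rng.integers(-npix // 2, npix // 2, nvis)

# Measurement equation: V(u, v) = sum_{l,m} I(l, m) exp(-2*pi*i (u l + v m)).
vis = np.array([np.sum(sky * np.exp(-2j * np.pi * (ui * L + vi * M)))
                for ui, vi in zip(u, v)])

# Dirty image: adjoint (direct Fourier) inversion of the sampled visibilities.
dirty = np.zeros((npix, npix))
for ui, vi, Vi in zip(u, v, vis):
    dirty += np.real(Vi * np.exp(2j * np.pi * (ui * L + vi * M)))
dirty /= nvis

peak = np.unravel_index(np.argmax(dirty), dirty.shape)
```

Because the (u, v) coverage is incomplete, the dirty image is the true sky convolved with a sidelobe-ridden point-spread function; removing those sidelobes is the deconvolution stage the tutorial goes on to discuss.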