8 research outputs found

    Parameter selection in sparsity-driven SAR imaging

    We consider a recently developed sparsity-driven synthetic aperture radar (SAR) imaging approach which can produce superresolution, feature-enhanced images. However, this regularization-based approach requires the selection of a hyper-parameter in order to generate such high-quality images. In this paper we present a number of techniques for automatically selecting the hyper-parameter involved in this problem. In particular, we propose and develop numerical procedures for the use of Stein's unbiased risk estimation, generalized cross-validation, and L-curve techniques for automatic parameter choice. We demonstrate and compare the effectiveness of these procedures through experiments based on both simple synthetic scenes and electromagnetically simulated realistic data. Our results suggest that sparsity-driven SAR imaging coupled with the proposed automatic parameter choice procedures offers significant improvements over conventional SAR imaging.
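    As an illustration of the kind of procedure the paper develops, the sketch below selects the threshold of the simplest sparsity-driven estimator (soft thresholding of noisy observations of a sparse scene) by minimizing Stein's unbiased risk estimate over a grid. The closed-form SURE used here is the classical one for soft thresholding under additive white Gaussian noise, not the paper's SAR-specific derivation, and the scene and noise level are invented for the example.

```python
import numpy as np

def sure_soft(y, lam, sigma):
    """Stein's unbiased risk estimate for soft thresholding under AWGN:
    -n*sigma^2 + ||y - x_hat||^2 + 2*sigma^2 * #{|y_i| > lam}."""
    resid = np.minimum(np.abs(y), lam)       # |y - x_hat|, elementwise
    dof = np.sum(np.abs(y) > lam)            # divergence of the estimator
    return -y.size * sigma**2 + np.sum(resid**2) + 2 * sigma**2 * dof

# Toy sparse scene observed in white Gaussian noise.
rng = np.random.default_rng(0)
x = np.zeros(512)
x[rng.choice(512, 20, replace=False)] = 5.0
sigma = 1.0
y = x + sigma * rng.standard_normal(512)

# Automatic parameter choice: minimize SURE over a grid of thresholds.
lams = np.linspace(0.1, 5.0, 100)
lam_best = min(lams, key=lambda t: sure_soft(y, t, sigma))
x_hat = np.sign(y) * np.maximum(np.abs(y) - lam_best, 0.0)
```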

    Sparse image reconstruction for molecular imaging

    The application that motivates this paper is molecular imaging at the atomic level. When discretized at sub-atomic distances, the volume is inherently sparse. Noiseless measurements from an imaging technology can be modeled by convolution of the image with the system point spread function (psf). Such is the case with magnetic resonance force microscopy (MRFM), an emerging technology where imaging of an individual tobacco mosaic virus was recently demonstrated with nanometer resolution. We also consider additive white Gaussian noise (AWGN) in the measurements. Many prior works on sparse estimators have focused on the case when H has low coherence; however, the system matrix H in our application is the convolution matrix for the system psf. A typical convolution matrix has high coherence. The paper therefore does not assume a low-coherence H. A discrete-continuous form of the Laplacian and atom at zero (LAZE) p.d.f. used by Johnstone and Silverman is formulated, and two sparse estimators are derived by maximizing the joint p.d.f. of the observation and image conditioned on the hyperparameters. A thresholding rule that generalizes the hard and soft thresholding rules appears in the course of the derivation. This so-called hybrid thresholding rule, when used in the iterative thresholding framework, gives rise to the hybrid estimator, a generalization of the lasso. Unbiased estimates of the hyperparameters for the lasso and hybrid estimator are obtained via Stein's unbiased risk estimate (SURE). A numerical study with a Gaussian psf and two sparse images shows that the hybrid estimator outperforms the lasso.
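    The iterative thresholding framework the hybrid estimator plugs into is, for the soft-thresholding rule, the standard ISTA iteration for the lasso. The sketch below shows that baseline for deconvolution with a Gaussian psf; the psf width, step size, and iteration count are illustrative choices, and the paper's hybrid rule would replace the soft-threshold step.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ista_deconv(y, psf_sigma, lam, n_iter=200, step=1.0):
    """Lasso deconvolution by iterative soft thresholding (ISTA).
    The Gaussian psf is symmetric, so H^T is H itself; a normalized
    Gaussian kernel has operator norm <= 1, so step = 1 is safe."""
    H = lambda x: gaussian_filter(x, psf_sigma)
    x = np.zeros_like(y)
    for _ in range(n_iter):
        grad = H(H(x) - y)                   # gradient of 0.5 * ||Hx - y||^2
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy example: point sources blurred by a Gaussian psf, plus AWGN.
rng = np.random.default_rng(0)
x_true = np.zeros((64, 64))
x_true[rng.integers(0, 64, 15), rng.integers(0, 64, 15)] = 1.0
y = gaussian_filter(x_true, 2.0) + 0.01 * rng.standard_normal((64, 64))
x_hat = ista_deconv(y, psf_sigma=2.0, lam=0.005)
```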

    Local Behavior of Sparse Analysis Regularization: Applications to Risk Estimation

    In this paper, we aim at recovering an unknown signal x0 from noisy measurements y = Phi*x0 + w, where Phi is an ill-conditioned or singular linear operator and w accounts for some noise. To regularize such an ill-posed inverse problem, we impose an analysis sparsity prior. More precisely, the recovery is cast as a convex optimization program where the objective is the sum of a quadratic data fidelity term and a regularization term formed of the L1-norm of the correlations between the sought-after signal and atoms in a given (generally overcomplete) dictionary. The L1-sparsity analysis prior is weighted by a regularization parameter lambda > 0. In this paper, we prove that any minimizer of this problem is a piecewise-affine function of the observations y and the regularization parameter lambda. As a byproduct, we exploit these properties to get an objectively guided choice of lambda. In particular, we develop an extension of the Generalized Stein Unbiased Risk Estimator (GSURE) and show that it is an unbiased and reliable estimator of an appropriately defined risk. The latter encompasses special cases such as the prediction risk, the projection risk and the estimation risk. We apply these risk estimators to the special case of L1-sparsity analysis regularization. We also discuss implementation issues and propose fast algorithms to solve the L1 analysis minimization problem and to compute the associated GSURE. We finally illustrate the applicability of our framework to parameter selection on several imaging problems.
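    In the synthesis special case with a well-conditioned Phi, the prediction-risk version of such an estimator has a simple closed form: the degrees of freedom of the lasso equal the number of nonzero coefficients (Zou, Hastie and Tibshirani, 2007). The sketch below uses that identity to pick lambda on a toy problem; it is a simplified stand-in for the paper's GSURE, whose analysis-prior and ill-conditioned-operator machinery is more involved.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sure_lasso(y, X, alpha, sigma):
    """Unbiased prediction-risk estimate for the lasso:
    ||y - X w||^2 - n*sigma^2 + 2*sigma^2 * df, with df = #nonzeros."""
    w = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(X, y).coef_
    resid = y - X @ w
    return resid @ resid - y.size * sigma**2 + 2 * sigma**2 * np.count_nonzero(w)

# Toy problem: 5-sparse coefficients, n < p, known noise level.
rng = np.random.default_rng(1)
n, p, sigma = 100, 200, 0.5
X = rng.standard_normal((n, p)) / np.sqrt(n)
w_true = np.zeros(p)
w_true[:5] = 3.0
y = X @ w_true + sigma * rng.standard_normal(n)

# Objectively guided choice of the regularization parameter.
alphas = np.geomspace(1e-3, 1.0, 40)
alpha_best = min(alphas, key=lambda a: sure_lasso(y, X, a, sigma))
```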

    Nonlocal Means With Dimensionality Reduction and SURE-Based Parameter Selection


    Automatic Denoising and Unmixing in Hyperspectral Image Processing

    This thesis addresses two important aspects of hyperspectral image processing: automatic hyperspectral image denoising and unmixing. The first part of this thesis is devoted to a novel automatic optimized vector bilateral filter denoising algorithm, while the remainder concerns nonnegative matrix factorization with deterministic annealing for unsupervised unmixing in remote sensing hyperspectral images. The need for automatic hyperspectral image processing has been promoted by the development of potent hyperspectral systems, with hundreds of narrow contiguous bands spanning the visible to the long wave infrared range of the electromagnetic spectrum. Due to the large volume of raw data generated by such sensors, automatic processing in the hyperspectral image processing chain is preferred to minimize human workload and achieve optimal results. Two of the most researched processing steps in this effort are hyperspectral image denoising, an important preprocessing step for almost all remote sensing tasks, and unsupervised unmixing, which decomposes the pixel spectra into a collection of endmember spectral signatures and their corresponding abundance fractions. Two new methodologies are introduced in this thesis to tackle the automatic processing problems described above. Vector bilateral filtering has been shown to provide a good tradeoff between noise removal and edge degradation when applied to multispectral/hyperspectral image denoising. It has also been demonstrated to provide dynamic range enhancement of bands that have impaired signal to noise ratios. Typical vector bilateral filtering usage does not employ parameters that have been determined to satisfy optimality criteria. This thesis introduces an approach for selecting the parameters of a vector bilateral filter through an optimization procedure rather than by ad hoc means. The approach is based on posing the filtering problem as one of nonlinear estimation and minimizing Stein's unbiased risk estimate (SURE) of this nonlinear estimator. Along the way, this thesis provides a plausibility argument, with an analytical example, as to why vector bilateral filtering outperforms band-wise 2D bilateral filtering in enhancing SNR. Experimental results show that the optimized vector bilateral filter provides improved denoising performance on multispectral images when compared to several other approaches. The non-negative matrix factorization (NMF) technique and its extensions were developed to find part-based, linear representations of non-negative multivariate data. They have been shown to provide more interpretable results, with realistic non-negativity constraints, in unsupervised learning applications such as hyperspectral imagery unmixing, image feature extraction, and data mining. This thesis extends the NMF method by incorporating a deterministic annealing optimization procedure, which helps solve the non-convexity problem in NMF and provides a better choice of sparseness constraint. The approach is based on replacing the difficult non-convex optimization problem of NMF with an easier one by adding an auxiliary convex entropy constraint term and solving this first. Experimental results on a hyperspectral unmixing application show that the proposed technique provides improved unmixing performance compared to other state-of-the-art methods.
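    When a denoiser has no tractable closed-form divergence, SURE can still be evaluated with a Monte-Carlo estimate of the divergence (in the spirit of Ramani et al.'s black-box SURE). The sketch below tunes a single filter width this way, using a plain Gaussian filter as a stand-in for the thesis's vector bilateral filter, whose SURE derivation is specific to that estimator; the test image and noise level are invented for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mc_sure(y, denoise, sigma, eps=1e-3, rng=None):
    """Monte-Carlo SURE for a black-box denoiser f:
    div f(y) ~ b^T (f(y + eps*b) - f(y)) / eps, with b ~ N(0, I)."""
    rng = rng or np.random.default_rng(0)
    b = rng.standard_normal(y.shape)
    fy = denoise(y)
    div = np.sum(b * (denoise(y + eps * b) - fy)) / eps
    return np.sum((y - fy) ** 2) - y.size * sigma**2 + 2 * sigma**2 * div

# Sweep the filter width; a Gaussian filter stands in for the
# vector bilateral filter of the thesis.
sigma = 0.1
rng = np.random.default_rng(2)
clean = np.outer(np.hanning(64), np.hanning(64))
y = clean + sigma * rng.standard_normal(clean.shape)
widths = np.linspace(0.5, 4.0, 15)
w_best = min(widths, key=lambda s: mc_sure(y, lambda u: gaussian_filter(u, s), sigma))
```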

    Model Based Principal Component Analysis with Application to Functional Magnetic Resonance Imaging.

    Functional Magnetic Resonance Imaging (fMRI) has allowed better understanding of human brain organization and function by making it possible to record either autonomous or stimulus-induced brain activity. After appropriate preprocessing, fMRI produces a large spatio-temporal data set which requires sophisticated signal processing. The aim of the signal processing is usually to produce spatial maps of statistics that capture the effects of interest, e.g., brain activation, time delay between stimulation and activation, or connectivity between brain regions. Two broad signal processing approaches have been pursued: univoxel methods and multivoxel methods. This thesis focuses on multivoxel methods, reviews Principal Component Analysis (PCA) and other closely related methods, and describes their advantages and disadvantages in fMRI research. These existing multivoxel methods have in common that they are exploratory, i.e., they are not based on a statistical model. A crucial observation, which is central to this thesis, is that there is in fact an underlying model behind PCA, which we call noisy PCA (nPCA). In the main part of this thesis, we use nPCA to develop methods that solve three important problems in fMRI. 1) We introduce a novel nPCA-based spatio-temporal model that combines the standard univoxel regression model with nPCA and automatically recognizes the temporal smoothness of the fMRI data. Furthermore, unlike standard univoxel methods, it can handle non-stationary noise. 2) We introduce a novel sparse variable PCA (svPCA) method that automatically excludes whole voxel timeseries and yields sparse eigenimages. This is achieved by optimizing a novel nonlinear penalized likelihood function; an iterative estimation algorithm is proposed that makes use of geodesic descent methods. 3) We introduce a novel method based on Stein's Unbiased Risk Estimator (SURE) and Random Matrix Theory (RMT) to select the number of principal components for the increasingly important case where the number of observations is of similar order as the number of variables.
    Ph.D. Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/57638/2/mulfarss_1.pd
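    In the regime where observations and variables are of similar order, Random Matrix Theory gives a baseline rule that SURE-type criteria refine: under white noise, sample-covariance eigenvalues from pure noise concentrate below the Marchenko-Pastur bulk edge sigma^2 * (1 + sqrt(p/n))^2, so only components above that edge are retained. The sketch below implements this baseline rule alone; it is not the thesis's SURE+RMT criterion.

```python
import numpy as np

def rmt_rank(Y, sigma=None):
    """Select the number of principal components by the Marchenko-Pastur
    bulk edge: keep eigenvalues above sigma^2 * (1 + sqrt(p/n))^2."""
    n, p = Y.shape                                    # n observations, p variables
    Yc = Y - Y.mean(axis=0)
    evals = np.linalg.eigvalsh(Yc.T @ Yc / n)[::-1]   # descending eigenvalues
    if sigma is None:
        sigma = np.sqrt(np.median(evals))             # crude bulk-based noise estimate
    edge = sigma**2 * (1.0 + np.sqrt(p / n))**2
    return int(np.sum(evals > edge))

# Toy check: 3 strong components buried in unit-variance noise, n ~ p.
rng = np.random.default_rng(3)
n, p = 300, 200
signal = rng.standard_normal((n, 3)) @ (5.0 * rng.standard_normal((3, p)))
print(rmt_rank(signal + rng.standard_normal((n, p)), sigma=1.0))  # prints 3
```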

    Reconstruction, Analysis and Synthesis of Collective Motion

    As collective motion plays a crucial role in modern day robotics and engineering, it seems appealing to seek inspiration from nature, which abounds with examples of collective motion (starling flocks, fish schools, etc.). This approach toward understanding and reverse-engineering a particular aspect of nature forms the foundation of this dissertation, and its main contribution is threefold. First, we identify the importance of appropriate algorithms to extract parameters of motion from sampled observations of the trajectory; by assuming an appropriate generative model, we turn this into a regularized inversion problem, with the regularization term imposing smoothness of the reconstructed trajectory. Assuming a linear triple-integrator model and penalizing high values of the jerk path integral, we reconstruct the trajectory through an analytical approach. Alternatively, the evolution of a trajectory can be governed by natural Frenet frame equations. The inadequacy of integrability theory for nonlinear systems poses the greatest challenge to obtaining an analytic solution and forces us to adopt a numerical optimization approach. However, by noting that the underlying dynamics defines a left-invariant vector field on a Lie group, we develop a framework based on Pontryagin's maximum principle. This approach to data smoothing yields a semi-analytic solution. Equipped with appropriate algorithms for trajectory reconstruction, we analyze flight data for biological motions, and this marks the second contribution of this dissertation. By analyzing the flight data of big brown bats in two different settings (chasing a free-flying praying mantis, and competing with a conspecific to catch a tethered mealworm), we provide evidence of a context-specific switch in flight strategy. Moreover, our approach provides a way to estimate the behavioral latency associated with these foraging behaviors. We have also analyzed the flight data of European starling flocks, and our analysis indicates that the flock-averaged coherence (the average cosine of the angle between the velocities of a focal bird and its neighborhood center of mass, averaged over the entire flock) is maximized by considering 5-7 nearest neighbors. The analysis also sheds some light on the underlying feedback mechanism for steering control. The third and final contribution of this dissertation lies in the domain of control law synthesis. Drawing inspiration from the coherent movement of starling flocks, we introduce a strategy (Topological Velocity Alignment) for collective motion, wherein each agent aligns its velocity along the direction of motion of its neighborhood center of mass. A feedback law is proposed for achieving this strategy, and we analyze two special cases (a two-body system, and an N-body system with cyclic interaction) to show the effectiveness of the proposed feedback law. Numerical simulation and robotic implementation show that this approach to collective motion can give rise to a splitting behavior.
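    A minimal simulation sketch of the Topological Velocity Alignment idea as described above: unit-speed planar agents each steer toward the direction of motion of the center of mass of their k nearest neighbors. The steering gain, neighbor count, and sinusoidal turning law are illustrative assumptions, not the dissertation's analyzed feedback law.

```python
import numpy as np

def tva_step(pos, vel, k=6, gain=2.0, dt=0.05):
    """One step of a Topological Velocity Alignment sketch: each unit-speed
    agent turns toward the direction of motion of its k nearest neighbors'
    center of mass (whose velocity is the mean of the neighbor velocities)."""
    theta = np.arctan2(vel[:, 1], vel[:, 0])
    new_theta = theta.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # k nearest neighbors, excluding self
        v_com = vel[nbrs].mean(axis=0)           # velocity of neighborhood c.o.m.
        phi = np.arctan2(v_com[1], v_com[0])
        new_theta[i] = theta[i] + gain * np.sin(phi - theta[i]) * dt
    vel = np.column_stack([np.cos(new_theta), np.sin(new_theta)])
    return pos + vel * dt, vel

# Random initial headings; coherence grows as the flock aligns.
rng = np.random.default_rng(4)
pos = rng.uniform(0, 10, size=(50, 2))
ang = rng.uniform(0, 2 * np.pi, 50)
vel = np.column_stack([np.cos(ang), np.sin(ang)])
for _ in range(400):
    pos, vel = tva_step(pos, vel)
```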