Sparse and Redundant Representations for Inverse Problems and Recognition
Sparse and redundant representation of data enables the
description of signals as linear combinations of a few atoms from
a dictionary. In this dissertation, we study applications of
sparse and redundant representations in inverse problems and
object recognition. Furthermore, we propose two novel imaging
modalities based on the recently introduced theory of Compressed
Sensing (CS).
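To make the sparse-representation idea concrete, the following toy sketch (illustrative only, not from the dissertation) describes a signal as a combination of a few atoms of a random dictionary and recovers those atoms with Orthogonal Matching Pursuit, a standard greedy sparse coder:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms of D
    whose linear combination best approximates y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # least-squares fit on the atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

# toy example: a signal built from 2 atoms of a 64x128 dictionary
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(128)
x_true[[5, 40]] = [1.0, -0.5]
y = D @ x_true
x_hat = omp(D, y, k=2)                   # sparse coefficient estimate
```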
This dissertation consists of four major parts. In the first part
of the dissertation, we study a new type of deconvolution
algorithm that is based on estimating the image from a shearlet
decomposition. Shearlets provide a multi-directional and
multi-scale decomposition that has been mathematically shown to
represent distributed discontinuities such as edges better than
traditional wavelets. We develop a deconvolution algorithm that
allows the approximate inversion operator to be controlled on a
multi-scale and multi-directional basis. Furthermore, we
develop a method for the automatic determination of the threshold
values for the noise shrinkage for each scale and direction
without explicit knowledge of the noise variance using a
generalized cross validation method.
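A minimal sketch of GCV-driven threshold selection follows. It is illustrative only: the dissertation applies the idea per scale and direction of a shearlet decomposition, whereas this toy operates on a single coefficient vector, using the standard GCV score for soft thresholding:

```python
import numpy as np

def soft(c, t):
    """Soft-thresholding (noise shrinkage) operator."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

def gcv_threshold(c, grid):
    """Pick the threshold minimizing the generalized cross validation
    score GCV(t) = (||c - soft(c,t)||^2 / N) / (N0/N)^2, where N0 is
    the number of coefficients zeroed at threshold t. No knowledge of
    the noise variance is required."""
    N = c.size
    best_t, best = grid[0], np.inf
    for t in grid:
        ct = soft(c, t)
        n0 = np.count_nonzero(ct == 0)
        if n0 == 0:
            continue
        score = (np.sum((c - ct) ** 2) / N) / (n0 / N) ** 2
        if score < best:
            best_t, best = t, score
    return best_t

# toy: sparse coefficients plus Gaussian noise of unknown variance
rng = np.random.default_rng(3)
true = np.zeros(1024)
true[rng.choice(1024, 20, replace=False)] = rng.choice([-2.0, 2.0], 20)
noisy = true + 0.1 * rng.standard_normal(1024)
t = gcv_threshold(noisy, np.linspace(0.05, 0.8, 40))
denoised = soft(noisy, t)
```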
In the second part of the dissertation, we study a reconstruction
method that recovers highly undersampled images assumed to have a
sparse representation in a gradient domain by using partial
measurement samples that are collected in the Fourier domain. Our
method makes use of a robust generalized Poisson solver that
greatly aids in achieving a significantly improved performance
over similar proposed methods. We demonstrate by experiments
that this new technique handles both random and restricted
sampling scenarios more flexibly than its competitors.
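The role of the Poisson solver can be illustrated with a small FFT-based sketch (an assumption-laden toy, not the authors' generalized solver): given estimated gradient fields, it integrates them back to an image by solving the periodic Poisson equation, which diagonalizes under the 2-D FFT:

```python
import numpy as np

def poisson_integrate(gx, gy):
    """Recover u (up to an additive constant) from its periodic forward
    differences by solving  lap(u) = div(g)  via FFT diagonalization."""
    H, W = gx.shape
    # divergence via backward differences (adjoint of the forward diff)
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
    # eigenvalues of the periodic discrete Laplacian, per axis
    wx = 2.0 * np.cos(2.0 * np.pi * np.arange(W) / W) - 2.0
    wy = 2.0 * np.cos(2.0 * np.pi * np.arange(H) / H) - 2.0
    denom = wy[:, None] + wx[None, :]
    denom[0, 0] = 1.0                    # DC term is unconstrained
    U = np.fft.fft2(div) / denom
    U[0, 0] = 0.0                        # fix the free constant to zero mean
    return np.real(np.fft.ifft2(U))

# round trip: differentiate an image, then reintegrate it
rng = np.random.default_rng(4)
u = rng.standard_normal((32, 32))
gx = np.roll(u, -1, axis=1) - u          # periodic forward differences
gy = np.roll(u, -1, axis=0) - u
rec = poisson_integrate(gx, gy)
```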
In the third part of the dissertation, we introduce a novel
Synthetic Aperture Radar (SAR) imaging modality which can provide
a high resolution map of the spatial distribution of targets and
terrain using a significantly reduced number of needed transmitted
and/or received electromagnetic waveforms. We demonstrate that
this new imaging scheme requires no new hardware components and
allows the aperture to be compressed. It also
presents many new applications and advantages, including strong
resistance to countermeasures and interception, imaging of much
wider swaths, and reduced on-board storage requirements.
The last part of the dissertation deals with object recognition
based on learning dictionaries for simultaneous sparse signal
approximations and feature extraction. A dictionary is learned
for each object class based on given training examples which
minimize the representation error with a sparseness constraint. A
novel test image is then projected onto the span of the atoms in
each learned dictionary. The residual vectors along with the
coefficients are then used for recognition. Applications to
illumination-robust face recognition and automatic target
recognition are presented.
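The residual-based decision rule can be sketched as follows (a hypothetical toy with random per-class dictionaries; the dissertation learns each dictionary from training examples under a sparseness constraint):

```python
import numpy as np

def classify(dictionaries, y):
    """Project y onto the span of each class dictionary and return
    the index of the class with the smallest residual norm."""
    residuals = []
    for D in dictionaries:
        coef, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ coef))
    return int(np.argmin(residuals))

# toy: two classes, each spanned by its own random atoms
rng = np.random.default_rng(2)
D0 = rng.standard_normal((20, 3))
D1 = rng.standard_normal((20, 3))
y = D1 @ np.array([1.0, -2.0, 0.5])      # lies in class 1's span
label = classify([D0, D1], y)            # class 1 yields zero residual
```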
Investigating Key Techniques to Leverage the Functionality of Ground/Wall Penetrating Radar
Ground penetrating radar (GPR) has been extensively utilized as a highly efficient and non-destructive testing method for infrastructure evaluation, such as highway rebar detection, bridge deck inspection, asphalt pavement monitoring, underground pipe leakage detection, railroad ballast assessment, etc. The focus of this dissertation is to investigate the key techniques to tackle GPR signal processing from three perspectives: (1) removing or suppressing the radar clutter signal; (2) detecting the underground target or the region of interest (RoI) in the GPR image; (3) imaging the underground target to eliminate or alleviate the feature distortion and reconstructing the shape of the target with good fidelity.
In the first part of this dissertation, a low-rank and sparse representation based approach is designed to remove the clutter produced by rough ground surface reflection for impulse radar. In the second part, Hilbert transform and 2-D Rényi entropy based statistical analysis is explored to improve RoI detection efficiency and to reduce the computational cost of more sophisticated data post-processing. In the third part, a back-projection imaging algorithm is designed for both ground-coupled and air-coupled multistatic GPR configurations. Since the refraction phenomenon at the air-ground interface is considered and the spatial offsets between the transceiver antennas are compensated in this algorithm, the data points collected by receiver antennas in the time domain can be accurately mapped back to the spatial domain and the targets can be imaged in the scene space under testing. Experimental results validate that the proposed three-stage cascade signal processing methodologies can improve the performance of GPR systems.
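The low-rank-plus-sparse idea in the first part can be sketched with a generic robust PCA solver (the inexact augmented Lagrangian method, a textbook baseline rather than the dissertation's exact algorithm): the low-rank component models clutter that repeats across A-scans, while the sparse component retains isolated target reflections.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    """Elementwise soft-thresholding (prox of the l1 norm)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(X, iters=200):
    """Split X into L (low-rank clutter) + S (sparse targets) via the
    inexact augmented Lagrangian method for robust PCA."""
    m, n = X.shape
    lam = 1.0 / np.sqrt(max(m, n))
    mu = 1.25 / np.linalg.norm(X, 2)
    Y = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)            # dual ascent on X = L + S
        mu = min(mu * 1.05, 1e7)
    return L, S

# toy B-scan: rank-1 "clutter" plus a few strong "target" pixels
rng = np.random.default_rng(5)
L0 = np.outer(rng.standard_normal(40), rng.standard_normal(40))
S0 = np.zeros((40, 40))
S0.flat[rng.choice(1600, 30, replace=False)] = 5.0
X = L0 + S0
L, S = rpca(X)
```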
Exploring scatterer anisotropy in synthetic aperture radar via sub-aperture analysis
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2001. Includes bibliographical references (p. 189-193). Scattering from man-made objects in SAR imagery exhibits aspect and frequency dependencies which are not always well modeled by standard SAR imaging techniques based on the ideal point scattering model. This is particularly the case for high-resolution wide-band and wide-aperture data, where model deviations are even more pronounced. If ignored, these deviations will reduce recognition performance due to the model mismatch; but when appropriately accounted for, these deviations from the ideal point scattering model can be exploited as attributes to better distinguish scatterers and their respective targets. With this in mind, this thesis develops an efficient modeling framework based on a sub-aperture pyramid to utilize scatterer anisotropy for the purpose of target classification. Two approaches are presented to exploit scatterer anisotropy using the sub-aperture pyramid. The first is a nonparametric classifier that learns the azimuthal dependencies within an image and makes a classification decision based on the learned dependencies. The second approach is a parametric attribution of the observed anisotropy characterizing the azimuthal location and concentration of the scattering response. Working from the sub-aperture scattering model, we develop a hypothesis test to characterize anisotropy. We start with an isolated scatterer model which produces a test with an intuitive interpretation. We then address the problem of robustness to interfering scatterers by extending the model to account for neighboring scatterers which corrupt the anisotropy attribution. The development of the anisotropy attribution culminates with an iterative attribution approach that identifies and compensates for neighboring scatterers.
In the course of the development of the anisotropy attribution, we also study the relationship between scatterer phenomenology and our anisotropy attribution. This analysis reveals the information provided by the anisotropy attribution for two common sources of anisotropy. Furthermore, the analysis explicitly demonstrates the benefit of using wide-aperture data to produce more stable and more descriptive characterizations of scatterer anisotropy. By Andrew J. Kim. Ph.D.
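Sub-aperture analysis itself is easy to sketch: split the azimuth extent of the phase history into halves (and halves of halves), image each piece, and compare the responses. The toy below assumes a hypothetical data layout with azimuth along columns; an isotropic scatterer puts equal energy in both half-aperture images, while an anisotropic one does not.

```python
import numpy as np

def subaperture_images(ph, levels=1):
    """Build a sub-aperture pyramid: at level l, the azimuth (column)
    extent of the phase history is split into 2**l pieces and each
    piece is imaged with a zero-padded 2-D inverse FFT."""
    pyramid = []
    n = ph.shape[1]
    for level in range(levels + 1):
        k = 2 ** level
        w = n // k
        pyramid.append([np.abs(np.fft.ifft2(ph[:, i * w:(i + 1) * w],
                                            s=ph.shape))
                        for i in range(k)])
    return pyramid

def half_aperture_energies(ph):
    """Image energies of the two half-aperture sub-images."""
    left, right = subaperture_images(ph, levels=1)[1]
    return np.sum(left ** 2), np.sum(right ** 2)

# isotropic scatterer: flat response across the full aperture
iso = np.ones((16, 16), dtype=complex)
# anisotropic scatterer: responds only over the left half-aperture
aniso = iso.copy()
aniso[:, 8:] = 0.0
```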
Sensor Signal and Information Processing II
In the current age of information explosion, newly invented technological sensors and software are now tightly integrated with our everyday lives. Many sensor processing algorithms have incorporated some form of computational intelligence as part of their core framework in problem solving. These algorithms have the capacity to generalize and discover knowledge for themselves and learn new information whenever unseen data are captured. The primary aim of sensor processing is to develop techniques to interpret, understand, and act on information contained in the data. The interest of this book is in developing intelligent signal processing in order to pave the way for smart sensors. This involves mathematical advancement of nonlinear signal processing theory and its applications that extend far beyond traditional techniques. It bridges the boundary between theory and application, developing novel theoretically inspired methodologies targeting both longstanding and emergent signal processing applications. The topics range from phishing detection to integration of terrestrial laser scanning, and from fault diagnosis to bio-inspired filtering. The book will appeal to established practitioners, along with researchers and students in the emerging field of smart sensor processing.
Improved Dictionary Formation and Search for Synthetic Aperture Radar Canonical Shape Feature Extraction
Automatic target recognition (ATR) requires detecting and estimating distinguishing characteristics of a target of interest. Radar data provides range and amplitude information; range distinguishes location relative to the radar, whereas amplitude determines strength of reflectivity. Strong reflecting scattering features of targets are detected from a combination of radar returns, or radar phase history (PH) data. Strong scatterers are modeled as canonical shapes (a plate, dihedral, trihedral, sphere, cylinder, or top-hat). Modeling the scatterers as canonical shapes takes the high-dimensional radar PH from each scatterer and parameterizes the scatterer according to its location, size, and orientation. This thesis efficiently estimates the parameters of canonical shapes from radar PH data using dictionary search. Target scattering peaks are detected using 2-D SAR imaging. The parameters are estimated with decreased computation and improved accuracy relative to previous algorithms through reduced SAR image processing, informed parameter subspace bounding, and more efficient dictionary clustering. The effects of the collection flight path and radar parameters are investigated to permit pre-collection error analysis. The results show that even for a limited collection geometry, the dictionary estimates the canonical shape scatterer parameters well.
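At its core, the dictionary-search step is matched filtering against precomputed responses over a parameter grid. A schematic sketch follows, with a made-up scalar "response" standing in for a canonical-shape scattering model (the response function, grid, and names here are all hypothetical):

```python
import numpy as np

def build_dictionary(params, response):
    """Stack the unit-normalized response for every candidate
    parameter value into a dictionary matrix, one atom per column."""
    D = np.stack([response(p) for p in params], axis=1)
    return D / np.linalg.norm(D, axis=0)

def estimate_parameter(D, params, y):
    """Return the grid parameter whose atom best correlates with y."""
    return params[int(np.argmax(np.abs(D.T @ y)))]

# stand-in response model: frequency of a cosine over a fixed support
t = np.linspace(0.0, 10.0, 256)
params = np.arange(1, 11)
D = build_dictionary(params, lambda p: np.cos(p * t))
y = np.cos(5.0 * t)                      # measurement generated at p = 5
p_hat = estimate_parameter(D, params, y)
```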
MIMO Radar Waveform Design and Sparse Reconstruction for Extended Target Detection in Clutter
This dissertation explores the detection and false alarm rate performance of a novel transmit-waveform and receiver filter design algorithm as part of a larger Compressed Sensing (CS) based Multiple Input Multiple Output (MIMO) bistatic radar system amidst clutter. Transmit-waveforms and receiver filters were jointly designed using an algorithm that minimizes the mutual coherence of the combined transmit-waveform, target frequency response, and receiver filter matrix product as a design criterion. This work considered the Probability of Detection (P_D) and Probability of False Alarm (P_FA) curves relative to a detection threshold, τ_th, Receiver Operating Characteristic (ROC), reconstruction error and mutual coherence measures for performance characterization of the design algorithm to detect both known and fluctuating targets amidst realistic clutter and noise. Furthermore, this work paired the joint waveform-receiver filter design algorithm with multiple sparse reconstruction algorithms, including: Regularized Orthogonal Matching Pursuit (ROMP), Compressive Sampling Matching Pursuit (CoSaMP) and Complex Approximate Message Passing (CAMP) algorithms. It was found that the transmit-waveform and receiver filter design algorithm significantly outperforms statically designed, benchmark waveforms for the detection of both known and fluctuating extended targets across all tested sparse reconstruction algorithms. In particular, CoSaMP was specified to minimize the maximum allowable P_FA of the CS radar system as compared to the baseline ROMP sparse reconstruction algorithm of previous work. However, while the designed waveforms do provide performance gains and CoSaMP affords a reduced peak false alarm rate as compared to the previous work, fluctuating target impulse responses and clutter severely hampered CS radar performance when either of these sparse reconstruction techniques was implemented.
To improve detection rate and, by extension, ROC performance of the CS radar system under non-ideal conditions, this work implemented the CAMP sparse reconstruction algorithm in the CS radar system. It was found that detection rates vastly improve with the implementation of CAMP, especially in the case of fluctuating target impulse responses amidst clutter or at low receive signal to noise ratios (β_n). Furthermore, where previous work considered τ_th = 0, the implementation of a variable τ_th in this work offered a novel trade-off between P_D and P_FA in the design of the CS radar system. In the simulated radar scene it was found that τ_th could be moderately increased while retaining the same or similar P_D and drastically improving P_FA. This suggests that the selection and specification of the sparse reconstruction algorithm and corresponding τ_th for this radar system is not trivial. Rather, a trade-off was noted between P_D and P_FA based on the choice and parameters of the sparse reconstruction technique and detection threshold, highlighting an engineering trade-space in CS radar system design. Thus, in CS radar system design, the radar designer must carefully choose and specify the sparse reconstruction technique and appropriate detection threshold in addition to transmit-waveforms, receiver filters and building the dictionary of target impulse responses for detection in the radar scene.
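The design criterion used above, mutual coherence, has a compact definition that is easy to state in code. This sketch computes it for any sensing matrix; it is the generic definition, independent of the specific waveform-filter construction in the dissertation:

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct unit-normalized
    columns of A; smaller coherence favors sparse reconstruction."""
    G = A / np.linalg.norm(A, axis=0)    # normalize each column (atom)
    C = np.abs(G.T @ G)                  # Gram matrix of correlations
    np.fill_diagonal(C, 0.0)             # ignore self-correlations
    return float(C.max())

# orthonormal columns achieve the minimum possible coherence of zero;
# a generic overcomplete matrix has coherence strictly between 0 and 1
mu_eye = mutual_coherence(np.eye(8))
rng = np.random.default_rng(6)
mu_rand = mutual_coherence(rng.standard_normal((32, 64)))
```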
The University Defence Research Collaboration In Signal Processing
This chapter describes the development of algorithms for automatic detection of anomalies from multi-dimensional, undersampled and incomplete datasets. The challenge in this work is to identify and classify behaviours as normal or abnormal, safe or threatening, from an irregular and often heterogeneous sensor network. Many defence and civilian applications can be modelled as complex networks of interconnected nodes with unknown or uncertain spatio-temporal relations. The behaviour of such heterogeneous networks can exhibit dynamic properties, reflecting evolution in both network structure (new nodes appearing and existing nodes disappearing), as well as inter-node relations.
The UDRC work has addressed not only the detection of anomalies, but also the identification of their nature and their statistical characteristics. Normal patterns and changes in behaviour have been incorporated to provide an acceptable balance between true positive rate, false positive rate, performance and computational cost. Data quality measures have been used to ensure the models of normality are not corrupted by unreliable and ambiguous data. The context for the activity of each node in complex networks offers an even more efficient anomaly detection mechanism. This has allowed the development of efficient approaches which not only detect anomalies but which also go on to classify their behaviour.
A Tutorial on Speckle Reduction in Synthetic Aperture Radar Images
Speckle is a granular disturbance, usually modeled as multiplicative noise, that affects synthetic aperture radar (SAR) images, as well as all coherent images. Over the last three decades, several methods have been proposed for the reduction of speckle, or despeckling, in SAR images. The goal of this paper is to provide a comprehensive review of despeckling methods since their birth, over thirty years ago, highlighting trends and changing approaches over the years. The concept of fully developed speckle is explained. Drawbacks of homomorphic filtering are pointed out. Assets of multiresolution despeckling, as opposed to spatial-domain despeckling, are highlighted. Advantages of undecimated, or stationary, wavelet transforms over decimated ones are also discussed. Bayesian estimators and probability density function (pdf) models in both spatial and multiresolution domains are reviewed. Scale-space-varying pdf models, as opposed to scale-varying models, are promoted. Promising methods following non-Bayesian approaches, like nonlocal (NL) filtering and total variation (TV) regularization, are reviewed and compared to spatial- and wavelet-domain Bayesian filters. Both established and new trends for assessment of despeckling are presented. A few experiments on simulated data and real COSMO-SkyMed SAR images highlight, on the one hand, the cost-performance tradeoff of the different methods, and on the other, the effectiveness of solutions purposely designed for SAR heterogeneity and not-fully-developed speckle. Eventually, upcoming methods based on new concepts of signal processing, like compressive sensing, are foreseen as a new generation of despeckling, after spatial-domain and multiresolution-domain methods.
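As a concrete example of the multiplicative model and of spatial-domain despeckling, here is a minimal Lee-type filter (a classical baseline, not one of the advanced methods surveyed), using a numpy-only box mean; window size and look count are illustrative choices:

```python
import numpy as np

def box_mean(img, win):
    """Separable moving-average (box) filter with zero-padded borders."""
    k = np.ones(win) / win
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def lee_filter(img, win=7, looks=1):
    """Basic Lee despeckling: local linear MMSE estimate under the
    multiplicative model  observed = reflectivity * speckle."""
    mean = box_mean(img, win)
    var = box_mean(img ** 2, win) - mean ** 2
    cu2 = 1.0 / looks                    # squared speckle variation coefficient
    # weight ~ share of local variance attributable to the scene, not speckle
    w = np.clip((var - cu2 * mean ** 2) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + w * (img - mean)

# fully developed single-look intensity speckle on a constant scene
rng = np.random.default_rng(7)
scene = np.ones((64, 64))
speckled = scene * rng.exponential(1.0, scene.shape)   # multiplicative noise
smoothed = lee_filter(speckled)
```

On a homogeneous scene the weight collapses toward zero, so the filter returns the local mean and the speckle variance drops sharply; near edges the weight rises, preserving detail.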