542 research outputs found
Investigating Key Techniques to Leverage the Functionality of Ground/Wall Penetrating Radar
Ground penetrating radar (GPR) has been extensively utilized as a highly efficient and non-destructive testing method for infrastructure evaluation, such as highway rebar detection, bridge deck inspection, asphalt pavement monitoring, underground pipe leakage detection, and railroad ballast assessment. The focus of this dissertation is to investigate the key techniques for tackling GPR signal processing from three perspectives: (1) removing or suppressing the radar clutter signal; (2) detecting the underground target or the region of interest (RoI) in the GPR image; (3) imaging the underground target to eliminate or alleviate feature distortion and reconstructing the shape of the target with good fidelity.
In the first part of this dissertation, a low-rank and sparse representation based approach is designed to remove the clutter produced by rough ground surface reflection for impulse radar. In the second part, a statistical analysis based on the Hilbert transform and 2-D Renyi entropy is explored to improve RoI detection efficiency and to reduce the computational cost of more sophisticated data post-processing. In the third part, a back-projection imaging algorithm is designed for both ground-coupled and air-coupled multistatic GPR configurations. Since the algorithm accounts for refraction at the air-ground interface and compensates for the spatial offsets between the transmitter and receiver antennas, the data points collected by the receiver antennas in the time domain can be accurately mapped back to the spatial domain, and the targets can be imaged in the scene space under test. Experimental results validate that the proposed three-stage cascade signal processing methodology can improve the performance of GPR systems.
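As an illustration of the low-rank-plus-sparse idea in the first stage above, here is a minimal robust-PCA sketch in NumPy. This is not the dissertation's algorithm, and every function name and parameter value is illustrative; the premise is that the ground reflection repeats across the traces of a B-scan (so it is approximately low-rank), while buried targets occupy few pixels (so their responses are sparse):

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: shrink singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Elementwise soft thresholding."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(D, lam=None, n_iter=500, tol=1e-7):
    """Split a B-scan D into low-rank clutter L plus sparse targets S by
    inexact augmented-Lagrangian minimization of
    ||L||_* + lam * ||S||_1  subject to  D = L + S."""
    m, n = D.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(D).sum() + 1e-12)  # common initialization
    mu_max, rho = mu * 1e7, 1.5
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                           # Lagrange multipliers
    norm_D = np.linalg.norm(D)
    for _ in range(n_iter):
        L = svt(D - S + Y / mu, 1.0 / mu)
        S = soft(D - L + Y / mu, lam / mu)
        R = D - L - S                              # constraint residual
        Y += mu * R
        mu = min(mu * rho, mu_max)
        if np.linalg.norm(R) <= tol * norm_D:
            break
    return L, S
```

Applied to a B-scan matrix, `rpca` returns the clutter estimate `L` and the de-cluttered image `S` in which target hyperbolas stand out.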
Modern GPR Target Recognition Methods
Traditional GPR target recognition methods include pre-processing the data by
removal of noisy signatures, dewowing (high-pass filtering to remove
low-frequency noise), filtering, deconvolution, migration (correction of the
effect of survey geometry), and can rely on the simulation of GPR responses.
These techniques usually suffer from loss of information, an inability to adapt
based on prior results, and poor performance in the presence of strong
clutter and noise. To address these challenges, several advanced processing
methods have been developed over the past decade to enhance GPR target
recognition. In this chapter, we provide an overview of these modern GPR
processing techniques. In particular, we focus on the following methods:
adaptive receive processing of range profiles depending on the target
environment; adoption of learning-based methods so that the radar utilizes the
results from prior measurements; application of methods that exploit the fact
that the target scene is sparse in some domain or dictionary; application of
advanced classification techniques; and convolutional coding, which provides
succinct and representative features of the targets. We describe each of these
techniques or their combinations through a representative application of
landmine detection.
Comment: Book chapter, 56 pages, 17 figures, 12 tables. arXiv admin note: substantial text overlap with arXiv:1806.0459
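The dewowing step mentioned in the pre-processing list above has a particularly simple form; the sketch below (illustrative window size, not taken from the chapter) removes the low-frequency "wow" from a single trace by subtracting a running mean:

```python
import numpy as np

def dewow(trace, window=31):
    """Suppress low-frequency 'wow' in one GPR trace by subtracting a
    running mean -- a simple zero-phase high-pass filter.
    `window` (in samples) must be odd."""
    kernel = np.ones(window) / window
    padded = np.pad(trace, window // 2, mode="edge")  # avoid edge shrinkage
    baseline = np.convolve(padded, kernel, mode="valid")
    return trace - baseline
```

Choosing the window several reflection periods long keeps the high-frequency reflections while stripping the drifting baseline.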
Feature and Decision Level Fusion Using Multiple Kernel Learning and Fuzzy Integrals
The work collected in this dissertation addresses the problem of data fusion. In other words, this is the problem of making decisions (also known as the problem of classification in the machine learning and statistics communities) when data from multiple sources are available, or when decisions or confidence levels from a panel of decision-makers are accessible. This problem has become increasingly important in recent years, especially with the ever-increasing popularity of autonomous systems outfitted with suites of sensors and the dawn of the "age of big data." While data fusion is a very broad topic, the work in this dissertation considers two specific techniques: feature-level fusion and decision-level fusion. In general, the fusion methods proposed throughout this dissertation rely on kernel methods and fuzzy integrals. Both are very powerful tools; however, they also come with challenges, some of which are summarized below. I address these challenges in this dissertation.
Kernel methods for classification are a well-studied area in which data are implicitly mapped from a lower-dimensional space to a higher-dimensional space to improve classification accuracy. However, for most kernel methods, one must still choose a kernel to use for the problem. Since there is, in general, no way of knowing which kernel is best, multiple kernel learning (MKL) is a technique used to learn the aggregation of a set of valid kernels into a single (ideally) superior kernel. The aggregation can be done using weighted sums of the pre-computed kernels, but determining the summation weights is not a trivial task. Furthermore, MKL does not work well with large datasets because of limited storage space and prediction speed. These challenges are tackled by the introduction of many new algorithms in the following chapters. I also address MKL's storage and speed drawbacks, allowing MKL-based techniques to be applied to big data efficiently.
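One simple way to make the "weighted sums of pre-computed kernels" idea concrete is the kernel-target alignment heuristic sketched below. This is a generic illustrative baseline, not one of the dissertation's algorithms, and the function names are invented for the sketch:

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) kernel matrix for the rows of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def alignment(K, yyT):
    """Centered kernel-target alignment between K and the label kernel yy^T."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ K @ H
    return (Kc * yyT).sum() / (np.linalg.norm(Kc) * np.linalg.norm(yyT) + 1e-12)

def mkl_weights(kernels, y):
    """Score each base kernel by its (clipped) alignment with the labels,
    then normalize, so the fused kernel is sum_m w_m K_m with
    w >= 0 and sum(w) = 1."""
    yyT = np.outer(y, y)
    a = np.array([max(alignment(K, yyT), 0.0) for K in kernels])
    return a / a.sum()
```

Base kernels that better match the class structure receive larger weights; the fused kernel remains symmetric and positive semidefinite as a nonnegative combination of valid kernels.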
Some algorithms in this work are based on the Choquet fuzzy integral, a powerful nonlinear aggregation operator parameterized by the fuzzy measure (FM). These decision-level fusion algorithms learn a fuzzy measure by minimizing a sum of squared error (SSE) criterion based on a set of training data. The flexibility of the Choquet integral comes with a cost, however: given a set of N decision makers, the size of the FM the algorithm must learn is 2^N. This means that the training data must be diverse enough to include 2^N independent observations, though this is rarely encountered in practice. I address this in the following chapters via many different regularization functions, a popular technique in machine learning and statistics used to prevent overfitting and increase model generalization. Finally, it is worth noting that the aggregation behavior of the Choquet integral is not intuitive. I tackle this by proposing a quantitative visualization strategy that allows the FM and Choquet integral behavior to be shown simultaneously.
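For reference, evaluating the Choquet integral itself is a short computation once the 2^N-element fuzzy measure is available; the sketch below is the generic textbook form, not the dissertation's learning algorithm:

```python
def choquet(x, g):
    """Choquet integral of inputs x = {source: value in [0, 1]} with
    respect to a fuzzy measure g, given as a dict mapping frozensets of
    sources to [0, 1]; g must be monotone with g(all sources) = 1."""
    items = sorted(x.items(), key=lambda kv: kv[1])   # ascending values
    sources = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        subset = frozenset(sources[i:])               # sources valued >= v
        total += (v - prev) * g[subset]
        prev = v
    return total
```

When `g` happens to be additive, the integral collapses to an ordinary weighted mean; the non-additive entries are what let it model interactions between decision makers.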
Scalable learning for geostatistics and speaker recognition
With improved data acquisition methods, the amount of data that is being collected has increased severalfold. One of the objectives in data collection is to learn useful underlying patterns. In order to work with data at this scale, the methods not only need to be effective with the underlying data, but also have to be scalable to handle larger data collections. This thesis focuses on developing scalable and effective methods targeted towards different domains, geostatistics and speaker recognition in particular.
Initially, we focus on kernel-based learning methods and develop a GPU-based parallel framework for this class of problems. An improved numerical algorithm that utilizes GPU parallelization to further enhance the computational performance of kernel regression is proposed. These methods are then demonstrated on problems arising in geostatistics and speaker recognition.
In geostatistics, data is often collected at scattered locations, and factors like instrument malfunction lead to missing observations. Applications often require the ability to interpolate this scattered spatiotemporal data onto a regular grid continuously over time. This problem can be formulated as a regression problem, and one of the most popular geostatistical interpolation techniques, kriging, is analogous to a standard kernel method: Gaussian process regression. Kriging is computationally expensive and needs major modifications and accelerations in order to be used practically. The GPU framework developed for kernel methods is extended to kriging, and the GPU's texture memory is further exploited for enhanced computational performance.
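The kriging/GP connection can be sketched in a few lines of NumPy. This is a minimal CPU illustration with invented hyperparameters, not the thesis's accelerated implementation:

```python
import numpy as np

def rbf(A, B, length=1.0):
    """Squared-exponential covariance between the row-points of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / length**2)

def krige(X_obs, y_obs, X_new, length=1.0, noise=1e-6):
    """Simple kriging / GP regression posterior mean at X_new.
    The dense O(n^3) solve below is exactly the scaling bottleneck that
    motivates GPU acceleration for large spatial datasets."""
    K = rbf(X_obs, X_obs, length) + noise * np.eye(len(X_obs))
    alpha = np.linalg.solve(K, y_obs)
    return rbf(X_new, X_obs, length) @ alpha
```

Given scattered observations, the same call evaluates the interpolant at every node of a regular grid, which is the missing-data scenario described above.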
Speaker recognition deals with the task of verifying a person's identity based on samples of his or her speech, called "utterances". This thesis focuses on the text-independent setting, for which three new recognition frameworks were developed. We propose a kernelized Renyi distance based similarity scoring for speaker recognition. While its performance is promising, it does not generalize well for limited training data and therefore does not compare well to state-of-the-art recognition systems. These systems compensate for the variability in the speech data due to the message, channel variability, noise, and reverberation. State-of-the-art systems model each speaker as a Gaussian mixture model (GMM) and compensate for the variability (termed "nuisance"). We propose a novel discriminative framework using a latent variable technique, partial least squares (PLS), for improved recognition. The kernelized version of this algorithm is used to build a state-of-the-art speaker ID system that shows results competitive with the best systems reported in NIST's 2010 Speaker Recognition Evaluation.
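The PLS building block can be sketched in a few lines of NumPy; this is the generic NIPALS algorithm for a scalar response, not the thesis's kernelized speaker-recognition system, and the variable names are illustrative:

```python
import numpy as np

def pls1(X, y, n_comp=2):
    """PLS1 regression via NIPALS: extract latent directions that
    maximize covariance with y, then form the regression coefficients
    B = W (P^T W)^{-1} q."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc                 # direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xc @ w                    # latent scores
        tt = t @ t
        p = Xc.T @ t / tt             # X loadings
        qk = (yc @ t) / tt            # y loading
        Xc = Xc - np.outer(t, p)      # deflate X
        yc = yc - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, x_mean, y_mean
```

With the number of components equal to the rank of `X`, PLS1 reproduces ordinary least squares; truncating the components is what gives the latent-variable dimensionality reduction exploited for nuisance compensation.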
Measurement Matrix Design for Compressive Sensing Based MIMO Radar
In colocated multiple-input multiple-output (MIMO) radar using compressive
sensing (CS), a receive node compresses its received signal via a linear
transformation, referred to as the measurement matrix. The samples are subsequently
forwarded to a fusion center, where an L1-optimization problem is formulated
and solved for target information. CS-based MIMO radar exploits the target
sparsity in the angle-Doppler-range space and thus achieves the high
localization performance of traditional MIMO radar but with many fewer
measurements. The measurement matrix is vital for CS recovery performance. This
paper considers the design of measurement matrices that achieve an optimality
criterion that depends on the coherence of the sensing matrix (CSM) and/or
signal-to-interference ratio (SIR). The first approach minimizes a performance
penalty that is a linear combination of CSM and the inverse SIR. The second one
imposes a structure on the measurement matrix and determines the parameters
involved so that the SIR is enhanced. Depending on the transmit waveforms, the
second approach can significantly improve SIR, while maintaining CSM comparable
to that of the Gaussian random measurement matrix (GRMM). Simulations indicate
that the proposed measurement matrices can improve detection accuracy compared
with a GRMM.
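The coherence criterion at the heart of the design above can be evaluated directly. The sketch below computes the mutual coherence of a given sensing matrix; it is the generic definition, not the paper's optimization procedure:

```python
import numpy as np

def coherence(Phi):
    """Mutual coherence of a sensing matrix: the largest absolute
    normalized inner product between two distinct columns.
    CS recovery guarantees improve as this value shrinks."""
    G = Phi / np.linalg.norm(Phi, axis=0, keepdims=True)  # unit columns
    M = np.abs(G.T @ G)                                   # Gram magnitudes
    np.fill_diagonal(M, 0.0)                              # ignore self-products
    return M.max()
```

A measurement-matrix design loop would score candidate matrices with a criterion like this (possibly combined with an SIR term) and keep the minimizer.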
- …