
    Context-dependent fusion with application to landmine detection.

    Traditional machine learning and pattern recognition systems use a feature descriptor to describe the sensor data and a particular classifier (also called expert or learner) to determine the true class of a given pattern. However, for complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems, and has proven to be a viable alternative to using a single classifier. In this thesis we introduce a new Context-Dependent Fusion (CDF) approach. We use this method to fuse multiple algorithms that use different types of features and different classification methods on multiple sensor data. The proposed approach is motivated by the observation that there is no single algorithm that can consistently outperform all other algorithms. In fact, the relative performance of different algorithms can vary significantly depending on several factors, such as the extracted features and the characteristics of the target class. The CDF method is a local approach that adapts the fusion method to different regions of the feature space. The goal is to take advantage of the strengths of a few algorithms in different regions of the feature space without being affected by the weaknesses of the other algorithms, while also avoiding the loss of potentially valuable information provided by a few weak classifiers by considering their output as well. The proposed fusion has three main interacting components. The first component, called Context Extraction, partitions the composite feature space into groups of similar signatures, or contexts. The second component assigns an aggregation weight to each detector's decision in each context based on its relative performance within the context.
The third component combines the multiple decisions, using the learned weights, to make a final decision. For the Context Extraction component, a novel algorithm that performs simultaneous clustering and feature discrimination is used to cluster the composite feature space and identify the relevant features for each cluster. For the fusion component, six different methods were proposed and investigated. The proposed approaches were applied to the problem of landmine detection. Detection and removal of landmines is a serious problem affecting civilians and soldiers worldwide. Several landmine detection algorithms have been proposed. Extensive testing of these methods has shown that the relative performance of different detectors can vary significantly depending on the mine type, geographical site, soil and weather conditions, burial depth, etc. Therefore, multi-algorithm and multi-sensor fusion is a critical component in landmine detection. Results on large and diverse real data collections show that the proposed method can identify meaningful and coherent clusters and that different expert algorithms can be identified for the different contexts. Our experiments have also indicated that the context-dependent fusion outperforms all individual detectors and several global fusion methods.
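The three components described above can be caricatured in a few lines of code. This is a minimal sketch, not the thesis's actual algorithm: plain k-means stands in for the novel clustering-with-feature-discrimination step, and detector accuracy within each cluster stands in for the learned aggregation weights. All function and variable names are illustrative.

```python
import numpy as np

def train_cdf(features, scores, labels, n_contexts=3, seed=0):
    """Learn contexts (k-means stand-in for Context Extraction) and
    per-context detector weights from relative performance."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), n_contexts, replace=False)].copy()
    for _ in range(20):  # k-means on the composite feature space
        assign = np.argmin(((features[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_contexts):
            if np.any(assign == k):
                centers[k] = features[assign == k].mean(axis=0)
    # weight each detector by its accuracy within each context
    weights = np.full((n_contexts, scores.shape[1]), 1.0 / scores.shape[1])
    for k in range(n_contexts):
        m = assign == k
        if m.any():
            acc = ((scores[m] > 0.5) == labels[m, None]).mean(axis=0) + 1e-6
            weights[k] = acc / acc.sum()
    return centers, weights

def fuse(x, x_scores, centers, weights):
    """Pick the context nearest to x, then combine detector confidences."""
    k = np.argmin(((centers - x) ** 2).sum(-1))
    return float(weights[k] @ x_scores)
```

Because each weight vector is normalized, the fused output of confidences in [0, 1] stays in [0, 1], and a detector that performs poorly in one context can still dominate in another.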

    The SIMCA algorithm for processing ground penetrating radar data and its practical applications

    The main objective of this thesis is to present a new image processing technique to improve the detectability of buried objects such as landmines using Ground Penetrating Radar (GPR). The main challenge of GPR-based landmine detection is to have an accurate image analysis method that is capable of reducing false alarms. However, an accurate image relies on having sufficient spatial resolution in the received signal. An anti-personnel mine (APM) can have a diameter as small as 2 cm, whereas many soils have very high attenuation at frequencies above 450 MHz. In order to solve the detection problem, a system-level analysis of the issues involved in the recognition of landmines using image reconstruction is required. The thesis illustrates the development of a novel technique called SIMCA ("SIMulated Correlation Algorithm"), based on area or volume correlation between the trace that would be returned by an ideal point reflector in the soil conditions at the site (obtained using a realistic simulation of Maxwell's equations) and the actual trace. During an initialization phase, SIMCA carries out radar simulation using the system parameters of the radar and the soil properties. Then SIMCA takes the raw data as the radar is scanned over the ground and uses a clutter removal technique to remove various unwanted clutter signals such as cross talk, the initial ground reflection and antenna ringing. The trace which would be returned by a target under these conditions is then used to form a correlation kernel using a GPR simulator. The 2D GPR scan (B scan), formed by abutting successive time-amplitude plots taken from different spatial positions as column vectors, is then correlated with the kernel using the Pearson correlation coefficient, resulting in a correlated image which is brightest at points most similar to the canonical target. This image is then raised to an odd power > 2 to enhance the target/background separation.
The first part of the thesis presents a 2-dimensional technique using the B scans produced by correlating the clutter-removed radargram ('B scan') with the kernel produced from the simulation. In order to validate the SIMCA 2D algorithm, qualitative evidence was used first: the B scans produced by the SIMCA algorithm were compared with B scans from the best alternative systems reported in the open literature, and the SIMCA algorithm was found to produce clearer B scans than the other techniques. Next, quantitative evidence was used to validate the SIMCA algorithm and demonstrate that it produces clear images. Two methods were used to obtain this quantitative evidence. In the first method, an expert GPR user and 4 other general users predicted the location of landmines from the correlated B scans. After some training, the human users were asked to indicate the location of targets on a printed sheet of paper showing the correlated B scans produced by the SIMCA algorithm, bearing in mind that this was a blind test. For the second quantitative method, the AMIRA software was used to obtain values of the burial depth and the position of the target in the x direction. The absolute errors in the burial depth and in the position in the x direction, obtained from the SIMCA algorithm and from Scheers et al.'s algorithm when compared to the corresponding ground truth values, were then calculated. Two-dimensional techniques that use B scans do not give accurate information on the shape and dimensions of the buried target, in comparison to 3D techniques that use 3D data ('C scans'). As a result, the next part of the thesis presents a 3-dimensional technique.
The equivalent 3D kernel is formed by rotating the 2D kernel produced by the simulation along the polar co-ordinates, whilst the 3D data is the clutter-removed C scan. Volume correlation is then performed between the intersecting parts of the kernel and the data. This data is used to create iso-surfaces of the slices raised to an odd power > 2. To validate the algorithm, an objective validation process is used which compares the actual target volume to that produced by the reconstruction process. The SIMCA 3D technique and Scheers et al.'s technique (the best alternative system reported in the open literature) were used to image a variety of landmines using GPR scans. The types of mines included plastic, wooden and glass ones. In all cases clear images were obtained with SIMCA. In contrast, Scheers' algorithm, the previous state of the art, failed to provide clear images of non-metallic landmines. For this thesis, the above algorithms were tested on landmine data and on locating foundations in demolished buildings, to validate and demonstrate that the SIMCA algorithms outperform existing technologies such as Scheers et al.'s method and the REFLEXW commercial software.
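The core 2D correlation step can be sketched as a sliding Pearson correlation between a simulated target kernel and the clutter-removed B scan, followed by the odd-power enhancement the thesis describes. This is an illustrative toy, not the thesis's code; the kernel here is an arbitrary array standing in for the simulated point-reflector trace.

```python
import numpy as np

def simca_correlate(bscan, kernel, power=3):
    """Slide a simulated target kernel over a clutter-removed B scan,
    compute the Pearson correlation coefficient at every position, and
    raise the image to an odd power to sharpen target/background contrast."""
    kh, kw = kernel.shape
    H, W = bscan.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    kz = (kernel - kernel.mean()) / (kernel.std() + 1e-12)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = bscan[i:i + kh, j:j + kw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-12)
            out[i, j] = (pz * kz).mean()   # Pearson coefficient in [-1, 1]
    return out ** power                    # odd power preserves sign
```

The output image peaks where the data most resembles the canonical target, and cubing (or any odd power > 2) suppresses weak background correlations while keeping the sign of anti-correlated regions.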

    A generic framework for context-dependent fusion with application to landmine detection.

    For complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems, and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these are global, as they assign a degree of worthiness to each classifier that is averaged over the entire training data. This may not be the optimal way to combine the different experts, since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, a few local methods have been proposed in recent years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are either computationally expensive and/or perform these two tasks independently of each other. However, feature space partitioning and algorithm selection are not independent, and their optimization should be simultaneous. In this dissertation, we introduce a new local fusion approach, called Context Extraction for Local Fusion (CELF). CELF was designed to adapt the fusion to different regions of the feature space. It takes advantage of the strengths of the different experts and overcomes their limitations. First, we describe the baseline CELF algorithm. We formulate a novel objective function that combines context identification and multi-algorithm fusion criteria into a joint objective function.
The context identification component strives to partition the input feature space into different clusters (called contexts), while the fusion component strives to learn the optimal fusion parameters within each cluster. Second, we propose several variations of CELF to deal with different application scenarios. In particular, we propose an extension that includes a feature discrimination component (CELF-FD). This version is advantageous when dealing with high-dimensional feature spaces and/or when the number of features extracted by the individual algorithms varies significantly. CELF-CA is another extension of CELF that adds a regularization term to the objective function to introduce competition among the clusters and to find the optimal number of clusters in an unsupervised way. CELF-CA starts by partitioning the data into a large number of small clusters. As the algorithm progresses, adjacent clusters compete for data points, and clusters that lose the competition gradually become depleted and vanish. Third, we propose CELF-M, which generalizes CELF to support multi-class data sets. The baseline CELF and its extensions were formulated to use linear aggregation to combine the output of the different algorithms within each context. For some applications this can be too restrictive, and non-linear fusion may be needed. To address this potential drawback, we propose two other variations of CELF that use non-linear aggregation. The first is based on Neural Networks (CELF-NN) and the second on Fuzzy Integrals (CELF-FI). The latter has the desirable property of assigning weights to subsets of classifiers to take into account the interactions between them. To test a new signature using CELF (or its variants), each algorithm extracts its set of features and assigns a confidence value. Then, the features are used to identify the best context, and the fusion parameters of this context are used to fuse the individual confidence values.
For each variation of CELF, we formulate an objective function, derive the necessary conditions to optimize it, and construct an iterative algorithm. We then use examples to illustrate the behavior of the algorithm, compare it to global fusion, and highlight its advantages. We apply our proposed fusion methods to the problem of landmine detection, using data collected with Ground Penetrating Radar (GPR) and Wideband Electro-Magnetic Induction (WEMI) sensors. We show that CELF (and its variants) can identify meaningful and coherent contexts (e.g. mines of the same type, mines buried at the same site, etc.) and that different expert algorithms can be identified for the different contexts. In addition to the landmine detection application, we apply our approaches to semantic video indexing, image database categorization, and phoneme recognition. In all applications, we compare the performance of CELF with standard fusion methods and show that our approach outperforms all of them.
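The joint optimization idea, clustering and fusion learned together rather than sequentially, can be sketched as an alternating scheme. This is a caricature under simplifying assumptions (Euclidean contexts, squared-error fusion loss, linear aggregation, a hypothetical mixing weight `alpha`), not the actual CELF objective or update equations.

```python
import numpy as np

def celf_fit(features, scores, labels, K=2, alpha=0.5, iters=10, seed=0):
    """Alternate between (a) assigning each sample to a context using a cost
    that mixes distance-to-centroid with that context's fusion error, and
    (b) refitting per-context linear fusion weights by least squares."""
    labels = np.asarray(labels, dtype=float)
    N, D = scores.shape
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(N, K, replace=False)].copy()
    W = np.full((K, D), 1.0 / D)            # linear aggregation weights
    assign = np.zeros(N, dtype=int)
    for _ in range(iters):
        dist = ((features[:, None] - centers) ** 2).sum(-1)    # N x K
        fuse_err = (scores @ W.T - labels[:, None]) ** 2       # N x K
        assign = np.argmin(dist + alpha * fuse_err, axis=1)    # joint criterion
        for k in range(K):
            m = assign == k
            if m.sum() >= D:                # enough points to refit
                centers[k] = features[m].mean(axis=0)
                W[k], *_ = np.linalg.lstsq(scores[m], labels[m], rcond=None)
    return centers, W, assign
```

The fusion-error term in the assignment step is what couples the two tasks: a sample can be pulled into a context whose experts explain it well even if its centroid is slightly farther away.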

    Investigating Key Techniques to Leverage the Functionality of Ground/Wall Penetrating Radar

    Ground penetrating radar (GPR) has been extensively utilized as a highly efficient and non-destructive testing method for infrastructure evaluation, such as highway rebar detection, bridge deck inspection, asphalt pavement monitoring, underground pipe leakage detection, railroad ballast assessment, etc. The focus of this dissertation is to investigate the key techniques of GPR signal processing from three perspectives: (1) removing or suppressing the radar clutter signal; (2) detecting the underground target or the region of interest (RoI) in the GPR image; (3) imaging the underground target to eliminate or alleviate feature distortion and reconstructing the shape of the target with good fidelity. In the first part of this dissertation, a low-rank and sparse representation based approach is designed to remove the clutter produced by rough ground surface reflection for impulse radar. In the second part, Hilbert Transform and 2-D Renyi entropy based statistical analysis is explored to improve RoI detection efficiency and to reduce the computational cost of more sophisticated data post-processing. In the third part, a back-projection imaging algorithm is designed for both ground-coupled and air-coupled multistatic GPR configurations. Since the refraction phenomenon at the air-ground interface is considered and the spatial offsets between the transceiver antennas are compensated in this algorithm, the data points collected by the receiver antennas in the time domain can be accurately mapped back to the spatial domain, and the targets can be imaged in the scene space under test. Experimental results validate that the proposed three-stage cascade signal processing methodology can improve the performance of GPR systems.
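The intuition behind the first stage, low-rank plus sparse clutter removal, is that ground bounce and antenna ringing are nearly identical across traces (low-rank across the B scan matrix), while a target response is localized (sparse). The sketch below uses a truncated SVD as a simple stand-in for the dissertation's full low-rank and sparse representation method.

```python
import numpy as np

def remove_clutter(bscan, rank=1):
    """Treat horizontally coherent clutter (ground bounce, ringing) as a
    low-rank component of the B scan matrix and return the residual,
    which contains the localized target response."""
    U, s, Vt = np.linalg.svd(bscan, full_matrices=False)
    clutter = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r background
    return bscan - clutter                            # target component
```

With a synthetic B scan built from an identical row repeated down the matrix plus one target spike, the residual isolates the spike; real data needs the more robust decomposition the dissertation develops.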

    Generalized multi-stream hidden Markov models.

    For complex classification systems, data is usually gathered from multiple sources of information that have varying degrees of reliability. In fact, assuming that the different sources have the same relevance in describing all the data might lead to erroneous behavior. The classification error accumulates, and can be more severe for temporal data, where each sample is represented by a sequence of observations. Thus, there is compelling evidence that learning algorithms should include a relevance weight for each source of information (stream) as a parameter that needs to be learned. In this dissertation, we assume that the multi-stream temporal data is generated by independent and synchronous streams. Using this assumption, we develop, implement, and test multi-stream continuous and discrete hidden Markov model (HMM) algorithms. For the discrete case, we propose two new approaches to generalize the baseline discrete HMM. The first combines unsupervised learning, feature discrimination, standard discrete HMMs and weighted distances to learn the codebook with feature-dependent weights for each symbol. The second approach consists of modifying the HMM structure to include stream relevance weights, generalizing the standard discrete Baum-Welch learning algorithm, and deriving the necessary conditions to optimize all model parameters simultaneously. We also generalize the minimum classification error (MCE) discriminative training algorithm to include stream relevance weights. For the continuous HMM, we introduce a new approach that integrates the stream relevance weights in the objective function. Our approach is based on the linearization of the probability density function. Two variations are proposed: the mixture-level and state-level variations. As in the discrete case, we generalize the continuous Baum-Welch learning algorithm to accommodate these changes, and we derive the necessary conditions for updating the model parameters.
We also generalize the MCE learning algorithm to derive the necessary conditions for the model parameters' update. The proposed discrete and continuous HMMs are tested on synthetic data sets. They are also validated on various applications including Australian Sign Language, audio classification, face classification, and, more extensively, the problem of landmine detection using ground penetrating radar data. For all applications, we show that considerable improvement can be achieved compared to the baseline HMM and the existing multi-stream HMM algorithms.
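The effect of stream relevance weights on inference can be illustrated with a forward pass in which each state's emission likelihood is a linear combination of per-stream likelihoods, in the spirit of the linearization described above. This is a minimal sketch with made-up shapes, not the dissertation's model; the likelihoods are taken as precomputed inputs.

```python
import numpy as np

def stream_weighted_forward(A, pi, stream_liks, w):
    """Forward algorithm where each state's emission likelihood is a
    linear combination of per-stream likelihoods, weighted by learned
    stream-relevance weights w[state, stream] (rows sum to 1)."""
    # stream_liks[t, s, n]: likelihood of frame t's stream-s observation in state n
    b = np.einsum('tsn,ns->tn', stream_liks, w)   # combined emissions, T x N
    alpha = pi * b[0]                             # standard forward recursion
    for t in range(1, len(b)):
        alpha = (alpha @ A) * b[t]
    return alpha.sum()                            # sequence likelihood
```

Setting a state's weight row to favor one stream makes the model's likelihood track that stream's evidence, which is exactly the per-state, per-stream relevance the generalized Baum-Welch updates are derived to learn.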

    Experimental Evaluation of Several Key Factors Affecting Root Biomass Estimation by 1500 MHz Ground-Penetrating Radar

    Accurate quantification of coarse roots without disturbance represents a gap in our understanding of belowground ecology. Ground penetrating radar (GPR) has shown significant promise for coarse root detection and measurement; however, root orientation relative to scanning transect direction, the difficulty of identifying dead root mass, and the effects of root shadowing are all key factors affecting biomass estimation that require additional research. Specifically, many aspects of GPR applicability for coarse root measurement have not been tested with a full range of antenna frequencies. We tested the effects of multiple scanning directions, root crossover, and root versus soil moisture content in a sand-hill mixed oak community using a 1500 MHz antenna, which provides higher resolution than the oft-used 900 MHz antenna. Combining four scanning directions produced a significant relationship between GPR signal reflectance and coarse root biomass (R² = 0.75, p < 0.01) and reduced the variability encountered when fewer scanning directions were used. Additionally, significantly fewer roots were correctly identified when their moisture content was allowed to equalize with the surrounding soil (p < 0.01), providing evidence to support assertions that GPR cannot reliably identify dead root mass. The 1500 MHz antenna was able to identify roots in close proximity to each other as well as roots shadowed beneath shallower roots, providing higher precision than a 900 MHz antenna. As expected, using a 1500 MHz antenna eliminates some of the deficiency in precision observed in studies that utilized lower frequency antennas.

    Ensemble learning method for hidden Markov models.

    For complex classification systems, data are gathered from various sources and potentially have different representations. Thus, data may have large intra-class variations. In fact, modeling each data class with a single model might lead to poor generalization. The classification error can be more severe for temporal data, where each sample is represented by a sequence of observations. Thus, there is a need to build a classification system that takes into account the variations within each class of the data. This dissertation introduces an ensemble learning method for temporal data that uses a mixture of Hidden Markov Model (HMM) classifiers. We hypothesize that the data are generated by K models, each of which reflects a particular trend in the data. Model identification could be achieved through clustering in the feature space or in the parameter space. However, this approach is inappropriate in the context of sequential data. The proposed approach is instead based on clustering in the log-likelihood space, and has two main steps. First, one HMM is fit to each of the N individual sequences. For each fitted model, we evaluate the log-likelihood of each sequence. This results in an N-by-N log-likelihood distance matrix that is partitioned into K groups using a relational clustering algorithm. In the second step, we learn the parameters of one HMM per group. We propose using and optimizing various training approaches for the different K groups depending on their size and homogeneity. In particular, we investigate the maximum likelihood (ML), the minimum classification error (MCE) based discriminative, and the Variational Bayesian (VB) training approaches. Finally, to test a new sequence, its likelihood is computed under all the models and a final confidence value is assigned by combining the multiple models' outputs using a decision-level fusion method such as an artificial neural network or a hierarchical mixture of experts.
Our approach was evaluated on two real-world applications: (1) identification of Cardio-Pulmonary Resuscitation (CPR) scenes in videos simulating medical crises; and (2) landmine detection using Ground Penetrating Radar (GPR). Results on both applications show that the proposed method can identify meaningful and coherent HMM mixture components that describe different properties of the data. Each HMM mixture component models a group of data that share common attributes. The results indicate that the proposed method outperforms the baseline HMM that uses one model for each class in the data.
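The first step, building the N-by-N log-likelihood matrix for relational clustering, can be sketched as follows. To keep the example self-contained, a first-order Markov chain stands in for the per-sequence HMM; the structure of the matrix and its symmetrization are the point, not the model class.

```python
import numpy as np

def fit_chain(seq, n_sym, eps=1e-3):
    """Stand-in for per-sequence HMM fitting: a smoothed first-order
    Markov chain over n_sym discrete symbols."""
    T = np.full((n_sym, n_sym), eps)
    for a, b in zip(seq, seq[1:]):
        T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def loglik(seq, T):
    """Log-likelihood of a symbol sequence under transition matrix T."""
    return float(sum(np.log(T[a, b]) for a, b in zip(seq, seq[1:])))

def loglik_distance_matrix(seqs, n_sym):
    """D[i, j] = symmetrized negative log-likelihood of sequence j under
    the model fit to sequence i; clustering D groups sequences that
    explain each other well."""
    models = [fit_chain(s, n_sym) for s in seqs]
    L = np.array([[loglik(s, m) for s in seqs] for m in models])
    return -(L + L.T) / 2.0
```

Sequences generated by the same underlying trend score each other highly, so their mutual distances are small, and a relational clustering algorithm applied to this matrix recovers the K groups on which the per-group HMMs are then trained.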

    DETERMINE: Novel Radar Techniques for Humanitarian Demining

    Today the plague of landmines represents one of the greatest curses of modern times, killing and maiming innocent people every day. It is not easy to provide a global estimate of the dimension of the problem; however, reported casualties indicate that the majority of the victims are civilians, with almost half of them children. Among all the technologies currently employed for landmine clearance, Ground Penetrating Radar (GPR) is one of those expected to increase the efficiency of operations, even if its high-resolution imaging capability and its ability to detect non-metallic landmines are unfortunately balanced by a high sensor false alarm rate. Most landmines may be considered as multiple layered dielectric cylinders that interact with each other to produce multiple reflections, which is not the case for other common clutter objects. Considering that each scattering component has its own angular radiation pattern, this research has evaluated the improvements that multistatic configurations could bring to the collected information content. Employing representative landmine models, a number of experimental campaigns have confirmed that GPR is capable of detecting the internal reflections and that the presence of such scattering components can be highlighted by changing the antenna offset. In particular, the results show that the information that can be extracted changes significantly with the antenna separation, demonstrating that this approach can provide better confidence in the discrimination and recognition process. The proposed bistatic approach aims at exploiting the possible presence of internal structure beneath the target surface, which for landmines means the activation or detonation assemblies and possible internal material diversity, while maintaining a limited acquisition effort.
Such bistatic configurations are then included in a conceptual design of a highly flexible GPR system capable of searching for landmines across a large variety of terrains, at reasonably low cost and with operator safety as a priority.

    Advanced Techniques for Ground Penetrating Radar Imaging

    Ground penetrating radar (GPR) has become one of the key technologies in subsurface sensing and, in general, in non-destructive testing (NDT), since it is able to detect both metallic and non-metallic targets. GPR for NDT has been successfully introduced in a wide range of sectors, such as mining and geology, glaciology, civil engineering and civil works, archaeology, and security and defense. In recent decades, improvements in georeferencing and positioning systems have enabled the introduction of synthetic aperture radar (SAR) techniques in GPR systems, yielding GPR–SAR systems capable of providing high-resolution microwave images. In parallel, the radiofrequency front-end of GPR systems has been optimized in terms of compactness (e.g., smaller Tx/Rx antennas) and cost. These advances, combined with improvements in autonomous platforms such as unmanned terrestrial and aerial vehicles, have fostered new fields of application for GPR where fast and reliable detection capabilities are demanded. In addition, processing techniques have been improved, taking advantage of the research conducted in related fields such as inverse scattering and imaging. As a result, novel and robust algorithms have been developed for clutter reduction, automatic target recognition, and efficient processing of large sets of measurements to enable real-time imaging, among others. This Special Issue provides an overview of the state of the art in GPR imaging, focusing on the latest advances from both hardware and software perspectives.