
    Signal Processing Techniques for Landmine Detection Using Impulse Ground Penetrating Radar (ImGPR)

    Landmines and unexploded ordnance (UXO) are laid during a conflict against enemy forces, yet they continue to kill or maim civilians decades after the conflict has ended. More than 110 million landmines remain actively lodged around the globe, and every year more than 26,000 innocent civilians are killed or maimed. Most modern landmines are largely nonmetallic or plastic and are therefore difficult to detect with conventional metal detectors, while detection by hand-held prodding is a slow and expensive process. Impulse Ground Penetrating Radar (ImGPR) is a nondestructive technique capable of detecting shallowly buried nonmetallic anti-personnel (AP) and anti-tank (AT) landmines. In this PhD thesis, ImGPR is considered as a tool to detect landmines and UXO. The presence of strong ground clutter and noise degrades the performance of GPR; hence, using a GPR sensor is almost impossible without sophisticated signal processing. For electromagnetic wave propagation modeling, a multilayer transmission line technique is applied that considers different soil types at different moisture levels, with plastic targets of different diameters buried at different depths. The modeled signal is then used to estimate the ground and buried target parameters. In the parameter estimation procedure, a surface reflection parameter method (SRPM) is applied. Signal processing algorithms are implemented for clutter reduction and decision making, with attention mainly given to the development of techniques that are applicable to real-time landmine detection. Advanced techniques are preceded by elementary preprocessing techniques, which are useful for signal correction and noise reduction. Background subtraction techniques based on multilayer modeling, spatial filtering and adaptive background subtraction are implemented; in addition, decorrelation and symmetry filtering techniques are investigated. In the correlated decision fusion framework, local decisions are transmitted to the fusion center to compute a global decision; here, the concept of confidence information of local decisions is crucial to obtain acceptable detection results. The Bahadur-Lazarsfeld and Chow expansions are used to estimate the joint probability density function of the correlated decisions. Furthermore, a decision fusion method based on fuzzy sets is implemented. All proposed methods are evaluated using simulated as well as real GPR data measurements from many scenarios. The real data collection campaign took place at the Griesheim old airport and the Botanischer Garten, Darmstadt, Germany, in July 2011.
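    As a rough illustration of the elementary preprocessing mentioned above, the sketch below performs background subtraction on a GPR B-scan, either against the mean trace or against a moving average of neighbouring traces. It is a minimal stand-in for the thesis's multilayer-modeling and adaptive background subtraction methods, not a reproduction of them; the function name, array shapes, and the toy data at the bottom are illustrative assumptions.

```python
import numpy as np

def subtract_background(bscan, window=None):
    """Clutter reduction for a GPR B-scan via background subtraction.

    bscan  : 2-D array of shape (n_samples, n_traces); each column is one A-scan.
    window : None -> subtract the mean trace of the whole scan;
             int  -> subtract a moving average over `window` neighbouring traces.
    """
    bscan = np.asarray(bscan, dtype=float)
    if window is None:
        return bscan - bscan.mean(axis=1, keepdims=True)
    out = np.empty_like(bscan)
    half = window // 2
    n_traces = bscan.shape[1]
    for j in range(n_traces):
        lo, hi = max(0, j - half), min(n_traces, j + half + 1)
        out[:, j] = bscan[:, j] - bscan[:, lo:hi].mean(axis=1)  # local background estimate
    return out

# Toy example: trace-invariant ground clutter plus a weak localized "target" response
rng = np.random.default_rng(0)
scan = np.tile(rng.normal(size=(256, 1)), (1, 100))   # identical clutter in every trace
scan[120:124, 45:55] += 0.5                           # crude buried-target anomaly
cleaned = subtract_background(scan, window=15)
```

    Mean-trace subtraction removes clutter that is constant along the scan, while the moving-average variant tolerates a slowly varying air-ground interface; both leave a spatially localized target response largely intact.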

    Mine Action: Lessons and Challenges

    Mine Action: Lessons and Challenges represents the views of selected experts as to what some of the key lessons have been, and what challenges remain for the future. Following an Executive Summary of its main conclusions and findings, this work is laid out in two parts. Part I looks at the core activities — the “pillars” — of mine action: advocacy, victim assistance, mine risk education, demining (survey, marking and clearance of mines and unexploded ordnance) and stockpile destruction. Part II looks at key management issues, specifically, programme coordination and management, information management and capacity development. This work concludes with a thought-provoking assessment of what mine action has actually achieved.

    A generic framework for context-dependent fusion with application to landmine detection.

    For complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, the combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems, and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these are global, in that they assign each classifier a degree of worthiness that is averaged over the entire training data. This may not be the optimal way to combine the different experts, since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, a few local methods have been proposed in recent years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are either computationally expensive and/or perform these two tasks independently of each other. However, feature space partitioning and algorithm selection are not independent, and their optimization should be simultaneous. In this dissertation, we introduce a new local fusion approach, called Context Extraction for Local Fusion (CELF). CELF was designed to adapt the fusion to different regions of the feature space. It takes advantage of the strengths of the different experts and overcomes their limitations. First, we describe the baseline CELF algorithm. We formulate a novel objective function that combines context identification and multi-algorithm fusion criteria into a joint objective function. The context identification component strives to partition the input feature space into different clusters (called contexts), while the fusion component strives to learn the optimal fusion parameters within each cluster. Second, we propose several variations of CELF to deal with different application scenarios. In particular, we propose an extension that includes a feature discrimination component (CELF-FD). This version is advantageous when dealing with high dimensional feature spaces and/or when the number of features extracted by the individual algorithms varies significantly. CELF-CA is another extension of CELF that adds a regularization term to the objective function to introduce competition among the clusters and to find the optimal number of clusters in an unsupervised way. CELF-CA starts by partitioning the data into a large number of small clusters. As the algorithm progresses, adjacent clusters compete for data points, and clusters that lose the competition gradually become depleted and vanish. Third, we propose CELF-M, which generalizes CELF to support multi-class data sets. The baseline CELF and its extensions were formulated to use linear aggregation to combine the output of the different algorithms within each context. For some applications this can be too restrictive, and non-linear fusion may be needed. To address this potential drawback, we propose two other variations of CELF that use non-linear aggregation. The first is based on Neural Networks (CELF-NN) and the second is based on Fuzzy Integrals (CELF-FI).
The latter has the desirable property of assigning weights to subsets of classifiers to take into account the interaction between them. To test a new signature using CELF (or its variants), each algorithm extracts its set of features and assigns a confidence value. The features are then used to identify the best context, and the fusion parameters of that context are used to fuse the individual confidence values. For each variation of CELF, we formulate an objective function, derive the necessary conditions to optimize it, and construct an iterative algorithm. We then use examples to illustrate the behavior of the algorithm, compare it to global fusion, and highlight its advantages. We apply our proposed fusion methods to the problem of landmine detection, using data collected with Ground Penetrating Radar (GPR) and Wideband Electro-Magnetic Induction (WEMI) sensors. We show that CELF (and its variants) can identify meaningful and coherent contexts (e.g. mines of the same type, mines buried at the same site, etc.) and that different expert algorithms can be identified for the different contexts. In addition to the landmine detection application, we apply our approaches to semantic video indexing, image database categorization, and phoneme recognition. In all applications, we compare the performance of CELF with standard fusion methods and show that our approach outperforms all of these methods.
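    To make the local fusion idea concrete, here is a deliberately simplified sketch: it partitions the context features with k-means and then fits one linear aggregation weight vector per partition by least squares. CELF itself optimizes context identification and fusion parameters jointly through a single objective function, so this two-stage version is only a conceptual stand-in; the function names, the use of scikit-learn's KMeans, and the data layout are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_local_fusion(features, confidences, labels, n_contexts=3):
    """Partition the feature space, then learn per-context linear fusion weights.

    features    : (n_samples, n_features) context features for each alarm.
    confidences : (n_samples, n_algorithms) confidences from the individual detectors.
    labels      : (n_samples,) ground truth (1 = mine, 0 = clutter).
    """
    km = KMeans(n_clusters=n_contexts, n_init=10, random_state=0).fit(features)
    weights = []
    for c in range(n_contexts):
        idx = km.labels_ == c
        # least-squares linear aggregation of the detector confidences in this context
        w, *_ = np.linalg.lstsq(confidences[idx], labels[idx].astype(float), rcond=None)
        weights.append(w)
    return km, weights

def fuse(km, weights, features, confidences):
    """Route each sample to its context and fuse with that context's weights."""
    contexts = km.predict(features)
    return np.array([confidences[i] @ weights[c] for i, c in enumerate(contexts)])
```

    The point of the sketch is only that different regions of the feature space end up with different fusion weights, so a detector that dominates in one context can be down-weighted in another.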

    Feature and Decision Level Fusion Using Multiple Kernel Learning and Fuzzy Integrals

    The work collected in this dissertation addresses the problem of data fusion. In other words, this is the problem of making decisions (also known as the problem of classification in the machine learning and statistics communities) when data from multiple sources are available, or when decisions/confidence levels from a panel of decision-makers are accessible. This problem has become increasingly important in recent years, especially with the ever-increasing popularity of autonomous systems outfitted with suites of sensors and the dawn of the "age of big data." While data fusion is a very broad topic, the work in this dissertation considers two very specific techniques: feature-level fusion and decision-level fusion. In general, the fusion methods proposed throughout this dissertation rely on kernel methods and fuzzy integrals. Both are very powerful tools; however, they also come with challenges, some of which are summarized below. I address these challenges in this dissertation. Kernel methods for classification are a well-studied area in which data are implicitly mapped from a lower-dimensional space to a higher-dimensional space to improve classification accuracy. However, for most kernel methods one must still choose a kernel to use for the problem. Since there is, in general, no way of knowing which kernel is the best, multiple kernel learning (MKL) is a technique used to learn the aggregation of a set of valid kernels into a single (ideally) superior kernel. The aggregation can be done using weighted sums of the pre-computed kernels, but determining the summation weights is not a trivial task. Furthermore, MKL does not work well with large datasets because of limited storage space and prediction speed. These challenges are tackled by the introduction of many new algorithms in the following chapters. I also address MKL's storage and speed drawbacks, allowing MKL-based techniques to be applied to big data efficiently. Some algorithms in this work are based on the Choquet fuzzy integral, a powerful nonlinear aggregation operator parameterized by the fuzzy measure (FM). These decision-level fusion algorithms learn a fuzzy measure by minimizing a sum of squared error (SSE) criterion based on a set of training data. The flexibility of the Choquet integral comes with a cost, however: given a set of N decision makers, the size of the FM the algorithm must learn is 2^N. This means that the training data must be diverse enough to include 2^N independent observations, which is rarely the case in practice. I address this in the following chapters via many different regularization functions, a popular technique in machine learning and statistics used to prevent overfitting and increase model generalization. Finally, it is worth noting that the aggregation behavior of the Choquet integral is not intuitive. I tackle this by proposing a quantitative visualization strategy allowing the FM and Choquet integral behavior to be shown simultaneously.
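    For readers unfamiliar with the Choquet integral, the snippet below is a minimal sketch of how it aggregates N confidence values with respect to a fuzzy measure, and why the full measure has 2^N values (one per subset of sources). The dictionary-of-frozensets representation and the example measure values are illustrative assumptions, not the dissertation's learned parameters.

```python
import numpy as np

def choquet_integral(h, fm):
    """Discrete Choquet integral of inputs h with respect to fuzzy measure fm.

    h  : (N,) array of inputs, e.g. classifier confidences.
    fm : dict mapping frozenset of source indices -> measure value in [0, 1],
         with fm[frozenset()] = 0 and the measure of all sources equal to 1.
    """
    order = np.argsort(h)[::-1]          # sources sorted by decreasing confidence
    result, prev_g, chain = 0.0, 0.0, set()
    for idx in order:
        chain.add(int(idx))
        g = fm[frozenset(chain)]         # measure of the current top-k sources
        result += h[idx] * (g - prev_g)  # each input weighted by a measure difference
        prev_g = g
    return result

# Two sources: the pair's measure is 1, the singleton values are the free parameters
fm = {frozenset(): 0.0, frozenset({0}): 0.6, frozenset({1}): 0.5, frozenset({0, 1}): 1.0}
print(choquet_integral(np.array([0.8, 0.3]), fm))   # 0.8*0.6 + 0.3*(1.0 - 0.6) = 0.60
```

    Because every subset of sources carries its own measure value, N decision makers require 2^N parameters, which is why regularization and sufficiently diverse training data matter here.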

    FY10 Engineering Innovations, Research and Technology Report


    Analysis of the impact of the Ottawa Treaty on the humanitarian demining process in Nicaragua, 1997 - 2007

    Thesis (Licentiate in Diplomacy and International Relations), Universidad Americana, Managua, 2009. This thesis presents an analysis of the impact of the Ottawa Treaty on the humanitarian demining process in Nicaragua during the period 1997 to 2007. An anti-personnel mine is any device designed to explode from the presence, proximity, or contact of a person, incapacitating or killing one or more people, and is designed to be placed under, on, or near the surface of the ground. The Ottawa Treaty supported proposals aimed at prohibiting the use of anti-personnel mines, moving Nicaragua one step closer to peace and human prosperity.

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature and decision level fusion for machine learning with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1) - 1) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (which is linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints as is required by traditional GA algorithms. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature level fusion in hyperspectral image processing to enhance pixel level classification.
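    The following sketch illustrates what "data-supported variables" means in this setting: once a sample's inputs are sorted, its Choquet integral touches only the N nested subsets along that sort chain, so the union of all chains over the training set is the set of FM values the data can actually constrain. This is a conceptual illustration under assumed array shapes and names, not the dissertation's learning procedure.

```python
import numpy as np

def data_supported_variables(H):
    """Enumerate the fuzzy-measure variables touched by the training data.

    H : (n_samples, N) array of Choquet-integral inputs. Each sample's sort
    order selects one chain of N nested subsets; only those subsets' measure
    values appear in that sample's Choquet integral.
    """
    supported = set()
    for h in H:
        chain = set()
        for idx in np.argsort(h)[::-1]:   # decreasing sort order of this sample
            chain.add(int(idx))
            supported.add(frozenset(chain))
    return supported

H = np.random.default_rng(1).random((20, 5))   # 20 samples, N = 5 sources
used = data_supported_variables(H)
print(f"{len(used)} of {2**5 - 1} non-empty subsets are data-supported")
```

    Each sample contributes at most N variables, so the number of variables that actually need to be optimized grows at most linearly with the number of training samples rather than exponentially with N.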