
    Detection and Identification of Camouflaged Targets using Hyperspectral and LiDAR data

    Camouflaging is the process of merging a target with its background in order to reduce or delay its detection. It can be achieved with different materials and methods, such as camouflage nets and paints. Defence applications often require quick detection of camouflaged targets in a dynamic battlefield scenario. Although hyperspectral (HSI) data may facilitate the detection of camouflaged targets, detection is complicated by issues such as spectral variability and high dimensionality. This paper presents a framework for the detection of camouflaged targets that allows military analysts to coordinate and apply expert knowledge to resolve camouflaged targets in remotely sensed data. The desired camouflaged target (a set of three chairs under a camouflaging net) has been resolved in three steps. First, hyperspectral data processing detects the locations of potential camouflaged targets: it narrows down their locations by detecting the camouflaging net using independent component analysis and spectral matching algorithms. Second, detection and identification are performed using LiDAR point cloud classification and morphological analysis; the HSI processing discards the redundant majority of the LiDAR point cloud and supports detailed analysis of only the small portion of the data the system deems relevant, which facilitates extraction of salient features of the potential camouflaged target. Finally, the decisions obtained are fused to infer the identity of the desired targets. The experimental results indicate that the proposed approach can successfully resolve camouflaged targets, assuming some a priori knowledge about the morphology of the targets likely to be present.
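    The three-step pipeline above can be sketched in miniature as follows. This is an illustrative reading of the abstract, not the paper's implementation: the spectral matching step is stood in for by the spectral angle mapper rather than the paper's ICA-based processing, and the function names, threshold, and grid-alignment assumption are all hypothetical.

```python
import numpy as np

def spectral_angle(pixel, reference):
    """Spectral Angle Mapper distance between two spectra, in radians."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def detect_candidates(hsi_cube, net_spectrum, angle_thresh=0.1):
    """Step 1: flag pixels whose spectrum matches the camouflage net."""
    h, w, b = hsi_cube.shape
    flat = hsi_cube.reshape(-1, b)
    angles = np.array([spectral_angle(p, net_spectrum) for p in flat])
    return (angles < angle_thresh).reshape(h, w)

def crop_lidar(points, mask, cell=1.0):
    """Step 2 (input reduction): keep only LiDAR points (x, y, z) that
    fall on candidate cells, discarding the redundant majority."""
    cols = (points[:, 0] / cell).astype(int)
    rows = (points[:, 1] / cell).astype(int)
    keep = mask[rows.clip(0, mask.shape[0] - 1), cols.clip(0, mask.shape[1] - 1)]
    return points[keep]

def fuse(hsi_hit, lidar_hit):
    """Step 3: a simple AND fusion of the two detector decisions."""
    return hsi_hit and lidar_hit
```

    The morphological analysis of the cropped point cloud (step 2 proper) is omitted; only the reduction of the point cloud to the HSI-flagged cells is shown.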

    Multiple Instance Choquet Integral for multiresolution sensor fusion

    Imagine you are traveling to Columbia, MO for the first time. On your flight to Columbia, the woman sitting next to you recommended a bakery by a large park with a big yellow umbrella outside. After you land, you need directions to the hotel from the airport. If you are driving a rental car, you will need to park it at a parking lot or a parking structure. After a good night's sleep in the hotel, you may decide to go for a run in the morning on the closest trail and stop by that recommended bakery under a big yellow umbrella. It would be helpful in the course of completing all these tasks to accurately distinguish the proper car route and walking trail, find a parking lot, and pinpoint the yellow umbrella. Satellite imagery and other geo-tagged data such as Open Street Maps provide effective information for this goal. Open Street Maps can provide road information and suggest bakeries within a five-mile radius. The yellow umbrella is a distinctive color and, perhaps, is made of a distinctive material that can be identified from a hyperspectral camera. Open Street Maps polygons are tagged with information such as "parking lot" and "sidewalk." All this information can and should be fused to help identify and offer better guidance on the tasks you are completing. Supervised learning methods generally require precise labels for each training data point. It is hard (and probably costly) to go through and manually label each pixel in the training imagery. GPS coordinates cannot always be fully trusted, as a GPS device may only be accurate to the level of several pixels. In many cases, it is practically infeasible to obtain accurate pixel-level training labels to perform fusion for all the imagery and maps available. Besides, the training data may come in a variety of data types, such as imagery or a 3D point cloud. The imagery may have different resolutions, scales, and even coordinate systems.
Previous fusion methods are generally limited to data mapped to the same pixel grid, with accurate labels. Furthermore, most fusion methods are restricted to only two sources, even if certain methods, such as pan-sharpening, can deal with different geo-spatial types or data of different resolutions. It is therefore necessary and important to come up with a way to perform fusion on multiple sources of imagery and map data, possibly with different resolutions and of different geo-spatial types, with consideration of uncertain labels. I propose a Multiple Instance Choquet Integral framework for multi-resolution multisensor fusion with uncertain training labels. The Multiple Instance Choquet Integral (MICI) framework addresses uncertain training labels and performs both classification and regression. Three classifier fusion models, i.e. the noisy-or, min-max, and generalized-mean models, are derived under MICI. The Multi-Resolution Multiple Instance Choquet Integral (MR-MICI) framework is built upon the MICI framework and further addresses multi-resolution fusion sources in addition to the uncertainty in training labels. For both MICI and MR-MICI, a monotonic normalized fuzzy measure is learned and used with the Choquet integral to perform two-class classifier fusion given bag-level training labels. An optimization scheme based on an evolutionary algorithm is used to optimize the proposed models. For regression problems where the desired prediction is real-valued, the primary instance assumption is adopted. The algorithms are applied to target detection, regression, and scene understanding applications. Experiments are conducted on the fusion of remote sensing data (hyperspectral and LiDAR) over the University of Southern Mississippi Gulf Park campus. Cloth panel sub-pixel and super-pixel targets were placed on campus with varying levels of occlusion, and the proposed algorithms can successfully detect the targets in the scene.
A semi-supervised approach is developed to automatically generate training labels based on data from Google Maps, Google Earth and Open Street Map. Based on such training labels with uncertainty, the proposed algorithms can also identify materials on campus for scene understanding, such as roads, buildings, sidewalks, etc. In addition, the algorithms are used for weed detection and real-valued crop yield prediction experiments based on remote sensing data that can provide information for agricultural applications. Includes bibliographical references.
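    The Choquet integral at the heart of MICI has a simple concrete form: sort the source outputs in descending order and weight successive differences by the fuzzy measure of the corresponding top-k source sets. A minimal sketch follows; the measure values are hand-set for illustration, not learned by the evolutionary optimization described in the abstract.

```python
import numpy as np

def choquet(values, measure):
    """Discrete Choquet integral of source outputs `values` with respect
    to a fuzzy measure: a dict mapping frozensets of source indices to
    [0, 1], monotone, with measure of the full set equal to 1."""
    order = np.argsort(values)[::-1]                       # sources, best first
    sorted_vals = np.append(np.asarray(values, float)[order], 0.0)
    total = 0.0
    for i in range(len(values)):
        top = frozenset(int(j) for j in order[:i + 1])     # top-(i+1) sources
        total += (sorted_vals[i] - sorted_vals[i + 1]) * measure[top]
    return total

# Two-source fusion with an illustrative (not learned) measure:
g = {frozenset(): 0.0, frozenset({0}): 0.6,
     frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
```

    Setting the measure of every proper subset to 0 makes the integral the minimum of the inputs, and setting each singleton's measure to 1 makes it the maximum, which is why min-max style fusion is expressible in this form.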

    Remote Sensing

    This dual conception of remote sensing brought us to the idea of preparing two different books: in addition to the first book, which displays recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.

    Pixel-Based Land Cover Classification by Fusing Hyperspectral and LiDAR Data


    Terrain classification using machine learning algorithms in a multi-temporal approach A QGIS plug-in implementation

    Land cover and land use (LCLU) maps are essential for the successful administration of a nation's topography; however, conventional on-site data gathering methods are costly and time-consuming. By contrast, remote sensing data can be used to generate up-to-date maps regularly with the help of machine learning algorithms, in turn allowing the assessment of a region's dynamics over time. This dissertation focuses on the implementation of an automated land use and land cover classifier based on remote sensing imagery provided by the modern Sentinel-2 satellite constellation. The project, with Portugal as its focus, expands on previous approaches by using temporal data as an input variable in order to harvest the contextual information contained in the vegetation cycles. The pursued solution investigated the implementation of a 9-class classifier plug-in for an industry-standard, open-source geographic information system. In the course of the testing procedure, various processing techniques and machine learning algorithms were evaluated in a multi-temporal approach, resulting in a final overall accuracy of 65.9% across the targeted classes.
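    The multi-temporal idea, feeding per-date band values as additional input variables so the classifier can see the vegetation cycle, can be sketched as below. The band indices, the NDVI feature, and the nearest-centroid stand-in for the evaluated machine learning algorithms are assumptions for illustration; the dissertation's QGIS plug-in is not reproduced here.

```python
import numpy as np

def stack_temporal_features(scenes):
    """Concatenate per-date band values (plus NDVI) into one feature
    vector per pixel. `scenes` is a list of (H, W, bands) arrays, one
    per acquisition date; band 3 is assumed NIR and band 2 red."""
    feats = []
    for s in scenes:
        nir, red = s[..., 3], s[..., 2]
        ndvi = (nir - red) / (nir + red + 1e-9)   # vegetation index per date
        feats.append(np.dstack([s, ndvi[..., None]]))
    return np.concatenate(feats, axis=-1)          # (H, W, dates * (bands + 1))

class NearestCentroid:
    """Toy stand-in for the dissertation's classifiers: assign each
    pixel to the class with the closest mean feature vector."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=-1)
        return self.classes_[d.argmin(axis=1)]
```

    Stacking dates this way lets even a simple per-pixel classifier separate classes whose single-date spectra overlap but whose seasonal trajectories differ.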

    Multi-Classifiers And Decision Fusion For Robust Statistical Pattern Recognition With Applications To Hyperspectral Classification

    In this dissertation, a multi-classifier, decision fusion framework is proposed for robust classification of high dimensional data in small-sample-size conditions. Such datasets present two key challenges. (1) The high dimensional feature spaces compromise the classifiers’ generalization ability in that the classifier tends to overfit decision boundaries to the training data. This phenomenon is commonly known as the Hughes phenomenon in the pattern classification community. (2) The small sample size of the training data results in ill-conditioned estimates of its statistics. Most classifiers rely on accurate estimation of these statistics for modeling training data and labeling test data, and hence ill-conditioned statistical estimates result in poorer classification performance. This dissertation tests the efficacy of the proposed algorithms to classify primarily remotely sensed hyperspectral data and secondarily diagnostic digital mammograms, since these applications naturally result in very high dimensional feature spaces and often do not have sufficiently large training datasets to support the dimensionality of the feature space. Conventional approaches, such as Stepwise LDA (S-LDA), are sub-optimal in that they utilize a small subset of the rich spectral information provided by hyperspectral data for classification. In contrast, the approach proposed in this dissertation utilizes the entire high dimensional feature space for classification by identifying a suitable partition of this space, employing a bank of classifiers to perform “local” classification over this partition, and then merging these local decisions using an appropriate decision fusion mechanism. Adaptive classifier weight assignment and nonlinear pre-processing (in kernel-induced spaces) are also proposed within this framework to improve its robustness over a wide range of fidelity conditions.
Experimental results demonstrate that the proposed framework yields significant improvements in classification accuracy (as high as a 12% increase) over conventional approaches.
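    The partition-and-fuse idea can be rendered as a toy: split the spectral bands into contiguous subspaces, train one simple "local" classifier per subspace, and merge their soft votes. The nearest-class-mean local learner and the inverse-distance vote below are illustrative stand-ins; the dissertation's adaptive weight assignment and kernel pre-processing are not shown.

```python
import numpy as np

def partition_bands(n_features, n_groups):
    """Split the spectral feature indices into contiguous subspaces."""
    return np.array_split(np.arange(n_features), n_groups)

class FusionEnsemble:
    """Bank of local classifiers, one per feature subspace, fused by
    confidence-weighted voting. Local learner: nearest class mean."""
    def fit(self, X, y, n_groups=4):
        self.groups = partition_bands(X.shape[1], n_groups)
        self.classes_ = np.unique(y)
        # per-subspace class means, each of shape (n_classes, len(group))
        self.means = [np.stack([X[y == c][:, g].mean(axis=0)
                                for c in self.classes_])
                      for g in self.groups]
        return self
    def predict(self, X):
        votes = np.zeros((len(X), len(self.classes_)))
        for g, m in zip(self.groups, self.means):
            d = np.linalg.norm(X[:, None, g] - m[None], axis=-1)
            votes += 1.0 / (d + 1e-9)   # closer class mean -> stronger vote
        return self.classes_[votes.argmax(axis=1)]
```

    Each local classifier sees only a small subspace, sidestepping the Hughes phenomenon, while the fusion step recovers the information spread across all bands.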
