7 research outputs found

    Object Detection in High Resolution Aerial Images and Hyperspectral Remote Sensing Images

    With rapid developments in satellite and sensor technologies, there has been a dramatic increase in the availability of remotely sensed images. However, the exploration of these images still involves a tremendous amount of human intervention, which is tedious, time-consuming, and inefficient. To help imaging experts gain a complete understanding of the images and locate the objects of interest more accurately and efficiently, there is an urgent need for automatic detection algorithms. In this work, we delve into object detection problems in remote sensing applications, exploring detection algorithms for both hyperspectral images (HSIs) and high resolution aerial images.

    In the first part, we focus on the subpixel target detection problem in HSIs with low spatial resolution, where the objects of interest are smaller than a single pixel. To this end, we explore detection frameworks that integrate image segmentation techniques into the design of matched filters (MFs). In particular, we propose a novel image segmentation algorithm that identifies spatial-spectral coherent image regions, from which the background statistics are estimated for deriving the MFs. Extensive experimental studies demonstrate the advantages of the proposed subpixel target detection framework and its superiority over state-of-the-art methods.

    The second part of the thesis explores the object based image analysis (OBIA) framework for geospatial object detection in high resolution aerial images. Specifically, we generate a tree representation of the aerial images from the output of hierarchical image segmentation algorithms and reformulate the object detection problem as a tree matching task. We then propose two tree-matching algorithms for the object detection framework and demonstrate their efficiency and effectiveness.

    In the third part, we study object detection in high resolution aerial images from a machine learning perspective, investigating both a traditional machine learning based framework and an end-to-end convolutional neural network (CNN) based approach for various object detection tasks. In the traditional detection framework, we apply a Gaussian process classifier (GPC) to train an object detector and demonstrate the advantages of this probabilistic classification algorithm. In the CNN based approach, we propose a novel scale transfer module that generates enhanced feature maps for object detection. Our results show the efficiency and competitiveness of the proposed algorithms when compared to state-of-the-art counterparts.
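As background for the first part, a matched filter scores each pixel against background statistics, here assumed to be estimated from one spatial-spectral coherent region. Below is a minimal sketch; the function and variable names are illustrative, not the thesis implementation.

```python
import numpy as np

def matched_filter_scores(pixels, target, bg_mean, bg_cov):
    """Score each pixel with a matched filter built from background statistics.

    pixels : (N, B) array of N pixel spectra with B bands
    target : (B,) target signature
    bg_mean, bg_cov : background mean (B,) and covariance (B, B),
        e.g. estimated from one spatial-spectral coherent region
    """
    cov_inv = np.linalg.pinv(bg_cov)        # pseudo-inverse for numerical stability
    d = target - bg_mean                    # target direction relative to background
    w = cov_inv @ d / (d @ cov_inv @ d)     # normalized matched-filter weights
    return (pixels - bg_mean) @ w           # one detection score per pixel
```

Thresholding the returned scores flags candidate subpixel targets; the quality of the background mean and covariance estimates, and hence of the segmentation, drives detection performance.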

    One-step Generalized Likelihood Ratio Test for Subpixel Target Detection in Hyperspectral Imaging

    One of the main objectives of hyperspectral image processing is to detect a given target against an unknown background. The standard data for such detection is a reflectance map, where the spectral signatures of each pixel's components, known as endmembers, are associated with their abundances in the pixel. Due to the low spatial resolution of most hyperspectral sensors, such a target occupies only a fraction of a pixel. A widely used model for subpixel targets is the replacement model. Among the vast number of possible detectors, algorithms matched to the replacement model are quite rare; one of the few examples is the Finite Target Matched Filter, an adjustment of the well-known Matched Filter. In this paper, we derive the exact Generalized Likelihood Ratio Test for this model. The new detector can be used with either a local or a global covariance estimation window, and it is shown to outperform the standard target detectors on real data, especially for small covariance estimation windows.
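For readers unfamiliar with the replacement model: under the target hypothesis, a fraction a of the pixel is replaced by the target, x = a*t + (1-a)*b, with background b ~ N(mu, Sigma). The paper derives the GLRT for this model in closed form; the sketch below instead maximizes the likelihood ratio over a by brute force, purely to illustrate the test being derived.

```python
import numpy as np
from scipy.stats import multivariate_normal

def replacement_model_glrt(x, target, bg_mean, bg_cov,
                           alphas=np.linspace(0.0, 0.99, 100)):
    """Numerical GLRT for the replacement model x = a*t + (1-a)*b.

    Under H0 the pixel is pure background: x ~ N(mu, Sigma).
    Under H1 a fraction a is replaced by the target:
    x ~ N(a*t + (1-a)*mu, (1-a)^2 * Sigma).
    The closed-form test is derived in the paper; here the fill
    fraction a is maximized over a grid for illustration only.
    """
    ll0 = multivariate_normal.logpdf(x, bg_mean, bg_cov)
    ll1 = max(
        multivariate_normal.logpdf(x, a * target + (1 - a) * bg_mean,
                                   (1 - a) ** 2 * bg_cov)
        for a in alphas  # a < 1 keeps the covariance nonsingular
    )
    return ll1 - ll0  # large values favor the target hypothesis
```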

    Multiple Instance Choquet Integral for multiresolution sensor fusion

    Imagine you are traveling to Columbia, MO for the first time. On your flight to Columbia, the woman sitting next to you recommended a bakery by a large park, with a big yellow umbrella outside. After you land, you need directions from the airport to the hotel. If you are driving a rental car, you will need to park it in a parking lot or a parking structure. After a good night's sleep in the hotel, you may decide to go for a morning run on the closest trail and stop by that recommended bakery under the big yellow umbrella. To complete all these tasks, it would be helpful to accurately distinguish the proper car route and walking trail, find a parking lot, and pinpoint the yellow umbrella. Satellite imagery and other geo-tagged data such as Open Street Maps provide effective information for this goal. Open Street Maps can provide road information and suggest bakeries within a five-mile radius. The yellow umbrella has a distinctive color and, perhaps, is made of a distinctive material that can be identified with a hyperspectral camera. Open Street Maps polygons are tagged with information such as "parking lot" and "sidewalk." All this information can and should be fused to help identify targets and offer better guidance on the tasks you are completing.

    Supervised learning methods generally require precise labels for each training data point. It is hard (and probably costly) to manually go through and label each pixel in the training imagery. GPS coordinates cannot always be fully trusted, as a GPS device may only be accurate to within several pixels. In many cases, it is practically infeasible to obtain accurate pixel-level training labels for all the imagery and maps available. Moreover, the training data may come in a variety of data types, such as imagery or 3D point clouds, and the imagery may have different resolutions, scales, and even coordinate systems. Previous fusion methods are generally limited to data mapped to the same pixel grid, with accurate labels. Furthermore, most fusion methods are restricted to two sources, even if certain methods, such as pan-sharpening, can deal with different geo-spatial types or data of different resolutions. It is therefore necessary and important to come up with a way to perform fusion on multiple sources of imagery and map data, possibly with different resolutions and of different geo-spatial types, while accounting for uncertain labels.

    I propose a Multiple Instance Choquet Integral (MICI) framework for multi-resolution multisensor fusion with uncertain training labels. The MICI framework addresses uncertain training labels and performs both classification and regression. Three classifier fusion models, i.e., the noisy-or, min-max, and generalized-mean models, are derived under MICI. The Multi-Resolution Multiple Instance Choquet Integral (MR-MICI) framework is built upon MICI and further addresses multiresolution in the fusion sources in addition to the uncertainty in training labels. For both MICI and MR-MICI, a monotonic normalized fuzzy measure is learned and used with the Choquet integral to perform two-class classifier fusion given bag-level training labels. An optimization scheme based on an evolutionary algorithm is used to optimize the proposed models. For regression problems where the desired prediction is real-valued, the primary instance assumption is adopted.

    The algorithms are applied to target detection, regression, and scene understanding applications. Experiments are conducted on the fusion of remote sensing data (hyperspectral and LiDAR) over the campus of the University of Southern Mississippi - Gulf Park. Cloth-panel sub-pixel and super-pixel targets were placed on campus with varying levels of occlusion, and the proposed algorithms successfully detect the targets in the scene. A semi-supervised approach is developed to automatically generate training labels based on data from Google Maps, Google Earth, and Open Street Map. Based on such uncertain training labels, the proposed algorithms can also identify materials on campus, such as roads, buildings, and sidewalks, for scene understanding. In addition, the algorithms are used for weed detection and real-valued crop yield prediction experiments based on remote sensing data, providing information for agricultural applications.
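The fusion operator underlying MICI is the discrete Choquet integral with respect to a fuzzy measure. A minimal sketch follows; in MICI the measure values are learned from bag-level labels by an evolutionary algorithm, whereas here they are simply supplied as inputs.

```python
import numpy as np

def choquet_integral(h, g):
    """Discrete Choquet integral of source outputs h w.r.t. fuzzy measure g.

    h : (n,) array, the confidence output of each of the n sources for one sample
    g : dict mapping frozenset of source indices to a measure value, with
        g[frozenset()] == 0, g[frozenset(all sources)] == 1, and monotonicity
        assumed (learned in MICI; given here).
    """
    order = np.argsort(h)              # sort source outputs in ascending order
    prev = 0.0
    total = 0.0
    for k, idx in enumerate(order):
        subset = frozenset(order[k:])  # sources whose output is >= h[idx]
        total += (h[idx] - prev) * g[subset]
        prev = h[idx]
    return total

# Example with two sources (e.g. hyperspectral and LiDAR confidences):
# g = {frozenset(): 0.0, frozenset({0}): 0.6,
#      frozenset({1}): 0.3, frozenset({0, 1}): 1.0}
# choquet_integral(np.array([0.8, 0.4]), g)  # 0.4*1.0 + 0.4*0.6 = 0.64
```

Depending on the learned measure, the integral can reproduce a minimum, a maximum, a weighted mean, or nonlinear combinations in between, which is what makes it a flexible fusion operator.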

    Schroedinger Eigenmaps for Manifold Alignment of Multimodal Hyperspectral Images

    Multimodal remote sensing is an emerging field, as it allows many views of the same region of interest. Domain adaptation attempts to fuse these multimodal remotely sensed images by using the concept of transfer learning to understand data from different sources and learn a fused outcome. Semisupervised Manifold Alignment (SSMA) maps multiple hyperspectral images (HSIs) from high dimensional source spaces to a low dimensional latent space where similar elements reside close together. SSMA preserves the original geometric structure of the respective HSIs while pulling similar data points together and pushing dissimilar data points apart. The SSMA algorithm comprises a geometric component, a similarity component, and a dissimilarity component. The geometric component has roots in the original Laplacian Eigenmaps (LE) dimension reduction algorithm, and the projection functions have roots in the original Locality Preserving Projections (LPP) dimensionality reduction framework. The similarity and dissimilarity components form the semisupervised part, which allows expert-labeled information to improve the image fusion process. Spatial-Spectral Schroedinger Eigenmaps (SSSE) was designed as a semisupervised enhancement to the LE algorithm that augments the Laplacian matrix with a user-defined potential function; however, this enhancement has yet to be explored in the LPP framework. The first part of this thesis proposes using the spatial-spectral potential within the LPP algorithm, creating a new algorithm we call Schroedinger Eigenmap Projections (SEP). Through experiments on publicly available data with expert-labeled ground truth, we compare the performance of the SEP algorithm with that of the LPP algorithm. The second part of this thesis proposes incorporating the spatial-spectral potential from SSSE into the SSMA framework. Using two multi-angled HSIs, we explore the impact of incorporating this potential into SSMA.
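The core idea of Schroedinger Eigenmaps can be summarized as Laplacian Eigenmaps with an added potential term. The dense sketch below is illustrative only: it assumes a precomputed affinity matrix and a diagonal potential, whereas SSSE also defines non-diagonal barrier potentials from spatial-spectral relations.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.sparse.csgraph import laplacian

def schroedinger_eigenmaps(W, V, n_components=2, alpha=1.0):
    """Embed points via Schroedinger Eigenmaps: Laplacian Eigenmaps + potential.

    W : (N, N) symmetric affinity matrix (e.g. a heat-kernel kNN graph)
    V : (N,) nonnegative potential encoding expert-label or spatial-spectral
        constraints (a diagonal potential here; SSSE barrier potentials
        would be matrices)
    alpha : weight of the potential term
    """
    L = laplacian(W, normed=False)     # graph Laplacian L = D - W
    D = np.diag(W.sum(axis=1))         # degree matrix; assumes a connected graph
    S = L + alpha * np.diag(V)         # Schroedinger operator
    # Generalized eigenproblem S f = lambda D f; eigenvalues come back sorted
    # ascending, and we skip the first (near-constant) eigenvector.
    vals, vecs = eigh(S, D)
    return vecs[:, 1:n_components + 1]
```

With alpha = 0 this reduces to ordinary Laplacian Eigenmaps; increasing alpha pulls the embedding toward the constraints encoded in the potential.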

    Deep Learning Methods for Remote Sensing

    Remote sensing is a field where important physical characteristics of an area are extracted from emitted or reflected radiation, generally captured by satellite cameras, sensors onboard aerial vehicles, etc. The captured data help researchers develop solutions for sensing and detecting various characteristics such as forest fires, flooding, changes in urban areas, crop diseases, and soil moisture. The recent impressive progress in artificial intelligence (AI) and deep learning has sparked innovations in technologies, algorithms, and approaches, and has led to results that were unachievable until recently in multiple areas, among them remote sensing. This book consists of sixteen peer-reviewed papers covering new advances in the use of AI for remote sensing.

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. The volume also features benchmarks, comparative evaluations, and reviews.

    Hyperspectral target detection using superpixels and signature-based methods

    Spectral signature based methods, which form the mainstream in hyperspectral target detection, fall mainly into three categories: methods using background modeling, subspace projection based methods, and hybrid methods merging linear unmixing with background estimation. A common characteristic of all these methods is that they classify each pixel of the hyperspectral image as target or background while ignoring the spatial relations between neighboring pixels. Integrating contextual information defined over neighborhood relations can, however, suppress noise in individual pixels and yield better detection. The methodology proposed in this paper adapts the use of superpixels, defined over neighborhood relations, to the three classes of target detection algorithms mentioned above; in particular, the ACE, DTDCA, and HUD algorithms are selected as representatives of the three classes. The experiments reveal that using superpixels for target detection improves detection performance compared to the baseline methods that use only individual pixels.
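As one concrete illustration of the idea, per-pixel ACE scores can be smoothed within superpixels. This is a minimal sketch, not necessarily the paper's exact integration scheme; the superpixel labels (e.g. from a SLIC segmentation) and the background statistics are assumed given.

```python
import numpy as np

def superpixel_ace(pixels, labels, target, bg_mean, bg_cov):
    """Per-pixel ACE scores averaged over superpixels.

    pixels : (N, B) spectra; labels : (N,) superpixel id per pixel
    target : (B,) signature; bg_mean, bg_cov : background statistics
    Averaging within superpixels is one simple way to use neighborhood
    context; the paper's exact scheme may differ.
    """
    cov_inv = np.linalg.pinv(bg_cov)
    s = target - bg_mean
    xc = pixels - bg_mean
    num = (xc @ cov_inv @ s) ** 2                       # squared MF response
    den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', xc, cov_inv, xc)
    ace = num / np.maximum(den, 1e-12)                  # per-pixel ACE in [0, 1]
    out = np.empty_like(ace)
    for sp in np.unique(labels):                        # superpixel-level score
        mask = labels == sp
        out[mask] = ace[mask].mean()
    return out
```

Replacing each pixel's score with its superpixel mean suppresses isolated false alarms, which is the contextual effect the paper exploits across all three detector classes.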