
    Non-Parametric Spatial Spectral Band Selection methods

    This project is about the development of band selection (BS) techniques for better target detection and classification in remote sensing and hyperspectral imaging (HSI). Conventionally, this is achieved by using only spectral features to guide the band compression. This project, however, develops a BS method which uses both spatial and spectral features, allowing a handful of crucial spectral bands to be selected to enhance target detection and classification performance. The thesis first outlines the fundamental concepts and background of remote sensing and HSI, followed by the theories of different atmospheric correction algorithms, used to assess the reflectance conversion for band selection, and of BS techniques, with a detailed explanation of the Hughes principle, which postulates the fundamental drawback of high-dimensional data in HSI. The thesis then reviews the performance of several advanced BS techniques and points out their deficiencies. Most existing BS work in the field has exhibited maximal classification accuracy when more spectral bands are utilized for classification, which apparently disagrees with the theoretical model of the Hughes phenomenon. The thesis then presents a spatial spectral mutual information (SSMI) BS scheme which uses spatial feature extraction as a pre-processing step, followed by clustering of the mutual information (MI) of spectral bands to enhance BS efficiency. With this BS scheme, a sharp 'bell'-shaped accuracy-dimensionality characteristic has been observed, peaking at about 20 bands. The performance of the proposed SSMI BS scheme has been validated on 6 HSI datasets, and its classification accuracy is shown to be ~10% better than that of 7 state-of-the-art BS algorithms. These results confirm that a highly efficient BS scheme is essential for observing, and experimentally validating, the Hughes phenomenon in band selection for the first time.
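
    The abstract does not spell out the SSMI algorithm, but the core idea it describes, clustering spectral bands by their mutual information and keeping a small representative subset of roughly 20 bands, can be sketched as follows. This is a minimal illustration assuming a NumPy datacube and generic MI-based clustering; the function name select_bands and all parameters are placeholders, and the thesis's spatial feature-extraction pre-processing step is omitted.

        import numpy as np
        from scipy.cluster.hierarchy import fcluster, linkage
        from scipy.spatial.distance import squareform
        from sklearn.metrics import mutual_info_score

        def select_bands(cube, n_bands=20, n_bins=64, max_pixels=20000, seed=0):
            """Pick ~n_bands representative bands by clustering pairwise band MI.

            cube: (rows, cols, bands) hyperspectral array.
            """
            flat = cube.reshape(-1, cube.shape[-1])
            b = flat.shape[1]

            # Subsample pixels so the O(bands^2) MI computation stays tractable.
            if flat.shape[0] > max_pixels:
                idx = np.random.default_rng(seed).choice(flat.shape[0], max_pixels, replace=False)
                flat = flat[idx]

            # Quantise each band so mutual_info_score can treat pixel values as discrete labels.
            quantised = np.empty_like(flat, dtype=np.int32)
            for i in range(b):
                edges = np.histogram_bin_edges(flat[:, i], bins=n_bins)
                quantised[:, i] = np.digitize(flat[:, i], edges[1:-1])

            # Symmetric band-by-band MI matrix, converted to a distance for clustering.
            mi = np.zeros((b, b))
            for i in range(b):
                for j in range(i, b):
                    mi[i, j] = mi[j, i] = mutual_info_score(quantised[:, i], quantised[:, j])
            dist = mi.max() - mi
            np.fill_diagonal(dist, 0.0)

            # Average-linkage clustering into n_bands groups; keep the band with the
            # highest summed MI to the rest of its group as that group's representative.
            labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                              t=n_bands, criterion="maxclust")
            selected = []
            for c in np.unique(labels):
                members = np.where(labels == c)[0]
                scores = mi[np.ix_(members, members)].sum(axis=1)
                selected.append(int(members[scores.argmax()]))
            return sorted(selected)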

    Imaging White Blood Cells using a Snapshot Hyper-Spectral Imaging System

    Automated white blood cell (WBC) counting systems process an extracted whole blood sample and provide a cell count, a step that is not ideal for onsite screening of individuals in triage or at a security gate. Snapshot hyperspectral imaging systems are capable of capturing several spectral bands simultaneously, offering co-registered images of a target. With appropriate optics, these systems are potentially able to image blood cells in vivo as they flow through a vessel, eliminating the need for a blood draw and sample staining. Our group has evaluated the capability of a commercial snapshot hyperspectral imaging system, specifically the Arrow system from Rebellion Photonics, in differentiating between white and red blood cells on unstained and sealed blood smear slides. We evaluated the imaging capabilities of this hyperspectral camera as a platform on which to build an automated blood cell counting system. Hyperspectral data consisting of 25 bands of 443x313 pixels with ~3 nm spacing were captured over the range of 419 to 494 nm. Open-source hyperspectral datacube analysis tools, used primarily in Geographic Information Systems (GIS) applications, indicate that white blood cells' features are most prominent in the 428-442 nm bands for blood samples viewed under 20x and 50x magnification over a varying range of illumination intensities. The system has been shown to successfully segment blood cells based on their spectral-spatial information. These images could potentially be used in subsequent automated white blood cell segmentation and counting algorithms for performing in vivo white blood cell counting.
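
    As a rough illustration of how the reported 428-442 nm window might be used, the sketch below averages the bands in that range and applies a simple threshold-and-label step to isolate candidate cells. This is not the group's actual pipeline; the datacube layout, the threshold rule, and the function name segment_wbc are assumptions for illustration only.

        import numpy as np
        from scipy import ndimage

        def segment_wbc(cube, wavelengths_nm, band_lo=428.0, band_hi=442.0):
            """Rough cell segmentation using the 428-442 nm window reported above.

            cube: (rows, cols, bands) snapshot hyperspectral frame.
            wavelengths_nm: centre wavelength of each band (same length as the band axis).
            """
            wavelengths_nm = np.asarray(wavelengths_nm, dtype=float)
            in_window = (wavelengths_nm >= band_lo) & (wavelengths_nm <= band_hi)
            if not in_window.any():
                raise ValueError("no bands fall inside the requested window")

            # Average the bands where WBC features were reported to be most prominent.
            window_img = cube[:, :, in_window].mean(axis=2)

            # Crude global threshold: flag pixels noticeably darker than the background.
            # (An Otsu-style threshold could be substituted here.)
            mask = window_img < window_img.mean() - window_img.std()

            # Clean up speckle and label connected components as candidate cells.
            mask = ndimage.binary_opening(mask, structure=np.ones((3, 3), dtype=bool))
            labels, n_cells = ndimage.label(mask)
            return labels, n_cells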

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing DL models. Comment: 64 pages, 411 references. To appear in the Journal of Applied Remote Sensing.

    Spectral-spatial Feature Extraction for Hyperspectral Image Classification

    As an emerging technology, hyperspectral imaging provides huge opportunities in both remote sensing and computer vision. The advantage of hyperspectral imaging comes from the high resolution and wide range in the electromagnetic spectral domain, which reflects the intrinsic properties of object materials. By combining spatial and spectral information, it is possible to extract more comprehensive and discriminative representations for objects of interest than with traditional methods, thus facilitating basic pattern recognition tasks such as object detection, recognition, and classification. With advanced imaging technologies gradually becoming available to universities and industry, there is an increased demand to develop new methods which can fully explore the information embedded in hyperspectral images. In this thesis, three spectral-spatial feature extraction methods are developed for salient object detection, hyperspectral face recognition, and remote sensing image classification. Object detection is an important task for many applications based on hyperspectral imaging. While most traditional methods rely on the pixel-wise spectral response, many recent efforts have focused on extracting spectral-spatial features. In the first approach, we extend Itti's visual saliency model to the spectral domain and introduce a spectral-spatial distribution-based saliency model for object detection. This procedure enables the extraction of salient spectral features in the scale space, which are related to the material properties and spatial layout of objects. Traditional 2D face recognition has been studied for many years and has achieved great success. Nonetheless, there is a high demand to explore information in faces beyond the structures and textures of the spatial domain. Hyperspectral imaging meets this requirement by providing additional spectral information on objects, complementing the traditional spatial features extracted from 2D images. In the second approach, we propose a novel 3D high-order texture pattern descriptor for hyperspectral face recognition, which effectively exploits both spatial and spectral features in hyperspectral images. Based on the local derivative pattern, our method encodes hyperspectral faces with multi-directional derivatives and a binarization function in the spectral-spatial space. Compared to traditional face recognition methods, our method can describe distinctive micro-patterns which integrate the spatial and spectral information of faces. Mathematical morphology operations are limited to extracting spatial features from two-dimensional data and cannot cope with hyperspectral images due to the so-called ordering problem. In the third approach, we propose a novel multi-dimensional morphology descriptor, the tensor morphology profile (TMP), for hyperspectral image classification. TMP is a general framework to extract multi-dimensional structures in high-dimensional data. The n-order morphology profile is proposed to work with the n-order tensor, capturing its inner high-order structures. By treating a hyperspectral image as a tensor, it is possible to extend morphology to high-dimensional data so that powerful morphological tools can be used to analyze hyperspectral images with fused spectral-spatial information.
    Finally, we discuss the sampling strategy for the evaluation of spectral-spatial methods in remote sensing hyperspectral image classification. We find that the traditional pixel-based random sampling strategy for spectral processing leads to unfair or biased performance evaluation in the spectral-spatial processing context. When training and testing samples are randomly drawn from the same image, the dependence caused by overlap between them may be artificially enhanced by some spatial processing methods. It is hard to determine whether the improvement in classification accuracy is caused by incorporating spatial information into the classifier or by increasing the overlap between training and testing samples. To partially solve this problem, we propose a novel controlled random sampling strategy for spectral-spatial methods. It can significantly reduce the overlap between training and testing samples and provides a more objective and accurate evaluation.
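
    The controlled random sampling strategy itself is not detailed in the abstract, so the sketch below illustrates its goal with a simple stand-in: assigning whole spatial blocks of labelled pixels to either the training or the testing side, so that the two sets do not interleave pixel by pixel. The block-based partition, the function name, and the parameters are assumptions, not the thesis's exact procedure.

        import numpy as np

        def spatially_disjoint_split(label_map, block=32, train_frac=0.5, seed=0):
            """Assign whole spatial blocks of labelled pixels to train or test.

            Keeping each block entirely on one side prevents training and testing
            pixels from interleaving, so spatial filters fitted on the training set
            cannot "see" test pixels through neighbourhood overlap.

            label_map: (rows, cols) integer ground-truth map, 0 = unlabelled.
            Returns boolean masks (train_mask, test_mask) of the same shape.
            """
            rng = np.random.default_rng(seed)
            rows, cols = label_map.shape
            train_mask = np.zeros_like(label_map, dtype=bool)
            test_mask = np.zeros_like(label_map, dtype=bool)

            for r in range(0, rows, block):
                for c in range(0, cols, block):
                    tile = np.s_[r:r + block, c:c + block]
                    labelled = label_map[tile] > 0
                    if not labelled.any():
                        continue  # nothing labelled in this block
                    if rng.random() < train_frac:
                        train_mask[tile] = labelled
                    else:
                        test_mask[tile] = labelled
            return train_mask, test_mask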

    Object detection and classification in aerial hyperspectral imagery using a multivariate hit-or-miss transform

    High resolution aerial and satellite-borne hyperspectral imagery provides a wealth of information about an imaged scene, allowing many earth observation applications to be investigated. Such applications include geological exploration, soil characterisation, land usage and change monitoring, as well as military applications such as anomaly and target detection. While this sheer volume of data provides an invaluable resource, with it comes the curse of dimensionality and the necessity for smart processing techniques, as analysing this large quantity of data can be a lengthy and problematic task. To aid this analysis, dimensionality reduction techniques can be employed to simplify the task by reducing the volume of data and describing it (or most of it) in an alternate way. This work aims to apply this notion of dimensionality-reduction-based hyperspectral analysis to target detection using a multivariate Percentage Occupancy Hit-or-Miss Transform that detects objects based on their size, shape, and spectral properties. We also investigate the effects of noise and distortion, and how incorporating these factors in the design of the necessary structuring elements allows for a more accurate representation of the desired targets and therefore more accurate detection. We also compare our method with various other common target detection and anomaly detection techniques.
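
    The multivariate Percentage Occupancy Hit-or-Miss Transform is not specified in the abstract, so the following is only a toy sketch of the underlying idea: match pixels spectrally to a target signature, then apply a classical binary hit-or-miss transform whose foreground and background structuring elements encode the expected size and shape of the object. The spectral-angle matching step, the structuring-element sizes, and all names are assumptions; the paper's percentage-occupancy relaxation (tolerating partial fits) is not implemented here.

        import numpy as np
        from scipy import ndimage

        def spectral_angle(cube, target):
            """Spectral angle (radians) between every pixel and a target signature."""
            flat = cube.reshape(-1, cube.shape[-1]).astype(float)
            target = np.asarray(target, dtype=float)
            cos = flat @ target / (np.linalg.norm(flat, axis=1) * np.linalg.norm(target) + 1e-12)
            return np.arccos(np.clip(cos, -1.0, 1.0)).reshape(cube.shape[:2])

        def detect_targets(cube, target_signature, angle_thresh=0.1, hit_size=5, miss_size=9):
            """Toy detector: spectral matching followed by a binary hit-or-miss transform.

            The foreground SE requires a solid hit_size x hit_size blob of spectrally
            matching pixels; the background SE requires the surrounding ring (out to
            miss_size) to be non-matching, i.e. an isolated blob of the expected size.
            """
            match = spectral_angle(cube, target_signature) < angle_thresh

            hit_se = np.ones((hit_size, hit_size), dtype=bool)
            miss_se = np.ones((miss_size, miss_size), dtype=bool)
            lo = (miss_size - hit_size) // 2
            miss_se[lo:lo + hit_size, lo:lo + hit_size] = False  # hollow out the centre

            return ndimage.binary_hit_or_miss(match, structure1=hit_se, structure2=miss_se)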

    Application of Multi-Sensor Fusion Technology in Target Detection and Recognition

    The application of multi-sensor fusion technology has drawn a great deal of industrial and academic interest in recent years. Multi-sensor fusion methods are widely used in many applications, such as autonomous systems, remote sensing, video surveillance, and the military. These methods can capture the complementary properties of targets by considering multiple sensors, and they can achieve a detailed description of the environment and accurate detection of targets of interest based on the information from different sensors. This book collects novel developments in the field of multi-sensor, multi-source, and multi-process information fusion. The articles emphasize one or more of three facets: architectures, algorithms, and applications. The published papers deal with fundamental theoretical analyses as well as demonstrations of their application to real-world problems.

    A comprehensive review of 3D convolutional neural network-based classification techniques of diseased and defective crops using non-UAV-based hyperspectral images

    Hyperspectral imaging (HSI) is a non-destructive and contactless technology that provides valuable information about the structure and composition of an object. It can capture detailed information about the chemical and physical properties of agricultural crops. Due to its wide spectral range, compared with multispectral- or RGB-based imaging methods, HSI can be a more effective tool for monitoring crop health and productivity. With the advent of this imaging tool in agrotechnology, researchers can more accurately address issues related to the detection of diseased and defective crops in the agriculture industry. This makes it possible to implement the most suitable and accurate farming solutions, such as irrigation and fertilization, before crops enter a damaged and difficult-to-recover phase of growth in the field. While HSI provides valuable insights into the object under investigation, the limited number of HSI datasets for crop evaluation presently poses a bottleneck. Dealing with the curse of dimensionality presents another challenge due to the abundance of spectral and spatial information in each hyperspectral cube. State-of-the-art methods based on 1D- and 2D-CNNs struggle to efficiently extract spectral and spatial information. On the other hand, 3D-CNN-based models have shown significant promise in achieving better classification and detection results by leveraging spectral and spatial features simultaneously. Despite the apparent benefits of 3D-CNN-based models, their usage for classification purposes in this area of research has remained limited. This paper seeks to address this gap by reviewing 3D-CNN-based architectures and the typical deep learning pipeline, including preprocessing and visualization of results, for the classification of hyperspectral images of diseased and defective crops. Furthermore, we discuss open research areas and challenges when utilizing 3D-CNNs with HSI data.
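
    To make the discussion of 3D-CNNs concrete, here is a minimal PyTorch sketch of the kind of architecture such reviews cover: 3D convolutions slide jointly over the spectral and spatial axes of a hyperspectral patch, so spectral and spatial features are learned together. The layer sizes, patch dimensions, band count, and class count are placeholders, not a model from the reviewed literature.

        import torch
        import torch.nn as nn

        class Small3DCNN(nn.Module):
            """Minimal 3D-CNN for patch-wise hyperspectral classification.

            Input: (batch, 1, bands, height, width) patches cut from the datacube.
            The 3D kernels span both the spectral axis and the spatial window, so
            spectral and spatial features are learned jointly.
            """

            def __init__(self, n_classes):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
                    nn.BatchNorm3d(8), nn.ReLU(inplace=True),
                    nn.MaxPool3d(kernel_size=(2, 1, 1)),   # downsample the spectral axis only
                    nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
                    nn.BatchNorm3d(16), nn.ReLU(inplace=True),
                    nn.AdaptiveAvgPool3d(1),               # global spectral-spatial pooling
                )
                self.classifier = nn.Linear(16, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # Hypothetical sizes: 9x9 spatial patches, 200 bands, 16 crop condition classes.
        model = Small3DCNN(n_classes=16)
        logits = model(torch.randn(4, 1, 200, 9, 9))   # -> shape (4, 16)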