
    Exploring Hyperspectral Anomaly Detection with Human Vision: A Small Target Aware Detector

    Hyperspectral anomaly detection (HAD) aims to localize pixels whose spectral features differ from the background. HAD is essential when target features are unknown or camouflaged, such as in water quality monitoring, crop growth monitoring and camouflaged target detection, where prior information about targets is difficult to obtain. Existing HAD methods aim to objectively detect and distinguish background and anomalous spectra, a task that human perception accomplishes almost effortlessly. However, the underlying processes of human visual perception are thought to be quite complex. In this paper, we analyze hyperspectral image (HSI) features under human visual perception and, for the first time, transfer the HAD solution process to this more robust feature space. Specifically, we propose a small target aware detector (STAD), which introduces saliency maps to capture HSI features closer to human visual perception. STAD not only extracts more anomalous representations but also reduces the impact of low-confidence regions through a proposed small target filter (STF). Furthermore, considering that HAD algorithms may be deployed on edge devices, we propose a fully connected network to convolutional network knowledge distillation strategy, which learns the spectral and spatial features of the HSI while lightening the network. We train the network on the HAD100 training set and validate the proposed method on the HAD100 test set. Our method provides a new solution space for HAD that is closer to human visual perception with high confidence. Extensive experiments on real HSI data with multiple method comparisons demonstrate the excellent performance and unique potential of the proposed method. The code is available at https://github.com/majitao-xd/STAD-HAD
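
    As a rough illustration of the fully connected network to convolutional network distillation strategy mentioned above, the sketch below distills soft per-pixel scores from a fully connected spectral teacher into a lightweight convolutional student. The layer sizes, temperature and synthetic patch are assumptions, not the STAD architecture from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal distillation sketch: a per-pixel fully connected spectral teacher and a
# small convolutional student. Shapes, layer sizes and the temperature T are
# assumptions, not the STAD configuration.
BANDS, T = 100, 4.0

teacher = nn.Sequential(                    # per-pixel spectral teacher (FC network)
    nn.Linear(BANDS, 256), nn.ReLU(),
    nn.Linear(256, 2),                      # background / anomaly logits
)
student = nn.Sequential(                    # lightweight spectral-spatial student
    nn.Conv2d(BANDS, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 1),
)

def distill_loss(hsi_patch):
    """hsi_patch: (B, BANDS, H, W) hyperspectral patch."""
    b, c, h, w = hsi_patch.shape
    with torch.no_grad():                   # teacher scores every pixel spectrum
        t_logits = teacher(hsi_patch.permute(0, 2, 3, 1).reshape(-1, c))
    s_logits = student(hsi_patch).permute(0, 2, 3, 1).reshape(-1, 2)
    # soft-target KL divergence, the standard knowledge distillation objective
    return F.kl_div(F.log_softmax(s_logits / T, dim=1),
                    F.softmax(t_logits / T, dim=1),
                    reduction="batchmean") * T * T

loss = distill_loss(torch.randn(2, BANDS, 16, 16))
loss.backward()                             # gradients flow only into the student
```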

    Sparse representation based hyperspectral image compression and classification

    This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, termed the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit spectral correlations, and the latter in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demand of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches. For the proposed hyperspectral image classification method, we utilize the sparse coefficients to train support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to a number of similar or representative methods. The results show that our approach can outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for exploiting spectral correlations in hyperspectral images. In addition, we show that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
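
    The general sparse-representation pipeline the thesis builds on (learn a spectral dictionary, sparsely encode each pixel spectrum, then classify the coefficients) can be sketched as follows. The dictionary size, sparsity level and synthetic data are assumptions, not the SRSD/MSSD configurations.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.svm import SVC

# Generic sketch of the sparse-representation pipeline: learn a spectral
# dictionary, encode each pixel spectrum sparsely with OMP, then train an SVM on
# the sparse coefficients.  Dictionary size, sparsity level and the synthetic
# data are assumptions, not the thesis settings.
rng = np.random.default_rng(0)
X_train = rng.random((500, 64))          # 500 pixel spectra, 64 bands (synthetic)
y_train = rng.integers(0, 3, 500)        # 3 land-cover classes (synthetic)

dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   random_state=0).fit(X_train)
codes = sparse_encode(X_train, dico.components_,
                      algorithm="omp", n_nonzero_coefs=5)

clf = SVC(kernel="rbf").fit(codes, y_train)   # classify the sparse coefficients
print(clf.score(codes, y_train))
```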

    Detecting anomalies in remotely sensed hyperspectral signatures via wavelet transforms

    An automated subpixel target detection system has been designed and tested for use with remotely sensed hyperspectral images. A database of hyperspectral signatures was created to test the system using a variety of Gaussian-shaped targets. The signal-to-noise ratio of the targets varied from -95 dB to -50 dB. The system uses a wavelet-based method (the discrete wavelet transform) to extract an energy feature vector from each input pixel signature. The dimensionality of the feature vector is reduced to a one-dimensional feature scalar through linear discriminant analysis. Signature classification is performed with a nearest-mean criterion that assigns each input signature to one of two classes: no target present or target present. Classification accuracy ranged from nearly 60% with a target SNR of -95 dB and no a priori knowledge of the target, to 100% with a target SNR of -50 dB and a priori knowledge of the location of the target within the spectral bands of the signature.
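
    The pipeline described above (DWT energy features, LDA projection to a scalar, nearest-mean classification) can be sketched as follows; the wavelet family, decomposition level and synthetic signatures are assumptions rather than the settings used in the study.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

# Sketch of the described pipeline: DWT energy feature vector per pixel
# signature, LDA projection to a single scalar, nearest-mean classification.
def energy_features(signature, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signature, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])   # energy per subband

rng = np.random.default_rng(1)
signatures = rng.normal(size=(200, 128))      # synthetic pixel signatures
labels = rng.integers(0, 2, 200)              # 0 = no target, 1 = target present

X = np.vstack([energy_features(s) for s in signatures])
z = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, labels)
clf = NearestCentroid().fit(z, labels)        # nearest-mean criterion
print(clf.predict(z[:5]))
```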

    Hyperspectral Imaging for Landmine Detection

    This PhD thesis investigates the possibility of detecting landmines using hyperspectral imaging. With this technology, we acquire spectral data at hundreds of wavelengths for each pixel of the image, so each pixel yields a reflectance spectrum that serves as a fingerprint to identify the materials present and, in our project, to detect landmines. The proposed process works as follows: a preconfigured drone (hexarotor or octorotor) carries the hyperspectral camera and flies over the contaminated area to take images from a safe distance. Various image processing techniques are used to isolate landmines from their surroundings. Once the presence of a mine or explosives is suspected, an alarm signal is sent to the base station giving the type of mine, its location and a clear path the mine removal team could take to disarm it. This technology has advantages over currently used techniques:
    • It is safer, because it limits the need for humans in the search process and lets the demining team detect mines from a safe region.
    • It is faster: a larger area can be cleared in a single day compared with conventional demining techniques.
    • It can detect objects other than mines, such as oil or minerals, at the same time.
    The thesis first presents the worldwide landmine problem, with reference to statistics from UN organizations, and briefly describes the different types of landmines. Unfortunately, new landmines are well camouflaged and mainly made of plastic, which makes their detection with metal detectors harder. A summary of landmine detection techniques is then given, outlining the advantages and disadvantages of each. We also give an overview of different projects on landmine detection using hyperspectral imaging, the main results achieved in this field, and the future work needed to make this technology effective. Moreover, we worked on different target detection algorithms to achieve a high probability of detection with a low false alarm rate, testing statistical and linear unmixing based methods, and we introduced radial basis function neural networks to detect landmines at the subpixel level. A comparative study of the detection methods is presented in the thesis, along with a study of the effect of dimensionality reduction using principal component analysis prior to classification. This study shows the dependency between the two steps (feature extraction and target detection): the choice of target detection algorithm determines whether feature extraction in the previous phase is necessary. A field experiment was conducted to study how the spectral signature of a landmine changes with the environment in which it is planted. We acquired the spectral signatures of six types of landmines under different conditions: in the lab with a specific light source; in the field with mines covered by grass; and with mines buried in soil. The signatures of two types of landmines are used in the simulations and form the database needed for supervised detection of landmines. We also extracted spectral characteristics of landmines that help distinguish mines from the background.
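
    As a loose illustration of the PCA-plus-RBF-network detection approach mentioned in this abstract (not the thesis code), the sketch below reduces pixel spectra with PCA and scores them with a simple Gaussian RBF network; the centre count, kernel width and synthetic data are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

# Illustrative sketch: PCA for dimensionality reduction, then a simple radial
# basis function network scoring each pixel for a mine-like signature.
rng = np.random.default_rng(2)
X = rng.random((1000, 200))                    # 1000 pixel spectra, 200 bands (synthetic)
y = (rng.random(1000) > 0.9).astype(float)     # synthetic subpixel target labels

Xp = PCA(n_components=10).fit_transform(X)     # feature extraction step
centres = KMeans(n_clusters=20, n_init=10, random_state=0).fit(Xp).cluster_centers_
sigma = np.mean(cdist(centres, centres)) + 1e-9

def rbf_design(Z):
    # Gaussian activations of each sample with respect to the RBF centres
    return np.exp(-cdist(Z, centres) ** 2 / (2 * sigma ** 2))

H = rbf_design(Xp)
w, *_ = np.linalg.lstsq(H, y, rcond=None)      # linear output layer (least squares)
scores = rbf_design(Xp) @ w                    # detection score per pixel
print(scores[:5])
```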

    A Survey on Deep Learning based Time Series Analysis with Frequency Transformation

    Recently, frequency transformation (FT) has been increasingly incorporated into deep learning models to significantly enhance state-of-the-art accuracy and efficiency in time series analysis. The advantages of FT, such as high efficiency and a global view, have been rapidly explored and exploited in various time series tasks and applications, demonstrating the promising potential of FT as a new deep learning paradigm for time series analysis. Despite the growing attention and the proliferation of research in this emerging field, there is currently a lack of a systematic review and in-depth analysis of deep learning-based time series models with FT. It is also unclear why FT can enhance time series analysis and what its limitations in the field are. To address these gaps, we present a comprehensive review that systematically investigates and summarizes the recent research advancements in deep learning-based time series analysis with FT. Specifically, we explore the primary approaches used in current models that incorporate FT, the types of neural networks that leverage FT, and the representative FT-equipped models in deep time series analysis. We propose a novel taxonomy to categorize the existing methods in this field, providing a structured overview of the diverse approaches employed in incorporating FT into deep learning models for time series analysis. Finally, we highlight the advantages and limitations of FT for time series modeling and identify potential future research directions that can further contribute to the community of time series analysis
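
    A toy example of the FT-in-deep-learning pattern surveyed here: transform a series to the frequency domain, mix information there with a learnable layer, and return to the time domain. The block below is a generic sketch, not any specific model from the survey, and its sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

# Generic frequency-domain block: rFFT, learnable complex mixing, inverse rFFT.
class FrequencyBlock(nn.Module):
    def __init__(self, seq_len: int):
        super().__init__()
        n_freq = seq_len // 2 + 1                 # rFFT output length
        # complex-valued mixing implemented with separate real/imaginary weights
        self.w_real = nn.Parameter(torch.randn(n_freq, n_freq) * 0.02)
        self.w_imag = nn.Parameter(torch.randn(n_freq, n_freq) * 0.02)

    def forward(self, x):                         # x: (batch, seq_len)
        spec = torch.fft.rfft(x, dim=-1)          # global view in a single transform
        real = spec.real @ self.w_real - spec.imag @ self.w_imag
        imag = spec.real @ self.w_imag + spec.imag @ self.w_real
        mixed = torch.complex(real, imag)
        return torch.fft.irfft(mixed, n=x.shape[-1], dim=-1)

block = FrequencyBlock(seq_len=96)
out = block(torch.randn(8, 96))                   # output shape (8, 96)
```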

    Robust Normalized Softmax Loss for Deep Metric Learning-Based Characterization of Remote Sensing Images With Label Noise

    Most deep metric learning-based image characterization methods exploit supervised information to model the semantic relations among the remote sensing (RS) scenes. Nonetheless, the unprecedented availability of large-scale RS data makes the annotation of such images very challenging, requiring automated supportive processes. Whether the annotation is assisted by aggregation or crowd-sourcing, the RS large-variance problem, together with other important factors [e.g., geo-location/registration errors, land-cover changes, even low-quality Volunteered Geographic Information (VGI), etc.] often introduce the so-called label noise, i.e., semantic annotation errors. In this article, we first investigate the deep metric learning-based characterization of RS images with label noise and propose a novel loss formulation, named robust normalized softmax loss (RNSL), for robustly learning the metrics among RS scenes. Specifically, our RNSL improves the robustness of the normalized softmax loss (NSL), commonly utilized for deep metric learning, by replacing its logarithmic function with the negative Box–Cox transformation in order to down-weight the contributions from noisy images on the learning of the corresponding class prototypes. Moreover, by truncating the loss with a certain threshold, we also propose a truncated robust normalized softmax loss (t-RNSL) which can further enforce the learning of class prototypes based on the image features with high similarities between them, so that the intraclass features can be well grouped and interclass features can be well separated. Our experiments, conducted on two benchmark RS data sets, validate the effectiveness of the proposed approach with respect to different state-of-the-art methods in three different downstream applications (classification, clustering, and retrieval). The codes of this article will be publicly available from https://github.com/jiankang1991
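
    A minimal sketch of the loss idea described above, assuming a cosine-similarity normalized softmax over class prototypes: the logarithm is replaced by the negative Box-Cox transform (1 - p**lam) / lam, and an optional threshold truncates the contribution of low-probability (likely noisy) samples. The scale, lambda and threshold values, and the exact truncation form, are assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

# Sketch of a robust normalized softmax loss: normalized features and prototypes,
# negative Box-Cox transform in place of -log(p), optional truncation (t-RNSL-like).
def rnsl(embeddings, prototypes, labels, scale=16.0, lam=0.5, trunc=None):
    emb = F.normalize(embeddings, dim=1)          # unit-norm image features
    prot = F.normalize(prototypes, dim=1)         # unit-norm class prototypes
    logits = scale * emb @ prot.t()
    p = F.softmax(logits, dim=1)[torch.arange(len(labels)), labels]
    loss = (1.0 - p ** lam) / lam                 # negative Box-Cox instead of -log p
    if trunc is not None:                         # clamp low-similarity (noisy) samples
        loss = torch.where(p > trunc, loss,
                           torch.full_like(loss, (1.0 - trunc ** lam) / lam))
    return loss.mean()

emb = torch.randn(32, 128, requires_grad=True)
proto = torch.randn(10, 128, requires_grad=True)
y = torch.randint(0, 10, (32,))
rnsl(emb, proto, y, trunc=0.2).backward()
```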

    Sustainable Agriculture and Advances of Remote Sensing (Volume 2)

    Agriculture, as the main source of food and the most important economic activity globally, is being affected by the impacts of climate change. To maintain and increase the production of our global food system, reduce biodiversity loss and preserve our natural ecosystems, new practices and technologies are required. This book focuses on the latest advances in remote sensing technology and agricultural engineering leading to sustainable agriculture practices. Earth observation data and in situ and proxy remote sensing data are the main sources of information for monitoring and analyzing agricultural activities. Particular attention is given to earth observation satellites and the Internet of Things for data collection, to multispectral and hyperspectral data analysis using machine learning and deep learning, and to WebGIS and the Internet of Things for sharing and publishing the results, among other topics.

    Spectral image utility for target detection applications

    In a wide range of applications, images convey useful information about scenes. The “utility” of an image is defined with reference to the specific task that an observer seeks to accomplish, and differs from the “fidelity” of the image, which captures the ability of the image to represent the true nature of the scene. In remote sensing of the earth, various means of characterizing the utility of satellite and airborne imagery have evolved over the years. Recent advances in the imaging modality of spectral imaging have enabled synoptic views of the earth at many finely sampled wavelengths over a broad spectral band. These advances challenge the ability of traditional earth observation image utility metrics to describe the rich information content of spectral images. Traditional approaches to image utility that are based on overhead panchromatic image interpretability by a human observer are not applicable to spectral imagery, which requires automated processing. This research establishes the context for spectral image utility by reviewing traditional approaches and current methods for describing spectral image utility. It proposes a new approach to assessing and predicting spectral image utility for the specific application of target detection. We develop a novel approach to assessing the utility of any spectral image using the target-implant method. This method is not limited by the requirements of traditional target detection performance assessment, which needs ground truth and an adequate number of target pixels in the scene. The flexibility of this approach is demonstrated by assessing the utility of a wide range of real and simulated spectral imagery over a variety of target detection scenarios. The assessed image utility may be summarized to any desired level of specificity based on the image analysis requirements. We also present an approach to predicting spectral image utility that derives statistical parameters directly from an image and uses them to model target detection algorithm output. The image-derived predicted utility is directly comparable to the assessed utility, and the accuracy of prediction is shown to improve with statistical models that capture the non-Gaussian behavior of real spectral image target detection algorithm outputs. The sensitivity of the proposed spectral image utility metric to various image chain parameters is examined in detail, revealing characteristics, requirements, and limitations that provide insight into the relative importance of parameters in the image utility. The results of these investigations lead to a better understanding of spectral image information vis-à-vis target detection performance that will hopefully prove useful to the spectral imagery analysis community and represent a step towards quantifying the ability of a spectral image to satisfy information exploitation requirements.
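
    A minimal sketch of the target-implant idea: blend a known target spectrum into randomly chosen background pixels of a cube, run a simple detector, and check how many implanted pixels are recovered. The fill fraction, the matched-filter detector and the synthetic cube are assumptions and do not reproduce the thesis's full utility-assessment framework.

```python
import numpy as np

# Target-implant sketch: implant a known spectrum at a subpixel fill fraction,
# score every pixel with a matched filter, and count recovered implants.
rng = np.random.default_rng(3)
cube = rng.normal(5.0, 1.0, size=(100, 100, 60))     # synthetic HSI cube (rows, cols, bands)
target = rng.normal(8.0, 1.0, size=60)               # known target spectrum
f = 0.3                                              # subpixel fill fraction

pixels = cube.reshape(-1, 60)
idx = rng.choice(len(pixels), size=50, replace=False)
pixels[idx] = (1 - f) * pixels[idx] + f * target     # linear-mixing implant

mu = pixels.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
d = target - mu
scores = (pixels - mu) @ cov_inv @ d / np.sqrt(d @ cov_inv @ d)   # matched filter

detected = np.argsort(scores)[-50:]                  # top-scoring pixels
print("implanted pixels recovered:", len(np.intersect1d(detected, idx)))
```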

    Error characterization of spectral products using a factorial designed experiment

    The main objective of any imaging system is to collect information. Information is conveyed in remotely sensed imagery by the spatial and spectral distribution of the energy reflected or emitted from the earth, which is subsequently captured by an overhead imaging system. Post-processing algorithms, which rely on this spectral and spatial energy distribution, allow us to extract useful information from the collected data. Typical spectral processing algorithms include target detection, thematic mapping and spectral pixel unmixing, and their final products include detection maps, classification maps and endmember fraction maps. The spatial resolution, spectral sampling and signal-to-noise characteristics of a spectral imaging system share a strong relationship with one another based on the law of conservation of energy. If any one of these image collection parameters were changed, we would expect the accuracy of the information derived from the spectral processing algorithms to change as well. The goal of this thesis was to investigate the accuracy and effectiveness of spectral processing algorithms under different levels of spectral resolution, spatial resolution and noise. To fulfill this goal, a tool was developed that degrades hyperspectral images spatially and spectrally and adds spectrally correlated noise. The degraded images were then subjected to several spectral processing algorithms, and the information utility and error characteristics of the degraded spectral products were assessed using algorithm-specific metrics. By adopting a factorial designed experimental approach, the joint effects of spatial resolution, spectral sampling and signal-to-noise ratio on algorithm performance were also studied. Finally, a quantitative performance comparison of the tested spectral processing algorithms was made.
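
    The kind of degradation tool described above can be sketched as follows, assuming Gaussian spatial blur, band averaging for spectral sampling and an AR(1)-style band-correlated noise model; the factorial levels shown are illustrative, not those of the thesis experiment.

```python
import itertools
import numpy as np
from scipy.ndimage import gaussian_filter

# Degradation sketch: vary spatial resolution (blur), spectral sampling (band
# binning) and noise level in a full-factorial design over a synthetic cube.
def degrade(cube, spatial_sigma, spectral_bin, snr_db, rho=0.9, rng=None):
    rng = rng or np.random.default_rng()
    out = gaussian_filter(cube, sigma=(spatial_sigma, spatial_sigma, 0))       # spatial blur
    out = out[:, :, : out.shape[2] // spectral_bin * spectral_bin]
    out = out.reshape(out.shape[0], out.shape[1], -1, spectral_bin).mean(-1)   # band binning
    noise = rng.normal(size=out.shape)
    for b in range(1, out.shape[2]):                 # spectrally correlated (AR(1)) noise
        noise[:, :, b] = rho * noise[:, :, b - 1] + np.sqrt(1 - rho ** 2) * noise[:, :, b]
    sigma_n = out.std() / (10 ** (snr_db / 20))
    return out + sigma_n * noise

cube = np.random.default_rng(4).random((64, 64, 120))
levels = itertools.product([0.5, 1.0, 2.0], [1, 2, 4], [40, 30, 20])   # 3x3x3 factorial
for spatial, binning, snr in levels:
    degraded = degrade(cube, spatial, binning, snr)
```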