
    QUEST Hierarchy for Hyperspectral Face Recognition

    Face recognition is an attractive biometric due to the ease with which photographs of the human face can be acquired and processed. The non-intrusive nature of many surveillance systems permits face recognition applications to be used in a myriad of environments. Despite decades of impressive research in this area, face recognition still struggles with variations in illumination, pose, and expression, not to mention the larger challenge of willful circumvention. The integration of supporting contextual information in a fusion hierarchy known as QUalia Exploitation of Sensor Technology (QUEST) is a novel approach for hyperspectral face recognition that results in performance advantages and a robustness not seen in leading face recognition methodologies. This research demonstrates a method for the exploitation of hyperspectral imagery and the intelligent processing of contextual layers of spatial, spectral, and temporal information. This approach illustrates the benefit of integrating the spatial and spectral domains of imagery for the automatic extraction and integration of novel soft biometric features. The establishment of the QUEST methodology for face recognition results in an engineering advantage in both performance and efficiency compared to leading and classical face recognition techniques. An interactive environment for the testing and expansion of this recognition framework is also provided.

    Using Lidar to geometrically-constrain signature spaces for physics-based target detection

    A fundamental task when performing target detection on spectral imagery is ensuring that a target signature is in the same metric domain as the measured spectral data set. Remotely sensed data are typically collected in digital counts and calibrated to radiance. That is, calibrated data have units of spectral radiance, while target signatures in the visible regime are commonly characterized in units of reflectance. A necessary precursor to running a target detection algorithm is converting the measured scene data and target signature to the same domain. Atmospheric inversion or compensation is a well-known method for transforming measured scene radiance values into the reflectance domain. While this method may be mathematically trivial, it is computationally attractive and is most effective when illumination conditions are constant across a scene. However, when illumination conditions are not constant for a given scene, significant error may be introduced when applying the same linear inversion globally. In contrast to the inversion methodology, physics-based forward modeling approaches aim to predict the possible ways that a target might appear in a scene using atmospheric and radiometric models. To fully encompass possible target variability due to changing illumination levels, a target vector space is created. In addition to accounting for varying illumination, physics-based model approaches have a distinct advantage in that they can also incorporate target variability due to a variety of other sources, to include adjacency, target orientation, and mixed pixels. Increasing the variability of the target vector space may be beneficial in a global sense in that it may allow for the detection of difficult targets, such as shadowed or partially concealed targets. However, it should also be noted that expansion of the target space may introduce unnecessary confusion for a given pixel. Furthermore, traditional physics-based approaches make certain assumptions which may be prudent only when passive, spectral data for a scene are available. Common examples include the assumption of a flat ground plane and pure target pixels. Many of these assumptions may be attributed to the lack of three-dimensional (3D) spatial information for the scene. In the event that 3D spatial information were available, certain assumptions could be relaxed, allowing accurate geometric information to be fed to the physics-based model on a pixel-by-pixel basis. Doing so may effectively constrain the physics-based model, resulting in a pixel-specific target space with optimized variability and minimized confusion. This body of work explores using spatial information from a topographic Light Detection and Ranging (Lidar) system as a means to enhance the fidelity of physics-based models for spectral target detection. The incorporation of subpixel spatial information, relative to a hyperspectral image (HSI) pixel, provides valuable insight about plausible geometric configurations of a target, background, and illumination sources within a scene. Methods for estimating local geometry on a per-pixel basis are introduced; this spatial information is then fed into a physics-based model for the forward prediction of a target in radiance space. The target detection performance based on this spatially enhanced spectral target space is assessed relative to current state-of-the-art spectral algorithms.
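
    A toy numeric illustration of the contrast the abstract draws: a global linear inversion assumes one gain/offset pair per band, while a forward model can generate a whole target vector space over varying illumination. Everything below (the coefficients, the forward model, the direct_frac parameter) is a hypothetical stand-in, not the dissertation's actual radiometric model.

```python
import numpy as np

bands = 50
rng = np.random.default_rng(0)

# Hypothetical target reflectance signature (unitless, 0..1).
target_refl = 0.3 + 0.1 * np.sin(np.linspace(0, 3, bands))

# A global linear inversion assumes one gain/offset per band:
# L = gain * r + offset  =>  r_hat = (L - offset) / gain
gain = rng.uniform(0.8, 1.2, bands)     # stand-in for solar term * transmission
offset = rng.uniform(0.0, 0.05, bands)  # stand-in for path radiance

def forward(refl, direct_frac):
    """Toy forward model: scale the direct-illumination term per pixel."""
    return direct_frac * gain * refl + offset

# Forward-model a target space over a sweep of illumination levels (full sun
# down to heavy shadow) instead of assuming one global condition. With
# Lidar-derived geometry, direct_frac could instead be set per pixel.
direct_fractions = np.linspace(0.2, 1.0, 9)
target_space = np.stack([forward(target_refl, f) for f in direct_fractions])
print(target_space.shape)  # (9, 50): one radiance vector per illumination level
```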

    Techniques for automatic large scale change analysis of temporal multispectral imagery

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel-level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm, based on a set of metrics, that performs a large-area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large-area images and provide useful information to an analyst about small regions that have undergone specific types of change, while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large-area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys, among others. By utilizing a feature-based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large-area, high resolution image sequences. The change detection and analysis algorithm developed here could be adapted to many potential image change scenarios to perform automatic large-scale analysis of change.
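
    As a baseline for the "simple pixel-level map of the magnitude of change" that the abstract moves beyond, here is a minimal change-vector-analysis sketch on synthetic data; the array shapes, threshold, and change region are all illustrative, not taken from the dissertation.

```python
import numpy as np

rng = np.random.default_rng(1)
rows, cols, bands = 128, 128, 4
t0 = rng.uniform(0, 1, (rows, cols, bands))   # multispectral image at time 0
t1 = t0.copy()
t1[40:60, 40:60] += 0.4                       # synthetic change region

delta = t1 - t0
magnitude = np.linalg.norm(delta, axis=2)           # "how much" changed
direction = delta / (magnitude[..., None] + 1e-9)   # "what kind" of change

# A plain magnitude threshold yields the simple change map; the per-pixel
# direction vectors are what a type-of-change analysis would go on to
# cluster or classify.
changed = magnitude > magnitude.mean() + 3 * magnitude.std()
print(changed.sum(), "pixels flagged as changed")
```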

    A robust dynamic classifier selection approach for hyperspectral images with imprecise label information

    Supervised hyperspectral image (HSI) classification relies on accurate label information. However, it is not always possible to collect perfectly accurate labels for training samples. This motivates the development of classifiers that are sufficiently robust to reasonable amounts of error in data labels. Despite the growing importance of this aspect, it has not yet been sufficiently studied in the literature. In this paper, we analyze the effect of erroneous sample labels on probability distributions of the principal components of HSIs, and in this way provide a statistical analysis of the resulting uncertainty in classifiers. Building on the theory of imprecise probabilities, we develop a novel robust dynamic classifier selection (R-DCS) model for data classification with erroneous labels. In particular, spectral and spatial features are extracted from HSIs to construct two individual classifiers for the dynamic selection. The proposed R-DCS model is based on the robustness of the classifiers' predictions: the extent to which a classifier can be altered without changing its prediction. We provide three possible selection strategies for the proposed model with different computational complexities and apply them to three benchmark data sets. Experimental results demonstrate that the proposed model outperforms the individual classifiers it selects from and is more robust to label errors than widely adopted approaches.
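
    A minimal sketch of the dynamic-selection idea: per sample, keep the prediction of whichever classifier is more robust. The robustness score below is a crude confidence margin standing in for the paper's imprecise-probability measure, and both classifier outputs are simulated rather than taken from real spectral/spatial branches.

```python
import numpy as np

def margin(proba):
    """Top-1 minus top-2 class probability: a crude per-sample robustness."""
    part = np.sort(proba, axis=1)
    return part[:, -1] - part[:, -2]

def dcs_predict(proba_spectral, proba_spatial):
    """Per sample, trust whichever classifier is harder to 'flip'."""
    use_spectral = margin(proba_spectral) >= margin(proba_spatial)
    labels_a = proba_spectral.argmax(axis=1)
    labels_b = proba_spatial.argmax(axis=1)
    return np.where(use_spectral, labels_a, labels_b)

rng = np.random.default_rng(2)
p1 = rng.dirichlet(np.ones(5), size=10)  # hypothetical spectral-branch output
p2 = rng.dirichlet(np.ones(5), size=10)  # hypothetical spatial-branch output
print(dcs_predict(p1, p2))
```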

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, and natural language processing. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools, and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial, and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL.
    Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    DNN-based PolSAR image classification on noisy labels

    Deep neural networks (DNNs) appear to be a solution for the classification of polarimetric synthetic aperture radar (PolSAR) data in that they outperform classical supervised classifiers given sufficient training samples. The design of such a classifier is nevertheless challenging, because DNNs can easily overfit due to limited remote sensing training samples and unavoidable noisy labels. In this article, a softmax loss strategy with antinoise capability, namely the probability-aware sample grading strategy (PASGS), is developed to overcome this limitation. Combined with the proposed softmax loss strategy, two classical DNN-based classifiers are implemented to perform PolSAR image classification to demonstrate its effectiveness. In this framework, the difference distribution implicitly reflects the probability that a training sample is clean, and clean labels can be distinguished from noisy labels by means of probability statistics. This probability is then employed to reweight the corresponding loss of each training sample during the training process, locating the noisy data and preventing it from participating in the loss calculation of the neural network. As the number of training iterations increases, the probability statistics of the noisy labels are continually adjusted without supervision, and the clean labels are eventually identified to train the neural network. Experiments on three PolSAR datasets with two DNN-based methods also demonstrate that the proposed method is superior to state-of-the-art methods. This work was supported in part by the National Natural Science Foundation of China under Grant 61871413 and Grant 61801015, in part by the Fundamental Research Funds for the Central Universities under Grant XK2020-03, in part by the China Scholarship Council under Grant 2020006880033, and in part by Grant PID2020-114623RB-C32 funded by MCIN/AEI/10.13039/501100011033.
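
    A compact sketch of the loss-reweighting mechanism the abstract describes: estimate, per training sample, a probability that its label is clean, and scale that sample's loss accordingly. The clean-probability rule below is a generic small-loss heuristic standing in for the exact PASGS statistic, which the abstract does not spell out.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def weighted_ce(logits, labels):
    """Cross-entropy where each sample is reweighted by an estimated
    probability that its label is clean (stand-in for PASGS)."""
    p = softmax(logits)
    n = len(labels)
    losses = -np.log(p[np.arange(n), labels] + 1e-12)
    # Samples with unusually large loss are likely mislabeled; down-weight
    # them so they barely contribute to the training signal.
    z = (losses - losses.mean()) / (losses.std() + 1e-12)
    clean_prob = 1.0 / (1.0 + np.exp(z))
    return (clean_prob * losses).sum() / clean_prob.sum()

rng = np.random.default_rng(3)
logits = rng.normal(size=(8, 4))
labels = rng.integers(0, 4, size=8)  # some of these may be "noisy"
print(weighted_ce(logits, labels))
```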

    Illumination Invariant Outdoor Perception

    This thesis proposes the use of a multi-modal sensor approach to achieve illumination invariance in images taken in outdoor environments. The approach is automatic in that it does not require user input for initialisation, and it does not rely on atmospheric radiative transfer models. While it is common to use pixel colour and intensity as features in high level vision algorithms, their performance is severely limited by the uncontrolled lighting and complex geometric structure of outdoor scenes. The appearance of a material depends on the incident illumination, which can vary due to spatial and temporal factors. This variability causes identical materials to appear different depending on their location. Illumination invariant representations of the scene can potentially improve the performance of high level vision algorithms, as they allow discrimination between pixels to occur based on the underlying material characteristics. The proposed approach to obtaining illumination invariance utilises fused image and geometric data. An approximation of the outdoor illumination is used to derive per-pixel scaling factors. This has the effect of relighting the entire scene using a single illuminant that is common in colour and intensity for all pixels. The approach is extended to radiometric normalisation and the multi-image scenario, so that the resultant dataset is both spatially and temporally illumination invariant. The proposed illumination invariance approach is evaluated on several datasets, showing that spatial and temporal invariance can be achieved without loss of spectral dimensionality. The system requires very few tuning parameters, so expert knowledge is not required for its operation. This has potential implications for robotics and remote sensing applications, where perception systems play an integral role in developing a rich understanding of the scene.
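
    The relighting step reduces to per-pixel scaling factors, as in this synthetic sketch. The illumination map is simulated here; the thesis derives its approximation from fused image and geometric data rather than assuming it is known.

```python
import numpy as np

rng = np.random.default_rng(4)
rows, cols, bands = 64, 64, 3
albedo = rng.uniform(0.2, 0.8, (rows, cols, bands))  # material reflectance
illum = rng.uniform(0.3, 1.0, (rows, cols, 1))       # spatially varying light
image = albedo * illum                               # observed outdoor scene

reference = illum.mean()              # one common illuminant for all pixels
scale = reference / (illum + 1e-9)    # per-pixel scaling factors
relit = image * scale                 # scene relit under the single illuminant

# After relighting, identical materials match regardless of their shading.
print(np.allclose(relit / reference, albedo))
```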

    Hyperspectral Imagery Target Detection Using Improved Anomaly Detection and Signature Matching Methods

    This research extends the field of hyperspectral target detection by developing autonomous anomaly detection and signature matching methodologies that reduce false alarms relative to existing benchmark detectors, and that are practical for use in an operational environment. The proposed anomaly detection methodology adapts multivariate outlier detection algorithms for use with hyperspectral datasets containing tens of thousands of non-homogeneous, high-dimensional spectral signatures. In so doing, the limitations of existing, non-robust anomaly detectors are identified, an autonomous clustering methodology is developed to divide an image into homogeneous background materials, and competing multivariate outlier detection methods are evaluated for their ability to uncover hyperspectral anomalies. To arrive at a final detection algorithm, robust parameter design methods are employed to determine parameter settings that achieve good detection performance over a range of hyperspectral images and targets, thereby removing the burden of these decisions from the user. The final anomaly detection algorithm is tested against existing local and global anomaly detectors, and is shown to achieve superior detection accuracy when applied to a diverse set of hyperspectral images. The proposed signature matching methodology employs image-based atmospheric correction techniques in an automated process to transform a target reflectance signature library into a set of image signatures. This set of signatures is combined with an existing linear filter to form a target detector that is shown to perform as well as or better than detectors that rely on complicated, information-intensive atmospheric correction schemes. The performance of the proposed methodology is assessed using a range of target materials in both woodland and desert hyperspectral scenes.
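
    For reference, a minimal global RX-style anomaly detector of the kind such work benchmarks against, on synthetic data; a pipeline like the one described here would first cluster the scene into homogeneous background materials and use robust statistics per cluster.

```python
import numpy as np

def rx_scores(cube):
    """Mahalanobis distance of each pixel from the global scene background."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    return np.einsum('ij,jk,ik->i', d, cov_inv, d).reshape(h, w)

rng = np.random.default_rng(5)
cube = rng.normal(0, 1, (100, 100, 20))  # synthetic 20-band background
cube[50, 50] += 6                        # implant one spectral anomaly
scores = rx_scores(cube)
print(scores.argmax() == 50 * 100 + 50)  # the implanted pixel stands out
```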