
    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of the spectra of the materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
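    The linear mixing model at the core of most of these methods treats each pixel spectrum as a nonnegative, sum-to-one combination of endmember spectra. Below is a minimal NumPy sketch of that model and of a simple projected-gradient abundance estimate; the endmember matrix, noise level, and iteration count are synthetic illustrations, not values from the paper.

```python
import numpy as np

# Linear mixing model: each pixel y = E @ a + noise, where the columns of E are
# endmember spectra and a holds abundances that are nonnegative and sum to one.
rng = np.random.default_rng(0)
bands, n_endmembers = 50, 3
E = np.abs(rng.normal(size=(bands, n_endmembers)))   # synthetic endmember spectra
a_true = np.array([0.6, 0.3, 0.1])                   # synthetic abundances
y = E @ a_true + 0.01 * rng.normal(size=bands)       # observed mixed pixel

# Constrained least squares by projected gradient descent (nonnegativity plus an
# approximate sum-to-one projection via clipping and renormalization).
a = np.full(n_endmembers, 1.0 / n_endmembers)
step = 1.0 / np.linalg.norm(E.T @ E, 2)
for _ in range(500):
    a = a - step * E.T @ (E @ a - y)                 # gradient step on ||y - E a||^2
    a = np.clip(a, 0.0, None)                        # enforce nonnegativity
    a = a / max(a.sum(), 1e-12)                      # renormalize onto the simplex

print("estimated abundances:", np.round(a, 3))
```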

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing, and medical imaging has grown significantly in recent years due to accelerated advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for meaningfully segregating 2-D/3-D image data across multiple modalities (color, remote-sensing, and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. Using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition some initial portion of the input image content. Pixels with higher gradient densities are included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture, and intensity information, along with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final output segmentation. Experimental results, compared against published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we extend this methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
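    A simplified sketch of the edge-driven seeding step described above is given below, using a per-channel Sobel gradient as a stand-in for the vector gradient operator; the function name, threshold, and demo image are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def initial_region_map(image, edge_quantile=0.7):
    """Label low-gradient (edge-free) pixels as initial regions; high-gradient
    pixels are left unlabeled for later assignment."""
    image = image.astype(float)
    # Combine per-channel Sobel responses into a single "vector gradient" magnitude.
    grad = np.zeros(image.shape[:2])
    for c in range(image.shape[2]):
        gx = ndimage.sobel(image[..., c], axis=0)
        gy = ndimage.sobel(image[..., c], axis=1)
        grad += gx ** 2 + gy ** 2
    grad = np.sqrt(grad)
    # Pixels below the gradient threshold are treated as edge-free seed regions.
    seeds = grad < np.quantile(grad, edge_quantile)
    labels, n_regions = ndimage.label(seeds)
    return labels, n_regions, grad

if __name__ == "__main__":
    demo = np.random.default_rng(1).random((64, 64, 3))  # placeholder color image
    labels, n, _ = initial_region_map(demo)
    print("initial regions:", n)
```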

    A Neural Network Method for Mixture Estimation for Vegetation Mapping

    While most forest maps identify only the dominant vegetation class in delineated stands, individual stands are often better characterized by a mix of vegetation types. Many land management applications, including wildlife habitat studies, can benefit from knowledge of these mixes. This paper examines various algorithms that use data from the Landsat Thematic Mapper (TM) satellite to estimate mixtures of vegetation types within forest stands. Included in the study are maximum likelihood classification and linear mixture models as well as a new methodology based on the ARTMAP neural network. Two paradigms are considered: classification methods, which describe stand-level vegetation mixtures as mosaics of pixels, each identified with its primary vegetation class; and mixture methods, which treat samples as blends of vegetation, even at the pixel level. Comparative analysis of these mixture estimation methods, tested on data from the Plumas National Forest, yields the following conclusions: (1) accurate estimates of the proportions of hardwood and conifer cover within stands can be obtained, particularly when brush is not present in the understory; (2) ARTMAP outperforms statistical methods and linear mixture models in both the classification and the mixture paradigms; (3) topographic correction fails to improve mapping accuracy; and (4) the new ARTMAP mixture system produces the most accurate overall results. The Plumas data set has been made available to other researchers for further development of new mapping methods and comparison with the quantitative studies presented here, which establish initial benchmark standards. National Science Foundation (IRI 94-0165, SBR 95-13889); Office of Naval Research (N00014-95-1-0409, N00014-95-0657); Region 5 Remote Sensing Laboratory of the U.S. Forest Service.
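    The contrast between the two paradigms can be illustrated with a toy example: nearest-mean classification stands in for the classification paradigm and nonnegative least squares stands in for the linear mixture paradigm. Neither is the paper's ARTMAP system, and all spectra and fractions below are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_bands, classes = 6, ["conifer", "hardwood", "brush"]          # TM-like band count
class_means = np.abs(rng.normal(size=(n_bands, len(classes))))  # synthetic class signatures

# Synthetic stand: 200 pixels mixed mostly from conifer and hardwood.
true_frac = np.array([0.65, 0.30, 0.05])
pixels = (class_means @ rng.dirichlet(true_frac * 50, size=200).T).T
pixels += 0.01 * rng.normal(size=pixels.shape)

# (a) Classification paradigm: each pixel gets one label; stand mixture = label proportions.
dists = np.linalg.norm(pixels[:, None, :] - class_means.T[None, :, :], axis=2)
labels = dists.argmin(axis=1)
hard_mix = np.bincount(labels, minlength=len(classes)) / len(labels)

# (b) Mixture paradigm: per-pixel fractions via nonnegative least squares, then averaged.
soft = np.array([nnls(class_means, p)[0] for p in pixels])
soft_mix = (soft / soft.sum(axis=1, keepdims=True)).mean(axis=0)

for name, h, s in zip(classes, hard_mix, soft_mix):
    print(f"{name:9s} hard={h:.2f} soft={s:.2f}")
```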

    Cyclic Self-Organizing Map for Object Recognition

    Object recognition is an important machine learning (ML) application. A robust ML application requires three major steps: (1) preprocessing, i.e. preparing the data for the ML algorithms; (2) applying appropriate segmentation and feature extraction algorithms to extract the core features of the data; and (3) applying feature classification or recognition algorithms. The quality of the ML algorithm depends on a good representation of the data, and extracting that representation requires an appropriate learning rate. The learning rate influences how the algorithm learns from the data, i.e. how the data are processed and treated. Generally, this parameter is found by trial and error, and it is sometimes simply set to a constant. This paper presents a new optimization technique for object recognition problems, called Cyclic-SOM, which accelerates the learning process of the self-organizing map (SOM) using a non-constant learning rate. A standard SOM uses the Euclidean distance to measure the similarity between the inputs and the feature maps; our algorithm instead measures image correlation using the mean absolute difference, and it uses cyclical learning rates to achieve high performance and a better recognition rate. Cyclic-SOM has the following merits: (1) it accelerates the learning process and eliminates the need to experimentally find the best values and schedule for the learning rates; (2) it improves both results and training; (3) it requires no manual tuning of the learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities, and the selection of hyper-parameters; and (4) it shows promising results compared to other methods on different datasets. Three widely used benchmark databases illustrate the efficiency of the proposed technique: AHD Base for Arabic digits, MNIST for English digits, and CMU-PIE for faces.
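    The two modifications named above, mean-absolute-difference matching and a cyclical learning rate, can be dropped into an otherwise ordinary SOM training loop. The sketch below shows one plausible arrangement with a triangular learning-rate cycle; the grid size, cycle length, and neighborhood schedule are illustrative choices, not the published Cyclic-SOM settings.

```python
import numpy as np

def train_cyclic_som(data, grid=(8, 8), epochs=10,
                     lr_min=0.05, lr_max=0.5, cycle_len=200, seed=0):
    """Toy SOM trainer using mean absolute difference (MAD) as the similarity
    measure and a triangular cyclical learning rate."""
    rng = np.random.default_rng(seed)
    n_units = grid[0] * grid[1]
    weights = rng.random((n_units, data.shape[1]))
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])], float)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Cyclical (triangular) learning rate instead of a fixed or decaying one.
            phase = abs((step % (2 * cycle_len)) - cycle_len) / cycle_len
            lr = lr_min + (lr_max - lr_min) * (1.0 - phase)
            # Best matching unit by mean absolute difference, not Euclidean distance.
            bmu = np.mean(np.abs(weights - x), axis=1).argmin()
            # Gaussian neighborhood pull toward the current sample.
            dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            sigma = max(grid) / 2.0
            h = np.exp(-dist2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

if __name__ == "__main__":
    samples = np.random.default_rng(1).random((500, 16))  # placeholder feature vectors
    w = train_cyclic_som(samples)
    print("trained codebook shape:", w.shape)
```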

    Image inpainting based on self-organizing maps by using multi-agent implementation

    Image inpainting is a well-known visual editing task. However, its efficiency strongly depends on the size and textural neighborhood of the “missing” area. Various image inpainting methods exist, among which the Kohonen Self-Organizing Map (SOM) network, as a means of unsupervised learning, is widely used. Weaknesses of the Kohonen SOM network, such as the need to tune algorithm parameters and its low computational speed, motivated the use of a multi-agent system with a multi-mapping capability and parallel processing by identical agents. Experiments showed that preliminary image segmentation and the creation of a SOM for each type of homogeneous texture provide better results than the classical SOM application. The optimal number of inpainting agents was also determined. The quality of inpainting was estimated by several metrics, and good results were obtained on complex images.
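    The core idea, learning a codebook of patches from the intact part of an image and using it to fill the damaged region, can be sketched as follows. A randomly sampled patch codebook stands in for the per-texture SOMs, and the single-pass fill, hole size, and parameters are illustrative assumptions only.

```python
import numpy as np

def inpaint_with_codebook(image, mask, patch=5, n_codes=64, seed=0):
    """Fill masked pixels by matching their known neighborhood against a small
    patch codebook learned from the undamaged area (a stand-in for per-texture
    SOM codebooks)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    r = patch // 2
    # Collect fully known patches from the undamaged area as training vectors.
    patches = []
    for i in range(r, h - r):
        for j in range(r, w - r):
            if not mask[i - r:i + r + 1, j - r:j + r + 1].any():
                patches.append(image[i - r:i + r + 1, j - r:j + r + 1].ravel())
    patches = np.array(patches)
    codes = patches[rng.choice(len(patches), n_codes, replace=False)]
    out = image.copy()
    # Greedy single pass: replace each missing pixel with the center of the code
    # whose known pixels best match the local neighborhood (mean absolute difference).
    for i, j in zip(*np.where(mask)):
        if r <= i < h - r and r <= j < w - r:
            win = out[i - r:i + r + 1, j - r:j + r + 1].ravel()
            known = ~mask[i - r:i + r + 1, j - r:j + r + 1].ravel()
            errs = np.abs(codes[:, known] - win[known]).mean(axis=1)
            out[i, j] = codes[errs.argmin()].reshape(patch, patch)[r, r]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((40, 40))
    hole = np.zeros_like(img, dtype=bool)
    hole[18:22, 18:22] = True
    filled = inpaint_with_codebook(np.where(hole, 0.0, img), hole)
    print("filled pixels:", int(hole.sum()))
```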

    Levee Slide Detection using Synthetic Aperture Radar Magnitude and Phase

    The objective of this research is to support the development of state-of-the-art methods that use remotely sensed data, specifically SAR technology, to detect slides or anomalies in an efficient and cost-effective manner. Slough or slump slides are slope failures along a levee that leave areas of the levee vulnerable to seepage and failure during high water events. This work investigates the feasibility of detecting slough slides on an earthen levee with different types of polarimetric Synthetic Aperture Radar (polSAR) imagery. The source SAR imagery is fully quad-polarimetric L-band data from the NASA Jet Propulsion Laboratory’s (JPL’s) Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR). The study area encompasses a portion of the levees of the lower Mississippi River, located in Mississippi, United States. The classification results reveal that unsupervised classification of the polSAR data with feature extraction produces better results than unsupervised classification without feature extraction, and, as expected, supervised classification methods provide better results than the unsupervised methods. Anomaly identification is good with these results and is further improved by a majority filter, a morphology filter, and, most significantly, the use of GLCM texture features. The classification results obtained for all three cases (magnitude, phase, and complex data) indicate that synthetic aperture radar used with remote sensing imagery can effectively detect anomalies or slides on an earthen levee, and for all three samples the accuracies for the complex data are consistently higher than those from the magnitude or phase data alone. Because the tests comparing complex-data features against magnitude and phase data alone, and the tests using post-processing filters, all yielded very high accuracy, additional test samples were included to validate and distinguish the results.
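    A minimal sketch of a GLCM-texture-plus-classifier pipeline with majority-filter post-processing is shown below, using scikit-image and scikit-learn on synthetic windows; the window size, features, classifier, and filter size are assumptions for illustration and are not the study's actual processing chain.

```python
import numpy as np
from scipy import ndimage
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import RandomForestClassifier

def glcm_features(window, levels=16):
    """Texture features from a quantized image window."""
    q = np.floor(window / window.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean() for p in ("contrast", "homogeneity", "energy")]

def majority_filter(label_map, size=3):
    """Smooth a label map by replacing each pixel with its neighborhood mode."""
    def mode(values):
        return np.bincount(values.astype(int)).argmax()
    return ndimage.generic_filter(label_map, mode, size=size)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "magnitude" tiles: smooth (intact levee) vs. rough (slide) texture.
    smooth = [rng.random((16, 16)) * 0.2 + 0.5 for _ in range(40)]
    rough = [rng.random((16, 16)) for _ in range(40)]
    X = np.array([glcm_features(w) for w in smooth + rough])
    y = np.array([0] * 40 + [1] * 40)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
    print("training accuracy:", clf.score(X, y))
    # Majority filtering of a noisy label map as a post-processing step.
    noisy = (rng.random((20, 20)) > 0.7).astype(float)
    print("labels flipped by filter:", int((majority_filter(noisy) != noisy).sum()))
```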

    Assessing and Enabling Independent Component Analysis As A Hyperspectral Unmixing Approach

    As a result of its capacity for material discrimination, hyperspectral imaging has been utilized for applications ranging from mining to agriculture to planetary exploration. One of the most common methods of exploiting hyperspectral images is spectral unmixing, which is used to discriminate and locate the various types of materials present in a scene. When this processing is done without the aid of a reference library of material spectra, the problem is called blind or unsupervised spectral unmixing. Independent component analysis (ICA) is a blind source separation approach that operates by finding outputs, called independent components, that are statistically independent. ICA has been applied to the unsupervised spectral unmixing problem, producing intriguing, if somewhat unsatisfying, results. This dissatisfaction stems from the fact that independent components are subject to a scale ambiguity that must be resolved before they can be used effectively in the context of the spectral unmixing problem. In this dissertation, ICA is explored as a spectral unmixing approach. Various processing steps common to many ICA algorithms are examined to assess their impact on spectral unmixing results. Synthetically generated but physically realistic data are used so that the assessment can be quantitative rather than qualitative only. Additionally, two algorithms, class-based abundance rescaling (CBAR) and extended class-based abundance rescaling (CBAR-X), are introduced to enable accurate rescaling of independent components. Experimental results demonstrate the improved rescaling accuracy provided by the CBAR and CBAR-X algorithms, as well as the general viability of ICA as a spectral unmixing approach.
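    The scale ambiguity mentioned above can be seen directly by running an off-the-shelf ICA on synthetic mixed pixels: the recovered components are not valid abundances until they are rescaled. The sketch below uses scikit-learn's FastICA and a crude sign-flip plus least-squares rescaling; it is not a reproduction of CBAR or CBAR-X, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
bands, n_src, n_pix = 30, 3, 2000
endmembers = np.abs(rng.normal(size=(bands, n_src)))          # synthetic endmembers
abund = rng.dirichlet(np.ones(n_src), size=n_pix)             # true abundances (sum to 1)
Y = abund @ endmembers.T + 0.005 * rng.normal(size=(n_pix, bands))

# ICA treats each pixel as a mixture of statistically independent sources.
ica = FastICA(n_components=n_src, random_state=0, max_iter=1000)
S = ica.fit_transform(Y)                                      # raw independent components

# Scale/sign ambiguity: flip components to be mostly positive, then solve for one
# scale per component so pixel sums are as close to 1 as possible (a crude
# stand-in for the class-based rescaling idea, not the CBAR algorithm itself).
S = S * np.sign(S.mean(axis=0))
scales, *_ = np.linalg.lstsq(S, np.ones(n_pix), rcond=None)
A_hat = np.clip(S * scales, 0, None)
A_hat /= np.maximum(A_hat.sum(axis=1, keepdims=True), 1e-12)

print("mean abundance sum after rescaling:", A_hat.sum(axis=1).mean())
```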