
    Joint bilateral filtering and spectral similarity-based sparse representation: A generic framework for effective feature extraction and data classification in hyperspectral imaging

    Classification of hyperspectral images (HSI) has been a challenging problem under active investigation for years, especially due to the extremely high data dimensionality and the limited number of samples available for training. It has been found that hyperspectral image classification can generally be improved only if both the feature extraction technique and the classifier are addressed. In this paper, a novel classification framework for hyperspectral images based on the joint bilateral filter and sparse representation classification (SRC) is proposed. By employing the first principal component as the guidance image for the joint bilateral filter, spatial features can be extracted with minimal edge blurring, thus improving the quality of the band-to-band images. For this reason, the joint bilateral filter is shown in this work to outperform the conventional bilateral filter. In addition, a spectral similarity-based joint SRC (SS-JSRC) is proposed to overcome the weakness of the traditional JSRC method. By combining joint bilateral filtering and SS-JSRC, the superiority of the proposed classification framework over several state-of-the-art spectral-spatial classification approaches commonly employed in the HSI community is demonstrated, with better classification accuracy and Kappa coefficient achieved.
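    The feature-extraction step described above can be summarized in a short sketch. The code below is not the authors' implementation: it assumes a (rows, cols, bands) NumPy cube, computes the first principal component as the guidance image, and applies a simple joint bilateral filter band by band; the window radius and sigma values are illustrative.

```python
import numpy as np

def first_principal_component(cube):
    """cube: (rows, cols, bands) -> (rows, cols) first PC, rescaled to [0, 1]."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # leading right singular vector
    pc1 = (X @ Vt[0]).reshape(r, c)
    return (pc1 - pc1.min()) / (pc1.max() - pc1.min() + 1e-12)

def joint_bilateral_filter(band, guide, radius=3, sigma_s=2.0, sigma_r=0.1):
    """Filter one band; range weights come from the guidance image, not the band itself."""
    r, c = band.shape
    band_p = np.pad(band, radius, mode='reflect')
    guide_p = np.pad(guide, radius, mode='reflect')
    out = np.zeros((r, c), dtype=np.float64)
    norm = np.zeros((r, c), dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            w_s = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
            g = guide_p[radius + dy:radius + dy + r, radius + dx:radius + dx + c]
            v = band_p[radius + dy:radius + dy + r, radius + dx:radius + dx + c]
            w = w_s * np.exp(-(g - guide) ** 2 / (2 * sigma_r ** 2))
            out += w * v
            norm += w
    return out / norm

def extract_spatial_features(cube):
    """Filter every band with the first PC as guidance, preserving edges from the guide."""
    guide = first_principal_component(cube)
    return np.dstack([joint_bilateral_filter(cube[:, :, i], guide)
                      for i in range(cube.shape[2])])
```

    Because the range weights are taken from the single guidance image, all bands are smoothed with the same edge structure, which is the intuition behind the reduced band-to-band edge blurring claimed above.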

    Hyperspectral Unmixing Overview: Geometrical, Statistical, and Sparse Regression-Based Approaches

    Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels, with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). The higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to the low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, the spectra measured by HSCs are mixtures of the spectra of the materials in a scene; accurate estimation therefore requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models in search of robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are discussed first. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are then described, along with the associated mathematical problems and potential solutions, and algorithm characteristics are illustrated experimentally. Comment: This work has been accepted for publication in the IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
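    For concreteness, the linear mixing model underlying most of the surveyed methods writes each pixel spectrum as y = M a + n, with endmember matrix M and nonnegative abundances a. The sketch below is a hedged illustration rather than any specific algorithm from the survey: it recovers abundances with a nonnegative lasso against an assumed spectral library, in the spirit of the sparse-regression approaches mentioned above.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_unmix(pixels, library, lam=1e-4):
    """pixels: (n_pixels, n_bands); library: (n_bands, n_endmembers).
    Returns nonnegative, sparse abundance estimates of shape (n_pixels, n_endmembers)."""
    model = Lasso(alpha=lam, positive=True, max_iter=5000)
    abundances = np.zeros((pixels.shape[0], library.shape[1]))
    for i, y in enumerate(pixels):
        # scaled lasso objective: (1/2n) * ||y - M a||^2 + lam * ||a||_1, with a >= 0
        model.fit(library, y)
        abundances[i] = model.coef_
    return abundances

# toy example: a two-endmember mixture with mild noise
rng = np.random.default_rng(0)
M = rng.random((50, 2))                                  # 50 bands, 2 library signatures
y = np.array([[0.7, 0.3]]) @ M.T + 0.01 * rng.standard_normal((1, 50))
print(sparse_unmix(y, M))
```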

    Classification of hyperspectral images by exploiting spectral-spatial information of superpixel via multiple kernels

    For the classification of hyperspectral images (HSIs), this paper presents a novel framework that effectively utilizes the spectral-spatial information of superpixels via multiple kernels, termed superpixel-based classification via multiple kernels (SC-MK). In an HSI, each superpixel can be regarded as a shape-adaptive region consisting of a number of spatially neighboring pixels with very similar spectral characteristics. Firstly, the proposed SC-MK method adopts an over-segmentation algorithm to cluster the HSI into many superpixels. Then, three kernels are separately employed to exploit the spectral information as well as the spatial information within and among superpixels. Finally, the three kernels are combined together and incorporated into a support vector machine (SVM) classifier. Experimental results on three widely used real HSIs indicate that the proposed SC-MK approach outperforms several well-known classification methods.
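    The overall structure can be sketched as follows. This is a loose illustration, not the authors' code: SLIC stands in for the over-segmentation step, only a per-pixel spectral kernel and a within-superpixel mean kernel are combined (the paper additionally uses an among-superpixel kernel), and the weight mu and RBF parameter gamma are arbitrary.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

def superpixel_features(cube, n_segments=200):
    """Per-pixel spectra plus the mean spectrum of the superpixel each pixel belongs to."""
    r, c, b = cube.shape
    labels = slic(cube, n_segments=n_segments, compactness=0.1, channel_axis=2)
    spectra = cube.reshape(-1, b)
    uniq, inv = np.unique(labels.reshape(-1), return_inverse=True)
    means = np.array([spectra[inv == k].mean(axis=0) for k in range(len(uniq))])
    return spectra, means[inv]

def composite_kernel(Xa, Xb, Ya, Yb, mu=0.6, gamma=1.0):
    """Weighted sum of a spectral RBF kernel and a within-superpixel RBF kernel."""
    return mu * rbf_kernel(Xa, Ya, gamma=gamma) + (1 - mu) * rbf_kernel(Xb, Yb, gamma=gamma)

# usage with labeled training pixels (train_idx and y_train assumed given):
# spectra, within = superpixel_features(cube)
# K = composite_kernel(spectra[train_idx], within[train_idx],
#                      spectra[train_idx], within[train_idx])
# clf = SVC(kernel='precomputed').fit(K, y_train)
```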

    Sparse Subspace Clustering in Hyperspectral Images using Incomplete Pixels

    Spectral image clustering is an unsupervised classification method which identifies the distributions of pixels using spectral information, without requiring a previous training stage. Sparse subspace clustering-based (SSC) methods assume that hyperspectral images lie in the union of multiple low-dimensional subspaces. Based on this, SSC groups spectral signatures into different subspaces, expressing each spectral signature as a sparse linear combination of all pixels and ensuring that the non-zero elements belong to the same class. Although these methods have shown good accuracy for unsupervised classification of hyperspectral images, the computational complexity becomes intractable as the number of pixels increases, i.e. when the spatial dimension of the image is large. For this reason, this paper proposes to reduce the number of pixels to be classified in the hyperspectral image and then obtain the clustering results for the missing pixels by exploiting the spatial information. Specifically, this work proposes two methodologies to remove pixels: the first is based on a blue-noise spatial distribution, which reduces the probability of removing clusters of neighboring pixels, and the second is a sub-sampling procedure that removes one of every two contiguous pixels, preserving the spatial structure of the scene. The performance of the proposed spectral image clustering framework is evaluated on three datasets, showing that a similar accuracy is obtained when up to half of the pixels are removed; in addition, the method is up to 7.9 times faster than clustering the complete data sets.
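    A minimal sketch of the second subsampling strategy described above, under loose assumptions: pixels on a checkerboard pattern are kept, only their spectra are clustered (KMeans is used here purely as a stand-in for sparse subspace clustering), and each removed pixel then takes the majority label of its 4-connected spatial neighbors.

```python
import numpy as np
from sklearn.cluster import KMeans   # stand-in for the SSC step

def cluster_with_incomplete_pixels(cube, n_clusters=5):
    """cube: (rows, cols, bands). Returns a (rows, cols) label map."""
    r, c, b = cube.shape
    keep = (np.add.outer(np.arange(r), np.arange(c)) % 2) == 0   # checkerboard mask
    kept_spectra = cube[keep]                                     # roughly half the pixels
    kept_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(kept_spectra)

    labels = np.full((r, c), -1, dtype=int)
    labels[keep] = kept_labels
    # propagate labels to the removed pixels from 4-connected neighbors (majority vote)
    for i, j in zip(*np.where(~keep)):
        neigh = [labels[x, y] for x, y in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                 if 0 <= x < r and 0 <= y < c and labels[x, y] >= 0]
        labels[i, j] = np.bincount(neigh).argmax() if neigh else 0
    return labels
```

    The blue-noise variant mentioned in the abstract would replace the checkerboard mask with a stochastic mask that discourages removing adjacent pixels; the label-propagation step stays the same.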

    Non-convex regularization in remote sensing

    In this paper, we study the effect of different regularizers and their implications in high-dimensional image classification and sparse linear unmixing. Although kernelization and sparse methods are widely accepted solutions for processing high-dimensional data, we present here a study on the impact of the form of regularization used and of its parametrization. We consider regularization via the traditional squared ℓ2 and sparsity-promoting ℓ1 norms, as well as the more unconventional non-convex ℓp and Log-Sum Penalty regularizers. We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on choosing the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community. Comment: 11 pages, 11 figures
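    The four penalties compared in the paper can be written down directly; the sketch below lists them as plain functions of a coefficient vector w, with illustrative values for p and the log-sum smoothing constant.

```python
import numpy as np

def l2_squared(w):
    return np.sum(w ** 2)                          # ridge: convex, no sparsity

def l1(w):
    return np.sum(np.abs(w))                       # lasso: convex, sparsity-promoting

def lp(w, p=0.5):
    return np.sum(np.abs(w) ** p)                  # non-convex for p < 1, stronger sparsity

def log_sum_penalty(w, eps=1e-3):
    return np.sum(np.log(1.0 + np.abs(w) / eps))   # non-convex surrogate of the l0 count

w = np.array([0.0, 0.2, -1.5, 0.0, 3.0])
print(l2_squared(w), l1(w), lp(w), log_sum_penalty(w))
```

    The ℓp and Log-Sum penalties are non-convex, which makes the resulting optimization problems harder to solve but tends to produce sparser solutions than ℓ1.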

    Proceedings of the second "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST'14)

    The implicit objective of the biennial "international Traveling Workshop on Interactions between Sparse models and Technology" (iTWIST) is to foster collaboration between international scientific teams by disseminating ideas through both specific oral/poster presentations and free discussions. For its second edition, the iTWIST workshop took place in the medieval and picturesque town of Namur in Belgium, from Wednesday August 27th till Friday August 29th, 2014. The workshop was conveniently located in "The Arsenal" building, within walking distance of both hotels and the town center. iTWIST'14 gathered about 70 international participants and featured 9 invited talks, 10 oral presentations, and 14 posters on the following themes, all related to the theory, application and generalization of the "sparsity paradigm": Sparsity-driven data sensing and processing; Union of low-dimensional subspaces; Beyond linear and convex inverse problems; Matrix/manifold/graph sensing/processing; Blind inverse problems and dictionary learning; Sparsity and computational neuroscience; Information theory, geometry and randomness; Complexity/accuracy tradeoffs in numerical methods; Sparsity? What's next?; Sparse machine learning and inference. Comment: 69 pages, 24 extended abstracts, iTWIST'14 website: http://sites.google.com/site/itwist1