    Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing

    Hyperspectral imaging offers increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopy. For each pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as a spectral reflectance/radiance signature that facilitates identification of ground cover and surface materials. This rich spectral information allows all available information in the data to be mined, enabling wide applications such as mineral exploration, agricultural monitoring, and ecological surveillance. Processing massive high-dimensional HSI datasets is challenging, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Moreover, an HSI dataset may contain only a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, relying solely on the sampled spectrum of an individual HSI data point may produce inaccurate results because of the mixed nature of raw HSI data, such as mixed pixels and optical interference. Fusion strategies are widely adopted in data processing to achieve better performance, especially in classification and clustering. There are mainly three types of fusion strategies: low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion selects and combines features to remove redundant information. High-level decision fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than any single source. Besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Feature fusion also includes the strategy of removing redundant and noisy features from the dataset. One of the major problems in machine learning and pattern recognition is to develop appropriate representations for complex nonlinear data. In HSI processing, a data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations, via linear algebra, to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis. In graph theory, data points can be generalized as nodes whose connectivities are measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment.
In this thesis, graph-based approaches to multi-source feature and data fusion in remote sensing are explored. We mainly investigate the fusion of spatial, spectral and LiDAR information with linear and multilinear algebra under a graph-based framework for data clustering and classification problems.
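
    As a rough illustration of the kind of graph-based fusion investigated here, the sketch below stacks per-pixel spectral signatures with a LiDAR elevation value, builds a k-nearest-neighbour graph over the fused features, and clusters the graph with scikit-learn. The array names, the single-band LiDAR input and the weighting are assumptions made for illustration; this is a minimal stand-in, not the thesis' actual models.

    # Minimal sketch of graph-based spectral/LiDAR fusion for clustering.
    # `hsi` is assumed to be an (H, W, B) reflectance cube and `lidar` an
    # (H, W) elevation raster; both names and the weighting are illustrative.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.neighbors import kneighbors_graph
    from sklearn.cluster import SpectralClustering

    def graph_fusion_clustering(hsi, lidar, n_clusters=6, k=10, lidar_weight=1.0):
        H, W, B = hsi.shape
        spectral = hsi.reshape(-1, B)                    # one graph node per pixel
        elevation = lidar.reshape(-1, 1) * lidar_weight  # low-level data fusion
        features = StandardScaler().fit_transform(np.hstack([spectral, elevation]))

        # Nodes are pixels; edges connect each pixel to its k nearest
        # neighbours in the fused feature space.
        affinity = kneighbors_graph(features, n_neighbors=k,
                                    mode='connectivity', include_self=False)
        affinity = 0.5 * (affinity + affinity.T)         # symmetrise the graph

        labels = SpectralClustering(n_clusters=n_clusters, affinity='precomputed',
                                    assign_labels='discretize',
                                    random_state=0).fit_predict(affinity)
        return labels.reshape(H, W)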

    Two and three dimensional segmentation of multimodal imagery

    The role of segmentation in image understanding/analysis, computer vision, pattern recognition, remote sensing and medical imaging has been significantly augmented in recent years by accelerated scientific advances in the acquisition of image data. This low-level analysis protocol is critical to numerous applications, with the primary goal of expediting and improving the effectiveness of subsequent high-level operations by providing a condensed and pertinent representation of image information. In this research, we propose a novel unsupervised segmentation framework for facilitating meaningful segregation of 2-D/3-D image data across multiple modalities (color, remote sensing and biomedical imaging) into non-overlapping partitions using several spatial-spectral attributes. Initially, our framework exploits the information obtained from detecting edges inherent in the data. To this effect, using a vector gradient detection technique, pixels without edges are grouped and individually labeled to partition an initial portion of the input image content. Pixels with higher gradient densities are then included through the dynamic generation of segments as the algorithm progresses, yielding an initial region map. Subsequently, texture modeling is performed, and the obtained gradient, texture and intensity information, along with the aforementioned initial partition map, are used in a multivariate refinement procedure that fuses groups with similar characteristics to yield the final output segmentation. Experimental results, obtained in comparison to published/state-of-the-art segmentation techniques for color as well as multi/hyperspectral imagery, demonstrate the advantages of the proposed method. Furthermore, to achieve improved computational efficiency, we propose an extension of this methodology in a multi-resolution framework, demonstrated on color images. Finally, this research also encompasses a 3-D extension of the algorithm, demonstrated on medical (Magnetic Resonance Imaging / Computed Tomography) volumes.
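
    The edge-then-merge idea described above can be prototyped with standard image-processing primitives: seed regions where the colour gradient is low, grow them over the higher-gradient pixels, and then fuse adjacent regions with similar mean colour. The sketch below uses scikit-image as a simplified stand-in; the gradient operator, thresholds and merge criterion are assumptions, not the exact procedure of the proposed framework (which also models texture).

    # Simplified edge-then-merge segmentation sketch (scikit-image).
    # Assumes an RGB image scaled to [0, 1]; thresholds are illustrative.
    import numpy as np
    from skimage import color, filters, graph, measure, segmentation

    def segment(image_rgb, grad_percentile=40, merge_thresh=0.08):
        gray = color.rgb2gray(image_rgb)
        grad = filters.sobel(gray)                  # stand-in for the vector gradient

        # Initial partition: connected groups of low-gradient (edge-free) pixels.
        seeds = measure.label(grad < np.percentile(grad, grad_percentile))

        # High-gradient pixels are absorbed as the labelled regions grow.
        regions = segmentation.watershed(grad, markers=seeds)

        # Refinement: merge neighbouring regions with similar mean colour.
        rag = graph.rag_mean_color(image_rgb, regions)
        return graph.cut_threshold(regions, rag, merge_thresh)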

    A novel band selection and spatial noise reduction method for hyperspectral image classification.

    As an essential preprocessing method, dimensionality reduction (DR) can reduce data redundancy and improve the performance of hyperspectral image (HSI) classification. A novel unsupervised DR framework with feature interpretability, which integrates both band selection (BS) and spatial noise reduction, is proposed to extract low-dimensional spectral-spatial features of HSI. We propose a new Neighboring band Grouping and Normalized Matching Filter (NGNMF) for BS, which can reduce the data dimension whilst preserving the corresponding spectral information. An enhanced 2-D singular spectrum analysis (E2DSSA) method is also proposed to extract the spatial context and structural information from each selected band, aiming to decrease the intra-class variability and reduce the effect of noise in the spatial domain. A support vector machine (SVM) classifier is used to evaluate the effectiveness of the extracted low-dimensional spectral-spatial features. Experimental results on three publicly available HSI datasets fully demonstrate the efficacy of the proposed NGNMF-E2DSSA method, which surpasses a number of state-of-the-art DR methods.
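
    The band-grouping-and-selection step lends itself to a compact sketch: split the contiguous spectral bands into groups of neighbours and keep one representative band per group. In the illustration below the representative is chosen by mean within-group correlation, a simple stand-in score; the paper's normalised matching filter criterion is not reproduced here, and the group count is an assumed parameter.

    # Hedged sketch of neighbouring-band grouping with one band kept per group.
    # The mean-correlation score is a stand-in for the normalised matching filter.
    import numpy as np

    def select_bands(hsi, n_selected=20):
        H, W, B = hsi.shape
        X = hsi.reshape(-1, B).astype(float)
        groups = np.array_split(np.arange(B), n_selected)   # contiguous neighbours

        selected = []
        for g in groups:
            if len(g) == 1:
                selected.append(g[0])
                continue
            corr = np.corrcoef(X[:, g].T)        # band-to-band correlation in the group
            score = corr.mean(axis=1)            # how representative each band is
            selected.append(g[np.argmax(score)])
        return np.sort(np.array(selected))       # indices of the retained bands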

    Multiple Spectral-Spatial Classification Approach for Hyperspectral Data

    A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of a spatial region, with the corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker selection procedure, each of them combining the results of a pixel-wise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies when compared to previously proposed classification techniques.
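
    The marker-selection step described above is straightforward to express in code: a pixel becomes a marker only when every classifier in the ensemble assigns it the same label. The sketch below is a minimal illustration of that agreement test; the choice of classifiers and the -1 convention for non-marker pixels are assumptions.

    # Markers = pixels on which all classifiers agree; everything else is -1.
    import numpy as np

    def select_markers(label_maps):
        """label_maps: list of (H, W) integer class maps, one per classifier."""
        maps = np.stack(label_maps)              # (n_classifiers, H, W)
        agree = np.all(maps == maps[0], axis=0)  # unanimous-agreement mask
        return np.where(agree, maps[0], -1)      # -1 marks unreliable pixels

    # e.g. markers = select_markers([map_a, map_b, map_c]); the marked pixels
    # then seed the minimum spanning forest that grows the final regions.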

    Sparse Subspace Clustering in Hyperspectral Images using Incomplete Pixels

    Spectral image clustering is an unsupervised classification method which identifies distributions of pixels using spectral information, without requiring a previous training stage. Sparse subspace clustering-based methods (SSC) assume that hyperspectral images lie in the union of multiple low-dimensional subspaces. Based on this, SSC groups spectral signatures in different subspaces, expressing each spectral signature as a sparse linear combination of all pixels and ensuring that the non-zero elements belong to the same class. Although these methods have shown good accuracy for unsupervised classification of hyperspectral images, the computational complexity becomes intractable as the number of pixels increases, i.e. when the spatial dimension of the image is large. For this reason, this paper proposes to reduce the number of pixels to be classified in the hyperspectral image; the clustering results for the missing pixels are then obtained by exploiting the spatial information. Specifically, this work proposes two methodologies to remove pixels: the first is based on a blue-noise spatial distribution, which reduces the probability of removing clusters of neighboring pixels, and the second is a sub-sampling procedure that removes one of every two contiguous pixels, preserving the spatial structure of the scene. The performance of the proposed spectral image clustering framework is evaluated on three datasets, showing that a similar accuracy is obtained when up to 50% of the pixels are removed and that it is up to 7.9 times faster compared to the classification of the complete datasets.
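
    As a rough illustration of the sub-sampling strategy, the sketch below keeps a checkerboard of pixels (one of every two contiguous pixels), clusters only those, and then assigns each removed pixel the label of its nearest retained neighbour. Plain spectral clustering stands in for the sparse subspace clustering step, and the nearest-neighbour fill is a simplification of the spatial label propagation; both are assumptions made for illustration.

    # Checkerboard sub-sampling + clustering + nearest-neighbour label fill.
    # SpectralClustering is a stand-in for SSC; `hsi` is an (H, W, B) cube.
    import numpy as np
    from sklearn.cluster import SpectralClustering
    from scipy.interpolate import NearestNDInterpolator

    def cluster_with_incomplete_pixels(hsi, n_clusters=4):
        H, W, B = hsi.shape
        rows, cols = np.indices((H, W))
        keep = (rows + cols) % 2 == 0            # drop one of every two pixels

        labels_kept = SpectralClustering(n_clusters=n_clusters,
                                         random_state=0).fit_predict(hsi[keep])

        # Propagate labels to the removed pixels from their spatial neighbours.
        fill = NearestNDInterpolator(np.column_stack([rows[keep], cols[keep]]),
                                     labels_kept)
        full = fill(np.column_stack([rows.ravel(), cols.ravel()]))
        return full.reshape(H, W).astype(int)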