4 research outputs found

    Hyperspectral Image Denoising With Group Sparse and Low-Rank Tensor Decomposition

    Hyperspectral image (HSI) data are usually corrupted by various types of noise, including Gaussian noise, impulse noise, stripes, and deadlines. Recently, sparse and low-rank matrix decomposition (SLRMD) has been shown to be an effective tool for HSI denoising. However, the matrix-based SLRMD technique cannot fully exploit the spatial and spectral information in 3-D HSI data. In this paper, a novel group sparse and low-rank tensor decomposition (GSLRTD) method is proposed to remove different kinds of noise in HSI while preserving its spectral and spatial characteristics. Since clean 3-D HSI data can be regarded as a 3-D tensor, the proposed GSLRTD method formulates the HSI recovery problem within a sparse and low-rank tensor decomposition framework. Specifically, the HSI is first divided into a set of overlapping 3-D tensor cubes, which are then clustered into groups by the K-means algorithm. Each group thus contains similar tensor cubes, which are assembled into a new tensor by unfolding them into a set of matrices and stacking these. Finally, the SLRTD model is introduced to generate a noise-free estimate for each group tensor. By aggregating all reconstructed group tensors, a denoised HSI is obtained. Experiments on both simulated and real HSI data sets demonstrate the effectiveness of the proposed method. This paper was supported in part by the National Natural Science Foundation of China under Grant 61301255, Grant 61771192, and Grant 61471167, in part by the National Natural Science Fund of China for Distinguished Young Scholars under Grant 61325007, in part by the National Natural Science Fund of China for International Cooperation and Exchanges under Grant 61520106001, and in part by the Science and Technology Plan Project Fund of Hunan Province under Grant 2015WK3001 and Grant 2017RS3024. Peer Reviewed
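    The grouping-and-decomposition pipeline summarized above (overlapping cubes, K-means grouping, per-group low-rank recovery, aggregation) can be sketched as follows. This is a minimal illustration, not the authors' GSLRTD model: the per-group step is replaced here by a plain truncated SVD on the unfolded group, and the cube size, stride, number of clusters, and rank are illustrative assumptions.

# Minimal sketch of the cube-grouping denoising pipeline described above.
# NOT the authors' GSLRTD: the tensor decomposition is replaced by a
# truncated SVD on the unfolded group; all parameters are illustrative.
import numpy as np
from sklearn.cluster import KMeans

def extract_cubes(hsi, size=8, stride=4):
    """Slide a size x size full-band window over the spatial dimensions."""
    H, W, _ = hsi.shape
    cubes, coords = [], []
    for i in range(0, H - size + 1, stride):
        for j in range(0, W - size + 1, stride):
            cubes.append(hsi[i:i + size, j:j + size, :])
            coords.append((i, j))
    return np.stack(cubes), coords

def low_rank_denoise(group, rank=3):
    """Stand-in for SLRTD: truncated SVD of the unfolded group of cubes."""
    mat = group.reshape(group.shape[0], -1)          # unfold cubes into rows
    U, sv, Vt = np.linalg.svd(mat, full_matrices=False)
    sv[rank:] = 0.0                                  # keep only leading modes
    return ((U * sv) @ Vt).reshape(group.shape)

def denoise_hsi(hsi, size=8, stride=4, n_clusters=10, rank=3):
    cubes, coords = extract_cubes(hsi, size, stride)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        cubes.reshape(len(cubes), -1))
    recon = np.zeros_like(hsi)
    weight = np.zeros(hsi.shape[:2] + (1,))
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        if len(idx) == 0:
            continue
        denoised = low_rank_denoise(cubes[idx], rank)
        for cube, m in zip(denoised, idx):
            i, j = coords[m]
            recon[i:i + size, j:j + size, :] += cube
            weight[i:i + size, j:j + size, :] += 1.0
    return recon / np.maximum(weight, 1.0)           # average overlapping estimates

# Example on synthetic noisy data:
# clean = np.random.rand(64, 64, 31)
# noisy = clean + 0.1 * np.random.randn(*clean.shape)
# denoised = denoise_hsi(noisy)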

    Fractional Snow-Cover Mapping Through Artificial Neural Network Analysis of MODIS Surface Reflectance.

    Accurate areal measurements of snow-cover extent are important for hydrological and climate modeling. The traditional method of mapping snow cover is binary, where a pixel is classified as either snow-covered or snow-free. Fractional snow cover (FSC) mapping achieves a more precise estimate of areal snow-cover extent by determining the fraction of each pixel that is snow-covered. The two most common FSC methods using Moderate Resolution Imaging Spectroradiometer (MODIS) images are linear spectral unmixing and the empirical Normalized Difference Snow Index (NDSI) method. Machine learning is an alternative to these approaches for estimating FSC, as Artificial Neural Networks (ANNs) have been used to estimate the subpixel abundances of other surfaces. The advantages of ANNs over the other approaches are that they can easily incorporate auxiliary information such as land-cover type and can learn nonlinear relationships between surface reflectance and snow fraction. ANNs are especially applicable to mapping snow-cover extent in forested areas, where spatial mixing of surface components is nonlinear. This study developed an ANN approach to snow-fraction mapping. A feed-forward ANN was trained with backpropagation to estimate FSC using MODIS surface reflectance, NDSI, the Normalized Difference Vegetation Index (NDVI), and land cover as inputs. The ANN was trained and validated with high-spatial-resolution FSC derived from Landsat Enhanced Thematic Mapper Plus (ETM+) binary snow-cover maps. The ANN achieved its best results, in terms of the extent of snow-covered area, over evergreen forests, where snow-cover extent was only slightly overestimated. Scatter plots of ANN versus reference FSC showed that the network tended to underestimate snow fraction at high FSC and overestimate it at low FSC. The developed ANN compared favorably to the standard MODIS FSC product, with the two methods estimating the same total snow-covered area in the test scenes.
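    A minimal sketch of the kind of feed-forward regression network described above follows, assuming per-pixel inputs of MODIS band reflectances plus NDSI, NDVI, and a one-hot land-cover code, and a target FSC in [0, 1]. The layer sizes, the synthetic data, and all hyperparameters are illustrative assumptions, not the study's configuration.

# Illustrative feed-forward FSC regression; synthetic data stands in for
# MODIS reflectances and the Landsat ETM+ reference snow fraction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands, n_landcover = 5000, 7, 4

reflectance = rng.random((n_pixels, n_bands))
ndsi = rng.uniform(-1, 1, (n_pixels, 1))
ndvi = rng.uniform(-1, 1, (n_pixels, 1))
landcover = np.eye(n_landcover)[rng.integers(0, n_landcover, n_pixels)]
X = np.hstack([reflectance, ndsi, ndvi, landcover])
# Synthetic stand-in for the reference snow fraction in [0, 1].
fsc = np.clip(0.5 * (ndsi.ravel() + 1) + 0.05 * rng.standard_normal(n_pixels), 0, 1)

X_train, X_val, y_train, y_val = train_test_split(X, fsc, test_size=0.2, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                   solver="adam", max_iter=500, random_state=0)
ann.fit(X_train, y_train)
pred = np.clip(ann.predict(X_val), 0, 1)   # FSC is a fraction, so clamp
print("validation RMSE:", np.sqrt(np.mean((pred - y_val) ** 2)))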

    Development of GPU-Based Techniques for Target Detection in Hyperspectral Images Using Neural Networks

    Master's Thesis in High Performance Computing, academic year 2010-2011. This work presents two target-detection algorithms for hyperspectral images, specifically developed for implementation on GPU and both based on Artificial Neural Networks (ANNs). The first, a pixel-level target-detection algorithm, searches the hyperspectral image pixel by pixel, detecting whether each pixel contains the sought target or a part of it. The second, a multi-resolution target-detection algorithm, performs a hierarchical search over image regions of decreasing size (hyperspectral volumes), detecting and bounding the target regardless of the scale at which it appears. For the GPU implementation of the ANNs used in both algorithms, two different parallelization approaches are analyzed: neuron-level parallelism and synaptic-connection-level parallelism. In addition, a large number of GPU-specific optimization strategies are considered in order to properly exploit the cards' enormous computational capacity and to hide memory-access latency. In the results phase, the algorithms are tested by searching for targets in two different kinds of hyperspectral images, one used for material recognition and the other for search-and-rescue tasks. The execution times obtained show the effectiveness of the developed detection algorithms, as well as the suitability of their implementation on GPU.
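    The pixel-level detection idea described above (a trained ANN applied to every pixel spectrum to decide whether the target is present) can be sketched as follows. This is a CPU/NumPy stand-in for illustration only, not the GPU implementation or parallelization scheme developed in the thesis; the network weights are random placeholders where a trained detector would be loaded.

# Pixel-level target detection sketch: a small feed-forward network is
# evaluated on every pixel spectrum at once. Weights are random placeholders.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def detect_pixels(hsi, W1, b1, W2, b2, threshold=0.5):
    """Return a boolean target map for an (H, W, bands) hyperspectral image."""
    H, W, B = hsi.shape
    pixels = hsi.reshape(-1, B)           # one row per pixel spectrum
    hidden = np.tanh(pixels @ W1 + b1)    # hidden layer over all pixels at once
    scores = sigmoid(hidden @ W2 + b2)    # per-pixel target probability
    return scores.reshape(H, W) >= threshold

# Illustrative sizes: 64x64 image with 100 bands, 16 hidden neurons.
rng = np.random.default_rng(0)
hsi = rng.random((64, 64, 100))
W1, b1 = rng.standard_normal((100, 16)) * 0.1, np.zeros(16)
W2, b2 = rng.standard_normal(16) * 0.1, 0.0
target_map = detect_pixels(hsi, W1, b1, W2, b2)
print("pixels flagged as target:", int(target_map.sum()))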

    A Deep Learning Framework in Selected Remote Sensing Applications

    The main research topic is the design and implementation of a deep learning framework applied to remote sensing. Remote sensing techniques and applications play a crucial role in observing the Earth's evolution, especially nowadays, when the effects of climate change on our lives are increasingly evident. A considerable amount of data is acquired daily all over the Earth, and effective exploitation of this information requires the robustness, speed and accuracy of deep learning; this emerging need inspired the choice of this topic. The studies conducted focus mainly on two European Space Agency (ESA) missions: Sentinel-1 and Sentinel-2. Images provided by the ESA Sentinel-2 mission are rapidly becoming the main source of information for the entire remote sensing community, thanks to their unprecedented combination of spatial, spectral and temporal resolution, as well as their open-access policy. The increasing interest these satellites have gained in research and applicative scenarios motivated their use in the considered framework. The combined use of Sentinel-1 and Sentinel-2 is crucial in many kinds of monitoring, particularly when the growing (or changing) dynamics are very rapid. Starting from this general framework, two specific research activities were identified and investigated, leading to the results presented in this dissertation; both can be placed in the context of data fusion. The first activity deals with a super-resolution framework to improve the Sentinel-2 bands supplied at 20 m up to 10 m. Increasing the spatial resolution of these bands is of great interest in many remote sensing applications, particularly in monitoring vegetation, rivers and forests. In the second activity, the deep learning framework was applied to multispectral Normalized Difference Vegetation Index (NDVI) extraction and to semantic segmentation obtained by fusing Sentinel-1 and Sentinel-2 data. Sentinel-1 SAR data are of great importance for the quantity of information they provide in the monitoring of wetlands, rivers, forests and many other contexts. In both cases, the problem was addressed with deep learning techniques, and in both cases very lean architectures were used, demonstrating that high-level results can be obtained even without large computing resources. The core of this framework is a Convolutional Neural Network (CNN). CNNs have been successfully applied to many image processing problems, such as super-resolution, pansharpening and classification, because of several advantages: (i) the capability to approximate complex non-linear functions, (ii) the ease of training, which avoids time-consuming handcrafted filter design, and (iii) their parallel computational architecture. Even though a large amount of labelled data is required for training, the performance of CNNs motivated this architectural choice. In the Sentinel-1 and Sentinel-2 integration task, the problem of manually labelled data was faced and overcome with an approach based on the integration of these two different sensors. Therefore, apart from the investigation of Sentinel-1 and Sentinel-2 integration, the main contribution of both works is the design of CNN-based solutions distinguished by their computational lightness and the consequent substantial saving of time compared with more complex state-of-the-art deep learning solutions.
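    A minimal sketch of a lean CNN for the first activity (bringing the 20 m Sentinel-2 bands to the 10 m grid, guided by the native 10 m bands) is given below. The three-layer residual network, the band counts, and the tensor shapes are illustrative assumptions, not the architecture developed in the dissertation.

# Illustrative lean super-resolution CNN for Sentinel-2 20 m -> 10 m bands.
# All sizes are assumptions; this is not the dissertation's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeanSRNet(nn.Module):
    def __init__(self, n_20m=6, n_10m=4, n_feats=32):
        super().__init__()
        # Input: 20 m bands upsampled to the 10 m grid, stacked with 10 m bands.
        self.head = nn.Conv2d(n_20m + n_10m, n_feats, kernel_size=3, padding=1)
        self.body = nn.Conv2d(n_feats, n_feats, kernel_size=3, padding=1)
        self.tail = nn.Conv2d(n_feats, n_20m, kernel_size=3, padding=1)

    def forward(self, bands_20m, bands_10m):
        # Bilinear upsampling to the 10 m grid; the CNN learns the residual detail.
        up = F.interpolate(bands_20m, size=bands_10m.shape[-2:],
                           mode="bilinear", align_corners=False)
        x = torch.cat([up, bands_10m], dim=1)
        x = F.relu(self.head(x))
        x = F.relu(self.body(x))
        return up + self.tail(x)          # residual connection keeps the net lean

# Forward pass on random tensors (one tile, six 20 m bands, four 10 m bands).
model = LeanSRNet()
b20 = torch.rand(1, 6, 32, 32)
b10 = torch.rand(1, 4, 64, 64)
sr = model(b20, b10)
print(sr.shape)                            # torch.Size([1, 6, 64, 64])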