167 research outputs found

    Computational Methods for Matrix/Tensor Factorization and Deep Learning Image Denoising

    Feature learning is a technique to automatically extract features from raw data. It is widely used in areas such as computer vision, image processing, data mining and natural language processing. In this thesis, we are interested in the computational aspects of feature learning. We focus on low-rank matrix and tensor factorization and on deep neural network models for image denoising. With respect to matrix and tensor factorization, we first present a technique to speed up alternating least squares (ALS) and gradient descent (GD), two commonly used strategies for tensor factorization. We introduce an efficient, scalable and distributed algorithm that addresses the data explosion problem. It replaces a computationally challenging sub-step of ALS and GD with only two sparse matrix-vector products and is implemented on parallel machines. Not only is the algorithm scalable, but it is also on average 4 to 10 times faster than competing algorithms on various data sets. Next, we discuss our results on non-negative matrix factorization for hyperspectral image data in the presence of noise. We introduce a spectral total variation regularization and derive four variants of the alternating direction method of multipliers (ADMM) algorithm. While all four methods belong to the same family of algorithms, some perform better than others; we therefore compare the algorithms on stimulated Raman spectroscopic images. For deep neural network models, we focus on their application to image denoising. We first demonstrate how an optimal procedure leveraging deep neural networks and convex optimization can combine a given set of denoisers to produce an overall better result. The proposed framework estimates the mean squared error (MSE) of individual denoised outputs using a deep neural network; optimally combines the denoised outputs via convex optimization; and recovers lost details of the combined image using another deep neural network. The framework consistently improves denoising performance for both deterministic denoisers and neural network denoisers. Next, we apply deep neural networks to the image reconstruction problem of the Quanta Image Sensor (QIS), a single-photon image sensor that oversamples the light field to generate binary measurements.
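
    For context on the baseline being accelerated, below is a minimal NumPy sketch of plain CP tensor decomposition via alternating least squares. The function names (khatri_rao, unfold, cp_als) and the dense formulation are illustrative assumptions, not the thesis's distributed two-sparse-product algorithm, whose details are not given in the abstract.

        import numpy as np

        def khatri_rao(U, V):
            # Column-wise Kronecker product of U (J x R) and V (K x R) -> (J*K x R).
            J, R = U.shape
            K, _ = V.shape
            return (U[:, None, :] * V[None, :, :]).reshape(J * K, R)

        def unfold(T, mode):
            # Mode-n unfolding consistent with C-ordered reshapes.
            return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

        def cp_als(X, rank, n_iter=100, seed=0):
            # Rank-R CP factors of a 3-way tensor X via alternating least squares.
            rng = np.random.default_rng(seed)
            I, J, K = X.shape
            A = rng.standard_normal((I, rank))
            B = rng.standard_normal((J, rank))
            C = rng.standard_normal((K, rank))
            X0, X1, X2 = unfold(X, 0), unfold(X, 1), unfold(X, 2)
            for _ in range(n_iter):
                # Each update solves a linear least-squares problem for one factor.
                A = X0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
                B = X1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
                C = X2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
            return A, B, C

    A quick check is to reconstruct X_hat = np.einsum('ir,jr,kr->ijk', A, B, C) and compare it with X; forming the Khatri-Rao products explicitly is exactly the kind of intermediate blow-up that the sparse reformulation described above is meant to avoid.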

    Image Processing and Machine Learning for Hyperspectral Unmixing: An Overview and the HySUPP Python Package

    Spectral pixels are often a mixture of the pure spectra of the materials, called endmembers, due to the low spatial resolution of hyperspectral sensors, double scattering, and intimate mixtures of materials in the scene. Unmixing estimates the fractional abundances of the endmembers within each pixel. Depending on the prior knowledge of endmembers, linear unmixing can be divided into three main groups: supervised, semi-supervised, and unsupervised (blind) linear unmixing. Advances in image processing and machine learning have substantially affected unmixing. This paper provides an overview of advanced and conventional unmixing approaches. Additionally, we draw a critical comparison between advanced and conventional techniques from the three categories. We compare the performance of the unmixing techniques on three simulated and two real datasets. The experimental results reveal the advantages of different unmixing categories for different unmixing scenarios. Moreover, we provide an open-source Python-based package, available at https://github.com/BehnoodRasti/HySUPP, to reproduce the results.
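
    As a concrete illustration of supervised linear unmixing under the model x ≈ E a with a ≥ 0 and sum(a) = 1, here is a minimal SciPy sketch of fully constrained least squares. It is a generic example that does not use the HySUPP API; the function name fcls_abundances and the penalty weight are assumptions.

        import numpy as np
        from scipy.optimize import nnls

        def fcls_abundances(pixel, endmembers, weight=1e3):
            # Fully constrained least squares: non-negative abundances that sum to one.
            # pixel: (bands,) spectrum; endmembers: (bands, P) matrix of pure spectra.
            # The sum-to-one constraint is enforced softly by appending a heavily
            # weighted row of ones, while nnls handles the non-negativity.
            bands, P = endmembers.shape
            E_aug = np.vstack([endmembers, weight * np.ones((1, P))])
            x_aug = np.concatenate([pixel, [weight]])
            abundances, _ = nnls(E_aug, x_aug)
            return abundances

    Applied pixel by pixel with a known endmember matrix, this yields the fractional abundance maps that the supervised unmixing methods surveyed above estimate.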

    Hyperspectral Image Analysis through Unsupervised Deep Learning

    Hyperspectral image (HSI) analysis has become an active research area in the computer vision field, with a wide range of applications. However, in order to yield better recognition and analysis results, we need to address two challenging issues of HSI, i.e., the existence of mixed pixels and its significantly low spatial resolution (LR). In this dissertation, spectral unmixing (SU) and hyperspectral image super-resolution (HSI-SR) approaches are developed to address these two issues with advanced deep learning models in an unsupervised fashion. A specific application, anomaly detection, is also studied to show the importance of SU. Although deep learning has achieved state-of-the-art performance on supervised problems, its practice on unsupervised problems has not been fully developed. To address the problem of SU, an untied denoising autoencoder is proposed to decompose the HSI into endmembers and abundances under non-negativity and abundance sum-to-one constraints. The denoising capacity is incorporated into the network with a sparsity constraint to boost the performance of endmember extraction and abundance estimation. Moreover, the first attempt is made to solve the problem of HSI-SR using an unsupervised encoder-decoder architecture that fuses the LR HSI with the high-resolution multispectral image (MSI). The architecture is composed of two encoder-decoder networks, coupled through a shared decoder, to preserve the rich spectral information from the HSI network. It encourages the representations from both modalities to follow a sparse Dirichlet distribution, which naturally incorporates the two physical constraints of HSI and MSI, and the angular difference between the representations is minimized to reduce spectral distortion. Finally, a novel anomaly detection algorithm is proposed through spectral unmixing and dictionary-based low-rank decomposition, where the dictionary is constructed with mean-shift clustering and the coefficients of the dictionary are encouraged to be low-rank. Experimental evaluations show significant improvement in anomaly detection performance when detection is conducted on the abundances (through SU). The effectiveness of the proposed approaches has been evaluated thoroughly through extensive experiments, achieving state-of-the-art results.
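
    To make the autoencoder formulation of SU concrete, here is a minimal PyTorch sketch in which the encoder outputs abundances and a bias-free linear decoder plays the role of the endmember matrix; a softmax stands in for the non-negativity and sum-to-one constraints. The class name, layer sizes, and use of softmax are illustrative assumptions and omit the untied/denoising and sparsity components described above.

        import torch
        import torch.nn as nn

        class UnmixingAE(nn.Module):
            # Minimal unmixing autoencoder: spectrum -> abundances -> reconstruction.
            def __init__(self, bands, n_endmembers, hidden=128):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(bands, hidden), nn.ReLU(),
                    nn.Linear(hidden, n_endmembers),
                )
                # The decoder weight matrix (bands x P) acts as the endmember matrix.
                self.decoder = nn.Linear(n_endmembers, bands, bias=False)

            def forward(self, x):
                # Softmax yields non-negative abundances that sum to one per pixel.
                abundances = torch.softmax(self.encoder(x), dim=-1)
                reconstruction = self.decoder(abundances)
                return reconstruction, abundances

    Training would minimize a reconstruction loss (e.g., MSE or spectral angle) over pixels; after training, the decoder weights are read off as the estimated endmembers and the encoder outputs as the abundance maps.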

    Interpretable Hyperspectral AI: When Non-Convex Modeling meets Hyperspectral Remote Sensing

    Hyperspectral imaging, also known as image spectrometry, is a landmark technique in geoscience and remote sensing (RS). In the past decade, enormous efforts have been made to process and analyze these hyperspectral (HS) products, mainly by seasoned experts. However, with the ever-growing volume of data, the bulk of costs in manpower and material resources poses new challenges for reducing the burden of manual labor and improving efficiency. It is therefore urgent to develop more intelligent and automatic approaches for various HS RS applications. Machine learning (ML) tools with convex optimization have successfully undertaken the tasks of numerous artificial intelligence (AI)-related applications. However, their ability to handle complex practical problems remains limited, particularly for HS data, due to the effects of various spectral variabilities in the process of HS imaging and the complexity and redundancy of higher-dimensional HS signals. Compared to convex models, non-convex modeling, which is capable of characterizing more complex real scenes and providing model interpretability both technically and theoretically, has been proven to be a feasible solution for reducing the gap between challenging HS vision tasks and currently advanced intelligent data processing models.