Tensor-based Hyperspectral Image Processing Methodology and its Applications in Impervious Surface and Land Cover Mapping
The emergence of hyperspectral imaging provides a new perspective for Earth observation, in addition to the previously available orthophoto and multispectral imagery. This thesis focused on both the new data and new methodology in the field of hyperspectral imaging. First, the application of the future hyperspectral satellite EnMAP to impervious surface area (ISA) mapping was studied. In the search for an appropriate ISA mapping procedure for the new data, subpixel classification based on nonnegative matrix factorization (NMF) performed best. The simulated EnMAP image shows great potential for urban ISA mapping, with over 85% accuracy.
Unfortunately, NMF, which is based on linear algebra, considers only the spectral information and neglects the spatial information in the original image. The recent wide interest in applying multilinear algebra in computer vision sheds light on this problem and motivated the idea of nonnegative tensor factorization (NTF). This thesis found that NTF has advantages over NMF when working with medium- rather than high-spatial-resolution hyperspectral images. Furthermore, this thesis proposed to equip the NTF-based subpixel classification methods with variations adopted from NMF, which improved the urban ISA mapping results from NTF by about 2%.
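The NMF-based subpixel classification mentioned above can be sketched as a plain matrix factorization. The following is a minimal, hypothetical illustration (not the thesis code): the hyperspectral cube is flattened to a bands-by-pixels matrix X and factorized as X ≈ WH, where W holds endmember spectra and H holds per-pixel abundances; the ISA fraction of a pixel would come from the abundances of its impervious endmembers. All shapes and data here are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
bands, pixels, endmembers = 50, 200, 3

# Synthetic data: nonnegative spectra mixed with nonnegative abundances.
W_true = rng.random((bands, endmembers))
H_true = rng.random((endmembers, pixels))
X = W_true @ H_true

model = NMF(n_components=endmembers, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # estimated endmember spectra (bands x endmembers)
H = model.components_        # estimated abundances (endmembers x pixels)

# Normalize abundances so each pixel's fractions sum to one.
fractions = H / H.sum(axis=0, keepdims=True)
print(fractions.shape)       # (3, 200)
```

Because NMF operates on this flattened matrix, any spatial arrangement of the pixels is invisible to it, which is exactly the limitation that motivates the tensor-based NTF alternative.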
Lastly, the problem known as the curse of dimensionality is an obstacle in hyperspectral image applications. The majority of current dimension reduction (DR) methods are restricted to the spectral information and neglect the spatial information. To overcome this limitation, two spectral-spatial methods, patch-based and tensor-patch-based, were thoroughly studied and compared in this thesis. To date, these two solutions have been popular mainly in computer vision studies, and their applications in hyperspectral DR remain limited. The patch-based and tensor-patch-based variations greatly improved the quality of dimension-reduced hyperspectral images, which in turn improved the resulting land cover mapping. In addition, this thesis proposed an improved method for producing an important intermediate result in the patch-based and tensor-patch-based DR process, which further improved the land cover mapping results.
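The patch-based idea can be illustrated with a toy sketch (hypothetical, not the thesis method): each pixel is represented by the vectorized spectra of its local spatial patch before a standard DR step such as PCA, so spatial context enters the reduced features. The image, patch size, and target dimension below are arbitrary stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
H, W, B, patch = 20, 20, 30, 3       # height, width, bands, patch size
img = rng.random((H, W, B))
pad = patch // 2
padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")

# One vector per pixel: all bands over its 3x3 spatial neighborhood.
vecs = np.stack([
    padded[i:i + patch, j:j + patch, :].ravel()
    for i in range(H) for j in range(W)
])                                    # (400, 270)

reduced = PCA(n_components=10, random_state=0).fit_transform(vecs)
print(reduced.shape)                  # (400, 10)
```

The tensor-patch-based variant would instead keep each patch as a small third-order tensor rather than flattening it, preserving the spatial arrangement that this vectorization discards.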
Endmember-Guided Unmixing Network (EGU-Net): A General Deep Learning Framework for Self-Supervised Hyperspectral Unmixing
Over the past decades, enormous efforts have been made to improve the performance of linear or nonlinear mixing models for hyperspectral unmixing (HU), yet their ability to simultaneously generalize across various spectral variabilities (SVs) and extract physically meaningful endmembers remains limited, due to poor ability in data fitting and reconstruction and sensitivity to various SVs. Inspired by the powerful learning ability of deep learning (DL), we attempt to develop a general DL approach for HU that fully considers the properties of endmembers extracted from the hyperspectral imagery, called the endmember-guided unmixing network (EGU-Net). Beyond the standalone autoencoder-like architecture, EGU-Net is a two-stream Siamese deep network that learns an additional network from pure or nearly pure endmembers to correct the weights of the unmixing network, by sharing network parameters and adding spectrally meaningful constraints (e.g., nonnegativity and sum-to-one), toward a more accurate and interpretable unmixing solution. Furthermore, the resulting general framework is not limited to pixelwise spectral unmixing but is also applicable to spatial information modeling with convolutional operators for spatial–spectral unmixing. Experimental results on three different datasets, with ground-truth abundance maps for each material, demonstrate the effectiveness and superiority of EGU-Net over state-of-the-art unmixing algorithms. The code will be available at: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net
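The nonnegativity and sum-to-one constraints mentioned in the abstract can be satisfied by construction with a softmax over per-pixel endmember logits, a common device in unmixing networks. The sketch below is a hypothetical illustration of that constraint layer alone, not code from EGU-Net; the function name and shapes are invented for clarity.

```python
import numpy as np

def abundance_softmax(logits):
    """Map unconstrained per-pixel logits to valid abundance vectors:
    every entry is nonnegative and each row sums to one."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, -1.0, 0.5],   # one row of logits per pixel
                   [0.0,  0.0, 0.0]])
a = abundance_softmax(logits)
print(np.all(a >= 0), np.allclose(a.sum(axis=1), 1.0))  # True True
```

Enforcing the constraints architecturally, rather than through penalty terms alone, is one way such networks keep abundance estimates physically interpretable.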
Synergizing human-machine intelligence: Visualizing, labeling, and mining the electronic health record
We live in a world where data surround us in every aspect of our lives. The key challenge for humans and machines is how to make better use of such data. Imagine what would happen if you had intelligent machines that could give you insight into the data: insight that would enable you to better 1) reason about, 2) learn from, and 3) understand the underlying phenomena that produced the data. The possibilities of combined human-machine intelligence are endless and will impact our lives in ways we cannot even imagine today.
Synergistic human-machine intelligence aims to facilitate the analytical reasoning and inference process of humans by creating machines that maximize a human's ability to 1) reason about, 2) learn, and 3) understand large, complex, and heterogeneous data. Combined human-machine intelligence is a powerful symbiosis of mutual benefit, in which we depend on the computational capabilities of the machine for the tasks we are not good at, and the machine requires human intervention for the tasks it performs poorly on.
This relationship provides a compelling alternative to either approach in isolation for solving today's and tomorrow's arising data challenges. In this regard, this dissertation proposes a diverse analytical framework that leverages synergistic human-machine intelligence to maximize a human's ability to better 1) reason about, 2) learn from, and 3) understand the different biomedical imaging and healthcare data present in the patient's electronic health record (EHR). Correspondingly, we approach the data analysis problem from the 1) visualization, 2) labeling, and 3) mining perspectives and demonstrate the efficacy of our analytics on specific application scenarios and various data domains.
In the first part of this dissertation we explore the question of how we can build intelligent imaging analytics that are commensurate with human capabilities and constraints, specifically for optimizing data visualization and automated labeling workflows. Our journey starts with heuristic rule-based analytical models that are derived from task-specific human knowledge. From this experience, we move on to data-driven analytics, where we adapt and combine the intelligence of the model based on prior information provided by the human and synthetic knowledge learned from partial data observations. Within this realm, we propose a novel Bayesian transductive Markov random field model that requires minimal human intervention and is able to cope with scarce label information to learn and infer object shapes in complex spatial, multimodal, spatio-temporal, and longitudinal data. We then study the question of how machines can learn discriminative object representations from dense, human-provided label information by investigating learning and inference mechanisms that make use of deep learning architectures. The developed analytics can aid visualization and labeling tasks, enabling the interpretation and quantification of clinically relevant image information.
The second part explores the question of how we can build data-driven analytics for exploratory analysis of longitudinal event data that are commensurate with human capabilities and constraints. We propose human-intuitive analytics that enable the representation and discovery of interpretable event patterns to ease knowledge absorption and comprehension of the employed analytics model and the underlying data. We propose a novel doubly-constrained convolutional sparse-coding framework that learns interpretable and shift-invariant latent temporal event patterns. We apply the model to mine complex event data in EHRs. By mapping the event space to heterogeneous patient encounters in the EHR, we explore the linkage between healthcare resource utilization (HRU) and disease severity. This linkage may help to better understand how disease-specific co-morbidities and their clinical attributes incur different HRU patterns. Such insight helps to characterize the patient's care history, which then enables the comparison against clinical practice guidelines, the discovery of prevailing practices based on common HRU group patterns, and the identification of outliers that might indicate poor patient management.
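The shift-invariance property of convolutional sparse coding can be made concrete with a toy forward model (hypothetical, not the dissertation's framework): a one-dimensional event sequence is represented as the sum of a few short temporal patterns convolved with sparse activation signals, so the same pattern can appear at any time offset. Patterns and activation times below are invented toy values, not learned from EHR data.

```python
import numpy as np

T, L = 40, 5                          # sequence length, pattern length
patterns = [np.array([1.0, 2.0, 1.0, 0.5, 0.25]),
            np.array([0.5, 0.5, 2.0, 0.5, 0.5])]

activations = [np.zeros(T - L + 1), np.zeros(T - L + 1)]
activations[0][[3, 20]] = 1.0         # pattern 0 occurs at t=3 and t=20
activations[1][10] = 2.0              # pattern 1 occurs once, scaled by 2

# Reconstruction as a sum of convolutions: shift-invariance is explicit,
# since moving a spike in the activation shifts the whole pattern in time.
signal = sum(np.convolve(a, d) for a, d in zip(activations, patterns))
print(signal.shape)                   # (40,)
```

Sparse coding would run this model in reverse: given `signal`, recover sparse `activations` and the shared `patterns`, which is what makes the learned patterns interpretable as recurring event motifs.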
Graph-based Data Modeling and Analysis for Data Fusion in Remote Sensing
Hyperspectral imaging provides increased sensitivity and discrimination over traditional imaging methods by combining standard digital imaging with spectroscopic methods. For each individual pixel in a hyperspectral image (HSI), a continuous spectrum is sampled as the spectral reflectance/radiance signature to facilitate identification of ground cover and surface material. This abundant spectral knowledge allows all available information in the data to be mined. These superior qualities give hyperspectral imaging wide applications such as mineral exploration, agriculture monitoring, and ecological surveillance. The processing of massive high-dimensional HSI datasets is a challenge, since many data processing techniques have a computational complexity that grows exponentially with the dimension. Besides, an HSI dataset may contain a limited number of degrees of freedom due to the high correlations between data points and among the spectra. On the other hand, merely taking advantage of the sampled spectrum of an individual HSI data point may produce inaccurate results due to the mixed nature of raw HSI data, such as mixed pixels and optical interference.
Fusion strategies are widely adopted in data processing to achieve better performance, especially in classification and clustering. There are three main types of fusion strategies: low-level data fusion, intermediate-level feature fusion, and high-level decision fusion. Low-level data fusion combines multi-source data that are expected to be complementary or cooperative. Intermediate-level feature fusion aims at the selection and combination of features to remove redundant information. Decision-level fusion exploits a set of classifiers to provide more accurate results. These fusion strategies have wide applications, including HSI data processing. With the fast development of multiple remote sensing modalities, e.g. Very High Resolution (VHR) optical sensors and LiDAR, fusion of multi-source data can in principle produce more detailed information than any single source. On the other hand, besides the abundant spectral information contained in HSI data, features such as texture and shape may be employed to represent data points from a spatial perspective. Furthermore, feature fusion also includes the strategy of removing redundant and noisy features from the dataset.
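Of the three fusion levels described above, decision-level fusion is the easiest to sketch: several classifiers each produce a label, and a vote combines them. The example below is a hypothetical, minimal illustration using scikit-learn on synthetic data; it is not tied to any remote sensing modality, and a real multi-source setup would train each classifier on a different feature source.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=12, random_state=0)

# Majority vote over heterogeneous classifiers = decision-level fusion.
fused = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier(random_state=0))],
    voting="hard")
fused.fit(X, y)
preds = fused.predict(X[:5])
print(preds.shape)                    # (5,)
```

Low-level fusion would instead concatenate or co-register the raw sources before a single model, and intermediate-level fusion would merge extracted features; the vote shown here only combines final decisions.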
One of the major problems in machine learning and pattern recognition is developing appropriate representations for complex nonlinear data. In HSI processing, a particular data point is usually described as a vector whose coordinates correspond to the intensities measured in the spectral bands. This vector representation permits the application of linear and nonlinear transformations from linear algebra to find an alternative representation of the data. More generally, HSI is multi-dimensional in nature, and the vector representation may lose contextual correlations. Tensor representation provides a more sophisticated modeling technique and a higher-order generalization of linear subspace analysis.
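The basic bridge between the tensor view and the familiar matrix tools is mode-n unfolding (matricization), where the HSI cube is flattened along one chosen mode at a time. The sketch below is an illustrative toy (column ordering conventions vary across the literature), with an invented small cube standing in for an HSI.

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: bring `mode` to the front, then reshape to 2-D."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

cube = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # toy (height, width, bands) cube

print(unfold(cube, 0).shape)   # (2, 12): spatial rows vs. everything else
print(unfold(cube, 2).shape)   # (4, 6):  spectral bands vs. all pixels
```

Multilinear methods such as NTF or Tucker decomposition operate on these unfoldings mode by mode, which is how they retain the spatial arrangement that a single pixels-by-bands matrix discards.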
In graph theory, data points can be generalized as nodes, with connectivities measured from the proximity of a local neighborhood. The graph-based framework efficiently characterizes the relationships among the data and allows convenient mathematical manipulation in many applications, such as data clustering, feature extraction, feature selection, and data alignment. In this thesis, graph-based approaches applied to multi-source feature and data fusion in remote sensing are explored. We will mainly investigate the fusion of spatial, spectral, and LiDAR information with linear and multilinear algebra under the graph-based framework for data clustering and classification problems.
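The graph construction described above can be sketched in a few lines: data points become nodes, edges connect each point to its k nearest neighbors, and the graph Laplacian then supports clustering or feature extraction. This is a generic, hypothetical illustration with random points standing in for spectra, not the thesis's specific fusion construction.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(2)
X = rng.random((50, 8))               # 50 points, 8-D features (e.g. spectra)

# Symmetric kNN adjacency and unnormalized Laplacian L = D - A.
A = kneighbors_graph(X, n_neighbors=5, mode="connectivity").toarray()
A = np.maximum(A, A.T)                # symmetrize the directed kNN edges
L = np.diag(A.sum(axis=1)) - A

# The Laplacian is positive semidefinite; its small eigenvalues reveal
# cluster structure (the basis of spectral clustering).
eigvals = np.linalg.eigvalsh(L)
print(eigvals[0] > -1e-8)             # True
```

Multi-source fusion in this framework typically means building one graph per source (spatial, spectral, LiDAR) and combining the graphs or their Laplacians before clustering or classification.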