Simultaneous Spectral-Spatial Feature Selection and Extraction for Hyperspectral Images
In hyperspectral remote sensing data mining, it is important to take both spectral and spatial information into account, such as the spectral signature, texture features, and morphological properties, to improve performance, e.g., image classification accuracy. From a feature representation point of view, a natural approach to handling this situation is to concatenate the spectral and spatial features into a single, high-dimensional vector and then apply a dimension reduction technique directly to that concatenated vector before feeding it into the subsequent classifier. However, multiple features from different domains have different physical meanings and statistical properties, so such concatenation cannot efficiently exploit the complementary properties among the different features, which should help boost feature discriminability. Furthermore, it is also difficult to interpret the transformed results of the concatenated vector. Consequently, finding a physically meaningful, consensus low-dimensional representation of the original multiple features remains a challenging task. To address these issues, we propose a novel feature learning framework, the simultaneous spectral-spatial feature selection and extraction algorithm, for spectral-spatial feature representation and classification of hyperspectral images. Specifically, the proposed method learns a latent low-dimensional subspace by projecting the spectral-spatial features into a common feature space, where the complementary information is effectively exploited and, simultaneously, only the most significant original features are transformed. Encouraging experimental results on three publicly available hyperspectral remote sensing datasets confirm that our proposed method is effective and efficient.
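The concatenation baseline the abstract critiques can be made concrete with a short sketch. All array sizes and the PCA step are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels = 200
spectral = rng.normal(size=(n_pixels, 100))   # stand-in for 100 spectral bands
spatial = rng.normal(size=(n_pixels, 40))     # stand-in for 40 texture/morphology features

# Naive baseline: stack all features into one high-dimensional vector,
# then reduce it with a single PCA projection (via SVD of centered data).
stacked = np.hstack([spectral, spatial])            # (200, 140)
centered = stacked - stacked.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:20].T                      # keep 20 components
print(reduced.shape)                                # (200, 20)
```

Because the projection treats all 140 dimensions identically, it ignores the distinct statistics of the spectral and spatial domains, which is exactly the limitation the proposed joint selection-and-extraction framework targets.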
Constraint based approaches to interpretable and semi-supervised machine learning
Interpretability and explainability of machine learning algorithms are becoming increasingly important as machine learning (ML) systems are widely applied to domains like clinical healthcare, social media, and governance. A related major challenge in deploying ML systems is reliable learning when expert annotation is severely limited. This dissertation prescribes a common framework to address these challenges, based on the use of constraints that can make an ML model more interpretable, lead to novel methods for explaining ML models, or help to learn reliably with limited supervision.
In particular, we focus on the class of latent variable models and develop a general learning framework by constraining realizations of latent variables and/or model parameters. We propose specific constraints that can be used to develop identifiable latent variable models, which in turn learn interpretable outcomes. The proposed framework is first used in Non-negative Matrix Factorization and Probabilistic Graphical Models. For both models, algorithms are proposed to incorporate such constraints with seamless and tractable augmentation of the associated learning and inference procedures. The utility of the proposed methods is demonstrated for our working application domain: identifiable phenotyping using Electronic Health Records (EHRs). Evaluation by domain experts reveals that the proposed models are indeed more clinically relevant (and hence more interpretable) than existing counterparts. The work also demonstrates that while there may be inherent trade-offs in constraining models to encourage interpretability, the quantitative performance of downstream tasks remains competitive.
We then focus on constraint-based mechanisms to explain decisions or outcomes of supervised black-box models. We propose an explanation model based on generating examples where the nature of the examples is constrained, i.e., they must be sampled from the underlying data domain. To do so, we train a generative model to characterize the data manifold in a high-dimensional ambient space. Constrained sampling then allows us to generate naturalistic examples that lie along the data manifold. We propose ways to summarize model behavior using such constrained examples.
In the last part of the contributions, we argue that heterogeneity of data sources is useful in situations where very little to no supervision is available. This thesis leverages such heterogeneity (via constraints) for two critical but widely different machine learning algorithms. In each case, a novel algorithm in the sub-class of co-regularization is developed to combine information from heterogeneous sources. Co-regularization is a framework of constraining latent variables and/or latent distributions in order to leverage heterogeneity. The proposed algorithms are utilized for clustering, where the intent is to generate a partition or grouping of observed samples, and for Learning to Rank algorithms, used to rank a set of observed samples in order of preference with respect to a specific search query. The proposed methods are evaluated on clustering web documents, social network users, and information retrieval applications for ranking search queries.
Multiview pattern recognition methods for data visualization, embedding and clustering
Multiview data is defined as data for whose samples there exist several different data views, i.e. different data matrices obtained through different experiments, methods or situations. Multiview dimensionality reduction methods transform a high-dimensional, multiview dataset into a single, low-dimensional space or projection. Their goal is to provide a more manageable representation of the original data, either for data visualization or to simplify the following analysis stages. Multiview clustering methods receive a multiview dataset and propose a single clustering assignment of the data samples in the dataset, considering the information from all the input data views.
The main hypothesis defended in this work is that using multiview data along with methods able to exploit their information richness produces better dimensionality reduction and clustering results than simply using single views or concatenating all views into a single matrix.
Consequently, the objectives of this thesis are to develop and test multiview pattern recognition methods based on well-known single-view dimensionality reduction and clustering methods. Three multiview pattern recognition methods are presented: multiview t-distributed stochastic neighbourhood embedding (MV-tSNE), multiview multidimensional scaling (MV-MDS) and a novel formulation of multiview spectral clustering (MVSC-CEV). These methods can be applied both to dimensionality reduction tasks and to clustering tasks.
The MV-tSNE method computes a matrix of probabilities based on distances between samples for each input view. Then it merges the different probability matrices using results from expert opinion pooling theory to obtain a common matrix of probabilities, which is then used as the reference for building a low-dimensional projection of the data with similar probabilities.
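The per-view probability matrices and their pooling can be sketched as follows. The Gaussian affinity, the bandwidth, and the log-linear (geometric) pooling rule are illustrative assumptions; the thesis's exact pooling operator from expert opinion theory may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50

def affinity_matrix(x, sigma=1.0):
    # Gaussian affinities over pairwise distances, normalised into one
    # joint probability matrix (as in tSNE's symmetric formulation).
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    p = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(p, 0.0)
    return p / p.sum()

views = [rng.normal(size=(n, 5)) for _ in range(3)]
ps = [affinity_matrix(v) for v in views]

# Log-linear (geometric) pooling, one common rule from opinion-pooling
# theory: average the log-probabilities, then renormalise.
pooled = np.exp(np.mean([np.log(p + 1e-12) for p in ps], axis=0))
pooled /= pooled.sum()
print(round(pooled.sum(), 6))  # 1.0
```

The pooled matrix then plays the role of the single high-dimensional probability matrix in a standard tSNE optimisation.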
The MV-MDS method computes the common eigenvectors of all the normalized distance matrices in order to obtain a single low-dimensional space that embeds the essential information from all the input spaces, avoiding the inclusion of redundant information.
The MVSC-CEV method computes the symmetric Laplacian matrices of the similarity matrices of all data views. Then it generates a single, low-dimensional representation of the input data by computing the common eigenvectors of the Laplacian matrices, obtaining a projection of the data that embeds the most relevant information of the input data views, also avoiding the addition of redundant information.
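A minimal sketch of the MVSC-CEV pipeline is given below. Eigendecomposing the mean of the per-view Laplacians is a simplifying stand-in for the thesis's common-eigenvector computation, and the Gaussian similarity and all sizes are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 60, 3

def sym_laplacian(x, sigma=1.0):
    # Symmetric normalised Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(w, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(w.sum(axis=1))
    return np.eye(n) - d_inv_sqrt[:, None] * w * d_inv_sqrt[None, :]

views = [rng.normal(size=(n, 4)) for _ in range(2)]
laplacians = [sym_laplacian(v) for v in views]

# Approximate the "common eigenvectors" of the view Laplacians by
# eigendecomposing their average (eigenvalues ascending with eigh).
vals, vecs = np.linalg.eigh(np.mean(laplacians, axis=0))
embedding = vecs[:, :k]          # joint low-dimensional representation
print(embedding.shape)           # (60, 3)
```

As in single-view spectral clustering, the rows of `embedding` would then be clustered with k-means to obtain the final multiview partition.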
A thorough set of experiments has been designed and run in order to compare the proposed methods with their single-view counterparts. Also, the proposed methods have been compared with all the available results of equivalent methods in the state of the art. Finally, a comparison between the three proposed methods is presented in order to provide guidelines on which method to use for a given task.
MVSC-CEV consistently produces better clustering results than other multiview methods in the state of the art. MV-MDS produces overall better results than the reference methods in dimensionality reduction experiments. MV-tSNE does not excel on any of these tasks. As a consequence, for multiview clustering tasks it is recommended to use MVSC-CEV, and MV-MDS for multiview dimensionality reduction tasks.
Although several multiview dimensionality reduction or clustering methods have been proposed in the state of the art, no software implementation is available. In order to compensate for this fact and to provide the community with a potentially useful set of multiview pattern recognition methods, an R software package containing the proposed methods has been developed and released to the public.
New Approaches in Multi-View Clustering
Many real-world datasets can be naturally described by multiple views. Due to this, multi-view learning has drawn much attention from both academia and industry. Compared to single-view learning, multi-view learning has demonstrated plenty of advantages. Clustering has long served as a critical technique in data mining and machine learning. Recently, multi-view clustering has achieved great success in various applications. To provide a comprehensive review of the typical multi-view clustering methods and their corresponding recent developments, this chapter summarizes five kinds of popular clustering methods and their multi-view learning versions, which include k-means, spectral clustering, matrix factorization, tensor decomposition, and deep learning. These clustering methods are the most widely employed algorithms for single-view data, and many efforts have been devoted to extending them for multi-view clustering. Besides, many other multi-view clustering methods can be unified into the frameworks of these five methods. To promote further research and development of multi-view clustering, some popular and open datasets are summarized in two categories. Furthermore, several open issues that deserve more exploration are pointed out at the end.
Unsupervised spectral sub-feature learning for hyperspectral image classification
Spectral pixel classification is one of the principal techniques used in hyperspectral image (HSI) analysis. In this article, we propose an unsupervised feature learning method for classification of hyperspectral images. The proposed method learns a dictionary of sub-feature basis representations from the spectral domain, which allows effective use of the correlated spectral data. The learned dictionary is then used in encoding convolutional samples from the hyperspectral input pixels to an expanded but sparse feature space. Expanded hyperspectral feature representations enable linear separation between object classes present in an image. To evaluate the proposed method, we performed experiments on several commonly used HSI data sets acquired at different locations and by different sensors. Our experimental results show that the proposed method outperforms other pixel-wise classification methods that make use of unsupervised feature extraction approaches. Additionally, even though our approach does not use any prior knowledge or labelled training data to learn features, it yields either advantageous or comparable results in terms of classification accuracy with respect to recent semi-supervised methods.
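The dictionary-then-sparse-encoding pattern described above can be sketched generically. The k-means dictionary, the "triangle" style encoding, and all sizes are illustrative assumptions, not the paper's actual learning and encoding rules:

```python
import numpy as np

rng = np.random.default_rng(3)
pixels = rng.normal(size=(500, 100))          # 500 pixels, 100 spectral bands

# Learn a small dictionary of spectral atoms with a few k-means steps,
# a simple stand-in for unsupervised dictionary learning.
n_atoms = 16
atoms = pixels[rng.choice(len(pixels), n_atoms, replace=False)].copy()
for _ in range(10):
    assign = ((pixels[:, None, :] - atoms[None, :, :]) ** 2).sum(-1).argmin(1)
    for j in range(n_atoms):
        if (assign == j).any():
            atoms[j] = pixels[assign == j].mean(0)

# Encode each pixel into an expanded, sparse feature space: keep only
# correlations above the per-pixel mean, zeroing the rest.
corr = pixels @ atoms.T
codes = np.maximum(0.0, corr - corr.mean(axis=1, keepdims=True))
print(codes.shape)  # (500, 16)
```

The resulting non-negative, mostly zero codes are the kind of expanded sparse representation on which a simple linear classifier can separate classes.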
Deep Clustering: A Comprehensive Survey
Cluster analysis plays an indispensable role in machine learning and data mining. Learning a good data representation is crucial for clustering algorithms. Recently, deep clustering, which can learn clustering-friendly representations using deep neural networks, has been broadly applied in a wide range of clustering tasks. Existing surveys of deep clustering mainly focus on single-view settings and network architectures, ignoring the complex application scenarios of clustering. To address this issue, in this paper we provide a comprehensive survey of deep clustering from the perspective of data sources. With different data sources and initial conditions, we systematically distinguish the clustering methods in terms of methodology, prior knowledge, and architecture. Concretely, deep clustering methods are introduced according to four categories, i.e., traditional single-view deep clustering, semi-supervised deep clustering, deep multi-view clustering, and deep transfer clustering. Finally, we discuss the open challenges and potential future opportunities in different fields of deep clustering.
Ensemble Joint Sparse Low Rank Matrix Decomposition for Thermography Diagnosis System
Composite materials are widely used in the aircraft industry, and it is essential for manufacturers to monitor their health and quality. The most commonly found defects in composites are debonds and delamination. Inner defects with complex, irregular shapes are difficult to diagnose using conventional thermal imaging methods. In this paper, an ensemble joint sparse low rank matrix decomposition (EJSLRMD) algorithm is proposed for the optical pulse thermography (OPT) diagnosis system. The proposed algorithm jointly models the low rank and sparse patterns using a concatenated feature space. In particular, weak defect information can be separated from strong noise, and the resolution contrast of the defects is significantly improved. Ensemble iterative sparse modelling is conducted to further enhance the weak information as well as to reduce the computational cost. In order to show the robustness and efficacy of the model, experiments are conducted to detect inner debonds on multiple carbon fiber reinforced polymer (CFRP) composites. A comparative analysis is presented with general OPT algorithms. In addition, the proposed model has been evaluated on synthetic data and compared with other low rank and sparse matrix decomposition algorithms.
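A generic sparse-plus-low-rank split of a thermogram sequence can be sketched with robust-PCA style alternating thresholding. The fixed thresholds, toy data, and the plain alternation are assumptions; EJSLRMD adds joint modelling and ensemble iterations on top of this basic idea:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy thermogram sequence: low-rank background plus a sparse "defect" block.
n_frames, n_pix = 40, 200
background = rng.normal(size=(n_frames, 2)) @ rng.normal(size=(2, n_pix))
defects = np.zeros((n_frames, n_pix))
defects[10:20, 50:55] = 5.0
data = background + defects

# Alternate between singular value thresholding (low-rank term L)
# and elementwise soft thresholding (sparse term S).
L = np.zeros_like(data)
S = np.zeros_like(data)
for _ in range(30):
    u, s, vt = np.linalg.svd(data - S, full_matrices=False)
    L = (u * np.maximum(s - 1.0, 0.0)) @ vt                          # shrink singular values
    S = np.sign(data - L) * np.maximum(np.abs(data - L) - 0.5, 0.0)  # shrink entries
print(S.shape)
```

The weak defect signal accumulates in the sparse term `S`, which is the mechanism the paper exploits to separate defects from the strong low-rank background.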