Multi-Channel Stochastic Variational Inference for the Joint Analysis of Heterogeneous Biomedical Data in Alzheimer's Disease
The joint analysis of biomedical data in Alzheimer's Disease (AD) is
important for better clinical diagnosis and to understand the relationship
between biomarkers. However, jointly accounting for heterogeneous measures
poses important challenges related to the modeling of the variability and the
interpretability of the results. We address these issues by proposing a
novel multi-channel stochastic generative model. We assume that a latent
variable generates the data observed through the different channels (e.g., clinical
scores, imaging, ...) and describe an efficient way to jointly estimate the
distribution of the latent variable and the data-generative process. Experiments
on synthetic data show that the multi-channel formulation achieves superior data
reconstruction compared with the single-channel one. Moreover, the derived
lower bound of the model evidence represents a promising model selection
criterion. Experiments on AD data show that the model parameters can be used
for unsupervised patient stratification and for the joint interpretation of the
heterogeneous observations. Because of its general and flexible formulation, we
believe that the proposed method can find important applications as a general
data fusion technique.
Comment: accepted for presentation at the MLCN 2018 workshop, held in conjunction with MICCAI 2018, September 20, Granada, Spain.
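The multi-channel formulation described in the abstract can be sketched concretely. The following is a minimal illustration, not the authors' implementation: it assumes a shared Gaussian latent variable, two linear decoder channels with unit-variance Gaussian likelihoods, and a Monte-Carlo evidence lower bound; all dimensions and variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): one shared latent z, two observed channels
# (e.g., clinical scores and an imaging summary).
d_z, d_c1, d_c2 = 2, 5, 8
W1 = rng.normal(size=(d_c1, d_z))  # channel-1 linear decoder, for illustration only
W2 = rng.normal(size=(d_c2, d_z))  # channel-2 linear decoder

def elbo(x1, x2, mu, log_var, n_samples=64):
    """Monte-Carlo evidence lower bound for one subject.

    q(z|x) = N(mu, diag(exp(log_var))), prior p(z) = N(0, I),
    p(x_c|z) = N(W_c z, I) for each channel c (log-likelihoods up to constants).
    """
    std = np.exp(0.5 * log_var)
    z = mu + std * rng.normal(size=(n_samples, d_z))       # reparameterized samples
    rec1 = -0.5 * np.sum((x1 - z @ W1.T) ** 2, axis=1)     # channel-1 reconstruction term
    rec2 = -0.5 * np.sum((x2 - z @ W2.T) ** 2, axis=1)     # channel-2 reconstruction term
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)  # KL(q || N(0, I)), closed form
    return (rec1 + rec2).mean() - kl

z_true = rng.normal(size=d_z)
x1, x2 = W1 @ z_true, W2 @ z_true                          # noiseless toy observations
print(elbo(x1, x2, mu=z_true, log_var=np.full(d_z, -2.0)))
```

The single shared latent variable is what couples the channels: both reconstruction terms enter the same bound, so the derived lower bound can also serve as a model selection criterion, as the abstract notes.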
Learning metrics and discriminative clustering
In this work, methods have been developed to extract relevant information from large, multivariate data sets in a flexible, nonlinear way. The techniques are especially applicable at the initial, exploratory phase of data analysis, in cases where an explicit indicator of relevance is available as part of the data set.
The unsupervised learning methods popular in data exploration often rely on a distance measure defined for data items. Selection of the distance measure, of which feature selection is a part, is therefore fundamentally important.
The learning metrics principle is introduced to complement manual feature selection by enabling automatic modification of a distance measure on the basis of available relevance information. Two applications of the principle are developed. The first emphasizes relevant aspects of the data by directly modifying distances between data items, and is usable, for example, in information visualization with self-organizing maps. The other method, discriminative clustering, finds clusters that are internally homogeneous with respect to the interesting variation of the data. The techniques have been applied to text document analysis, gene expression clustering, and charting the bankruptcy sensitivity of companies.
In the first, more straightforward approach, a new local metric of the data space measures changes in the conditional distribution of the relevance-indicating data by the Fisher information matrix, a local approximation of the Kullback-Leibler divergence. Discriminative clustering, on the other hand, directly minimizes a Kullback-Leibler-based distortion measure within the clusters, or equivalently maximizes the mutual information between the clusters and the relevance indicator. A finite-data algorithm for discriminative clustering is also presented. It maximizes a partially marginalized posterior probability of the model and is asymptotically equivalent to maximizing mutual information.
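The connection between the Fisher information matrix and the Kullback-Leibler divergence invoked above is the standard second-order expansion. Writing c for the relevance indicator and J(x) for the Fisher information of the conditional distribution p(c|x), the local metric reads (up to the conventional factor of two):

```latex
d_L^2(x,\, x + dx)
  \;=\; D_{\mathrm{KL}}\!\left( p(c \mid x) \,\middle\|\, p(c \mid x + dx) \right)
  \;\approx\; \tfrac{1}{2}\, dx^{\top} J(x)\, dx,
\qquad
J(x) \;=\; \mathbb{E}_{p(c \mid x)}\!\left[ \nabla_x \log p(c \mid x)\, \nabla_x \log p(c \mid x)^{\top} \right].
```

Distances measured this way stretch directions of the data space along which the relevance indicator changes rapidly and shrink directions along which it does not, which is what makes the metric "learned" from the auxiliary data.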
Data exploration with learning metrics
A crucial problem in exploratory analysis of data is that it is difficult for computational methods to focus on interesting aspects of the data. Traditional methods of unsupervised learning cannot differentiate between interesting and uninteresting variation, and hence may model, visualize, or cluster parts of the data that are not interesting to the analyst. This wastes the computational power of the methods and may mislead the analyst.
In this thesis, a principle called "learning metrics" is used to develop visualization and clustering methods that automatically focus on the interesting aspects, based on auxiliary labels supplied with the data samples. The principle yields non-Euclidean (Riemannian) metrics that are data-driven, widely applicable, versatile, invariant to many transformations, and in part invariant to noise.
Learning metric methods are introduced for five tasks: nonlinear visualization by Self-Organizing Maps and Multidimensional Scaling, linear projection, and clustering of discrete data and of multinomial distributions. The resulting methods either explicitly estimate distances in the Riemannian metric, or optimize a tailored cost function that is implicitly related to such a metric. The methods have rigorous theoretical relationships to information geometry and probabilistic modeling, and are empirically shown to yield good practical results in exploratory and information retrieval tasks.
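The mutual-information view of discriminative clustering mentioned in both abstracts is easy to make concrete: a clustering focused on the interesting variation is one whose assignments share high mutual information with the auxiliary labels. The sketch below is a hedged illustration, not any of the thesis algorithms; it only evaluates the empirical objective on toy assignments.

```python
import numpy as np

def mutual_information(clusters, labels):
    """Empirical mutual information I(cluster; label) in nats,
    computed from the joint frequency table of the two assignments."""
    clusters, labels = np.asarray(clusters), np.asarray(labels)
    mi = 0.0
    for k in np.unique(clusters):
        for c in np.unique(labels):
            p_kc = np.mean((clusters == k) & (labels == c))  # joint probability
            if p_kc > 0:
                p_k = np.mean(clusters == k)                 # cluster marginal
                p_c = np.mean(labels == c)                   # label marginal
                mi += p_kc * np.log(p_kc / (p_k * p_c))
    return mi

labels   = np.array([0, 0, 0, 1, 1, 1])   # auxiliary relevance indicator
aligned  = np.array([0, 0, 0, 1, 1, 1])   # clusters homogeneous w.r.t. the labels
shuffled = np.array([0, 1, 0, 1, 0, 1])   # clusters that ignore the labels
print(mutual_information(aligned, labels), mutual_information(shuffled, labels))
```

A discriminative clustering algorithm would search over cluster parameters to maximize this quantity (or minimize the equivalent within-cluster Kullback-Leibler distortion); here the label-aligned partition scores higher than the shuffled one, as expected.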