Graph Embedding with Data Uncertainty
Spectral-based subspace learning is a common data preprocessing step in many
machine learning pipelines. The main aim is to learn a meaningful
low-dimensional embedding of the data. However, most subspace learning methods
do not take into consideration possible measurement inaccuracies or artifacts
that can lead to data with high uncertainty. Thus, learning directly from raw
data can be misleading and can negatively impact the accuracy. In this paper,
we propose to model artifacts in training data using probability
distributions; each data point is represented by a Gaussian distribution
centered at the original data point and having a variance modeling its
uncertainty. We reformulate the Graph Embedding framework to make it suitable
for learning from distributions, and we study the Linear Discriminant Analysis
and Marginal Fisher Analysis techniques as special cases. Furthermore, we
propose two schemes for modeling data uncertainty based on pairwise distances,
one unsupervised and one supervised.
Comment: 20 pages, 4 figures
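One way to make pairwise-distance computations aware of this Gaussian uncertainty model is to replace the deterministic squared distance with its expectation under the two distributions. For independent isotropic Gaussians this expectation has a standard closed form: the squared distance between the means plus the traces of the two covariances. The sketch below illustrates that idea with a heat-kernel affinity; the function names and the specific affinity are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def expected_sq_dist(mu_i, mu_j, var_i, var_j):
    """Expected squared Euclidean distance between two independent points
    X_i ~ N(mu_i, var_i * I) and X_j ~ N(mu_j, var_j * I):
    ||mu_i - mu_j||^2 + d * (var_i + var_j), where d is the dimension."""
    d = mu_i.shape[0]
    return float(np.sum((mu_i - mu_j) ** 2) + d * (var_i + var_j))

def uncertainty_affinity(mu_i, mu_j, var_i, var_j, t=1.0):
    """Heat-kernel graph weight using the expected distance in place of the
    raw distance, so higher-uncertainty points get weaker affinities
    (an illustrative choice, not necessarily the paper's scheme)."""
    return float(np.exp(-expected_sq_dist(mu_i, mu_j, var_i, var_j) / t))
```

With zero variances this reduces to the usual heat-kernel weight on the raw points, which is consistent with the framework recovering standard Graph Embedding when no uncertainty is modeled.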
Learning with Multiple Similarities
The notion of similarities between data points is central to many classification and clustering algorithms. We often encounter situations when there are more than one set of pairwise similarity graphs between objects, either arising from different measures of similarity between objects or from a single similarity measure defined on multiple data representations, or a combination of these. Such examples can be found in various applications in computer vision, natural language processing and computational biology.
Combining information from these multiple sources is often beneficial in learning meaningful concepts from data.
This dissertation proposes novel methods to effectively fuse information from these multiple similarity graphs, targeted towards two fundamental tasks in machine learning: classification and clustering. In particular, I propose two models for learning spectral embedding from multiple similarity graphs using ideas from co-training and co-regularization. Further, I propose a novel approach to the problem of multiple kernel learning (MKL), converting it to a more familiar problem of binary classification in a transformed space. The proposed MKL approach learns a "good" linear combination of base kernels by optimizing a quality criterion that is justified both empirically and theoretically. The ideas of the proposed MKL method are also extended to learning nonlinear combinations of kernels, in particular polynomial kernel combinations and more general nonlinear kernel combinations using random forests.
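As a point of reference for what fusing similarity graphs for clustering can look like, the sketch below computes a spectral embedding from a convex combination of affinity matrices. This is a simple weighted-fusion baseline under assumed inputs (symmetric non-negative affinities, fixed weights), not the co-trained or co-regularized objectives the dissertation proposes.

```python
import numpy as np

def fused_spectral_embedding(graphs, weights, k):
    """Spectral embedding from a convex combination of similarity graphs.

    graphs:  list of symmetric (n, n) non-negative affinity matrices
    weights: non-negative weights, one per graph (assumed to sum to 1)
    k:       embedding dimension

    Returns the (n, k) matrix of top-k eigenvectors of the symmetrically
    normalized fused affinity D^{-1/2} W D^{-1/2}.
    """
    W = sum(w * G for w, G in zip(weights, graphs))
    deg = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))  # guard zero degrees
    W_norm = (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(W_norm)  # eigenvalues in ascending order
    return vecs[:, -k:]                  # top-k eigenvectors as the embedding
```

Running k-means on the rows of the returned embedding gives a multi-graph variant of spectral clustering; the co-regularization approach instead encourages the per-graph embeddings to agree rather than averaging the graphs up front.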