18,644 research outputs found
Hybrid Models with Deep and Invertible Features
We propose a neural hybrid model consisting of a linear model defined on a
set of features computed by a deep, invertible transformation (i.e. a
normalizing flow). An attractive property of our model is that both
p(features), the density of the features, and p(targets | features), the
predictive distribution, can be computed exactly in a single feed-forward pass.
We show that our hybrid model, despite the invertibility constraints, achieves
similar accuracy to purely predictive models. Moreover, the generative component
remains a good model of the input features despite the hybrid optimization
objective. This offers additional capabilities such as out-of-distribution
detection and semi-supervised learning. The
availability of the exact joint density p(targets, features) also allows us to
compute many quantities readily, making our hybrid model a useful building
block for downstream applications of probabilistic deep learning.
Comment: ICML 2019
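The factorization behind the "single feed-forward pass" claim can be illustrated with a toy sketch. Below, a single invertible affine layer stands in for the deep normalizing flow f, and a softmax head plays the role of the linear predictive model; the architecture and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d, k = 2, 3                                  # input dim, number of classes
W = rng.normal(size=(d, d)) + 3 * np.eye(d)  # near-diagonal, almost surely invertible
b = rng.normal(size=d)
V, c = rng.normal(size=(k, d)), rng.normal(size=k)  # linear predictive head

def forward(x):
    z = W @ x + b                            # features z = f(x), invertible in x
    # change of variables: log p(x) = log p_Z(f(x)) + log |det df/dx|
    log_px = multivariate_normal.logpdf(z, mean=np.zeros(d)) \
             + np.linalg.slogdet(W)[1]
    logits = V @ z + c
    log_py_given_x = logits - np.logaddexp.reduce(logits)   # log softmax
    return log_px, log_py_given_x            # both densities from one pass

x = rng.normal(size=d)
log_px, log_py = forward(x)
print(log_px + log_py)  # exact joint: log p(y | f(x)) + log p(x)
```

Swapping the affine layer for a deep flow (e.g. coupling layers) changes only how z and the log-determinant are computed; the single-pass joint-density property is preserved.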
Semi-supervised cross-entropy clustering with information bottleneck constraint
In this paper, we propose a semi-supervised clustering method, CEC-IB, that
models data with a set of Gaussian distributions and that retrieves clusters
based on a partial labeling provided by the user (partition-level side
information). By combining the ideas from cross-entropy clustering (CEC) with
those from the information bottleneck method (IB), our method trades between
three conflicting goals: the accuracy with which the data set is modeled, the
simplicity of the model, and the consistency of the clustering with side
information. Experiments demonstrate that CEC-IB performs comparably to
Gaussian mixture models (GMMs) in a classical semi-supervised scenario, but
is faster, more robust to noisy labels, automatically determines the optimal
number of clusters, and performs well when not all classes are present in the
side information. Moreover, in contrast to other semi-supervised models, it can
be successfully applied to discover natural subgroups if the partition-level
side information is derived from the top levels of a hierarchical clustering.
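Schematically, the three-way trade-off described above can be written as a single cost to minimize; the weights $\beta$ and $\lambda$ below are illustrative placeholders, not the paper's exact formulation:

$$
\min_{\text{model}} \; \underbrace{H^{\times}\!\left(X \,\middle\|\, \text{model}\right)}_{\text{modeling accuracy}} \;+\; \beta\,\underbrace{\mathrm{cost}(\text{model})}_{\text{model simplicity}} \;+\; \lambda\,\underbrace{D\!\left(\text{clustering},\,\text{side information}\right)}_{\text{consistency}}
$$

Here $H^{\times}$ is the cross-entropy between the data and the set of Gaussians (the CEC term), the simplicity term lets superfluous clusters be removed, and $D$ penalizes disagreement with the partial labeling (the IB-inspired term).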
Optimal Transport for Domain Adaptation
Domain adaptation from one data space (or domain) to another is one of the
most challenging tasks of modern data analytics. If the adaptation is done
correctly, models built on a specific data space become more robust when
confronted with data depicting the same semantic concepts (the classes) but
observed by another observation system with its own specificities. Among the
many strategies proposed to adapt one domain to another, finding a common
representation has shown excellent properties: once a common representation is
found for both domains, a single classifier can operate effectively in both,
using labeled samples from the source domain to predict labels for the
unlabeled samples of the target domain. In this paper, we propose a regularized
unsupervised optimal transportation model to perform the alignment of the
representations in the source and target domains. We learn a transportation
plan matching both probability density functions (PDFs), which constrains
labeled samples of the same class in the source domain to remain close during
transport. This way, we simultaneously exploit the limited labeled information
in the source domain and the unlabeled distributions observed in both domains.
Experiments on toy and challenging real visual adaptation
examples demonstrate the effectiveness of the method, which consistently
outperforms state-of-the-art approaches.
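As a concrete illustration, the POT (Python Optimal Transport) library ships an ot.da module implementing class-regularized transport of this kind. The sketch below assumes a recent POT version; class names, parameters, and defaults may differ across releases, and the data are synthetic placeholders.

```python
import numpy as np
import ot  # POT: https://pythonot.github.io

rng = np.random.default_rng(0)
Xs = rng.normal(size=(100, 2))             # labeled source samples
ys = rng.integers(0, 2, size=100)          # source labels
Xt = rng.normal(loc=1.0, size=(120, 2))    # unlabeled target samples

# Entropic OT with an Lp-L1 class regularizer: reg_e smooths the transport
# plan, while reg_cl discourages mass from one source class from spreading
# over many target points, keeping same-class samples close during transport.
mapper = ot.da.SinkhornLpl1Transport(reg_e=1.0, reg_cl=0.1)
mapper.fit(Xs=Xs, ys=ys, Xt=Xt)

Xs_mapped = mapper.transform(Xs=Xs)        # barycentric mapping into target
# A classifier trained on (Xs_mapped, ys) can then label the target samples.
```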