Unsupervised feature learning with discriminative encoder
In recent years, deep discriminative models have achieved extraordinary
performance on supervised learning tasks, significantly outperforming their
generative counterparts. However, their success relies on the presence of a
large amount of labeled data. How can one use the same discriminative models
for learning useful features in the absence of labels? We address this question
in this paper, by jointly modeling the distribution of data and latent features
in a manner that explicitly assigns zero probability to unobserved data. Rather
than maximizing the marginal probability of observed data, we maximize the
joint probability of the data and the latent features using a two step EM-like
procedure. To prevent the model from overfitting to our initial selection of
latent features, we use adversarial regularization. Depending on the task, we
allow the latent features to be one-hot or real-valued vectors and define a
suitable prior on the features. For instance, one-hot features correspond to
class labels and are directly used for the unsupervised and semi-supervised
classification task, whereas real-valued feature vectors are fed as input to
simple classifiers for auxiliary supervised discrimination tasks. The proposed
model, which we dub discriminative encoder (or DisCoder), is flexible in the
type of latent features that it can capture. The proposed model achieves
state-of-the-art performance on several challenging tasks.
Comment: 10 pages, 4 figures, International Conference on Data Mining, 201
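The two-step EM-like alternation described above can be illustrated with a deliberately minimal sketch: an E-like step picks the one-hot latent feature with the highest discriminative score, and an M-like step refits the encoder to the chosen (data, feature) pairs. This is not the paper's DisCoder model (the adversarial regularization and deep encoder are omitted); it is a toy linear stand-in showing the alternation only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: two well-separated blobs standing in for images.
X = np.vstack([rng.normal(-3.0, 1.0, (50, 2)),
               rng.normal(+3.0, 1.0, (50, 2))])

K = 2                            # number of one-hot latent features
W = rng.normal(size=(2, K))      # a linear "encoder" as a stand-in

for _ in range(20):
    # E-like step: assign each observed point the one-hot latent feature
    # with the highest discriminative score under the current encoder
    # (only observed data receive probability mass).
    z = (X @ W).argmax(axis=1)
    # M-like step: refit the encoder to the (data, feature) pairs just
    # chosen; a class-mean update keeps the sketch self-contained.
    for k in range(K):
        if np.any(z == k):
            W[:, k] = X[z == k].mean(axis=0)

print(np.bincount(z, minlength=K))   # points per latent class
```

In the full model, the M-step would train a deep network and the adversarial term would keep it from collapsing onto the initial feature assignment.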
Comprehensive Border Bases for Zero Dimensional Parametric Polynomial Ideals
In this paper, we extend the idea of comprehensive Gröbner bases given by
Weispfenning (1992) to border bases for zero dimensional parametric polynomial
ideals. For this, we introduce a notion of comprehensive border bases and
border system, and prove their existence even in the cases where they do not
correspond to any term order. We further present algorithms to compute
comprehensive border bases and border system. Finally, we study the relation
between comprehensive Gröbner bases and comprehensive border bases w.r.t. a
term order and give an algorithm to compute such comprehensive border bases
from comprehensive Gröbner bases.
Comment: 15 pages, 8 sections and 3 algorithms
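To see why a parametric ideal needs a "comprehensive" object at all, a small non-parametric example helps. The sketch below uses sympy, which computes ordinary Gröbner bases only (it has no comprehensive Gröbner or border basis routine): specializing a parameter can change the shape of the basis, which is exactly what a comprehensive basis or border system is designed to cover uniformly.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# An ordinary Groebner basis of a zero-dimensional ideal (lex order):
# the system x**2 + y**2 = 1, x = y has finitely many solutions.
G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
print(G.exprs)

# The parametric issue: for the family <x**2 + y**2 - 1, x - a*y>,
# compare the specializations a = 1 and a = 0. The two ordinary bases
# have different shapes, so no single ordinary basis works for all a;
# a comprehensive basis would cover every parameter value at once.
for a in (1, 0):
    print(a, groebner([x**2 + y**2 - 1, x - a*y], x, y, order='lex').exprs)
```

Border bases play the analogous role when the quotient-basis viewpoint replaces term orders, which is why the paper's comprehensive border bases can exist even where no term order applies.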
Learning to segment with image-level supervision
Deep convolutional networks have achieved the state-of-the-art for semantic
image segmentation tasks. However, training these networks requires access to
densely labeled images, which are known to be very expensive to obtain. On the
other hand, the web provides an almost unlimited source of images annotated at
the image level. How can one utilize this much larger weakly annotated set for
tasks that require dense labeling? Prior work often relied on localization
cues, such as saliency maps, objectness priors, bounding boxes etc., to address
this challenging problem. In this paper, we propose a model that generates
auxiliary labels for each image, while simultaneously forcing the output of the
CNN to satisfy the mean-field constraints imposed by a conditional random
field. We show that one can enforce the CRF constraints by forcing the
distribution at each pixel to be close to the distribution of its neighbors.
This is in stark contrast with methods that compute a recursive expansion of
the mean-field distribution using a recurrent architecture and train the
resultant distribution. Instead, the proposed model adds an extra loss term to
the output of the CNN, and hence, is faster than recursive implementations. We
achieve the state-of-the-art for weakly supervised semantic image segmentation
on VOC 2012 dataset, assuming no manually labeled pixel level information is
available. Furthermore, the incorporation of conditional random fields in CNN
incurs little extra time during training.
Comment: Published in WACV 201
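The key mechanism above — forcing each pixel's distribution to be close to that of its neighbors, as an extra loss term rather than a recurrent mean-field unrolling — can be sketched as follows. This is an illustrative numpy version, not the paper's exact formulation: it uses a mean KL divergence to the 4-neighbor average as a stand-in for the CRF-derived constraint.

```python
import numpy as np

def neighbor_consistency_loss(p, eps=1e-8):
    """Illustrative extra loss: mean KL divergence between each pixel's
    label distribution and the average distribution of its 4-connected
    neighbors, encouraging mean-field-style spatial smoothness.

    p: (H, W, C) array of per-pixel softmax outputs of a CNN.
    """
    # Average of up/down/left/right neighbors via edge-padded shifts.
    pad = np.pad(p, ((1, 1), (1, 1), (0, 0)), mode='edge')
    q = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
         pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return kl.mean()

H, W, C = 8, 8, 3

# A spatially uniform prediction map is perfectly smooth: loss ~ 0.
uniform = np.full((H, W, C), 1.0 / C)
print(neighbor_consistency_loss(uniform))

# Random predictions disagree with their neighbors: loss > 0.
rng = np.random.default_rng(0)
logits = rng.normal(size=(H, W, C))
soft = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
print(neighbor_consistency_loss(soft))
```

Because this term is differentiable and purely local, adding it to the segmentation loss costs one extra forward pass over shifted arrays, which matches the abstract's claim of being faster than recursive mean-field implementations.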
Consistency of Spectral Hypergraph Partitioning under Planted Partition Model
Hypergraph partitioning lies at the heart of a number of problems in machine
learning and network sciences. Many algorithms for hypergraph partitioning have
been proposed that extend standard approaches for graph partitioning to the
case of hypergraphs. However, theoretical aspects of such methods have seldom
received attention in the literature as compared to the extensive studies on
the guarantees of graph partitioning. For instance, consistency results of
spectral graph partitioning under the stochastic block model are well known. In
this paper, we present a planted partition model for sparse random non-uniform
hypergraphs that generalizes the stochastic block model. We derive an error
bound for a spectral hypergraph partitioning algorithm under this model using
matrix concentration inequalities. To the best of our knowledge, this is the
first consistency result related to partitioning non-uniform hypergraphs.
Comment: 35 pages, 2 figures, 1 table
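The class of algorithms the abstract analyzes — spectral methods extended from graphs to hypergraphs — can be sketched on a toy non-uniform hypergraph. The sketch below uses the clique expansion (each hyperedge becomes a weighted clique) followed by a standard Fiedler-vector bipartition; this is one common stand-in for the paper's algorithm, not its exact method, and the hypergraph is invented for illustration.

```python
import numpy as np

# A small non-uniform hypergraph: 6 vertices, hyperedges of sizes 2 and 3.
# {2, 3} is the single hyperedge bridging the two natural clusters.
n = 6
edges = [{0, 1, 2}, {0, 2}, {3, 4, 5}, {3, 5}, {2, 3}]

# Clique expansion: each hyperedge adds unit weight to every vertex pair.
A = np.zeros((n, n))
for e in edges:
    for u in e:
        for v in e:
            if u != v:
                A[u, v] += 1.0

# Symmetric normalized Laplacian and its second-smallest eigenvector
# (the Fiedler vector); eigh returns eigenvalues in ascending order.
d = A.sum(axis=1)
L = np.eye(n) - A / np.sqrt(np.outer(d, d))
vals, vecs = np.linalg.eigh(L)
fiedler = vecs[:, 1]

# Bipartition by the sign of the Fiedler vector: the cut should fall
# on the single bridging hyperedge.
part = (fiedler > 0).astype(int)
print(part)
```

The paper's contribution is the theory behind such procedures: under its planted partition model for sparse non-uniform hypergraphs, matrix concentration inequalities bound how far the empirical eigenvectors stray from the population ones, yielding the consistency guarantee.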