Discriminative Clustering by Regularized Information Maximization
Is there a principled way to learn a probabilistic discriminative classifier from an unlabeled data set? We present a framework that simultaneously clusters the data
and trains a discriminative classifier. We call it Regularized Information Maximization (RIM). RIM optimizes an intuitive information-theoretic objective function
which balances class separation, class balance, and classifier complexity. The approach can flexibly incorporate different likelihood functions, express prior assumptions about the relative sizes of the classes, and incorporate partial labels for semi-supervised learning. In particular, we instantiate the framework as unsupervised, multi-class, kernelized logistic regression. Our empirical evaluation indicates that RIM outperforms existing methods on several real data sets and demonstrates that RIM is an effective model selection method.
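The objective described above can be written down concretely. The sketch below is an illustrative reading of the abstract, not the paper's exact formulation: it estimates the mutual information between inputs and predicted labels as the entropy of the average prediction (class balance) minus the average prediction entropy (class separation), and subtracts a weight penalty (classifier complexity). The parameter name `lam` is hypothetical.

```python
import numpy as np

def rim_objective(P, w, lam=1.0):
    """A sketch of the RIM objective (to be maximized).

    P   : (n, k) array of predicted class probabilities p(y|x_i).
    w   : flat array of classifier weights.
    lam : regularization strength (hypothetical parameter name).

    I_hat = H(p_bar) - mean_i H(p(y|x_i)) - lam * ||w||^2
    H(p_bar), the entropy of the average prediction, rewards class
    balance; low conditional entropy rewards confident, well-separated
    predictions; ||w||^2 penalizes classifier complexity.
    """
    eps = 1e-12  # guard against log(0)
    p_bar = P.mean(axis=0)                                   # marginal label distribution
    h_marginal = -np.sum(p_bar * np.log(p_bar + eps))        # class-balance term
    h_conditional = -np.mean(np.sum(P * np.log(P + eps), axis=1))  # separation term
    return h_marginal - h_conditional - lam * np.dot(w, w)
```

Under this reading, confident and balanced predictions score strictly higher than uniform ones, which is what drives the unsupervised clustering.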
Cumulative object categorization in clutter
In this paper we present an approach based on scene- or part-graphs for geometrically categorizing touching and occluded objects. We use additive RGBD feature descriptors and hashing of graph configuration parameters to describe the spatial arrangement of constituent parts. The presented experiments show that this method outperforms our earlier part-voting and sliding-window classification. We evaluated our approach on cluttered scenes, using a 3D dataset containing over 15,000 Kinect scans of over 100 objects grouped into general geometric categories. Additionally, color, geometric, and combined features were compared for the categorization tasks.
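One way to picture "hashing of graph configuration parameters" is sketched below. This is an illustrative guess at the idea, under assumptions not stated in the abstract: part labels and centroids are given, relative positions are coarsely quantized so that near-identical arrangements of the same parts collide into one hash key, and that key can index a table of known geometric categories. The function name and the `grid` step are hypothetical.

```python
import numpy as np

def hash_part_graph(parts, positions, grid=0.05):
    """Hash a spatial configuration of object parts (illustrative sketch;
    the paper's actual descriptor and hashing scheme may differ).

    parts     : list of part-category labels (e.g. strings).
    positions : (n, 3) array of part centroids.
    grid      : quantization step, an assumed value.
    """
    positions = np.asarray(positions, dtype=float)
    centered = positions - positions.mean(axis=0)      # translation invariance
    quantized = np.round(centered / grid).astype(int)  # coarse spatial binning
    # Sorting makes the key independent of part ordering.
    key = tuple(sorted(zip(parts, map(tuple, quantized))))
    return hash(key)
```

Quantizing relative rather than absolute coordinates is what lets the same object be recognized anywhere in a cluttered scene.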
MiniMax Entropy Network: Learning Category-Invariant Features for Domain Adaptation
How to effectively learn from unlabeled data from the target domain is
crucial for domain adaptation, as it helps reduce the large performance gap due
to domain shift or distribution change. In this paper, we propose an
easy-to-implement method dubbed MiniMax Entropy Networks (MMEN) based on
adversarial learning. Unlike most existing approaches which employ a generator
to deal with domain difference, MMEN focuses on learning the categorical
information from unlabeled target samples with the help of labeled source
samples. Specifically, we introduce an unfair multi-class classifier, named the
categorical discriminator, which classifies source samples accurately but is
confused about the categories of target samples. The generator learns a common
subspace that aligns the unlabeled target samples based on their pseudo-labels.
For MMEN, we also provide theoretical explanations to show that the learning of
feature alignment reduces domain mismatch at the category level. Experimental
results on various benchmark datasets demonstrate the effectiveness of our
method over existing state-of-the-art baselines.
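The minimax game described above can be sketched with two entropy-based losses. This is a simplified reading of the abstract, not the paper's exact objective: the "unfair" categorical discriminator minimizes cross-entropy on labeled source samples while maximizing prediction entropy on target samples (staying confused about them), and the feature generator counters by making target predictions confident, aligning target features with the source category structure. All function names here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def discriminator_loss(logits_src, labels_src, logits_tgt):
    """Sketch of the categorical discriminator's loss: accurate on
    source (cross-entropy), confused on target (high entropy)."""
    eps = 1e-12
    p_src = softmax(logits_src)
    n = len(labels_src)
    ce_src = -np.mean(np.log(p_src[np.arange(n), labels_src] + eps))
    p_tgt = softmax(logits_tgt)
    ent_tgt = -np.mean(np.sum(p_tgt * np.log(p_tgt + eps), axis=1))
    return ce_src - ent_tgt  # minimize source CE, maximize target entropy

def generator_loss(logits_tgt):
    """Sketch of the generator's side of the minimax game: minimize
    target prediction entropy, pushing target samples toward confident
    (pseudo-labeled) categories."""
    eps = 1e-12
    p_tgt = softmax(logits_tgt)
    return -np.mean(np.sum(p_tgt * np.log(p_tgt + eps), axis=1))
```

The opposing signs on the target-entropy term are the minimax: the discriminator pushes it up, the generator pulls it down, and at equilibrium target features align with source categories.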
Input and Weight Space Smoothing for Semi-supervised Learning
We propose regularizing the empirical loss for semi-supervised learning by
acting on both the input (data) space, and the weight (parameter) space. We
show that the two are not equivalent and are in fact complementary: one
affects the minimality of the resulting representation, the other its
insensitivity to nuisance variability. We propose a method to perform such
smoothing, which combines known input-space smoothing with a novel weight-space
smoothing, based on a min-max (adversarial) optimization. The resulting
Adversarial Block Coordinate Descent (ABCD) algorithm performs gradient ascent
with a small learning rate for a random subset of the weights, and standard
gradient descent on the remaining weights in the same mini-batch. It achieves
comparable performance to the state-of-the-art without resorting to heavy data
augmentation, using a relatively simple architecture.
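A single ABCD update, as described in the abstract, can be sketched as follows. This is a minimal illustration, not the authors' implementation: a random subset of weights takes a gradient ascent step with a small learning rate while the rest take a standard descent step on the same mini-batch gradient. The subset fraction and learning rates below are assumed values.

```python
import numpy as np

def abcd_step(w, grad, ascent_frac=0.1, lr=0.1, lr_ascent=0.01, rng=None):
    """One Adversarial Block Coordinate Descent update (a sketch).

    w           : flat array of weights.
    grad        : mini-batch gradient of the loss at w.
    ascent_frac : fraction of weights selected for ascent (assumed value).
    lr          : descent learning rate; lr_ascent is the smaller
                  ascent rate (both hypothetical defaults).
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(w.shape) < ascent_frac    # random block chosen for ascent
    step = np.where(mask, +lr_ascent * grad,    # adversarial ascent on the block
                          -lr * grad)           # standard descent on the rest
    return w + step
```

With `ascent_frac=0` this reduces to plain gradient descent; the small ascent block is what injects the adversarial weight-space smoothing.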