LASS: a simple assignment model with Laplacian smoothing
We consider the problem of learning soft assignments of items to
categories given two sources of information: an item-category similarity
matrix, which encourages items to be assigned to categories they are similar to
(and to not be assigned to categories they are dissimilar to), and an item-item
similarity matrix, which encourages similar items to have similar assignments.
We propose a simple quadratic programming model that captures this intuition.
We give necessary conditions for its solution to be unique, define an
out-of-sample mapping, and derive a simple, effective training algorithm based
on the alternating direction method of multipliers. The model predicts
reasonable assignments from even a few similarity values, and can be seen as a
generalization of semisupervised learning. It is particularly useful when items
naturally belong to multiple categories, as for example when annotating
documents with keywords or pictures with tags, with partially tagged items, or
when the categories have complex interrelations (e.g. hierarchical) that are
unknown.
Comment: 20 pages, 4 figures. A shorter version appears in AAAI 201
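As a rough illustration of the kind of model the abstract describes, the sketch below minimizes a quadratic objective lam * tr(Z'LZ) - tr(B'Z) over row-stochastic soft assignments Z, where L is the graph Laplacian of the item-item similarity and B the item-category similarity. This is a minimal sketch under assumed notation: the paper trains with ADMM, while a simple projected-gradient loop with simplex projection stands in here, and all names (`lass`, `project_simplex`, the toy `B`, `W`) are illustrative.

```python
import numpy as np

def project_simplex(v):
    # Euclidean projection of a vector onto the probability simplex
    # (sort-based projection in the style of Duchi et al.).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def lass(B, W, lam=1.0, lr=0.1, iters=500):
    """Soft assignments Z minimizing lam*tr(Z'LZ) - tr(B'Z) subject to each
    row of Z lying on the simplex, via projected gradient descent."""
    L = np.diag(W.sum(axis=1)) - W          # Laplacian of item-item similarity
    Z = np.full(B.shape, 1.0 / B.shape[1])  # start from uniform assignments
    for _ in range(iters):
        grad = 2.0 * lam * (L @ Z) - B
        Z = np.array([project_simplex(z) for z in Z - lr * grad])
    return Z

# Toy: 3 items, 2 categories; item 1 has no category information of its
# own but is similar to item 0; item 2 is isolated.
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])
W = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=float)
Z = lass(B, W, lam=0.5)
```

In the toy problem, item 1 has no category information of its own but is similar to item 0, so its assignment is pulled toward item 0's category, which is exactly the behavior the abstract describes (similar items get similar assignments).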
A Generally Semisupervised Dimensionality Reduction Method with Local and Global Regression Regularizations for Recognition
The insufficiency of labeled data is an important problem in image classification tasks such as face recognition, whereas unlabeled data are abundant in real-world applications. Therefore, semisupervised learning methods, which incorporate a few labeled data and a large number of unlabeled data into learning, have received more and more attention in the field of face recognition. In recent years, graph-based semisupervised learning has become a popular topic within semisupervised learning. In this chapter, we present a new graph-based semisupervised learning method for face recognition. The presented method is based on local and global regression regularizations. The local regression regularization adopts a set of local classification functions to preserve both local discriminative and geometrical information, to reduce the bias of outliers, and to handle imbalanced data, while the global regression regularization preserves the global discriminative information and calculates the projection matrix for out-of-sample extrapolation. Extensive simulations on synthetic and real-world datasets verify the effectiveness of the proposed method.
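For background on the graph-based semisupervised setting the chapter builds on, here is a minimal sketch of a standard closed-form graph method, the label-spreading solution F = (1 - alpha)(I - alpha*S)^{-1} Y of Zhou et al., not the chapter's local/global regression method; all names and the toy graph are illustrative.

```python
import numpy as np

def label_spreading(W, Y, alpha=0.9):
    """Closed-form graph-based semisupervised labeling:
    F = (1 - alpha) * (I - alpha * S)^{-1} @ Y,  S = D^{-1/2} W D^{-1/2}.
    W: symmetric affinity matrix; Y: one-hot labels (zero rows = unlabeled)."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    F = (1.0 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)
    return F.argmax(axis=1)

# Toy graph: nodes {0,1} and {2,3} form two connected pairs;
# only node 0 (class 0) and node 2 (class 1) are labeled.
W = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)
labels = label_spreading(W, Y)
```

The unlabeled nodes inherit the label of the labeled node they are connected to, which is the core mechanism all graph-based semisupervised methods share.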
Learning to Transform Time Series with a Few Examples
We describe a semi-supervised regression algorithm that learns to transform one time series into another given examples of the transformation. We apply this algorithm to tracking, where a time series of observations from sensors is transformed into a time series describing the pose of a target. Instead of defining and implementing such transformations for each tracking task separately, our algorithm learns a memoryless transformation of time series from a few example input-output mappings. The algorithm searches for a smooth function that fits the training examples and, when applied to the input time series, produces a time series that evolves according to assumed dynamics. The learning procedure is fast and lends itself to a closed-form solution. It is closely related to nonlinear system identification and manifold learning techniques. We demonstrate our algorithm on the tasks of tracking RFID tags from signal strength measurements and recovering the pose of rigid objects, deformable bodies, and articulated bodies from video sequences. For these tasks, this algorithm requires significantly fewer examples than fully supervised regression algorithms or semi-supervised learning algorithms that do not take the dynamics of the output time series into account.
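In the spirit of the abstract, the sketch below fits a linear transformation of the input series by closed-form regularized least squares, trading off agreement with the few labeled examples against a dynamics penalty on the whole output series. Here a second-difference (constant-velocity) penalty is an assumed stand-in for the paper's dynamics model, and all names (`fit_transform`, `mu`, the toy data) are illustrative.

```python
import numpy as np

def fit_transform(X, labeled_idx, y_labeled, mu=10.0, eps=1e-6):
    """Fit a linear map w so that x_t . w matches the labeled outputs while
    the full output series has near-zero second differences (a crude
    constant-velocity dynamics prior). Closed-form regularized least squares."""
    T, d = X.shape
    # Second-difference operator acting on the output time series.
    D2 = np.zeros((T - 2, T))
    for t in range(T - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]
    XL = X[labeled_idx]
    A = XL.T @ XL + mu * (D2 @ X).T @ (D2 @ X) + eps * np.eye(d)
    b = XL.T @ y_labeled
    w = np.linalg.solve(A, b)
    return X @ w

# Toy: the signal lives in coordinate 0, coordinate 1 is an alternating
# nuisance; only the first and last time steps are labeled.
t = np.arange(10.0)
X = np.stack([t, (-1.0) ** np.arange(10)], axis=1)
y_true = 0.5 * t
y_hat = fit_transform(X, [0, 9], y_true[[0, 9]])
```

Only two labeled examples are given, yet the dynamics penalty suppresses the oscillating nuisance coordinate (its second differences are large) and recovers the smooth output on all unlabeled time steps, which mirrors the abstract's point about needing fewer examples than plain supervised regression.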
Density-sensitive semisupervised inference
Semisupervised methods are techniques for using labeled data
together with unlabeled data
to make predictions. These methods invoke some assumptions that link the
marginal distribution of X to the regression function f(x). For example,
it is common to assume that f is very smooth over high-density regions of
this marginal distribution. Many of the methods are ad hoc and have been shown
to work in specific examples but lack a theoretical foundation. We provide a
minimax framework for analyzing semisupervised methods. In particular, we study
methods based on metrics that are sensitive to this distribution. Our model
includes a parameter that controls the strength of the semisupervised
assumption. We then use the data to adapt to this parameter.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1092 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
Asymptotic Analysis of Generative Semi-Supervised Learning
Semisupervised learning has emerged as a popular framework for improving
modeling accuracy while controlling labeling cost. Based on an extension of
stochastic composite likelihood, we quantify the asymptotic accuracy of
generative semi-supervised learning. In doing so, we complement
distribution-free analysis by providing an alternative framework to measure the
value associated with different labeling policies and resolve the fundamental
question of how much data to label and in what manner. We demonstrate our
approach with both simulation studies and real world experiments using naive
Bayes for text classification and MRFs and CRFs for structured prediction in
NLP.
Comment: 12 pages, 9 figures
Developments in the theory of randomized shortest paths with a comparison of graph node distances
There have lately been several suggestions for parametrized distances on a
graph that generalize the shortest path distance and the commute time or
resistance distance. The need for developing such distances has arisen from the
observation that the above-mentioned common distances in many situations fail
to take into account the global structure of the graph. In this article, we
develop the theory of one family of graph node distances, known as the
randomized shortest path dissimilarity, which has its foundation in statistical
physics. We show that the randomized shortest path dissimilarity can be easily
computed in closed form for all pairs of nodes of a graph. Moreover, we propose
a new distance measure that we call the free energy distance. The free energy
distance can be seen as an upgrade of the randomized shortest path
dissimilarity: it is a true metric and, in addition, satisfies the
graph-geodetic property. The derivation and computation of the
free energy distance are also straightforward. We then make a comparison
between a set of generalized distances that interpolate between the shortest
path distance and the commute time, or resistance distance. This comparison
focuses on the applicability of the distances in graph node clustering and
classification. The comparison, in general, shows that the parametrized
distances perform well in the tasks. In particular, we see that the results
obtained with the free energy distance are among the best in all the
experiments.
Comment: 30 pages, 4 figures, 3 tables
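The closed-form computation described in the abstract can be sketched roughly as follows, under the standard randomized-shortest-path parametrization with inverse temperature theta: W = P_ref * exp(-theta*C) elementwise, Z = (I - W)^{-1}, phi(i,j) = -(1/theta) log(z_ij / z_jj), and the free energy distance is the symmetrization (phi + phi^T)/2. The unit edge costs, the theta value, and the function name are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def free_energy_distance(A, theta=5.0):
    """Free energy distance on a connected graph with symmetric adjacency A,
    unit edge costs, and inverse temperature theta (assumed parametrization)."""
    n = A.shape[0]
    C = np.where(A > 0, 1.0, 0.0)                 # unit costs on edges
    P_ref = A / A.sum(axis=1, keepdims=True)      # natural random-walk reference
    W = P_ref * np.exp(-theta * C)                # killed-walk transition weights
    Z = np.linalg.inv(np.eye(n) - W)              # fundamental matrix, closed form
    Phi = -(1.0 / theta) * np.log(Z / np.diag(Z)) # directed free energies
    return (Phi + Phi.T) / 2.0                    # symmetrized distance

# Path graph 0 - 1 - 2 with unit edge costs.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
D = free_energy_distance(A, theta=10.0)
```

On this path graph, a large theta drives the distances toward the shortest path costs, and the graph-geodetic property D(0,2) = D(0,1) + D(1,2) holds because every path from node 0 to node 2 passes through node 1.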