Weakly supervised segment annotation via expectation kernel density estimation
Since the labelling of positive images/videos is ambiguous in weakly
supervised segment annotation, methods based on negative mining, which use
only intra-class information, have emerged. In these methods, negative
instances are used to penalize unknown instances so as to rank their
likelihood of being an object, which can be viewed as similarity-based
voting. However, these methods 1) ignore the information contained in the
positive bags, and 2) only rank the likelihood without producing an explicit
decision function. In this
paper, we propose a voting scheme involving not only the definite negative
instances but also the ambiguous positive instances to make use of the extra
useful information in the weakly labelled positive bags. In the scheme, each
instance votes for its label with a magnitude arising from the similarity, and
the ambiguous positive instances are assigned soft labels that are iteratively
updated during the voting. This overcomes the limitations of voting with the
negative bags alone. We also propose an expectation kernel density estimation
(eKDE) algorithm to gain further insight into the voting mechanism.
Experimental results demonstrate the superiority of our scheme over the
baselines.

Comment: 9 pages, 2 figures
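The voting idea the abstract describes can be sketched in a few lines: definite negatives vote with a fixed label, while instances from positive bags carry soft labels that are iteratively re-estimated from the similarity-weighted votes they receive. This is an illustrative reconstruction under assumed choices (Gaussian kernel, tanh squashing), not the authors' algorithm or eKDE itself; all names (`kernel_voting`, `bandwidth`) are hypothetical.

```python
import numpy as np

def kernel_voting(negatives, positives, bandwidth=1.0, n_iter=10):
    """Similarity-weighted voting with soft labels on ambiguous positives.

    Illustrative sketch: negatives vote with label -1; instances from
    positive bags carry soft labels in [-1, 1], updated each round from
    the votes they receive.
    """
    def kernel(a, b):
        # Gaussian similarity between instance sets a (m, d) and b (n, d)
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))

    soft = np.zeros(len(positives))   # soft labels, start neutral
    K_pp = kernel(positives, positives)
    K_pn = kernel(positives, negatives)
    for _ in range(n_iter):
        # each instance votes with magnitude = similarity, sign = its label
        votes = K_pp @ soft - K_pn.sum(axis=1)
        soft = np.tanh(votes / (len(positives) + len(negatives)))
    return soft  # higher score -> more likely to be an object instance
```

With this toy rule, instances in a positive bag that sit close to the negative instances are pushed toward negative soft labels, while distant ones stay near neutral, reproducing the ranking behaviour the abstract attributes to voting.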
Event-Based Modeling with High-Dimensional Imaging Biomarkers for Estimating Spatial Progression of Dementia
Event-based models (EBM) are a class of disease progression models that can
be used to estimate temporal ordering of neuropathological changes from
cross-sectional data. Current EBMs only handle scalar biomarkers, such as
regional volumes, as inputs. However, regional aggregates are a crude summary
of the underlying high-resolution images, potentially limiting the accuracy of
EBM. Therefore, we propose a novel method that exploits high-dimensional
voxel-wise imaging biomarkers: n-dimensional discriminative EBM (nDEBM). nDEBM
is based on an insight that mixture modeling, which is a key element of
conventional EBMs, can be replaced by a more scalable semi-supervised support
vector machine (SVM) approach. This SVM is used to estimate the degree of
abnormality of each region which is then used to obtain subject-specific
disease progression patterns. These patterns are in turn used for estimating
the mean ordering by fitting a generalized Mallows model. In order to validate
the biomarker ordering obtained using nDEBM, we also present a framework for
Simulation of Imaging Biomarkers' Temporal Evolution (SImBioTE) that mimics
neurodegeneration in brain regions. SImBioTE trains variational auto-encoders
(VAE) in different brain regions independently to simulate images at varying
stages of disease progression. We also validate nDEBM clinically using data
from the Alzheimer's Disease Neuroimaging Initiative (ADNI). In both
experiments, nDEBM using high-dimensional features gave better performance than
state-of-the-art EBM methods using regional volume biomarkers. This suggests
that nDEBM is a promising approach for disease progression modeling.

Comment: IPMI 201
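The key substitution the abstract describes, replacing per-biomarker mixture modeling with a scalable SVM whose output serves as a degree of abnormality, can be illustrated as follows. This is a hypothetical sketch, not the paper's implementation: the linear SVM, the sigmoid normalization, and the function name `abnormality_degree` are all assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def abnormality_degree(x_controls, x_patients, x_all):
    """Illustrative stand-in for the mixture-modeling step: train an SVM
    on high-dimensional (e.g. voxel-wise) features of confident controls
    vs. patients, then read off a per-subject degree of abnormality."""
    X = np.vstack([x_controls, x_patients])
    y = np.r_[np.zeros(len(x_controls)), np.ones(len(x_patients))]
    svm = LinearSVC(C=1.0).fit(X, y)
    # signed distance to the separating hyperplane, squashed into [0, 1]
    score = svm.decision_function(x_all)
    return 1.0 / (1.0 + np.exp(-score))
```

Applied per region, such scores could then be thresholded or ranked to derive subject-specific event orderings, which is the role the mixture-model posteriors play in conventional EBMs.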
Minimum Density Hyperplanes
Associating distinct groups of objects (clusters) with contiguous regions of
high probability density (high-density clusters) is central to many
statistical and machine learning approaches to the classification of unlabelled
data. We propose a novel hyperplane classifier for clustering and
semi-supervised classification which is motivated by this objective. The
proposed minimum density hyperplane minimises the integral of the empirical
probability density function along it, thereby avoiding intersection with high
density clusters. We show that the minimum density and the maximum margin
hyperplanes are asymptotically equivalent, thus linking this approach to
maximum margin clustering and semi-supervised support vector classifiers. We
propose a projection pursuit formulation of the associated optimisation problem
which allows us to find minimum density hyperplanes efficiently in practice,
and evaluate its performance on a range of benchmark datasets. The proposed
approach is found to be highly competitive with state-of-the-art methods for
clustering and semi-supervised classification.
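For a fixed projection direction, the minimum density objective can be illustrated with a one-dimensional kernel density estimate of the projected sample, minimised over candidate split points. The grid search and the quantile constraint below are illustrative simplifications of the paper's projection pursuit formulation, and all parameter names are assumptions.

```python
import numpy as np

def min_density_split(X, v, bandwidth=0.5, grid=200):
    """Sketch: for direction v, the integral of a Gaussian kernel density
    along the hyperplane {x : v.x = b} reduces to a 1-D KDE of the
    projections evaluated at b; pick the b with minimum density."""
    v = v / np.linalg.norm(v)
    proj = X @ v                                   # 1-D projections
    # constrain the split to the interior so it cannot drift to the tails
    bs = np.linspace(np.quantile(proj, 0.1), np.quantile(proj, 0.9), grid)
    # Gaussian KDE of the projected sample at each candidate split b
    dens = np.exp(-(bs[:, None] - proj[None, :]) ** 2
                  / (2 * bandwidth ** 2)).mean(axis=1)
    i = np.argmin(dens)
    return bs[i], dens[i]          # low-density split point along v
```

On two well-separated clusters, the minimiser lands in the low-density gap between them, which is exactly the behaviour that keeps the hyperplane from intersecting high-density clusters.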
Shortest path distance in random k-nearest neighbor graphs
Consider a weighted or unweighted k-nearest neighbor graph that has been
built on n data points drawn randomly according to some density p on R^d. We
study the convergence of the shortest path distance in such graphs as the
sample size tends to infinity. We prove that for unweighted kNN graphs, this
distance converges to an unpleasant distance function on the underlying space
whose properties are detrimental to machine learning. We also study the
behavior of the shortest path distance in weighted kNN graphs.

Comment: Appears in Proceedings of the 29th International Conference on
Machine Learning (ICML 2012)