Hypergraph Modelling for Geometric Model Fitting
In this paper, we propose a novel hypergraph-based method (called HF) to fit
and segment multi-structural data. The proposed HF formulates the geometric
model fitting problem as a hypergraph partition problem based on a novel
hypergraph model. In the hypergraph model, vertices represent data points and
hyperedges denote model hypotheses. The hypergraph, with large and
"data-determined" degrees of hyperedges, can express the complex relationships
between model hypotheses and data points. In addition, we develop a robust
hypergraph partition algorithm to detect sub-hypergraphs for model fitting. HF
can simultaneously estimate both the number and the parameters of model
instances in multi-structural data heavily corrupted by outliers, and does so
effectively and efficiently. Experimental results on both synthetic data and
real images show the advantages of the proposed method over previous methods.
Comment: Pattern Recognition, 201
Learning from networked examples
Many machine learning algorithms are based on the assumption that training
examples are drawn independently. However, this assumption no longer holds when
learning from a networked sample, because two or more training examples may
share common objects, and hence share the features of those objects. We show
that the classic approach of ignoring this problem can harm the accuracy of the
resulting statistics, and we then consider alternatives. One of these is to use
only independent examples, discarding the remaining information; however, this
is clearly suboptimal. We analyze sample error bounds in this networked
setting, providing significantly improved results. An important component of
our approach is a set of efficient sample weighting schemes, which lead to
novel concentration inequalities.
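One concrete weighting scheme in this spirit, sketched under my own assumptions (each example is down-weighted by the heaviest usage count among its objects; this is an illustration, not necessarily the paper's exact scheme):

```python
from collections import Counter

def example_weights(examples):
    """Weight networked examples so that no shared object dominates.

    `examples` is a list of tuples of object ids. Each example receives
    weight 1 / (max usage count over its objects), so the effective sample
    size reflects the dependence structure of the network.
    """
    usage = Counter(obj for ex in examples for obj in ex)
    return [1.0 / max(usage[obj] for obj in ex) for ex in examples]

# Three examples share object "a", so each gets weight 1/3;
# the fully independent example keeps weight 1.
ws = example_weights([("a", "b"), ("a", "c"), ("a", "d"), ("e", "f")])
```

Plugging such weights into a weighted empirical mean bounds the influence of any single shared object, which is what makes concentration arguments go through despite the dependence.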
Learning and Testing Variable Partitions
Let f be a multivariate function from a product set Σⁿ to an Abelian group G.
A k-partition of f with cost δ is a partition of the set of variables into k
non-empty subsets such that f is δ-close to a sum of functions, each depending
on only one of the subsets, with respect to a given error metric. We study
algorithms for agnostically learning such partitions and testing
k-partitionability over various groups and error metrics, given query access
to f. In particular we show that:
- Given a function that has a k-partition of cost δ, a partition of comparable
cost can be learned from queries, for any accuracy parameter ε > 0. In
contrast, in certain parameter regimes, learning a partition of near-minimal
cost is NP-hard.
- When f is real-valued and the error metric is the 2-norm, a 2-partition of
near-minimal cost can be learned in polynomial time.
- When f takes values in a finite group and the error metric is Hamming
weight, k-partitionability is testable with one-sided error using non-adaptive
queries. We also show a query lower bound that holds even for two-sided
testers.
This work was motivated by reinforcement learning control tasks in which the
set of control variables can be partitioned. The partitioning reduces the task
into multiple lower-dimensional ones that are easier to learn. Our second
algorithm empirically improves on the scores attained by previous heuristic
partitioning methods applied in this context.
Comment: Innovations in Theoretical Computer Science (ITCS) 202
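For intuition, a two-variable function f(x, y) admits an exact 2-partition into f1(x) + f2(y) if and only if f(x, y) + f(x', y') = f(x, y') + f(x', y) for all inputs x, x', y, y'. A randomized spot-check of this "rectangle" identity (a hedged sketch for intuition only, not the paper's algorithm) might look like:

```python
import random

def is_additive(f, xs, ys, trials=200, tol=1e-9, seed=0):
    """Randomized test for whether f(x, y) decomposes as f1(x) + f2(y).

    f decomposes additively iff f(x,y) + f(x',y') == f(x,y') + f(x',y)
    for all x, x', y, y'; we spot-check random rectangles of inputs.
    """
    rng = random.Random(seed)
    for _ in range(trials):
        x, x2 = rng.choice(xs), rng.choice(xs)
        y, y2 = rng.choice(ys), rng.choice(ys)
        if abs(f(x, y) + f(x2, y2) - f(x, y2) - f(x2, y)) > tol:
            return False
    return True

xs = ys = list(range(10))
additive = is_additive(lambda x, y: 2 * x + y * y, xs, ys)  # separable in x and y
coupled = is_additive(lambda x, y: x * y, xs, ys)           # x*y couples the variables
```

The "only if" direction follows by fixing a base point (x0, y0) and setting f1(x) = f(x, y0) − f(x0, y0) and f2(y) = f(x0, y).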
Joint Hypergraph Learning and Sparse Regression for Feature Selection
In this paper, we propose a unified framework for improved structure estimation and feature selection. Most existing graph-based feature selection methods utilise a static representation of the structure of the available data based on the Laplacian matrix of a simple graph. Here, on the other hand, we perform data structure learning and feature selection simultaneously. To improve the estimation of the manifold representing the structure of the selected features, we use a higher-order description of the neighbourhood structures present in the available data using hypergraph learning. This allows those features which participate in the most significant higher-order relations to be selected, and the remainder discarded, through a sparsification process. We formulate a single objective function to capture and regularise the hypergraph weight estimation and feature selection processes. Finally, we present an optimization algorithm to recover the hypergraph weights and a sparse set of feature selection indicators. This process offers a number of advantages. First, by adjusting the hypergraph weights, we preserve high-order neighbourhood relations reflected in the original data, which cannot be modelled by a simple graph. Moreover, our objective function captures the global discriminative structure of the features in the data. Comprehensive experiments on 9 benchmark data sets show that our method achieves statistically significant improvement over state-of-the-art feature selection methods, supporting the effectiveness of the proposed method.
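Hypergraph regularisers of this kind are commonly built on the normalised hypergraph Laplacian of Zhou et al., L = I − Dv^(−1/2) H W De^(−1) Hᵀ Dv^(−1/2), where W holds the hyperedge weights being learned. A minimal sketch of that standard construction (variable names are mine; this is the generic building block, not the authors' full objective):

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalised hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.

    H : (n_vertices, n_edges) incidence matrix, H[v, e] = 1 if vertex v is in edge e.
    w : (n_edges,) hyperedge weights (the quantities a joint objective would adjust).
    """
    dv = H @ w            # weighted vertex degrees
    de = H.sum(axis=0)    # hyperedge degrees (cardinalities)
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ np.diag(1.0 / de) @ H.T @ Dv_inv_sqrt
    return np.eye(H.shape[0]) - Theta

# Four vertices, two unit-weight hyperedges {0, 1, 2} and {2, 3}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)
L = hypergraph_laplacian(H, np.array([1.0, 1.0]))
# L is symmetric positive semi-definite, with Dv^{1/2} 1 in its kernel
```

Minimising xᵀLx then encourages the selected-feature representation to vary smoothly within each hyperedge, which is what lets higher-order neighbourhood relations shape the selection.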
Latent Semantic Learning with Structured Sparse Representation for Human Action Recognition
This paper proposes a novel latent semantic learning method for extracting
high-level features (i.e. latent semantics) from a large vocabulary of abundant
mid-level features (i.e. visual keywords) with structured sparse
representation, which can help to bridge the semantic gap in the challenging
task of human action recognition. To discover the manifold structure of
mid-level features, we develop a spectral embedding approach to latent semantic
learning based on L1-graph, without the need to tune any parameter for graph
construction as a key step of manifold learning. More importantly, we construct
the L1-graph with structured sparse representation, which can be obtained by
structured sparse coding with its structured sparsity ensured by novel L1-norm
hypergraph regularization over mid-level features. In the new embedding space,
we learn latent semantics automatically from abundant mid-level features
through spectral clustering. The learnt latent semantics can be readily used
for human action recognition with SVM by defining a histogram intersection
kernel. Different from the traditional latent semantic analysis based on topic
models, our latent semantic learning method can explore the manifold structure
of mid-level features in both L1-graph construction and spectral embedding,
which results in compact but discriminative high-level features. The
experimental results on the commonly used KTH action dataset and unconstrained
YouTube action dataset show the superior performance of our method.
Comment: The short version of this paper appears in ICCV 201
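The histogram intersection kernel mentioned above has the closed form K(h, h′) = Σᵢ min(hᵢ, h′ᵢ). A small sketch (array names are illustrative):

```python
import numpy as np

def histogram_intersection_kernel(A, B):
    """Histogram intersection kernel matrix K[i, j] = sum_k min(A[i, k], B[j, k]).

    A, B are (n, d) and (m, d) arrays of histograms (e.g. per-video
    histograms over latent semantics); the result can be fed to an SVM
    that accepts a precomputed kernel.
    """
    # Broadcast to (n, m, d), take elementwise minima, sum over bins.
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

h1 = np.array([[0.5, 0.3, 0.2]])
h2 = np.array([[0.4, 0.4, 0.2]])
K = histogram_intersection_kernel(h1, h2)  # min(0.5,0.4) + min(0.3,0.4) + min(0.2,0.2)
```

With scikit-learn, such a matrix can be passed to `SVC(kernel='precomputed')`, which matches the SVM-based classification step the abstract describes.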