Application of the Fisher-Rao metric to ellipse detection
The parameter space for the ellipses in a two-dimensional image is a five-dimensional manifold, where each point of the manifold corresponds to an ellipse in the image. The parameter space becomes a Riemannian manifold under a Fisher-Rao metric, which is derived from a Gaussian model for the blurring of ellipses in the image. Two points in the parameter space are close together under the Fisher-Rao metric if the corresponding ellipses are close together in the image. The Fisher-Rao metric is accurately approximated by a simpler metric under the assumption that the blurring is small compared with the sizes of the ellipses under consideration. It is shown that the parameter space for the ellipses in the image has a finite volume under the approximation to the Fisher-Rao metric. As a consequence, the parameter space can be replaced, for the purpose of ellipse detection, by a finite set of points sampled from it. An efficient algorithm for sampling the parameter space is described. The algorithm uses the fact that the approximating metric is flat, and therefore locally Euclidean, on each three-dimensional family of ellipses with a fixed orientation and a fixed eccentricity. Once the sample points have been obtained, ellipses are detected in a given image by checking each sample point in turn to see if the corresponding ellipse is supported by the nearby image pixel values. The resulting algorithm for ellipse detection is implemented, along with a multiresolution version. The experimental results suggest that ellipses can be reliably detected in a given low-resolution image and that the number of false detections can be reduced using the multiresolution algorithm.
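The support test at the heart of the detection loop can be sketched as follows. This is a minimal illustration, not the authors' exact criterion: the function name, the rasterized boundary test, and the hit-fraction score are all assumptions made for the example.

```python
import numpy as np

def ellipse_support(edge_map, cx, cy, a, b, theta, n_pts=64):
    """Fraction of boundary points of the ellipse (cx, cy, a, b, theta)
    that land on an edge pixel -- a crude stand-in for testing whether a
    sampled point of parameter space is supported by nearby pixel values.
    (Illustrative score; the paper's support test is not specified here.)"""
    t = np.linspace(0.0, 2.0 * np.pi, n_pts, endpoint=False)
    c, s = np.cos(theta), np.sin(theta)
    # Parametric boundary of the rotated ellipse.
    x = cx + a * np.cos(t) * c - b * np.sin(t) * s
    y = cy + a * np.cos(t) * s + b * np.sin(t) * c
    xi = np.round(x).astype(int)
    yi = np.round(y).astype(int)
    h, w = edge_map.shape
    inside = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
    hits = edge_map[yi[inside], xi[inside]] > 0
    return hits.sum() / n_pts
```

In the full algorithm this score would be evaluated once per sample point drawn from the finite sampling of the parameter space, keeping those above a detection threshold.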
Multi-View Face Recognition From Single RGBD Models of the Faces
This work takes important steps towards solving the following problem of current interest: assuming that each individual in a population can be modeled by a single frontal RGBD face image, is it possible to carry out face recognition for such a population using multiple 2D images captured from arbitrary viewpoints? Although the general problem as stated above is extremely challenging, it encompasses subproblems that can be addressed today. The subproblems addressed in this work are: (1) generating a large set of viewpoint-dependent face images from a single RGBD frontal image for each individual; (2) using hierarchical approaches based on view-partitioned subspaces to represent the training data; and (3) based on these hierarchical approaches, using a weighted voting algorithm to integrate the evidence collected from multiple images of the same face as recorded from different viewpoints. We evaluate our methods on three datasets: a dataset of 10 people that we created and two publicly available datasets which together include 48 people. In addition to providing important insights into the nature of this problem, our results show that we are able to successfully recognize faces with accuracies of 95% or higher, outperforming existing state-of-the-art face recognition approaches based on deep convolutional neural networks.
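The evidence-integration step (3) can be sketched as a weighted vote. This is a generic illustration: the per-view weights here are plain inputs, whereas the paper derives its weighting from the view-partitioned subspace representations.

```python
import numpy as np

def weighted_vote(predictions, weights, n_classes):
    """Integrate per-view identity predictions by weighted voting.

    predictions: list of predicted class indices, one per captured view.
    weights: per-view confidences (illustrative; how they are computed
    depends on the underlying per-view classifier).
    Returns the identity with the largest accumulated weight.
    """
    tally = np.zeros(n_classes)
    for pred, w in zip(predictions, weights):
        tally[pred] += w
    return int(np.argmax(tally))
```

With this scheme a minority of high-confidence views can outvote a majority of low-confidence ones, which is the point of weighting rather than counting.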
Bags of Affine Subspaces for Robust Object Tracking
We propose an adaptive tracking algorithm where the object is modelled as a
continuously updated bag of affine subspaces, with each subspace constructed
from the object's appearance over several consecutive frames. In contrast to
linear subspaces, affine subspaces explicitly model the origin of subspaces.
Furthermore, instead of using a brittle point-to-subspace distance during the
search for the object in a new frame, we propose to use a subspace-to-subspace
distance by representing candidate image areas also as affine subspaces.
Distances between subspaces are then obtained by exploiting the non-Euclidean
geometry of Grassmann manifolds. Experiments on challenging videos (containing
object occlusions, deformations, as well as variations in pose and
illumination) indicate that the proposed method achieves higher tracking
accuracy than several recent discriminative trackers.
Comment: in International Conference on Digital Image Computing: Techniques and Applications, 201
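Distances between subspaces on a Grassmann manifold are conventionally built from principal angles; a minimal sketch follows. The abstract does not fix which Grassmann distance the tracker uses, so the standard geodesic (arc-length) distance is assumed here.

```python
import numpy as np

def principal_angles(A, B):
    """Principal angles between the column spans of A and B.

    A, B: (n, k) matrices with orthonormal columns (e.g. from QR).
    The singular values of A^T B are the cosines of the angles.
    """
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    s = np.clip(s, -1.0, 1.0)  # guard against round-off outside [-1, 1]
    return np.arccos(s)

def grassmann_distance(A, B):
    """Geodesic distance on the Grassmannian: the 2-norm of the angles."""
    return np.linalg.norm(principal_angles(A, B))
```

A subspace-to-subspace comparison, as used in the tracker, would orthonormalize the bases of the candidate and model subspaces and evaluate this distance between them.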
Multi-GCN: Graph Convolutional Networks for Multi-View Networks, with Applications to Global Poverty
With the rapid expansion of mobile phone networks in developing countries,
large-scale graph machine learning has gained sudden relevance in the study of
global poverty. Recent applications range from humanitarian response and
poverty estimation to urban planning and epidemic containment. Yet the vast
majority of computational tools and algorithms used in these applications do
not account for the multi-view nature of social networks: people are related in
myriad ways, but most graph learning models treat relations as binary. In this
paper, we develop a graph-based convolutional network for learning on
multi-view networks. We show that this method outperforms state-of-the-art
semi-supervised learning algorithms on three different prediction tasks using
mobile phone datasets from three different developing countries. We also show
that, while designed specifically for use in poverty research, the algorithm
also outperforms existing benchmarks on a broader set of learning tasks on
multi-view networks, including node labelling in citation networks.
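One simple way to make a GCN layer consume several relation types is to combine the per-view normalized adjacencies before propagation; a sketch is below. Averaging the views is an illustrative baseline only, not the merging scheme proposed in the paper.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def multi_view_gcn_layer(adjs, X, W):
    """One GCN layer over the average of per-view normalized adjacencies.

    adjs: list of (n, n) adjacency matrices, one per relation type.
    X: (n, f) node features; W: (f, h) learnable weights.
    """
    A_bar = sum(normalize_adj(A) for A in adjs) / len(adjs)
    return np.maximum(A_bar @ X @ W, 0.0)  # ReLU activation
```

Treating each relation as its own adjacency matrix, rather than collapsing them into one binary graph, is exactly the multi-view structure the abstract argues most graph learning models ignore.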
Extrinsic Methods for Coding and Dictionary Learning on Grassmann Manifolds
Sparsity-based representations have recently led to notable results in
various visual recognition tasks. In a separate line of research, Riemannian
manifolds have been shown useful for dealing with features and models that do
not lie in Euclidean spaces. With the aim of building a bridge between the two
realms, we address the problem of sparse coding and dictionary learning over
the space of linear subspaces, which form Riemannian structures known as
Grassmann manifolds. To this end, we propose to embed Grassmann manifolds into
the space of symmetric matrices by an isometric mapping. This in turn enables
us to extend two sparse coding schemes to Grassmann manifolds. Furthermore, we
propose closed-form solutions for learning a Grassmann dictionary, atom by
atom. Lastly, to handle non-linearity in data, we extend the proposed Grassmann
sparse coding and dictionary learning algorithms through embedding into Hilbert
spaces.
Experiments on several classification tasks (gender recognition, gesture
classification, scene analysis, face recognition, action recognition and
dynamic texture classification) show that the proposed approaches achieve
considerable improvements in discrimination accuracy, in comparison to
state-of-the-art methods such as kernelized Affine Hull Method and
graph-embedding Grassmann discriminant analysis.
Comment: Appearing in International Journal of Computer Vision
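The embedding of Grassmann points into the space of symmetric matrices is commonly realized by the projection mapping span(U) ↦ U Uᵀ; a sketch under that assumption is below (the abstract does not spell out the mapping, so this standard choice is assumed).

```python
import numpy as np

def projection_embedding(U):
    """Map a Grassmann point, given by an orthonormal basis U (n, k),
    to a symmetric matrix via the projection mapping U -> U U^T."""
    return U @ U.T

def chordal_distance(U1, U2):
    """Frobenius distance between embedded points (projection metric),
    the flat distance under which sparse coding becomes tractable."""
    return np.linalg.norm(projection_embedding(U1) - projection_embedding(U2), "fro")
```

A key property making this embedding usable for coding is basis invariance: any orthonormal basis of the same subspace, i.e. U R for orthogonal R, yields the identical symmetric matrix, so distances depend only on the subspace, not on the basis chosen.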