12 research outputs found
Constellations and the unsupervised learning of graphs
In this paper, we propose a novel method for the unsupervised clustering of graphs in the context of the constellation approach to object recognition. The method is an EM central clustering algorithm that builds prototypical graphs on the basis of fast matching with graph transformations. Our experiments, both with random graphs and in realistic situations (visual localization), show that our prototypes improve on set median graphs and also on the prototypes derived from our previous incremental method. We also discuss how the method scales with a growing number of images.
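A minimal sketch of the set median baseline that the abstract's prototypes improve on. It assumes graphs share a common node ordering and uses edge Hamming distance as a crude stand-in for the paper's graph-transformation matching cost; both simplifications are ours, not the paper's.

```python
def edge_hamming(a, b):
    # Distance between two graphs given as adjacency matrices over the
    # same node set: the number of differing edges. A crude stand-in for
    # the matching cost used in the paper.
    n = len(a)
    return sum(a[i][j] != b[i][j] for i in range(n) for j in range(i + 1, n))

def set_median(graphs):
    # The set median graph: the member of the set minimising the sum of
    # distances to all other members (the baseline the prototypes improve on).
    return min(graphs, key=lambda g: sum(edge_hamming(g, h) for h in graphs))
```

A prototype-building method like the paper's can beat this baseline because the set median is restricted to graphs already in the set.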
Graph matching using position coordinates and local features for image analysis
Finding the correspondences between two images is a crucial problem in the fields of computer vision and pattern recognition. It is relevant to a wide range of purposes, from object recognition applications in the areas of biometrics, document analysis and shape analysis, to applications related to multiple-view geometry such as pose recovery, structure from motion, and localization and mapping.
Most existing techniques approach this problem either using local image features or using point set registration methods (or a mixture of both). In the former, a sparse set of features is first extracted from the images and then characterized in the form of descriptor vectors using local image evidence. Features are associated according to the similarity between their descriptors. In the latter, the feature sets are regarded as point sets, which are associated using non-linear optimization techniques. These are iterative procedures that estimate the correspondence and alignment parameters in alternating steps.
Graphs are representations that capture binary relations between features. Taking binary relations into account in the correspondence problem often leads to the so-called graph matching problem. A number of methods in the literature aim at finding approximate solutions to different instances of the graph matching problem, which in most cases is NP-hard.
One part of our work is devoted to investigating the benefits of cross-bin measures for the comparison of local image features. The main body of this thesis is devoted to formulating both the image feature association and the point set registration problems as instances of the graph matching problem. In all cases we propose approximate algorithms to solve these problems, and we compare them with a number of existing methods belonging to different areas, such as outlier rejectors, point set registration methods and other graph matching methods.
The experiments show that in most cases the proposed methods outperform the rest. Occasionally the proposed methods either share the best performance with some competing method or obtain slightly worse results. In these cases, the proposed methods usually show lower computational times.
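The graph matching formulation the thesis builds on can be made concrete with a toy brute-force matcher: score every node permutation by how many edges it preserves. The scoring rule and the exhaustive search are our illustrative choices; the thesis proposes approximate algorithms precisely because this exact search is infeasible (the problem is NP-hard in general).

```python
import itertools

def match_graphs(adj_a, adj_b):
    # Exhaustive search over node permutations for the assignment that
    # maximises edge overlap between two equally sized graphs. Feasible
    # only for tiny graphs; real matchers approximate this optimisation.
    n = len(adj_a)
    best, best_score = None, -1
    for perm in itertools.permutations(range(n)):
        # Count edge agreements: edge (i, j) in A matched to (perm[i], perm[j]) in B.
        score = sum(adj_a[i][j] * adj_b[perm[i]][perm[j]]
                    for i in range(n) for j in range(n))
        if score > best_score:
            best, best_score = perm, score
    return best, best_score
```

For two relabelled copies of the same graph, the recovered permutation is the relabelling, with a score of twice the edge count.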
Understanding Graph Data Through Deep Learning Lens
Deep neural network models have established themselves as an unparalleled force in the domains of vision, speech and text processing applications in recent years. However, graphs form a significant component of data analytics, with applications in the Internet of Things, social networks, pharmaceuticals and bioinformatics. An important characteristic of these deep learning techniques is their ability to learn the features that are necessary to excel at a given task, unlike traditional machine learning algorithms, which depend on handcrafted features. However, there have been comparatively fewer efforts in deep learning to work directly on graph inputs. Various real-world problems can be solved naturally by posing them as graph analysis problems. Given the direct impact of the success of graph analysis on business outcomes, the importance of studying such complex graph data has increased greatly over the years.
In this thesis, we make three contributions towards understanding graph data: (i) the first contribution seeks to find anomalies in graphs using graphical models; (ii) the second contribution uses deep learning with spatio-temporal random walks to learn representations of graph trajectories (paths), and shows great promise on standard graph datasets; and (iii) the third contribution proposes a novel deep neural network that implicitly models attention to allow for interpretation of graph classification.
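Random-walk sampling, the starting point for the second contribution, can be sketched generically: sample fixed-length walks from every node and treat them as "sentences" for a downstream representation learner. This is the standard DeepWalk-style recipe, not the thesis's exact spatio-temporal model; the function and its parameters are our illustrative choices.

```python
import random

def random_walks(adj_list, walk_len, walks_per_node, seed=0):
    # Sample `walks_per_node` walks of length `walk_len` starting from
    # every node of a graph given as an adjacency list (dict of lists).
    rng = random.Random(seed)
    walks = []
    for start in adj_list:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                nbrs = adj_list[walk[-1]]
                if not nbrs:            # dead end: stop the walk early
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks
```

Each consecutive pair in a walk is an edge of the graph, so the walk corpus reflects local graph structure for the representation learner.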
Graph similarity through entropic manifold alignment
In this paper we decouple the problem of measuring graph similarity into two sequential steps. The first step is the linearization of the quadratic assignment problem (QAP) in a low-dimensional space, given by the embedding trick. The second step is the evaluation of an information-theoretic distributional measure, which relies on deformable manifold alignment. The proposed measure is a normalized conditional entropy, which induces a positive definite kernel when symmetrized. We use bypass entropy estimation methods to compute an approximation of the normalized conditional entropy. Our approach, which is purely topological (i.e., it does not rely on node or edge attributes, although it can potentially accommodate them as additional sources of information), is competitive with state-of-the-art graph matching algorithms as a source of correspondence-based graph similarity, but its complexity is linear instead of cubic (although the complexity of the similarity measure is quadratic). We also determine that the best embedding strategy for graph similarity is provided by commute time embedding, and we conjecture that this is related to its invertibility property, since the inverse of the embeddings obtained using our method can be used as a generative sampler of graph structure. The work of the first and third authors was supported by the projects TIN2012-32839 and TIN2015-69077-P of the Spanish Government. The work of the second author was supported by a Royal Society Wolfson Research Merit Award.
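The commute time quantity underlying the preferred embedding can be computed directly for small graphs: the commute time between nodes u and v equals the graph volume times the effective resistance between them, obtained by solving a grounded Laplacian system. The following is a from-scratch sketch (dense Gaussian elimination, connected unweighted graphs only), not the paper's embedding pipeline.

```python
import itertools

def commute_times(adj):
    # Pairwise commute times for a connected graph given as a symmetric
    # 0/1 adjacency matrix: C(u, v) = vol(G) * R_eff(u, v), where the
    # effective resistance comes from solving the grounded Laplacian.
    n = len(adj)
    deg = [sum(row) for row in adj]
    vol = sum(deg)                      # 2 * |E|
    # Grounded Laplacian: drop the last row and column (node n-1 grounded).
    Lg = [[(deg[i] if i == j else 0) - adj[i][j] for j in range(n - 1)]
          for i in range(n - 1)]
    def solve(b):
        # Gaussian elimination with partial pivoting on a copy of Lg.
        m = n - 1
        a = [row[:] + [b[i]] for i, row in enumerate(Lg)]
        for col in range(m):
            piv = max(range(col, m), key=lambda r: abs(a[r][col]))
            a[col], a[piv] = a[piv], a[col]
            for r in range(col + 1, m):
                f = a[r][col] / a[col][col]
                for c in range(col, m + 1):
                    a[r][c] -= f * a[col][c]
        x = [0.0] * m
        for r in range(m - 1, -1, -1):
            x[r] = (a[r][m] - sum(a[r][c] * x[c] for c in range(r + 1, m))) / a[r][r]
        return x
    ct = [[0.0] * n for _ in range(n)]
    for u, v in itertools.combinations(range(n), 2):
        b = [0.0] * (n - 1)             # unit current in at u, out at v
        if u < n - 1:
            b[u] += 1.0
        if v < n - 1:
            b[v] -= 1.0
        x = solve(b) + [0.0]            # grounded node has potential 0
        r_eff = x[u] - x[v]             # effective resistance between u and v
        ct[u][v] = ct[v][u] = vol * r_eff
    return ct
```

The commute time embedding is then derived from the Laplacian spectrum so that squared Euclidean distances between embedded nodes equal these commute times.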
Building models from multiple point sets with kernel density estimation
One of the fundamental problems in computer vision is point set registration. Point set registration finds use in many important applications, and in particular can be considered one of the crucial stages in the reconstruction of models of physical objects and environments from depth sensor data. Globally aligning multiple point sets, representing spatial shape measurements from varying sensor viewpoints, into a common frame of reference is a complex task, and an important one given the many critical functions that depend on accurate and reliable model reconstructions.
In this thesis we focus on improving the quality and feasibility of model and environment reconstruction through the enhancement of multi-view point set registration techniques. The thesis makes the following contributions. First, we demonstrate that employing kernel density estimation to reason about the unknown generating surfaces that range sensors measure allows us to express measurement variability and uncertainty, and also to separate the problems of model design and viewpoint alignment optimisation. Our surface estimates define novel view alignment objective functions that inform the registration process, and can be estimated from point clouds in a data-driven fashion. Through experiments on a variety of datasets we demonstrate that we have developed a novel and effective solution to the simultaneous multi-view registration problem.
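The density-based alignment idea can be illustrated in one dimension: build a Gaussian kernel density estimate from one point set and score candidate transforms of the other set by total log-density. The 1-D setting, the bandwidth, and the grid search are our simplifications; the thesis works with surfaces and full spatial transforms.

```python
import math

def kde_log_likelihood(model_pts, query_pts, h=0.5):
    # Gaussian KDE built from `model_pts`, evaluated at `query_pts`:
    # the higher the total log-density, the better the sets overlap.
    def density(x):
        return sum(math.exp(-((x - m) ** 2) / (2 * h * h)) for m in model_pts) / (
            len(model_pts) * h * math.sqrt(2 * math.pi))
    return sum(math.log(density(q) + 1e-12) for q in query_pts)

def best_shift(model_pts, query_pts, shifts):
    # Grid search over candidate translations: keep the shift whose
    # translated query set scores highest under the model's density.
    return max(shifts, key=lambda t: kde_log_likelihood(
        model_pts, [q + t for q in query_pts]))
```

Because the KDE is smooth, the objective degrades gracefully with misalignment, which is what makes it usable inside an optimiser.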
We then focus on constructing a distributed computation framework capable of solving generic high-throughput computational problems. We present a novel task-farming model that we call Semi-Synchronised Task Farming (SSTF), capable of modelling and subsequently solving computationally distributable problems that benefit from both independent and dependent distributed components and a level of communication between process elements. We demonstrate that this framework is a novel schema for parallel computer vision algorithms and evaluate its performance to establish computational gains over serial implementations. We couple this framework with an accurate computation-time prediction model to contribute a novel structure appropriate for addressing expensive real-world algorithms with substantial parallel performance and predictable time savings.
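A plain task farm, the structure SSTF generalises, can be sketched with the standard library: scatter independent work items to a pool, then run a dependent reduction over the gathered results. This is only a schematic nod to the idea; SSTF additionally models communication between process elements and predicts computation time, which this sketch does not attempt.

```python
from concurrent.futures import ThreadPoolExecutor

def task_farm(independent_tasks, reduce_fn, workers=4):
    # Farm phase: run the independent zero-argument tasks in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda task: task(), independent_tasks))
    # Dependent phase: a reduction that needs all farmed results.
    return reduce_fn(results)
```

The split into an embarrassingly parallel phase and a synchronised dependent phase is what makes the speed-up over a serial implementation predictable.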
Finally, we focus on a timely instance of the multi-view registration problem: modern range sensors provide large numbers of viewpoint samples that result in an abundance of depth data. The ability to utilise this abundance of depth data in a feasible and principled fashion is of importance to many emerging application areas making use of spatial information. We develop novel methodology for the registration of depth measurements acquired from many viewpoints capturing physical object surfaces. By defining registration and alignment quality metrics based on our density estimation framework, we construct an optimisation methodology that implicitly considers all viewpoints simultaneously. We use a non-parametric data-driven approach to consider varying object complexity and guide large view-set spatial transform optimisations. By aligning large numbers of partial, arbitrary-pose views, we evaluate this strategy quantitatively on large view-set range sensor data, where we find that we can improve registration accuracy over existing methods and increase robustness to the magnitude of the coarse seed alignment. This allows large-scale registration on problem instances exhibiting varying object complexity, with the added advantage of massive parallel efficiency.
Matching sets of features for efficient retrieval and recognition
Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 145-153).
In numerous domains it is useful to represent a single example by the collection of local features or parts that comprise it. In computer vision in particular, local image features are a powerful way to describe images of objects and scenes. Their stability under variable image conditions is critical for success in a wide range of recognition and retrieval applications. However, many conventional similarity measures and machine learning algorithms assume vector inputs. Comparing and learning from images represented by sets of local features is therefore challenging, since each set may vary in cardinality and its elements lack a meaningful ordering. In this thesis I present computationally efficient techniques to handle comparisons, learning, and indexing with examples represented by sets of features. The primary goal of this research is to design and demonstrate algorithms that can effectively accommodate this useful representation in a way that scales with both the representation size as well as the number of images available for indexing or learning. I introduce the pyramid match algorithm, which efficiently forms an implicit partial matching between two sets of feature vectors. The matching has a linear time complexity, naturally forms a Mercer kernel, and is robust to clutter or outlier features, a critical advantage for handling images with variable backgrounds, occlusions, and viewpoint changes. I provide bounds on the expected error relative to the optimal partial matching. For very large databases, even extremely efficient pairwise comparisons may not offer adequately responsive query times. I show how to perform sub-linear time retrievals under the matching measure with randomized hashing techniques, even when input sets have varying numbers of features.
My results are focused on several important vision tasks, including applications to content-based image retrieval, discriminative classification for object recognition, kernel regression, and unsupervised learning of categories. I show how the dramatic increase in performance enables accurate and flexible image comparisons to be made on large-scale data sets, and removes the need to artificially limit the number of local descriptions used per image when learning visual categories. By Kristen Lorraine Grauman, Ph.D.
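The pyramid match idea can be shown in miniature for 1-D feature values in [0, 1): intersect histograms at a sequence of resolutions and discount matches first found at coarser levels. The 1-D restriction and the specific level count are our simplifications; the thesis handles high-dimensional feature sets.

```python
def pyramid_match(x, y, levels=4):
    # Multi-resolution histogram intersection: new matches found at the
    # i-th (coarser) level are weighted by 1/2^i, so matches at fine
    # resolution count most, following the pyramid match scheme.
    def hist(pts, bins):
        h = [0] * bins
        for p in pts:
            h[min(int(p * bins), bins - 1)] += 1
        return h
    score, prev = 0.0, 0
    for i in range(levels):
        bins = 2 ** (levels - i - 1)          # finest resolution first
        inter = sum(min(a, b) for a, b in zip(hist(x, bins), hist(y, bins)))
        score += (inter - prev) / (2 ** i)    # credit only NEW matches
        prev = inter
    return score
```

The per-level work is linear in the set sizes, which is the source of the algorithm's linear time complexity, and the weighted-intersection form is what makes it a Mercer kernel.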
Robust and Optimal Methods for Geometric Sensor Data Alignment
Geometric sensor data alignment - the problem of finding the
rigid transformation that correctly aligns two sets of sensor
data without prior knowledge of how the data correspond - is a
fundamental task in computer vision and robotics. It is
inconvenient then that outliers and non-convexity are inherent to
the problem and present significant challenges for alignment
algorithms. Outliers are highly prevalent in sets of sensor data,
particularly when the sets overlap incompletely. Despite this,
many alignment objective functions are not robust to outliers,
leading to erroneous alignments. In addition, alignment problems
are highly non-convex, a property arising from the objective
function and the transformation. While finding a local optimum
may not be difficult, finding the global optimum is a hard
optimisation problem. These key challenges have not been fully
and jointly resolved in the existing literature, and so there is
a need for robust and optimal solutions to alignment problems.
Hence the objective of this thesis is to develop tractable
algorithms for geometric sensor data alignment that are robust to
outliers and not susceptible to spurious local optima.
This thesis makes several significant contributions to the
geometric alignment literature, founded on new insights into
robust alignment and the geometry of transformations. Firstly, a
novel discriminative sensor data representation is proposed that
has better viewpoint invariance than generative models and is
time and memory efficient without sacrificing model fidelity.
Secondly, a novel local optimisation algorithm is developed for
nD-nD geometric alignment under a robust distance measure. It
manifests a wider region of convergence and a greater robustness
to outliers and sampling artefacts than other local optimisation
algorithms. Thirdly, the first optimal solution for 3D-3D
geometric alignment with an inherently robust objective function
is proposed. It outperforms other geometric alignment algorithms
on challenging datasets due to its guaranteed optimality and
outlier robustness, and has an efficient parallel implementation.
Fourthly, the first optimal solution for 2D-3D geometric
alignment with an inherently robust objective function is
proposed. It outperforms existing approaches on challenging
datasets, reliably finding the global optimum, and has an
efficient parallel implementation. Finally, another optimal
solution is developed for 2D-3D geometric alignment, using a
robust surface alignment measure.
Ultimately, robust and optimal methods, such as those in this thesis, are necessary to reliably find accurate solutions to geometric sensor data alignment problems.
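The value of an inherently robust objective can be illustrated with a toy 1-D alignment: under a truncated squared loss, a single gross outlier contributes at most a constant, so the global optimum stays at the inlier-consistent translation. The truncation loss and the exhaustive search are our illustrative choices, not the thesis's specific objectives or optimal solvers.

```python
def truncated_residual_cost(src, dst, t, c=1.0):
    # Sum of truncated squared residuals after translating src by t:
    # residuals beyond the threshold c contribute only c^2, so one
    # gross outlier cannot dominate the objective.
    return sum(min((s + t - d) ** 2, c * c) for s, d in zip(src, dst))

def align_1d(src, dst, shifts, c=1.0):
    # Exhaustive search over candidate translations; with a robust cost
    # the minimiser is the translation agreed on by the inliers.
    return min(shifts, key=lambda t: truncated_residual_cost(src, dst, t, c))
```

A plain least-squares objective on the same data would be dragged toward the outlier, which is the failure mode the thesis's robust objectives are designed to avoid.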
Reconnaissance de visage robuste aux occultations (Face recognition robust to occlusions)
Face recognition is an important technology in computer vision, which often acts as an essential component in biometrics systems, HCI systems, access control systems, multimedia indexing applications, etc. Partial occlusion, which significantly changes the appearance of part of a face, can not only cause a large performance deterioration in face recognition, but can also cause severe security issues. In this thesis, we focus on the occlusion problem in automatic face recognition in non-controlled environments. Toward this goal, we propose a framework that applies explicit occlusion analysis and processing to improve face recognition under different occlusion conditions. We demonstrate in this thesis that the proposed framework is more efficient than the methods from the literature based on non-explicit occlusion treatment. We identify two new types of facial occlusion, namely sparse occlusion and dynamic occlusion. Solutions are presented to handle the identified occlusion problems in a more advanced surveillance context. Recently, the emerging Kinect sensor has been successfully applied in many computer vision fields. We introduce this new sensor in the context of face recognition, particularly in the presence of occlusions, and demonstrate its efficiency compared with traditional 2D cameras. Finally, we propose two approaches, based on 2D and 3D, to improve the baseline face recognition techniques. Improving the baseline methods can also have a positive impact on the recognition results when partial occlusion occurs.
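The analyse-then-recognise pipeline can be sketched schematically: an occlusion-analysis step flags corrupted face blocks, and recognition then compares only the remaining blocks. The per-block feature vectors, the occlusion mask, and the nearest-neighbour rule here are our simplifications; real systems use learned occlusion detectors and much richer features.

```python
def masked_distance(probe, gallery, occluded):
    # Compare only the blocks flagged as non-occluded; the explicit
    # occlusion-analysis step is assumed to have produced `occluded`.
    pairs = [(p, g) for p, g, o in zip(probe, gallery, occluded) if not o]
    if not pairs:
        return float("inf")   # nothing visible to compare
    return sum((p - g) ** 2 for p, g in pairs) / len(pairs)

def identify(probe, occluded, gallery_db):
    # Nearest-neighbour identification over per-block feature vectors.
    return min(gallery_db, key=lambda name: masked_distance(
        probe, gallery_db[name], occluded))
```

Ignoring occluded blocks is what prevents, say, a scarf or sunglasses from dominating the distance and flipping the identification.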