Structural graph matching using the EM algorithm and singular value decomposition
This paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions: 1) commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm; and 2) we cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows one to efficiently recover correspondence matches using the singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods.
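The SVD-based correspondence-recovery step can be illustrated with a small sketch (a simplified stand-in, not the paper's exact formulation): given an affinity matrix between the nodes of the two graphs, take its SVD, replace the singular values with ones to obtain the nearest orthogonal matrix, and read off matches row-wise.

```python
import numpy as np

def svd_correspondences(score):
    """score[i, j]: affinity between node i of graph 1 and node j of graph 2.
    Returns, for each node of graph 1, the index of its matched node in graph 2."""
    U, _, Vt = np.linalg.svd(score)
    # Replace the singular values with ones: this gives the closest
    # orthogonal matrix, whose large entries mark mutually consistent matches.
    P = U @ np.eye(*score.shape) @ Vt
    return P.argmax(axis=1)
```

For a clean permutation-like affinity matrix this recovers the permutation directly; with noisy affinities the orthogonal projection suppresses ambiguous, conflicting matches.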
Structural matching by discrete relaxation
This paper describes a Bayesian framework for performing relational graph matching by discrete relaxation. Our basic aim is to draw on this framework to provide a comparative evaluation of a number of contrasting approaches to relational matching. Broadly speaking, there are two main aspects to this study. First, we focus on the issue of how relational inexactness may be quantified. We illustrate that several popular relational distance measures can be recovered as specific limiting cases of the Bayesian consistency measure. The second aspect of our comparison concerns the way in which structural inexactness is controlled. We investigate three different realizations of the matching process which draw on contrasting control models. The main conclusion of our study is that the active process of graph-editing outperforms the alternatives in terms of its ability to effectively control a large population of contaminating clutter.
Efficient Subgraph Similarity Search on Large Probabilistic Graph Databases
Many studies have sought efficient solutions for
subgraph similarity search over certain (deterministic) graphs due to its wide
application in many fields, including bioinformatics, social network analysis,
and Resource Description Framework (RDF) data management. All these works
assume that the underlying data are certain. However, in reality, graphs are
often noisy and uncertain due to various factors, such as errors in data
extraction, inconsistencies in data integration, and privacy preserving
purposes. Therefore, in this paper, we study subgraph similarity search on
large probabilistic graph databases. Unlike previous works, which assume
that edges in an uncertain graph are independent of each other, we study
uncertain graphs whose edge occurrences are correlated. We formally prove
that subgraph similarity search over probabilistic graphs is #P-complete; thus,
we employ a filter-and-verify framework to speed up the search. In the
filtering phase, we develop tight lower and upper bounds of subgraph similarity
probability based on a probabilistic matrix index, PMI. PMI is composed of
discriminative subgraph features associated with tight lower and upper bounds
of subgraph isomorphism probability. Based on PMI, we can prune a large
number of probabilistic graphs and maximize the pruning capability. During the
verification phase, we develop an efficient sampling algorithm to validate the
remaining candidates. The efficiency of our proposed solutions has been
verified through extensive experiments. Comment: VLDB201
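The filter-and-verify pattern can be sketched generically (the bounds and verifier below are hypothetical stand-ins, not the paper's PMI index): cheap bounds settle most candidates, and only the undecided ones pay for expensive sampling-based verification.

```python
import random

def filter_and_verify(candidates, lower, upper, threshold, verify, trials=1000):
    """Generic filter-and-verify search.
    lower/upper: cheap functions bounding each candidate's true probability.
    verify: expensive Monte Carlo check, run only on undecided survivors."""
    results = []
    for g in candidates:
        if upper(g) < threshold:        # certainly below threshold: prune
            continue
        if lower(g) >= threshold:       # certainly above: accept without verifying
            results.append(g)
            continue
        # Undecided: estimate the probability by sampling.
        hits = sum(verify(g) for _ in range(trials))
        if hits / trials >= threshold:
            results.append(g)
    return results

# Hypothetical demo: each "graph" is represented by its true match probability.
random.seed(0)
survivors = filter_and_verify(
    candidates=[0.1, 0.9, 0.6],
    lower=lambda p: p - 0.2,               # cheap lower bound (stand-in for PMI)
    upper=lambda p: p + 0.2,               # cheap upper bound
    threshold=0.5,
    verify=lambda p: random.random() < p,  # expensive Monte Carlo check
)
```

The tighter the index bounds, the fewer candidates reach the sampling stage, which is where the speedup comes from.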
Explicit Reasoning over End-to-End Neural Architectures for Visual Question Answering
Many vision and language tasks require commonsense reasoning beyond
data-driven image and natural language processing. Here we adopt Visual
Question Answering (VQA) as an example task, where a system is expected to
answer a question in natural language about an image. Current state-of-the-art
systems attempt to solve the task using deep neural architectures and
achieve promising performance. However, the resulting systems are generally
opaque and struggle to understand questions for which extra knowledge
is required. In this paper, we present an explicit reasoning layer on top of a
set of penultimate neural network based systems. The reasoning layer enables
reasoning and answering questions where additional knowledge is required, and
at the same time provides an interpretable interface to the end users.
Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based
engine to reason over a basket of inputs: visual relations, the semantic parse
of the question, and background ontological knowledge from word2vec and
ConceptNet. Experimental analysis of the answers and the key evidential
predicates generated on the VQA dataset validates our approach. Comment: 9 pages, 3 figures, AAAI 201
Stochastic Model Predictive Control for Autonomous Mobility on Demand
This paper presents a stochastic, model predictive control (MPC) algorithm
that leverages short-term probabilistic forecasts for dispatching and
rebalancing Autonomous Mobility-on-Demand systems (AMoD, i.e. fleets of
self-driving vehicles). We first present the core stochastic optimization
problem in terms of a time-expanded network flow model. Then, to improve its
tractability, we present two key relaxations. First, we replace the original
stochastic problem with a Sample Average Approximation (SAA), and characterize
the performance guarantees. Second, we split the controller into two parts,
one assigning vehicles to outstanding customers and the other rebalancing
the fleet. This enables the problem to be
solved as two totally unimodular linear programs, and thus easily scalable to
large problem sizes. Finally, we test the proposed algorithm in two scenarios
based on real data and show that it outperforms prior state-of-the-art
algorithms. In particular, in a simulation using customer data from DiDi
Chuxing, the algorithm presented here exhibits a 62.3 percent reduction in
customer waiting time compared to state-of-the-art non-stochastic algorithms. Comment: Submitting to the IEEE International Conference on Intelligent
Transportation Systems 201
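The Sample Average Approximation step can be illustrated with a toy sketch (the fleet-sizing numbers are hypothetical, not from the paper): the expectation in the stochastic objective is replaced by an empirical mean over sampled demand scenarios, and the resulting deterministic problem is minimized over the decision set.

```python
import random

def saa_minimize(decisions, cost, sample_demand, n_samples=500, seed=0):
    """Sample Average Approximation: replace E[cost(x, D)] with the
    empirical mean over n_samples draws of the random demand D,
    then minimize over the finite decision set."""
    rng = random.Random(seed)
    scenarios = [sample_demand(rng) for _ in range(n_samples)]
    def avg_cost(x):
        return sum(cost(x, d) for d in scenarios) / n_samples
    return min(decisions, key=avg_cost)

# Toy newsvendor-style rebalancing example (hypothetical numbers): stock x
# vehicles against demand D ~ Uniform{0..10}; each unit of unmet demand
# costs 3, each idle vehicle costs 1.
best = saa_minimize(
    decisions=range(11),
    cost=lambda x, d: 3 * max(d - x, 0) + 1 * max(x - d, 0),
    sample_demand=lambda rng: rng.randint(0, 10),
)
```

As the number of sampled scenarios grows, the SAA optimum converges to the optimum of the true stochastic problem, which is the basis of the performance guarantees mentioned above.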
Image processing for plastic surgery planning
This thesis presents some image processing tools for plastic surgery planning. In particular,
it presents a novel method that combines local and global context in a probabilistic
relaxation framework to identify cephalometric landmarks used in maxillofacial plastic
surgery. It also uses a method that utilises global and local symmetry to identify abnormalities
in CT frontal images of the human body. The proposed methodologies are
evaluated on clinical data supplied by collaborating plastic surgeons.
Citywide Estimation of Traffic Dynamics via Sparse GPS Traces
Traffic congestion is a perpetual challenge in metropolitan areas around the world. The ability to understand traffic dynamics is thus critical to effective traffic control and management. However, estimation of traffic conditions over a large-scale road network has proven to be a challenging task for two reasons: first, traffic conditions are intrinsically stochastic; second, the availability and quality of traffic data vary to a great extent. Traditional traffic monitoring systems that exist mostly on major roads and highways are insufficient to recover the traffic conditions for an entire network. Recent advances in GPS technology and the resulting rich data sets offer new opportunities to improve upon such traditional means, by providing much broader coverage of road networks. Despite that, such data are limited by their spatial-temporal sparsity in practice. To address these issues, we have developed a novel framework to estimate travel times, traversed paths, and missing values over a large-scale road network using sparse GPS traces. Our method consists of two phases. In the first phase, we adopt the shortest travel time criterion based on Wardrop's Principles in the map-matching process. With an improved travel-time allocation technique, we have achieved up to 52.5% relative error reduction in network travel times compared to a state-of-the-art method [1]. In the second phase, we estimate missing values using a compressed sensing algorithm, thereby reducing the number of required measurements by 94.64%.
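The shortest-travel-time criterion used in the map-matching phase amounts to a shortest-path computation on the road graph; a minimal sketch with Dijkstra's algorithm (the toy road graph is hypothetical):

```python
import heapq

def shortest_travel_time(graph, src, dst):
    """Dijkstra on edge travel times: among candidate paths between two
    GPS fixes, prefer the one minimizing total travel time.
    graph: {node: [(neighbor, travel_time), ...]}."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            break                      # destination settled: distance is final
        if t > dist.get(u, float("inf")):
            continue                   # stale heap entry
        for v, w in graph.get(u, []):
            nt = t + w
            if nt < dist.get(v, float("inf")):
                dist[v], prev[v] = nt, u
                heapq.heappush(heap, (nt, v))
    # Reconstruct the matched path by walking predecessors back to the source.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]
```

In a full map-matching pipeline this search would run between consecutive GPS fixes, with the recovered path supplying the road segments for travel-time allocation.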