Using Canonical Forms for Isomorphism Reduction in Graph-based Model Checking
Graph isomorphism checking can be used in graph-based model checking to achieve symmetry reduction. Instead of comparing the graph representations of states one-to-one, canonical forms of state graphs can be computed. These canonical forms can be used to store and compare states. However, computing a canonical form for a graph is computationally expensive. It is not a priori clear whether computing a canonical representation for states and reducing the state space is more efficient than using canonical hashcodes for states and comparing states one-to-one. In this paper these approaches to isomorphism reduction are described and a preliminary comparison is presented for checking isomorphism of pairs of graphs. An existing algorithm that does not compute a canonical form performs better than tools that do for the graphs used in graph-based model checking. Computing canonical forms seems to scale better for larger graphs.
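To make the canonical-form idea concrete, here is a minimal brute-force sketch (a hypothetical illustration, not one of the tools compared in the paper): take the canonical form of a graph to be the lexicographically smallest sorted edge list over all vertex relabellings, so any two isomorphic graphs produce the same form and can be stored and compared directly.

```python
from itertools import permutations

def canonical_form(n, edges):
    """Brute-force canonical form of an undirected graph on vertices 0..n-1:
    the lexicographically smallest sorted edge list over all relabellings.
    Exponential in n -- real tools use partition refinement instead."""
    best = None
    for perm in permutations(range(n)):
        relabelled = tuple(sorted(tuple(sorted((perm[u], perm[v])))
                                  for u, v in edges))
        if best is None or relabelled < best:
            best = relabelled
    return best

# Two differently labelled paths on 3 vertices get the same canonical form:
assert canonical_form(3, [(0, 1), (1, 2)]) == canonical_form(3, [(2, 0), (0, 1)])
# A path and a triangle do not:
assert canonical_form(3, [(0, 1), (1, 2)]) != canonical_form(3, [(0, 1), (1, 2), (2, 0)])
```

A canonical hashcode, by contrast, hashes an isomorphism-invariant signature without fixing a single representative, which is why hash collisions still require one-to-one isomorphism checks.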
Polynomial tuning of multiparametric combinatorial samplers
Boltzmann samplers and the recursive method are prominent algorithmic
frameworks for the approximate-size and exact-size random generation of large
combinatorial structures, such as maps, tilings, RNA sequences or various
tree-like structures. In their multiparametric variants, these samplers allow
one to control the profile of expected values corresponding to multiple
combinatorial parameters. One can control, for instance, the number of leaves,
the profile of node degrees in trees, or the number of certain subpatterns in
strings. However, such flexible control requires an additional non-trivial
tuning procedure. In this paper, we propose an efficient tuning algorithm,
polynomial-time in the number of tuned parameters, based on convex
optimisation techniques. Finally, we illustrate the efficiency of our approach
using several applications of rational, algebraic and P\'olya structures
including polyomino tilings with prescribed tile frequencies, planar trees with
a given node degree distribution, and weighted partitions.

Comment: Extended abstract, accepted to ANALCO2018. 20 pages, 6 figures,
colours. Implementation and examples are available at
https://github.com/maciej-bendkowski/boltzmann-brain and
https://github.com/maciej-bendkowski/multiparametric-combinatorial-sampler
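For intuition, the single-parameter case of the tuning problem can be sketched as follows. This is a simplification under stated assumptions: binary trees counted by internal nodes (B(x) = 1 + x*B(x)^2), tuned by plain bisection on the expected size rather than the paper's convex optimisation, which is what handles many parameters at once.

```python
import math
import random

def B(x):
    # Generating function of binary trees, B(x) = 1 + x*B(x)^2, for 0 < x < 1/4.
    return (1 - math.sqrt(1 - 4 * x)) / (2 * x)

def expected_size(x):
    # Mean size under the Boltzmann distribution: x*B'(x)/B(x) = x*B(x)/(1 - 2*x*B(x)).
    b = B(x)
    return x * b / (1 - 2 * x * b)

def tune(target, lo=1e-12, hi=0.25 - 1e-12, iters=80):
    # expected_size is increasing on (0, 1/4), so bisection finds the
    # parameter value hitting a target mean size.
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if expected_size(mid) < target else (lo, mid)
    return (lo + hi) / 2

def sample(x, rng):
    # Boltzmann sampler: a leaf with probability 1/B(x), else an internal
    # node (size 1) with two recursively sampled subtrees.
    if rng.random() < 1 / B(x):
        return 0
    return 1 + sample(x, rng) + sample(x, rng)

x = tune(10.0)                        # aim for trees of mean size 10
tree_size = sample(x, random.Random(42))
```

With several parameters the expected-value equations couple, and the one-dimensional bisection above no longer applies; that is the regime the paper's polynomial-time convex-optimisation tuner addresses.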
Multiresolution hierarchy co-clustering for semantic segmentation in sequences with small variations
This paper presents a co-clustering technique that, given a collection of
images and their hierarchies, clusters nodes from these hierarchies to obtain a
coherent multiresolution representation of the image collection. We formalize
the co-clustering as a Quadratic Semi-Assignment Problem and solve it with a
linear programming relaxation approach that makes effective use of information
from hierarchies. Initially, we address the problem of generating an optimal,
coherent partition per image and, afterwards, we extend this method to a
multiresolution framework. Finally, we particularize this framework to an
iterative multiresolution video segmentation algorithm in sequences with small
variations. We evaluate the algorithm on the Video Occlusion/Object Boundary
Detection Dataset, showing that it produces state-of-the-art results in these
scenarios.

Comment: International Conference on Computer Vision (ICCV) 201
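As a toy illustration of the optimisation being relaxed (a hypothetical brute-force stand-in: the paper solves a linear programming relaxation instead, which is what makes the approach scale to real hierarchies), a Quadratic Semi-Assignment Problem assigns one label to each node so as to minimise a sum of pairwise costs:

```python
from itertools import product

def solve_qsap(n_nodes, n_labels, Q):
    """Toy Quadratic Semi-Assignment: give each node exactly one label,
    minimising the sum of pairwise costs Q[(i, li), (j, lj)].
    Brute force over all label vectors -- exponential, for illustration only."""
    best_cost, best_assign = float("inf"), None
    for assign in product(range(n_labels), repeat=n_nodes):
        cost = sum(Q.get(((i, assign[i]), (j, assign[j])), 0.0)
                   for i in range(n_nodes) for j in range(i + 1, n_nodes))
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_assign, best_cost

# Two nodes, two labels; disagreement costs 1, agreement costs 0,
# so the optimum co-clusters both nodes under the same label:
Q = {((0, a), (1, b)): float(a != b) for a in range(2) for b in range(2)}
assign, cost = solve_qsap(2, 2, Q)
assert cost == 0.0 and assign[0] == assign[1]
```

In the co-clustering setting, nodes are regions from the per-image hierarchies and the pairwise costs encode cross-image coherence; the "semi" in semi-assignment is the constraint that each node takes exactly one label.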
Extracting Hierarchies of Search Tasks & Subtasks via a Bayesian Nonparametric Approach
A significant number of search queries originates from some real-world
information need or task. In order to improve the search experience of end
users, it is important to have accurate representations of tasks. As a result,
a significant amount of research has been devoted to extracting proper
representations of tasks in order to enable search systems to help users
complete their tasks, as well as to provide the end user with better query
suggestions, better recommendations, satisfaction prediction, and
improved personalization in terms of tasks. Most existing task extraction
methodologies focus on representing tasks as flat structures. However, tasks
often tend to have multiple subtasks associated with them and a more
naturalistic representation of tasks would be in terms of a hierarchy, where
each task can be composed of multiple (sub)tasks. To this end, we propose an
efficient Bayesian nonparametric model for extracting hierarchies of such tasks
\& subtasks. We evaluate our method on real-world query log data through
both quantitative and crowdsourced experiments and highlight the importance
of considering task/subtask hierarchies.

Comment: 10 pages. Accepted at SIGIR 2017 as a full paper
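A minimal sketch of the nonparametric intuition, assuming a toy word-overlap similarity (the paper's actual model is a richer Bayesian hierarchy over tasks and subtasks, not this single flat pass): a Chinese Restaurant Process lets the number of task clusters grow with the data instead of being fixed in advance.

```python
import random

def crp_cluster(items, similarity, alpha=1.0, seed=0):
    """One Chinese Restaurant Process pass: each query joins an existing
    task cluster with weight (cluster size) * (best similarity to a member),
    or opens a new cluster with weight alpha -- so the number of tasks
    is not fixed in advance."""
    rng = random.Random(seed)
    clusters = []
    for item in items:
        weights = [len(c) * max(similarity(item, m) for m in c)
                   for c in clusters]
        weights.append(alpha)                       # the "new table"
        r = rng.random() * sum(weights)
        k = 0
        while r > weights[k]:
            r -= weights[k]
            k += 1
        if k == len(clusters):
            clusters.append([item])
        else:
            clusters[k].append(item)
    return clusters

# Toy similarity: queries sharing a word are likely the same task.
queries = ["book flight", "cheap flight", "python tutorial", "learn python"]
sim = lambda a, b: 1.0 if set(a.split()) & set(b.split()) else 0.01
tasks = crp_cluster(queries, sim, alpha=0.5)
```

Stacking such processes (a cluster of queries itself spawning sub-clusters) is the usual route from this flat picture to the task/subtask hierarchies the paper extracts.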
An efficient generic algorithm for the generation of unlabelled cycles
In this report we combine two recent generation algorithms to obtain a
new algorithm for the generation of unlabelled cycles. Sawada's
algorithm lists all k-ary unlabelled cycles with fixed content, that is,
the number of occurrences of each symbol is fixed and given a priori.
The other algorithm, by the authors, generates all multisets of objects
with given total size n from any admissible unlabelled class A. By
admissible we mean that the class can be specified using atomic classes,
disjoint unions, products, sequences, (multi)sets, etc. The resulting
algorithm, which is the main contribution of this paper, generates all
cycles of objects with given total size n from any admissible class A.
Given the generic nature of the algorithm, it is suitable for inclusion
in combinatorial libraries and for rapid prototyping. The new algorithm
incurs constant amortized time per generated cycle, the constant
depending only on the class A to which the objects in the cycle belong.
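For reference, the objects being generated can be produced by a brute-force filter (exponential, unlike the constant-amortized-time algorithms combined in the paper): an unlabelled cycle, or necklace, is represented by the string that is lexicographically minimal among its rotations, so keeping exactly those strings enumerates each cycle once.

```python
from itertools import product

def is_necklace(s):
    # s is the lexicographically smallest among all of its rotations.
    return all(s <= s[i:] + s[:i] for i in range(1, len(s)))

def necklaces(k, n):
    """All k-ary necklaces (unlabelled cycles) of length n, by brute force
    over k**n strings -- a correctness reference, not an efficient generator."""
    return [s for s in product(range(k), repeat=n) if is_necklace(s)]

# Binary necklaces of length 4: 0000, 0001, 0011, 0101, 0111, 1111.
assert len(necklaces(2, 4)) == 6
```

Restricting the filter to strings with a prescribed number of occurrences of each symbol gives the fixed-content cycles listed by Sawada's algorithm.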
A tree-decomposed transfer matrix for computing exact Potts model partition functions for arbitrary graphs, with applications to planar graph colourings
Combining tree decomposition and transfer matrix techniques provides a very
general algorithm for computing exact partition functions of statistical models
defined on arbitrary graphs. The algorithm is particularly efficient in the
case of planar graphs. We illustrate it by computing the Potts model partition
functions and chromatic polynomials (the number of proper vertex colourings
using Q colours) for large samples of random planar graphs with up to N=100
vertices. In the latter case, our algorithm yields a sub-exponential average
running time of ~ exp(1.516 sqrt(N)), a substantial improvement over the
exponential running time ~ exp(0.245 N) provided by the hitherto best known
algorithm. We study the statistics of chromatic roots of random planar graphs
in some detail, comparing the findings with results for finite pieces of a
regular lattice.

Comment: 5 pages, 3 figures. Version 2 has been substantially expanded.
Version 3 shows that the worst-case running time is sub-exponential in the
number of vertices
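The quantity being computed can be checked on small graphs with the classical deletion-contraction recursion, P(G, q) = P(G - e, q) - P(G / e, q), which runs in exponential time (unlike the paper's tree-decomposed transfer matrix):

```python
def chromatic(vertices, edges, q):
    """Number of proper q-colourings, via P(G,q) = P(G-e,q) - P(G/e,q).
    vertices: a set; edges: an iterable of vertex pairs.
    Exponential-time textbook recursion, for verification on small graphs."""
    edges = {frozenset(e) for e in edges}          # dedupe parallel edges
    if any(len(e) == 1 for e in edges):            # a self-loop: no colourings
        return 0
    if not edges:
        return q ** len(vertices)
    e = next(iter(edges))
    u, v = tuple(e)
    rest = [tuple(f) for f in edges - {e}]
    deleted = chromatic(vertices, rest, q)                  # G - e
    merged = [tuple(u if w == v else w for w in f) for f in rest]
    contracted = chromatic(vertices - {v}, merged, q)       # G / e
    return deleted - contracted

# Triangle: P(K3, q) = q*(q-1)*(q-2).
K3 = [(0, 1), (1, 2), (2, 0)]
assert chromatic({0, 1, 2}, K3, 3) == 6
assert chromatic({0, 1, 2}, K3, 2) == 0
```

Evaluating P(G, q) at integer q counts colourings directly; the chromatic roots studied in the paper are the complex zeros of this same polynomial.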