Cover-Encodings of Fitness Landscapes
The traditional way of tackling discrete optimization problems is by using
local search on suitably defined cost or fitness landscapes. Such approaches
are, however, limited by the slowdown that occurs when local minima, a
characteristic feature of the typically rugged landscapes encountered, arrest
the progress of the search process. Another way of tackling optimization problems
is by the use of heuristic approximations to estimate a global cost minimum.
Here we present a combination of these two approaches by using cover-encoding
maps which map processes from a larger search space to subsets of the original
search space. The key idea is to construct cover-encoding maps with the help of
suitable heuristics that single out near-optimal solutions and result in
landscapes on the larger search space that no longer exhibit trapping local
minima. We present cover-encoding maps for the problems of the traveling
salesman, number partitioning, maximum matching and maximum clique; the
practical feasibility of our method is demonstrated by simulations of adaptive
walks on the corresponding encoded landscapes which find the global minima for
these problems.
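For concreteness, a minimal sketch of such an adaptive walk: greedy local search that moves to a strictly better neighbor until none exists. The bit-string landscape below is an illustrative placeholder, not the paper's cover-encoding construction.

```python
import random

def adaptive_walk(start, neighbors, cost, max_steps=10_000):
    """Greedy adaptive walk: repeatedly move to a strictly better
    neighbor; stop at a local minimum or when the step budget runs out."""
    current = start
    for _ in range(max_steps):
        better = [n for n in neighbors(current) if cost(n) < cost(current)]
        if not better:
            return current  # no improving neighbor: a local minimum
        current = random.choice(better)
    return current

# Toy landscape: minimize the number of 1-bits in a bit string.
# This landscape has no trapping local minima, so the walk always
# reaches the global minimum: the situation cover-encodings aim for.
def flip_neighbors(s):
    return [s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:] for i in range(len(s))]

print(adaptive_walk('10110', flip_neighbors, lambda s: s.count('1')))  # -> '00000'
```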
Finding Near-Optimal Independent Sets at Scale
The independent set problem is NP-hard and particularly difficult to solve in
large sparse graphs. In this work, we develop an advanced evolutionary
algorithm, which incorporates kernelization techniques to compute large
independent sets in huge sparse networks. A recent exact algorithm has shown
that large networks can be solved exactly by employing a branch-and-reduce
technique that recursively kernelizes the graph and performs branching.
However, one major drawback of that algorithm is that, for huge graphs,
branching can still take exponential time. To avoid this problem, we
recursively choose vertices that are likely to be in a large independent set
(using an evolutionary approach), then further kernelize the graph. We show
that identifying and removing vertices likely to be in large independent sets
opens up the reduction space, which not only speeds up the computation of
large independent sets drastically, but also enables us to compute high-quality
independent sets on much larger instances than previously reported in the
literature.
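One classic reduction of the kind branch-and-reduce solvers apply is the degree-one rule: a vertex of degree at most one always belongs to some maximum independent set, so it can be taken and deleted together with its neighbor. A minimal sketch on a dictionary-of-sets graph (not the authors' implementation):

```python
def degree_one_reduction(adj):
    """Kernelize: repeatedly take any vertex of degree <= 1 into the
    solution and delete it and its neighbor from the graph.
    `adj` maps each vertex to the set of its neighbors."""
    chosen = set()
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) <= 1:
                chosen.add(v)
                for u in list(adj[v]) + [v]:      # delete v and its neighbor
                    for w in adj.pop(u, set()):
                        adj[w].discard(u)
                changed = True
    return chosen, adj  # forced vertices, reduced (kernel) graph

path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(degree_one_reduction(path))  # -> ({1, 3}, {}): the whole path kernelizes away
```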
Maximum Common Subgraph Isomorphism Algorithms
Maximum common subgraph (MCS) isomorphism algorithms play an important role in chemoinformatics by providing an effective mechanism for the alignment of pairs of chemical structures. This article discusses the various types of MCS that can be identified when two graphs are compared and reviews some of the algorithms that are available for this purpose, focusing on those that are, or may be, applicable to the matching of chemical graphs.
The Maximum Common Subgraph Problem: A Parallel and Multi-Engine Approach
The maximum common subgraph of two graphs is the largest possible common subgraph,
i.e., the common subgraph with as many vertices as possible. Although this problem is very challenging,
having long been proven NP-hard, its countless practical applications still motivate the search
for exact solutions. This work discusses the possibility of extending an existing, very effective
branch-and-bound procedure to parallel multi-core and many-core architectures. We analyze
a parallel multi-core implementation that exploits a divide-and-conquer approach based on a
thread pool, which preserves the original algorithmic efficiency and minimizes the duplication
of data structures. We also extend the original algorithm to parallel many-core GPU
architectures, adopting the CUDA programming framework, and we show how to handle the heavy
workload imbalance and the massive data dependencies. Then, we suggest new heuristics to reorder
the adjacency matrix, to deal with “dead-ends”, and to randomize the search with automatic restarts.
These heuristics can achieve significant speed-ups on specific instances, even if they may not be competitive with the original strategy on average. Finally, we propose a portfolio approach, which integrates all the different local search algorithms as component tools; rather than choosing the best tool for a given instance up-front, the portfolio makes this decision on-line. The proposed approach drastically limits memory-bandwidth constraints and avoids the typical fragility of portfolios, since the CPU and GPU versions often show complementary efficiency and run on separate platforms. Experimental results support these claims and motivate further research to better exploit GPUs in embedded, task-intensive, and multi-engine parallel applications.
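As an illustration of the on-line portfolio idea, a sketch that launches several engines on the same instance and returns the answer of whichever finishes first; the engine callables are hypothetical stand-ins for the CPU and GPU solvers, not the authors' code:

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_portfolio(engines, instance):
    """Submit the same instance to every engine in parallel and return
    the result of the first engine to complete, abandoning the rest
    (on-line selection, no up-front choice of the best tool)."""
    pool = ThreadPoolExecutor(max_workers=len(engines))
    futures = [pool.submit(engine, instance) for engine in engines]
    done, _ = wait(futures, return_when=FIRST_COMPLETED)
    # Losers are abandoned; real engines would also need a cooperative
    # stop flag so that their worker threads actually terminate early.
    pool.shutdown(wait=False, cancel_futures=True)
    return done.pop().result()
```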
Sharp transition towards shared vocabularies in multi-agent systems
What processes can explain how very large populations are able to converge on
the use of a particular word or grammatical construction without global
coordination? Answering this question helps to understand why new language
constructs usually propagate along an S-shaped curve with a rather sudden
transition towards global agreement. It also helps to analyze and design new
technologies that support or orchestrate self-organizing communication systems,
such as recent social tagging systems for the web. The article introduces and
studies a microscopic model of communicating autonomous agents performing
language games without any central control. We show that the system undergoes a
disorder/order transition, going through a sharp symmetry-breaking process to
reach a shared set of conventions. Before the transition, the system builds up
non-trivial scale-invariant correlations, for instance in the distribution of
competing synonyms, which display a Zipf-like law. These correlations make the
system ready for the transition towards shared conventions, which, observed on
the time-scale of collective behaviors, becomes sharper and sharper with system
size. This surprising result not only explains why human language can scale up
to very large populations but also suggests ways to optimize artificial
semiotic dynamics.
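The microscopic model can be sketched as a minimal naming game in which paired agents align their word inventories on success; the pairing scheme and parameter defaults below are illustrative assumptions:

```python
import random

def naming_game(n_agents=100, max_games=200_000):
    """Minimal naming game without central control: a random speaker
    utters a word from its inventory (inventing one if empty); on
    success both agents collapse to that word, on failure the hearer
    memorizes it. Returns the number of games until global consensus."""
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_games):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(next_word)   # invent a brand-new word
            next_word += 1
        word = random.choice(tuple(inventories[speaker]))
        if word in inventories[hearer]:           # success: align on the word
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                     # failure: hearer stores it
            inventories[hearer].add(word)
        if all(inv == {word} for inv in inventories):
            return step + 1
    return None

print(naming_game())  # games played before the whole population agrees
```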
GRASP/VND Optimization Algorithms for Hard Combinatorial Problems
Two hard combinatorial problems are addressed in this thesis. The first one is known as the “Max
Cut-Clique”, a combinatorial problem introduced by P. Martins in 2012. Given a simple graph, the goal
is to find a clique C such that the number of links shared between C and its complement C̄ is maximum.
In a first contribution, a GRASP/VND methodology is proposed to tackle the problem. In a second
one, the NP-Completeness of the problem is mathematically proved. Finally, a further generalization
with weighted links is formally presented with a mathematical programming formulation, and the
previous GRASP is adapted to the new problem.
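To make the objective concrete, a small helper that evaluates the cut-clique value of a candidate clique on an adjacency-dictionary graph (illustrative only, not the thesis code):

```python
def cut_clique_value(adj, C):
    """Max Cut-Clique objective: the number of links with exactly one
    endpoint in the clique C. `adj` maps vertices to neighbor sets."""
    C = set(C)
    assert all(u in adj[v] for u in C for v in C if u != v), "C must be a clique"
    return sum(1 for v in C for u in adj[v] if u not in C)

# Triangle {1, 2, 3} with pendant vertices 4 and 5:
adj = {1: {2, 3, 4}, 2: {1, 3, 5}, 3: {1, 2}, 4: {1}, 5: {2}}
print(cut_clique_value(adj, {1, 2, 3}))  # -> 2 links leave the clique
```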
The second problem under study is a celebrated optimization problem coming from network
reliability analysis. We assume a graph G with perfect nodes and imperfect links that fail independently
with identical probability ρ ∈ [0,1]. The reliability R_G(ρ) is the probability that the resulting subgraph
has some spanning tree. Given a number of nodes p and a number of links q, the goal is to find the
(p,q)-graph that has the maximum reliability R_G(ρ), uniformly in the compact set ρ ∈ [0,1]. In a first
contribution,
we exploit properties shared by all uniformly most-reliable graphs such as maximum connectivity and
maximum Kirchhoff number, in order to build a novel GRASP/VND methodology. Our proposal finds
the globally optimum solution in small cases, and it returns novel candidates for uniformly
most-reliable graphs, such as the Möbius-Kantor and Heawood graphs. We also offer a literature review,
and a mathematical proof that the complete bipartite graph K_{4,4} is uniformly most-reliable.
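The reliability R_G(ρ) defined above can be estimated by straightforward Monte Carlo sampling: fail each link independently with probability ρ and check whether the surviving subgraph is still connected (i.e., has some spanning tree). A sketch, illustrative only:

```python
import random

def estimate_reliability(adj, rho, samples=10_000):
    """Monte Carlo estimate of R_G(rho): the probability that the graph
    remains connected when each link fails independently with
    probability rho. `adj` maps each node to its set of neighbors."""
    edges = {frozenset((u, v)) for u in adj for v in adj[u]}
    hits = 0
    for _ in range(samples):
        alive = {e for e in edges if random.random() > rho}  # surviving links
        start = next(iter(adj))                # connectivity check via DFS
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if frozenset((u, v)) in alive and v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += (seen == set(adj))
    return hits / samples

# 4-cycle; exact value at rho = 0.1 is 0.9**4 + 4 * 0.1 * 0.9**3 = 0.9477
print(estimate_reliability({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}, 0.1))
```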
Finally, an abstract mathematical model of Stochastic Binary Systems (SBS) is also studied. It is a
further generalization of network reliability models, where failures are modelled by a general logical
function. A geometrical approximation of a logical function is offered, as well as a novel method to find
reliability bounds for general SBS. This bounding method combines an algebraic duality, the Markov
inequality, and the Hahn-Banach separation theorem between convex and compact sets.
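In generic form, the GRASP/VND template used throughout the thesis looks roughly as follows; `construct`, `neighborhoods`, and `cost` are problem-specific callables assumed here:

```python
def grasp_vnd(construct, neighborhoods, cost, iters=100, alpha=0.3):
    """Generic GRASP with VND local search: each iteration builds a
    greedy randomized solution, then descends through an ordered list
    of neighborhood structures, restarting from the first neighborhood
    whenever an improving move is found."""
    best = None
    for _ in range(iters):
        s = construct(alpha)           # greedy randomized construction (RCL parameter alpha)
        k = 0
        while k < len(neighborhoods):  # variable neighborhood descent
            candidate = min(neighborhoods[k](s), key=cost, default=s)
            if cost(candidate) < cost(s):
                s, k = candidate, 0    # improvement: restart the descent
            else:
                k += 1                 # no gain: move to the next neighborhood
        if best is None or cost(s) < cost(best):
            best = s
    return best
```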