Geometric Inhomogeneous Random Graphs for Algorithm Engineering
The design and analysis of graph algorithms is heavily based on the worst case.
In practice, however, many algorithms perform much better than the worst case would suggest.
Furthermore, various problems can be tackled more efficiently if one assumes the input to be, in a sense, realistic.
The field of network science, which studies the structure and emergence of real-world networks, identifies locality and heterogeneity as two frequently occurring properties.
A popular model that captures both properties is the geometric inhomogeneous random graph (GIRG), a generalization of hyperbolic random graphs (HRGs).
Aside from their importance to network science, GIRGs can be an immensely valuable tool in algorithm engineering.
Since they convincingly mimic real-world networks, guarantees about quality and performance of an algorithm on instances of the model can be transferred to real-world applications.
Their model parameters control the amount of heterogeneity and locality, which makes it possible to evaluate each property in isolation while keeping everything else fixed.
Moreover, they can be generated efficiently, which enables experimental analysis.
While realistic instances are often rare, generated instances are readily available.
Furthermore, the underlying geometry of GIRGs helps to visualize the network, e.g., for debugging or to improve understanding of its structure.
The aim of this work is to demonstrate the capabilities of geometric inhomogeneous random graphs in algorithm engineering and to establish them as routine tools, replacing previous models like the Erdős-Rényi model, where each edge exists with equal probability.
We utilize geometric inhomogeneous random graphs to design, evaluate, and optimize efficient algorithms for realistic inputs.
In detail, we provide the currently fastest sequential generator for GIRGs and HRGs and describe algorithms for maximum flow, directed spanning arborescence, cluster editing, and hitting set.
For all four problems, our implementations beat the state-of-the-art on realistic inputs.
On top of providing crucial benchmark instances, GIRGs allow us to obtain valuable insights.
Most notably, our efficient generator allows us to experimentally show sublinear running time of our flow algorithm, to investigate the solution structure of cluster editing, to complement our benchmark set of arborescence instances with a density for which no real-world networks are available, and to generate networks with adjustable locality and heterogeneity to reveal the effects of these properties on our algorithms.
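The model sketched in this abstract can be written down in a few lines. The following naive quadratic-time sampler is only an illustration of the standard binomial GIRG definition; the parameter names and the exact form of the connection probability are assumptions, and the paper's own generator is far more efficient than this:

```python
import random

def sample_girg(n, beta=2.5, alpha=1.5, d=2, seed=0):
    """Naive O(n^2) sampler for a geometric inhomogeneous random graph.

    beta  : power-law exponent of the weights (controls heterogeneity)
    alpha : inverse temperature (controls locality)
    d     : dimension of the ground space, a torus [0, 1)^d
    """
    rng = random.Random(seed)
    # Power-law weights via inverse transform sampling: P(W > w) ~ w^(1 - beta).
    weights = [(1.0 - rng.random()) ** (-1.0 / (beta - 1.0)) for _ in range(n)]
    positions = [[rng.random() for _ in range(d)] for _ in range(n)]
    total_weight = sum(weights)

    def torus_dist(p, q):
        # Max-norm distance where every coordinate wraps around.
        return max(min(abs(a - b), 1.0 - abs(a - b)) for a, b in zip(p, q))

    edges = []
    for u in range(n):
        for v in range(u + 1, n):
            r = torus_dist(positions[u], positions[v])
            # Binomial GIRG connection probability: close or heavy vertex
            # pairs are likely to be joined by an edge.
            p = min(1.0, (weights[u] * weights[v] / (total_weight * r ** d)) ** alpha)
            if rng.random() < p:
                edges.append((u, v))
    return edges
```

Increasing beta makes the weights, and hence the degrees, more homogeneous, while letting alpha grow strengthens locality; this is exactly the kind of isolated parameter sweep the abstract advocates.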
Randomized Algorithms for Mixed Matching and Covering in Hypergraphs in 3D Seed Reconstruction in Brachytherapy
Brachytherapy is a method developed in the 1980s for cancer radiation in organs like the prostate, lung, or breast. At the Clinic of Radiotherapy (radiooncology) of Christian Albrechts University of Kiel, among others, low dose radiation therapy (LDR therapy) is applied for the treatment of prostate cancer, where 25-80 small radioactive seeds are implanted in the affected organ. For quality control of the treatment plan, the locations of the seeds after the operation have to be checked. This is usually done by taking three X-ray photographs from three different angles (the so-called 3-film technique). On the films the seeds appear as white lines. To determine the positions of the seeds in the organ, the task is to match the three different images (lines) representing the same seed. In this paper we first model the seed reconstruction problem as a minimum-weight perfect matching problem in a hypergraph and, based on this, as an integer linear program. To solve this integer program, we design and apply an algorithm based on the randomized rounding scheme introduced by Raghavan and Thompson (1987). This algorithm is not only very fast, but also accessible, at least in part, to a mathematically rigorous analysis. We give a partial analysis of the algorithm combining probabilistic and combinatorial methods, which shows that in the worst case the solution produced is, in some strong sense, close to a minimum-weight perfect matching.
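The randomized rounding idea at the core of this abstract can be illustrated with a minimal sketch. The instance encoding and the retry loop below are illustrative assumptions, not the paper's actual algorithm; the fractional solution would come from an LP relaxation of the matching integer program:

```python
import random

def raghavan_thompson_round(fractional_x, edges, elements, rng=None, max_tries=200):
    """Round an LP solution for hypergraph perfect matching.

    fractional_x : dict mapping hyperedge -> LP value in [0, 1]
    edges        : dict mapping hyperedge -> frozenset of ground elements
    elements     : ground set that must be covered exactly once

    Each hyperedge is selected independently with probability equal to
    its LP value (the Raghavan-Thompson scheme); we retry until the
    selection happens to be a perfect matching.
    """
    rng = rng or random.Random(0)
    for _ in range(max_tries):
        chosen = [e for e in fractional_x if rng.random() < fractional_x[e]]
        covered = [x for e in chosen for x in edges[e]]
        # Perfect matching: every element covered, none covered twice.
        if len(covered) == len(set(covered)) == len(elements):
            return chosen
    return None  # no perfect matching found within the retry budget
```

In the seed-reconstruction setting a hyperedge would correspond to one candidate triple of lines (one per film) believed to stem from the same seed.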
Parameterized Complexity Analysis of Randomized Search Heuristics
This chapter compiles a number of results that apply the theory of
parameterized algorithmics to the running-time analysis of randomized search
heuristics such as evolutionary algorithms. The parameterized approach
articulates the running time of algorithms solving combinatorial problems in
finer detail than traditional approaches from classical complexity theory. We
outline the main results and proof techniques for a collection of randomized
search heuristics tasked to solve NP-hard combinatorial optimization problems
such as finding a minimum vertex cover in a graph, finding a maximum leaf
spanning tree in a graph, and the traveling salesperson problem.
Comment: This is a preliminary version of a chapter in the book "Theory of Evolutionary Computation: Recent Developments in Discrete Optimization", edited by Benjamin Doerr and Frank Neumann, published by Springer.
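To give a concrete taste of the parameterized machinery applied to one of the problems listed above, here is the classical Buss kernelization for Vertex Cover parameterized by solution size k. This is a standard textbook example, not code from the chapter:

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover parameterized by cover size k.

    Returns (forced, kernel_edges, budget), where `forced` are vertices
    that must belong to every vertex cover of size <= k, or None if no
    such cover can exist.
    """
    edges = {frozenset(e) for e in edges}
    forced = set()
    changed = True
    while changed:
        changed = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            # Rule: a vertex of degree above the remaining budget must be
            # in the cover, otherwise all its neighbors would have to be.
            if d > k - len(forced):
                forced.add(v)
                edges = {e for e in edges if v not in e}
                changed = True
                break
    budget = k - len(forced)
    # A graph with max degree <= budget and a cover of size <= budget
    # has at most budget^2 edges; more edges mean a no-instance.
    if budget < 0 or len(edges) > budget * budget:
        return None
    return forced, edges, budget
```

The point of such kernels in the parameterized analysis of search heuristics is that the instance size left after preprocessing depends only on the parameter k, not on the input size.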
The existence of designs via iterative absorption: hypergraph F-designs for arbitrary F
We solve the existence problem for F-designs for arbitrary r-uniform
hypergraphs F. This implies that given any r-uniform hypergraph F, the
trivially necessary divisibility conditions are sufficient to guarantee a
decomposition of any sufficiently large complete r-uniform hypergraph into
edge-disjoint copies of F, which answers a question asked e.g. by Keevash.
The graph case was proved by Wilson in 1975 and forms one of the
cornerstones of design theory. The case when F is complete corresponds to the
existence of block designs, a problem going back to the 19th century, which was
recently settled by Keevash. In particular, our argument provides a new proof
of the existence of block designs, based on iterative absorption (which employs
purely probabilistic and combinatorial methods).
Our main result concerns decompositions of hypergraphs whose clique
distribution fulfills certain regularity constraints. Our argument allows us to
employ a `regularity boosting' process which frequently enables us to satisfy
these constraints even if the clique distribution of the original hypergraph
does not satisfy them. This enables us to go significantly beyond the setting
of quasirandom hypergraphs considered by Keevash. In particular, we obtain a
resilience version and a decomposition result for hypergraphs of large minimum
degree.
Comment: This version combines the two manuscripts `The existence of designs
via iterative absorption' (arXiv:1611.06827v1) and the subsequent `Hypergraph
F-designs for arbitrary F' (arXiv:1706.01800) into a single paper, which will
appear in the Memoirs of the AMS.
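For the block-design case mentioned above (F a complete r-uniform hypergraph on q vertices), the trivially necessary divisibility conditions take the following classical form. This is stated for orientation only and is not quoted from the paper:

```latex
% Counting the copies of K_q^{(r)} through a fixed i-subset of vertices
% must give an integer, which forces, for every 0 <= i <= r-1,
\binom{q-i}{r-i} \;\Big|\; \binom{n-i}{r-i}
\qquad \text{for all } 0 \le i \le r-1 .
```

For graphs (r = 2) and F = K_q this recovers the divisibility conditions of Wilson's theorem cited above.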
A Polyhedral Study of Mixed 0-1 Set
We consider a variant of the well-known single node fixed charge network flow set with constant capacities. This set arises from the relaxation of more general mixed integer sets such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
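For orientation, one common formulation of a single node fixed charge network flow set with constant capacity C reads as follows; the abstract's exact variant may differ from this sketch:

```latex
X = \Bigl\{ (x, y) \in \mathbb{R}_+^{n} \times \{0,1\}^{n} \;:\;
      \sum_{j=1}^{n} x_j \le b, \quad
      x_j \le C\, y_j \ \ (j = 1, \dots, n) \Bigr\}
```

Here x_j is the flow on arc j, the binary y_j pays the fixed charge for opening the arc, and b bounds the total flow through the single node.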
Interlocking structure design and assembly
Many objects in our life are not manufactured as whole rigid pieces. Instead, smaller components are made to be later assembled into larger structures. Chairs are assembled from wooden pieces, cabins are made of logs, and buildings are constructed from bricks. These components are commonly designed by many iterations of human thinking. In this report, we will look at a few problems related to interlocking components design and assembly. Given an atomic object, how can we design a package that holds the object firmly without a gap in-between? How many pieces should the package be partitioned into? How can we assemble/extract each piece? We will attack this problem by first looking at the lower bound on the number of pieces, then at the upper bound. Afterwards, we will propose a practical algorithm for designing these packages. We also explore a special kind of interlocking structure which has only one or a small number of movable pieces, for example, a burr puzzle. We will design a few blocks with joints whose combinations can be assembled into almost any voxelized 3D model. Our blocks require very simple motions to be assembled, enabling robotic assembly. As proof of concept, we also develop a robot system to assemble the blocks. In some extreme conditions where construction components are small, controlling each component individually is impossible. We will discuss an option using global controls, which can come from gravity or magnetic fields. We show that in some special cases where the small units form a rectangular matrix, rearrangement can be done in a small space using a technique similar to the bubble sort algorithm.
Approximation of Multiobjective Optimization Problems
We study optimization problems with multiple objectives. Such problems are pervasive across many diverse disciplines -- in economics, engineering, healthcare, biology, to name but a few -- and heuristic approaches to solve them have already been deployed in several areas, in both academia and industry. Hence, there is a real need for a rigorous investigation of the relevant questions. In such problems we are interested not in a single optimal solution, but in the tradeoff between the different objectives. This is captured by the tradeoff or Pareto curve, the set of all feasible solutions whose vector of the various objectives is not dominated by any other solution. Typically, we have a small number of objectives and we wish to plot the tradeoff curve to get a sense of the design space. Unfortunately, typically the tradeoff curve has exponential size for discrete optimization problems even for two objectives (and is typically infinite for continuous problems). Hence, a natural goal in this setting is, given an instance of a multiobjective problem, to efficiently obtain a ``good'' approximation to the entire solution space with ``few'' solutions. This has been the underlying goal in much of the research in the multiobjective area, with many heuristics proposed for this purpose, typically however without any performance guarantees or complexity analysis. We develop efficient algorithms for the succinct approximation of the Pareto set for a large class of multiobjective problems. First, we investigate the problem of computing a minimum set of solutions that approximates within a specified accuracy the Pareto curve of a multiobjective optimization problem. We provide approximation algorithms with tight performance guarantees for bi-objective problems and make progress for the more challenging case of three and more objectives. 
Subsequently, we propose and study the notion of the approximate convex Pareto set, a novel notion of approximation to the Pareto set, as the appropriate one for the convex setting. We characterize when such an approximation can be efficiently constructed and investigate the problem of computing minimum size approximate convex Pareto sets, both for discrete and convex problems. Next, we turn to the problem of approximating the Pareto set as efficiently as possible. To this end, we analyze the Chord algorithm, a popular, simple method for the succinct approximation of curves, which is widely used, under different names, in a variety of areas, such as multiobjective and parametric optimization, computational geometry, and graphics.
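The notion of a succinct (1+eps)-approximate Pareto set can be made concrete with a small sketch for two minimization objectives over a finite point set. This greedy thinning of the exact front is only an illustration and is not the Chord algorithm analyzed in the text:

```python
def pareto_front(points):
    """Nondominated points for bi-objective minimization."""
    pts = sorted(set(points))            # sort by first objective, then second
    front, best_y = [], float("inf")
    for x, y in pts:
        if y < best_y:                   # strictly improves the 2nd objective
            front.append((x, y))
            best_y = y
    return front

def eps_pareto(points, eps):
    """Greedy (1+eps)-approximate Pareto set for nonnegative objectives.

    Walk the exact front in order of increasing first objective and keep
    a point only when the last kept point no longer (1+eps)-dominates it.
    """
    kept = []
    for p in pareto_front(points):
        if kept and kept[-1][0] <= (1 + eps) * p[0] and kept[-1][1] <= (1 + eps) * p[1]:
            continue                     # still covered by the last kept point
        kept.append(p)
    return kept
```

With eps = 0 this returns the full front, whose size the abstract notes can be exponential for discrete problems; with larger eps the kept set shrinks while every feasible point stays within a (1+eps) factor of some representative in both objectives.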