Dynamic Connectivity in Disk Graphs
Let S ⊆ ℝ² be a set of n sites in the plane, so that every site s ∈ S has an associated
radius rs > 0. Let D(S) be the disk intersection graph defined by S, i.e., the graph
with vertex set S and an edge between two distinct sites s, t ∈ S if and only if the
disks with centers s, t and radii rs , rt intersect. Our goal is to design data structures
that maintain the connectivity structure of D(S) as sites are inserted and/or deleted
in S. First, we consider unit disk graphs, i.e., we fix rs = 1, for all sites s ∈ S.
For this case, we describe a data structure that has O(log² n) amortized update time
and O(log n/ log log n) query time. Second, we look at disk graphs with bounded
radius ratio Ψ, i.e., for all s ∈ S, we have 1 ≤ rs ≤ Ψ, for a parameter Ψ that is
known in advance. Here, we not only investigate the fully dynamic case, but also the
incremental and the decremental scenario, where only insertions or only deletions of
sites are allowed. In the fully dynamic case, we achieve amortized expected update
time O(Ψ log⁴ n) and query time O(log n/ log log n). This improves the currently
best update time by a factor of Ψ. In the incremental case, we achieve logarithmic
dependency on Ψ, with a data structure that has O(α(n)) amortized query time and
O(log Ψ log⁴ n) amortized expected update time, where α(n) denotes the inverse Ackermann
function. For the decremental setting, we first develop an efficient decremental
disk revealing data structure: given two sets R and B of disks in the plane, we can delete
disks from B, and upon each deletion, we receive a list of all disks in R that no longer
intersect the union of B. Using this data structure, we get decremental data structures
with a query time of O(log n/ log log n) that support deletions in O(n log Ψ log⁴ n)
overall expected time for disk graphs with bounded radius ratio Ψ and O(n log⁵ n)
overall expected time for disk graphs with arbitrary radii, assuming that the deletion
sequence is oblivious of the internal random choices of the data structures.
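The dynamic structures above are intricate; as a point of reference, a static baseline for the unit disk case (rs = 1, so two sites are adjacent exactly when their centers are within distance 2) can be sketched with a plain union-find. This is an illustrative sketch only, not the paper's data structure, and all names are our own:

```python
import math
from typing import List, Tuple

class UnionFind:
    """Plain union-find; a static stand-in for the paper's dynamic structures."""
    def __init__(self, n: int):
        self.parent = list(range(n))
    def find(self, x: int) -> int:
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a: int, b: int) -> None:
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def unit_disk_connectivity(sites: List[Tuple[float, float]]) -> UnionFind:
    """Connectivity of the unit disk graph: sites s, t are adjacent
    iff the unit disks around them intersect, i.e. dist(s, t) <= 2."""
    uf = UnionFind(len(sites))
    for i, (xi, yi) in enumerate(sites):
        for j in range(i + 1, len(sites)):
            xj, yj = sites[j]
            if math.hypot(xi - xj, yi - yj) <= 2.0:
                uf.union(i, j)
    return uf

sites = [(0.0, 0.0), (1.5, 0.0), (3.0, 0.0), (10.0, 10.0)]
uf = unit_disk_connectivity(sites)
```

The quadratic rebuild is exactly what the O(log² n)-update structure avoids; a query here is just a pair of `find` calls.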
LIPIcs, Volume 251, ITCS 2023, Complete Volume
(Almost) Ruling Out SETH Lower Bounds for All-Pairs Max-Flow
The All-Pairs Max-Flow problem has gained significant popularity in the last
two decades, and many results are known regarding its fine-grained complexity.
Despite this, wide gaps remain in our understanding of the time complexity for
several basic variants of the problem. In this paper, we aim to bridge this gap
by providing algorithms, conditional lower bounds, and non-reducibility
results. Our main result is that for most problem settings, deterministic
reductions based on the Strong Exponential Time Hypothesis (SETH) cannot rule
out time algorithms under a hypothesis called NSETH.
In particular, to obtain our result for the setting of undirected graphs with
unit node-capacities, we design a new randomized time
combinatorial algorithm, improving on the recent time
algorithm [Huang et al., STOC 2023] and matching their lower bound
(up to subpolynomial factors), thus essentially settling the time complexity
for this setting of the problem.
More generally, our main technical contribution is the insight that -cuts
can be verified quickly, and that in most settings, -flows can be shipped
succinctly (i.e., with respect to the flow support). This is a key idea in our
non-reducibility results, and it may be of independent interest.
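As a point of reference for what All-Pairs Max-Flow asks, here is a naive baseline that runs one Edmonds-Karp max-flow computation per ordered pair on a small capacitated digraph. This is a textbook sketch for illustration only; it has none of the fine-grained guarantees discussed above, and all names are our own:

```python
from collections import deque, defaultdict

def max_flow(n, edges, s, t):
    """Edmonds-Karp: repeatedly augment along BFS shortest paths."""
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual arc
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # recover the augmenting path and its bottleneck capacity
        path = []
        v = t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

def all_pairs_max_flow(n, edges):
    """Max-flow value for every ordered pair: n(n-1) flow computations."""
    return {(s, t): max_flow(n, edges, s, t)
            for s in range(n) for t in range(n) if s != t}

pairs = all_pairs_max_flow(4, [(0, 1, 3), (0, 2, 2), (1, 2, 1), (1, 3, 2), (2, 3, 3)])
```

The point of the fine-grained results above is precisely whether one can beat this pair-by-pair approach, and by how much.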
From coordinate subspaces over finite fields to ideal multipartite uniform clutters
Take a prime power , an integer , and a coordinate subspace
over the Galois field . One can associate with
an -partite -uniform clutter , where every part has size
and there is a bijection between the vectors in and the members of
.
In this paper, we determine when the clutter is ideal, a
property developed in connection to Packing and Covering problems in the areas
of Integer Programming and Combinatorial Optimization. Interestingly, the
characterization differs depending on whether is , a higher power of
, or otherwise. Each characterization uses crucially that idealness is a
minor-closed property: first the list of excluded minors is identified, and
only then is the global structure determined. A key insight is that idealness
of depends solely on the underlying matroid of .
Our theorems also extend from idealness to the stronger max-flow min-cut
property. As a consequence, we prove the Replication and Conjectures
for this class of clutters.
"Le present est plein de l’avenir, et chargé du passé" : Vorträge des XI. Internationalen Leibniz-Kongresses, 31. Juli – 4. August 2023, Leibniz Universität Hannover, Deutschland. Band 2
[No abstract available] Funding: Deutsche Forschungsgemeinschaft (DFG), Projektnr. 517991912; VGH Versicherung; Niedersächsisches Ministerium für Wissenschaft und Kultur (MWK)
Improved Approximation Bounds for Minimum Weight Cycle in the CONGEST Model
Minimum Weight Cycle (MWC) is the problem of finding a simple cycle of
minimum weight in a graph . This is a fundamental graph problem with
classical sequential algorithms that run in and
time where and . In recent years this problem
has received significant attention in the context of hardness through fine-grained
sequential complexity as well as in the design of faster sequential
approximation algorithms.
For computing minimum weight cycle in the distributed CONGEST model,
near-linear in lower and upper bounds on round complexity are known for
directed graphs (weighted and unweighted), and for undirected weighted graphs;
these lower bounds also apply to any -approximation algorithm.
This paper focuses on round complexity bounds for approximating MWC in the
CONGEST model: For coarse approximations we show that for any constant , computing an -approximation of MWC requires rounds on weighted undirected graphs and on directed graphs, even
if unweighted. We complement these lower bounds with sublinear
-round algorithms for approximating MWC close to a factor
of 2 in these classes of graphs.
A key ingredient of our approximation algorithms is an efficient algorithm
for computing -approximate shortest paths from sources in
directed and weighted graphs, which may be of independent interest for other
CONGEST problems. We present an algorithm that runs in rounds if and rounds if , and this round
complexity smoothly interpolates between the best known upper bounds for
approximate (or exact) SSSP when and APSP when
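For intuition about the sequential starting point, the unweighted version of MWC (the girth) can be computed by the classical scan that runs one BFS per start vertex and relaxes every non-tree edge. This is a standard sequential baseline, not the CONGEST algorithm of the paper:

```python
from collections import deque

def girth(n, edges):
    """Length of a shortest cycle in a simple undirected unweighted graph,
    via one BFS per start vertex; returns float('inf') for forests."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    best = float('inf')
    for s in range(n):
        dist = {s: 0}
        parent = {s: -1}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    parent[v] = u
                    q.append(v)
                elif parent[u] != v and parent[v] != u:
                    # non-tree edge closes a cycle of length <= dist[u]+dist[v]+1
                    best = min(best, dist[u] + dist[v] + 1)
    return best
```

Each non-tree edge certifies a closed walk, hence a cycle no longer than the candidate value; minimizing over all start vertices recovers the girth.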
Algorithms for Geometric Facility Location: Centers in a Polygon and Dispersion on a Line
We study three geometric facility location problems in this thesis.
First, we consider the dispersion problem in one dimension. We are given an ordered list
of (possibly overlapping) intervals on a line. We wish to choose exactly one point from
each interval such that their left to right ordering on the line matches the input order.
The aim is to choose the points so that the distance between the closest pair of points is
maximized, i.e., they must be socially distanced while respecting the order. We give a new
linear-time algorithm for this problem that produces a lexicographically optimal solution.
We also consider some generalizations of this problem.
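The linear-time lexicographic algorithm is the thesis's contribution; as a rough illustration of the problem itself, a standard binary search on the answer with a greedy left-to-right feasibility check already solves it to any precision. A sketch (our own naming, not the thesis's method):

```python
def feasible(intervals, d):
    """Greedy check: can we pick one point per interval, in the given order,
    with consecutive points at least d apart?"""
    prev = None
    for lo, hi in intervals:
        p = lo if prev is None else max(lo, prev + d)
        if p > hi:
            return False
        prev = p
    return True

def max_dispersion(intervals, tol=1e-9):
    """Binary search on the answer; far from the thesis's linear-time
    algorithm, but enough to see the structure of the problem."""
    lo = 0.0
    hi = max(h for _, h in intervals) - min(l for l, _ in intervals)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(intervals, mid):
            lo = mid
        else:
            hi = mid
    return lo
```

Because the chosen points must respect the input order, only consecutive points can realize the closest pair, which is what makes the greedy check correct.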
For the next two problems, the domain of interest is a simple polygon with n vertices.
The second problem concerns the visibility center. The convention is to think of a polygon
as the top view of a building (or art gallery) where the polygon boundary represents opaque
walls. Two points in the domain are visible to each other if the line segment joining them
does not intersect the polygon exterior. The distance to visibility from a source point to a
target point is the minimum geodesic distance from the source to a point in the polygon
visible to the target. The question is: Where should a single guard be located within the
polygon to minimize the maximum distance to visibility? For m point sites in the polygon,
we give an O((m + n) log (m + n)) time algorithm to determine their visibility center.
Finally, we address the problem of locating the geodesic edge center of a simple polygon—a
point in the polygon that minimizes the maximum geodesic distance to any edge. For a
triangle, this point coincides with its incenter. The geodesic edge center is a generalization
of the well-studied geodesic center (a point that minimizes the maximum distance to any
vertex). Center problems are closely related to farthest Voronoi diagrams, which are well-
studied for point sites in the plane, and less well-studied for line segment sites in the plane.
When the domain is a polygon rather than the whole plane, only the case of point sites has
been addressed—surprisingly, more general sites (with line segments being the simplest
example) have been largely ignored. En route to our solution, we revisit, correct, and
generalize (sometimes in a non-trivial manner) existing algorithms and structures tailored
to work specifically for point sites. We give an optimal linear-time algorithm for finding
the geodesic edge center of a simple polygon.
"Le present est plein de l’avenir, et chargé du passé" : Vorträge des XI. Internationalen Leibniz-Kongresses, 31. Juli – 4. August 2023, Leibniz Universität Hannover, Deutschland. Band 3
[No abstract available] Funding: Deutsche Forschungsgemeinschaft (DFG), Projektnr. 517991912; VGH Versicherung; Niedersächsisches Ministerium für Wissenschaft und Kultur (MWK)
Machine Learning Framework for Causal Modeling for Process Fault Diagnosis and Mechanistic Explanation Generation
Machine learning models, typically deep learning models, often come at the cost of explainability. To generate explanations of such systems, models need to be rooted in first principles, at least mechanistically. In this work we look at a gamut of machine learning models based on different levels of process knowledge for process fault diagnosis and for generating mechanistic explanations of processes. In chapter 1, we introduce the thesis through a range of problems in causality and explainability, aiming towards the goal of generating mechanistic explanations of process systems. Chapter 2 looks at a purely data-centric approach for generating causal models, with minimal process knowledge, restricted to equipment connectivity, for identifying causality in the domain. The causal models generated can be utilized for process fault diagnosis.
Chapter 3 and chapter 4 show how deep learning models can be used both for classification for process fault diagnosis and for regression. We see that depending on the hyperparameters, i.e., purely the breadth and depth of a neural network, the learned hidden representations vary from a simple set of features to more complex sets of features. While these hidden representations may be exploited to aid in classification and regression problems, the true explanations of these representations do not correlate with mechanisms in the system of interest. There is thus a requirement to add more mechanistic information about the generated features to aid in explainability.
Chapter 5 shows how incorporating process knowledge can aid in generating such mechanistic explanations based on automated variable transformations. In this chapter we show how process knowledge can be used to generate features, or model forms, that yield explainable models. These models are able to extract the true models of the system from the model knowledge provided.
Two-sets cut-uncut on planar graphs
We study the following Two-Sets Cut-Uncut problem on planar graphs. Therein,
one is given an undirected planar graph and two sets of vertices and
. The question is: what is the minimum number of edges to remove from ,
such that we separate all of from all of , while maintaining that every
vertex in , and respectively in , stays in the same connected component.
We show that this problem can be solved in time with a
one-sided error randomized algorithm. Our algorithm implies a polynomial-time
algorithm for the network diversion problem on planar graphs, which resolves an
open question from the literature. More generally, we show that Two-Sets
Cut-Uncut remains fixed-parameter tractable even when parameterized by the
number of faces in the plane graph covering the terminals , by
providing an algorithm of running time .
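To pin down the definition, here is a brute-force check of Two-Sets Cut-Uncut on tiny instances: enumerate candidate edge sets by size and test the separation and connectivity conditions on the remaining graph. This exponential sketch ignores planarity entirely and is only meant to make the constraints concrete:

```python
from itertools import combinations

def components(vertices, edges):
    """Connected-component labels via union-find over the given edge set."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return {v: find(v) for v in vertices}

def two_sets_cut_uncut(vertices, edges, S, T):
    """Smallest edge set F such that in (V, E - F): no vertex of S is
    connected to any vertex of T, all of S stays in one component, and
    all of T stays in one component. Brute force, smallest F first."""
    for k in range(len(edges) + 1):
        for F in combinations(edges, k):
            rest = [e for e in edges if e not in F]
            comp = components(vertices, rest)
            s_comps = {comp[v] for v in S}
            t_comps = {comp[v] for v in T}
            if len(s_comps) == 1 and len(t_comps) == 1 and s_comps != t_comps:
                return k
    return None
```

On a 4-cycle with one terminal of each side on opposite corners, a single edge deletion never separates them, so the optimum is 2; the abstract's contribution is doing this kind of search in polynomial (randomized) time on planar graphs.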