
    Routing schemes for hybrid communication networks

    We consider the problem of computing routing schemes in the HYBRID model of distributed computing, where nodes have access to two fundamentally different communication modes. In this problem nodes have to compute small labels and routing tables that allow for efficient routing of messages in the local network, which typically offers the majority of the throughput. Recent work has shown that the HYBRID model admits a significant speed-up compared to what would be possible if either communication mode were used in isolation. Nonetheless, for general input graphs the computation of routing schemes still takes a polynomial number of rounds in the HYBRID model. We bypass this lower bound by restricting the local graph to unit-disk graphs, and solve the problem deterministically with running time O(|H|^2 + log n), label size O(log n), and routing tables of size O(|H|^2 · log n), where |H| is the number of "radio holes" in the network. Our work builds on recent work by Coy et al., who obtain this result in the much simpler setting where the input graph has no radio holes. We develop new techniques to achieve this, including a decomposition of the local graph into path-convex regions, where each region contains a shortest path for any pair of nodes in it.
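
    For orientation, the local graph in this setting is a unit-disk graph: nodes are adjacent exactly when their Euclidean distance is at most the (normalized) radio range. Below is a minimal Python sketch of that construction; the node names and coordinates are made up for illustration.

```python
import math
from itertools import combinations

def unit_disk_graph(positions):
    """Build the adjacency list of a unit-disk graph.

    positions: dict mapping node id -> (x, y) coordinate.
    Two nodes are connected iff their Euclidean distance is <= 1,
    modelling the range-limited local (radio) network.
    """
    adj = {v: set() for v in positions}
    for u, v in combinations(positions, 2):
        if math.dist(positions[u], positions[v]) <= 1.0:
            adj[u].add(v)
            adj[v].add(u)
    return adj

# Example: three nodes, only the first two are within radio range.
print(unit_disk_graph({"a": (0, 0), "b": (0.8, 0), "c": (3, 3)}))
```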

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    From a causal representation of multiloop scattering amplitudes to quantum computing in the Loop-Tree Duality

    The perturbative approach to Quantum Field Theories has successfully provided incredibly accurate theoretical predictions in high-energy physics. Despite the development of several techniques to boost the efficiency of these calculations, some ingredients remain a hard bottleneck. This is the case of multiloop scattering amplitudes, which describe the quantum fluctuations in high-energy scattering processes. The Loop-Tree Duality (LTD) is a novel method aimed at overcoming these difficulties by opening the loop amplitudes into connected tree-level diagrams. In this thesis we present three core achievements: the reformulation of the Loop-Tree Duality to all orders in the perturbative expansion, a general methodology to obtain LTD expressions that are manifestly causal, and the first flagship application of a quantum algorithm to Feynman loop integrals. The proposed strategy to implement the LTD framework consists of the iterated application of Cauchy's residue theorem to a series of multiloop topologies with arbitrary internal configurations. We derive an LTD representation exhibiting a factorized cascade form in terms of simpler subtopologies characterized by a well-known causal behaviour. Moreover, through a clever approach we extract analytic dual representations that are explicitly free of noncausal singularities. These properties make it possible to open any scattering amplitude of up to five loops in a factorized form, with better numerical stability than other representations due to the absence of noncausal singularities. Last but not least, we establish the connection between Feynman loop integrals and quantum computing by encoding the two on-shell states of a Feynman propagator in the two states of a qubit. We propose a modified Grover's quantum algorithm to unfold the causal singular configurations of multiloop Feynman diagrams, which are used to bootstrap the causal LTD representation of multiloop topologies.

    A Local-to-Global Theorem for Congested Shortest Paths

    Amiri and Wargalla (2020) proved the following local-to-global theorem in directed acyclic graphs (DAGs): if G is a weighted DAG such that for each subset S of 3 nodes there is a shortest path containing every node in S, then there exists a pair (s,t) of nodes such that there is a shortest s-t path containing every node in G. We extend this theorem to general graphs. For undirected graphs, we prove that the same theorem holds (up to a difference in the constant 3). For directed graphs, we provide a counterexample to the theorem (for any constant), and prove a roundtrip analogue which shows that there exists a pair (s,t) of nodes such that every node in G is contained in the union of a shortest s-t path and a shortest t-s path. The original theorem for DAGs has an application to the k-Shortest Paths with Congestion c ((k,c)-SPC) problem. In this problem, we are given a weighted graph G, together with k node pairs (s_1,t_1), ..., (s_k,t_k), and a positive integer c ≤ k. We are tasked with finding paths P_1, ..., P_k such that each P_i is a shortest path from s_i to t_i and every node in the graph is on at most c paths P_i, or reporting that no such collection of paths exists. When c = k the problem is easily solved by finding shortest paths for each pair (s_i,t_i) independently. When c = 1, the (k,c)-SPC problem recovers the k-Disjoint Shortest Paths (k-DSP) problem, where the collection of shortest paths must be node-disjoint. For fixed k, k-DSP can be solved in polynomial time on DAGs and undirected graphs. Previous work shows that the local-to-global theorem for DAGs implies that (k,c)-SPC can be solved in polynomial time on DAGs whenever k - c is constant. In the same way, our work implies that (k,c)-SPC can be solved in polynomial time on undirected graphs whenever k - c is constant.
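
    The easy case c = k mentioned in the abstract can be made concrete: with the congestion bound effectively absent, each pair is routed on its own shortest path. Here is a small self-contained sketch; the dictionary graph format and helper names are our own illustration, not from the paper, and it assumes every target is reachable.

```python
import heapq

def dijkstra_path(graph, s, t):
    """Shortest s-t path in a weighted graph given as {u: [(v, w), ...]}.
    Returns the path as a list of nodes; assumes t is reachable from s."""
    dist, prev = {s: 0}, {}
    heap = [(0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], t
    while node != s:
        path.append(node)
        node = prev[node]
    path.append(s)
    return path[::-1]

def spc_trivial_case(graph, pairs):
    """(k, c)-SPC with c = k: congestion cannot be violated, so every
    pair can be routed on an independently computed shortest path."""
    return [dijkstra_path(graph, s, t) for s, t in pairs]
```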

    Algorithms for Geometric Facility Location: Centers in a Polygon and Dispersion on a Line

    We study three geometric facility location problems in this thesis. First, we consider the dispersion problem in one dimension. We are given an ordered list of (possibly overlapping) intervals on a line. We wish to choose exactly one point from each interval such that their left-to-right ordering on the line matches the input order. The aim is to choose the points so that the distance between the closest pair of points is maximized, i.e., they must be socially distanced while respecting the order. We give a new linear-time algorithm for this problem that produces a lexicographically optimal solution. We also consider some generalizations of this problem. For the next two problems, the domain of interest is a simple polygon with n vertices. The second problem concerns the visibility center. The convention is to think of a polygon as the top view of a building (or art gallery) where the polygon boundary represents opaque walls. Two points in the domain are visible to each other if the line segment joining them does not intersect the polygon exterior. The distance to visibility from a source point to a target point is the minimum geodesic distance from the source to a point in the polygon visible to the target. The question is: Where should a single guard be located within the polygon to minimize the maximum distance to visibility? For m point sites in the polygon, we give an O((m + n) log (m + n)) time algorithm to determine their visibility center. Finally, we address the problem of locating the geodesic edge center of a simple polygon—a point in the polygon that minimizes the maximum geodesic distance to any edge. For a triangle, this point coincides with its incenter. The geodesic edge center is a generalization of the well-studied geodesic center (a point that minimizes the maximum distance to any vertex). Center problems are closely related to farthest Voronoi diagrams, which are well-studied for point sites in the plane, and less well-studied for line segment sites in the plane. When the domain is a polygon rather than the whole plane, only the case of point sites has been addressed—surprisingly, more general sites (with line segments being the simplest example) have been largely ignored. En route to our solution, we revisit, correct, and generalize (sometimes in a non-trivial manner) existing algorithms and structures tailored to work specifically for point sites. We give an optimal linear-time algorithm for finding the geodesic edge center of a simple polygon.
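
    As a hedged illustration of the one-dimensional dispersion problem described above, here is a standard binary-search-plus-greedy sketch, not the thesis's linear-time or lexicographically optimal algorithm: guess a gap d, then place each point as far left as its interval and the gap allow.

```python
def feasible(intervals, d):
    """Greedy check: can we pick one point per interval, in order,
    with consecutive points at least d apart?"""
    prev = None
    for lo, hi in intervals:
        p = lo if prev is None else max(lo, prev + d)
        if p > hi:
            return False
        prev = p
    return True

def max_dispersion(intervals, iters=60):
    """Binary search on the real-valued gap d (fixed number of
    iterations); intervals are (lo, hi) pairs in left-to-right order."""
    lo = 0.0
    hi = max(h for _, h in intervals) - min(l for l, _ in intervals)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if feasible(intervals, mid):
            lo = mid
        else:
            hi = mid
    return lo

# Example: three overlapping intervals; the optimal minimum gap is 2.
print(round(max_dispersion([(0, 1), (0.5, 2), (1, 4)]), 3))
```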

    Quadratic Speedups in Parallel Sampling from Determinantal Distributions

    We study the problem of parallelizing sampling from distributions related to determinants: symmetric, nonsymmetric, and partition-constrained determinantal point processes, as well as planar perfect matchings. For these distributions, the partition function, a.k.a. the count, can be obtained via matrix determinants, a highly parallelizable computation; Csanky proved it is in NC. However, parallel counting does not automatically translate to parallel sampling, as classic reductions between the two are inherently sequential. We show that a nearly quadratic parallel speedup over sequential sampling can be achieved for all the aforementioned distributions. If the distribution is supported on subsets of size k of a ground set, we show how to approximately produce a sample in Õ(k^{1/2 + c}) time with polynomially many processors, for any constant c > 0. In the two special cases of symmetric determinantal point processes and planar perfect matchings, our bound improves to Õ(√k), and we show how to sample exactly in these cases. As our main technical contribution, we fully characterize the limits of batching for the steps of sampling-to-counting reductions. We observe that only O(1) steps can be batched together if we strive for exact sampling, even in the case of nonsymmetric determinantal point processes. However, we show that for approximate sampling, Ω̃(k^{1/2 - c}) steps can be batched together, for any entropically independent distribution, which includes all mentioned classes of determinantal point processes. Entropic independence and related notions have been the source of breakthroughs in Markov chain analysis in recent years, so we expect our framework to prove useful for distributions beyond those studied in this work.
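
    The remark that the partition function is a matrix determinant can be checked directly for L-ensemble determinantal point processes, where the unnormalized weight of a subset S is det(L_S) and the normalizing constant is det(L + I). A small numpy sanity check with a made-up kernel:

```python
import numpy as np
from itertools import combinations

# A small symmetric PSD kernel for an L-ensemble DPP (made-up example).
L = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])
n = L.shape[0]

# Partition function via one determinant: Z = det(L + I).
Z_det = np.linalg.det(L + np.eye(n))

# Brute-force check: sum det(L_S) over all subsets S
# (the determinant of the empty principal minor is 1 by convention).
Z_brute = sum(
    np.linalg.det(L[np.ix_(S, S)]) if S else 1.0
    for k in range(n + 1)
    for S in (list(c) for c in combinations(range(n), k))
)
print(Z_det, Z_brute)  # the two agree up to floating-point error
```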

    Analysing trajectory similarity and improving graph dilation

    In this thesis, we focus on two topics in computational geometry. The first topic is analysing trajectory similarity. A trajectory tracks the movement of an object over time. A common way to analyse trajectories is by finding similarities. The Fréchet distance is a similarity measure that has gained popularity in the theory community, since it takes the continuity of the curves into account. One way to analyse trajectories using the Fréchet distance is to cluster trajectories into groups of similar trajectories. For vehicle trajectories, another way to analyse trajectories is to compute the path on the underlying road network that best represents the trajectory. The second topic is improving graph dilation. Dilation measures the quality of a network in applications such as transportation and communication networks. Spanners are low-dilation graphs with not too many edges. Most of the literature on spanners focuses on building the graph from scratch. We instead focus on adding edges to improve the dilation of an existing graph.
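
    For readers unfamiliar with the Fréchet distance, its discrete variant over polyline vertices is easy to compute by dynamic programming; the sketch below shows that discrete version only (the thesis works with the continuous measure), with made-up example trajectories.

```python
import math
from functools import lru_cache

def discrete_frechet(P, Q):
    """Discrete Frechet distance between two polylines given as lists
    of (x, y) points, via the classic O(|P|*|Q|) recursion."""
    @lru_cache(maxsize=None)
    def c(i, j):
        d = math.dist(P[i], Q[j])
        if i == 0 and j == 0:
            return d
        if i == 0:
            return max(c(0, j - 1), d)
        if j == 0:
            return max(c(i - 1, 0), d)
        return max(min(c(i - 1, j), c(i - 1, j - 1), c(i, j - 1)), d)

    return c(len(P) - 1, len(Q) - 1)

# Two slightly shifted trajectories: their discrete Frechet distance is 0.5.
print(discrete_frechet([(0, 0), (1, 0), (2, 0)],
                       [(0, 0.5), (1, 0.5), (2, 0.5)]))
```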

    Stronger 3-SUM Lower Bounds for Approximate Distance Oracles via Additive Combinatorics

    The "short cycle removal" technique was recently introduced by Abboud, Bringmann, Khoury and Zamir (STOC '22) to prove fine-grained hardness of approximation. Its main technical result is that listing all triangles in an n1/2n^{1/2}-regular graph is n2o(1)n^{2-o(1)}-hard under the 3-SUM conjecture even when the number of short cycles is small; namely, when the number of kk-cycles is O(nk/2+γ)O(n^{k/2+\gamma}) for γ<1/2\gamma<1/2. Abboud et al. achieve γ1/4\gamma\geq 1/4 by applying structure vs. randomness arguments on graphs. In this paper, we take a step back and apply conceptually similar arguments on the numbers of the 3-SUM problem. Consequently, we achieve the best possible γ=0\gamma=0 and the following lower bounds under the 3-SUM conjecture: * Approximate distance oracles: The seminal Thorup-Zwick distance oracles achieve stretch 2k±O(1)2k\pm O(1) after preprocessing a graph in O(mn1/k)O(m n^{1/k}) time. For the same stretch, and assuming the query time is no(1)n^{o(1)} Abboud et al. proved an Ω(m1+112.7552k)\Omega(m^{1+\frac{1}{12.7552 \cdot k}}) lower bound on the preprocessing time; we improve it to Ω(m1+12k)\Omega(m^{1+\frac1{2k}}) which is only a factor 2 away from the upper bound. We also obtain tight bounds for stretch 2+o(1)2+o(1) and 3ϵ3-\epsilon and higher lower bounds for dynamic shortest paths. * Listing 4-cycles: Abboud et al. proved the first super-linear lower bound for listing all 4-cycles in a graph, ruling out (m1.1927+t)1+o(1)(m^{1.1927}+t)^{1+o(1)} time algorithms where tt is the number of 4-cycles. We settle the complexity of this basic problem by showing that the O~(min(m4/3,n2)+t)\widetilde{O}(\min(m^{4/3},n^2) +t) upper bound is tight up to no(1)n^{o(1)} factors. Our results exploit a rich tool set from additive combinatorics, most notably the Balog-Szemer\'edi-Gowers theorem and Rusza's covering lemma. A key ingredient that may be of independent interest is a subquadratic algorithm for 3-SUM if one of the sets has small doubling.Comment: Abstract shortened to fit arXiv requirement

    Algorithms for sparse convolution and sublinear edit distance

    In this PhD thesis on fine-grained algorithm design and complexity, we investigate output-sensitive and sublinear-time algorithms for two important problems. (1) Sparse Convolution: Computing the convolution of two vectors is a basic algorithmic primitive with applications across all of Computer Science and Engineering. In the sparse convolution problem we assume that the input and output vectors have at most t nonzero entries, and the goal is to design algorithms with running times dependent on t. For the special case where all entries are nonnegative, which is particularly important for algorithm design, it has been known for twenty years that sparse convolutions can be computed in near-linear randomized time O(t log^2 n). In this thesis we develop a randomized algorithm with running time O(t log t), which is optimal under some mild assumptions, and the first near-linear deterministic algorithm for sparse nonnegative convolution. We also present an application of these results, leading to seemingly unrelated fine-grained lower bounds against distance oracles in graphs. (2) Sublinear Edit Distance: The edit distance of two strings is a well-studied similarity measure with numerous applications in computational biology. While computing the edit distance exactly provably requires quadratic time, a long line of research has led to a constant-factor approximation algorithm in almost-linear time. Perhaps surprisingly, it is also possible to approximate the edit distance k within a large factor O(k) in sublinear time Õ(n/k + poly(k)). We drastically improve the approximation factor of the known sublinear algorithms from O(k) to k^{o(1)} while preserving the Õ(n/k + poly(k)) running time.
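
    To fix notation for the sparse convolution problem, here is a deliberately naive output-sensitive sketch that represents vectors as dictionaries of nonzero entries and runs in O(t^2) time; the thesis's near-linear algorithms are far more involved and are not reproduced here.

```python
from collections import defaultdict

def sparse_convolution(f, g):
    """Convolve two sparse vectors represented as {index: value} dicts
    of nonzero entries; returns the sparse result. Runs in O(|f|*|g|)
    time, i.e. O(t^2) for t nonzeros, whereas the thesis targets
    near-linear time in t."""
    h = defaultdict(float)
    for i, a in f.items():
        for j, b in g.items():
            h[i + j] += a * b
    return {k: v for k, v in h.items() if v != 0}

# (1 + x^5) * (2x + x^5) = 2x + x^5 + 2x^6 + x^10
print(sparse_convolution({0: 1, 5: 1}, {1: 2, 5: 1}))
```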