
    Why walking the dog takes time: Fréchet distance has no strongly subquadratic algorithms unless SETH fails

    The Fréchet distance is a well-studied and very popular measure of similarity of two curves. Many variants and extensions have been studied since Alt and Godau introduced this measure to computational geometry in 1991. Their original algorithm to compute the Fréchet distance of two polygonal curves with $n$ vertices has a runtime of $O(n^2 \log n)$. More than 20 years later, the state of the art algorithms for most variants still take time more than $O(n^2 / \log n)$, but no matching lower bounds are known, not even under reasonable complexity theoretic assumptions. To obtain a conditional lower bound, in this paper we assume the Strong Exponential Time Hypothesis or, more precisely, that there is no $O^*((2-\delta)^N)$ algorithm for CNF-SAT for any $\delta > 0$. Under this assumption we show that the Fréchet distance cannot be computed in strongly subquadratic time, i.e., in time $O(n^{2-\delta})$ for any $\delta > 0$. This means that finding faster algorithms for the Fréchet distance is as hard as finding faster CNF-SAT algorithms, and the existence of a strongly subquadratic algorithm can be considered unlikely. Our result holds for both the continuous and the discrete Fréchet distance. We extend the main result in various directions. Based on the same assumption we (1) show non-existence of a strongly subquadratic 1.001-approximation, (2) present tight lower bounds in case the numbers of vertices of the two curves are imbalanced, and (3) examine realistic input assumptions (c-packed curves).
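
    Since the abstract centers on the quadratic-time barrier for computing the Fréchet distance, a minimal sketch of the classical quadratic dynamic program for the discrete Fréchet distance may help fix ideas. This is the textbook baseline, not the paper's contribution; the function name and the Euclidean metric are illustrative choices.

        from math import dist  # Euclidean distance (Python 3.8+)

        def discrete_frechet(P, Q):
            """O(len(P) * len(Q)) dynamic program for the discrete Frechet
            distance of polygonal curves P and Q (lists of points).
            This quadratic running time is exactly what the paper shows
            cannot be improved to O(n^(2-delta)) under SETH."""
            n, m = len(P), len(Q)
            # ca[i][j] = discrete Frechet distance of P[:i+1] and Q[:j+1]
            ca = [[0.0] * m for _ in range(n)]
            for i in range(n):
                for j in range(m):
                    d = dist(P[i], Q[j])
                    if i == 0 and j == 0:
                        ca[i][j] = d
                    elif i == 0:
                        ca[i][j] = max(ca[i][j - 1], d)
                    elif j == 0:
                        ca[i][j] = max(ca[i - 1][j], d)
                    else:
                        ca[i][j] = max(min(ca[i - 1][j],
                                           ca[i - 1][j - 1],
                                           ca[i][j - 1]), d)
            return ca[n - 1][m - 1]

        # Two parallel horizontal segments at vertical distance 1:
        print(discrete_frechet([(0, 0), (1, 0), (2, 0)],
                               [(0, 1), (1, 1), (2, 1)]))  # 1.0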

    Improved approximation for Fréchet distance on c-packed curves matching conditional lower bounds

    The Fréchet distance is a well-studied and very popular measure of similarity of two curves. The best known algorithms have quadratic time complexity, which has recently been shown to be optimal assuming the Strong Exponential Time Hypothesis (SETH) [Bringmann FOCS'14]. To overcome the worst-case quadratic time barrier, restricted classes of curves have been studied that attempt to capture realistic input curves. The most popular such class is c-packed curves, for which the Fréchet distance has a $(1+\epsilon)$-approximation in time $\tilde{O}(cn/\epsilon)$ [Driemel et al. DCG'12]. In dimension $d \ge 5$ this cannot be improved to $O((cn/\sqrt{\epsilon})^{1-\delta})$ for any $\delta > 0$ unless SETH fails [Bringmann FOCS'14]. In this paper, exploiting properties that prevent stronger lower bounds, we present an improved algorithm with runtime $\tilde{O}(cn/\sqrt{\epsilon})$. This is optimal in high dimensions, apart from lower order factors, unless SETH fails. Our main new ingredients are as follows: For filling the classical free-space diagram we project short subcurves onto a line, which yields one-dimensional separated curves with roughly the same pairwise distances between vertices. Then we tackle this special case in near-linear time by carefully extending a greedy algorithm for the Fréchet distance of one-dimensional separated curves.
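
    The algorithm builds on the free-space diagram, which underlies the standard decision procedure "is the Fréchet distance at most delta?". As a toy analogue (discrete case, no c-packedness, our own naming), such a decision procedure looks as follows; binary searching over delta then recovers the distance itself.

        from math import dist

        def frechet_at_most(P, Q, delta):
            """Decision version behind free-space methods: is the discrete
            Frechet distance of P and Q at most delta?  Cell (i, j) is
            'free' when |P[i] - Q[j]| <= delta, and we ask whether a
            monotone path of free cells connects (0, 0) to (n-1, m-1).
            Toy analogue only; the paper works with the continuous
            free-space diagram and exploits c-packedness."""
            n, m = len(P), len(Q)
            free = [[dist(P[i], Q[j]) <= delta for j in range(m)]
                    for i in range(n)]
            reach = [[False] * m for _ in range(n)]
            for i in range(n):
                for j in range(m):
                    if not free[i][j]:
                        continue
                    if i == 0 and j == 0:
                        reach[i][j] = True
                    else:
                        reach[i][j] = ((i > 0 and reach[i - 1][j]) or
                                       (j > 0 and reach[i][j - 1]) or
                                       (i > 0 and j > 0 and reach[i - 1][j - 1]))
            return reach[n - 1][m - 1]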

    A Note on Hardness of Diameter Approximation

    We revisit the hardness of approximating the diameter of a network. In the CONGEST model of distributed computing, $\tilde{\Omega}(n)$ rounds are necessary to compute the diameter [Frischknecht et al. SODA'12], where $\tilde{\Omega}(\cdot)$ hides polylogarithmic factors. Abboud et al. [DISC 2016] extended this result to sparse graphs and, at a more fine-grained level, showed that, for any integer $1 \leq \ell \leq \operatorname{polylog}(n)$, distinguishing between networks of diameter $4\ell + 2$ and $6\ell + 1$ requires $\tilde{\Omega}(n)$ rounds. We slightly tighten this result by showing that even distinguishing between diameter $2\ell + 1$ and $3\ell + 1$ requires $\tilde{\Omega}(n)$ rounds. The reduction of Abboud et al. is inspired by recent conditional lower bounds in the RAM model, where the orthogonal vectors problem plays a pivotal role. In our new lower bound, we make the connection to orthogonal vectors explicit, leading to a conceptually more streamlined exposition. Comment: Accepted to Information Processing Letters.
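
    Since the orthogonal vectors problem is the pivot of the reduction, here is its statement in the form of a brute-force check (a sketch for definition's sake; the naming is ours):

        from itertools import product

        def has_orthogonal_pair(A, B):
            """Orthogonal Vectors: given sets A, B of 0/1 vectors, decide
            whether some a in A and b in B satisfy <a, b> = 0.  This
            naive check runs in O(|A| * |B| * d) time; the conjectured
            impossibility of strongly subquadratic algorithms (implied
            by SETH) drives the lower bounds discussed above."""
            return any(all(x * y == 0 for x, y in zip(a, b))
                       for a, b in product(A, B))

        # (1, 0, 1) and (0, 1, 0) are orthogonal:
        print(has_orthogonal_pair([(1, 0, 1)], [(1, 1, 0), (0, 1, 0)]))  # True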

    False theta functions and companions to Capparelli's identities

    Capparelli conjectured two modular identities for partitions whose parts satisfy certain gap conditions, which were motivated by the calculation of characters for the standard modules of certain affine Lie algebras and by vertex operator theory. These identities were subsequently proved and refined by Andrews, who related them to Jacobi theta functions, and also by Alladi-Andrews-Gordon, Capparelli, and Tamba-Xie. In this paper we prove two new companions to Capparelli's identities, where the evaluations are expressed in terms of Jacobi theta functions and false theta functions. Comment: 17 pages; references updated.
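
    For orientation, two standard definitions (background only, not taken from the paper): a Jacobi theta function is a two-sided sum admitting a product expansion by the Jacobi triple product, while a false theta function has the same quadratic exponents but sign choices that break the two-sided symmetry and preclude such a product. A representative example of each:

        % Background only.  Jacobi triple product, |q| < 1:
        \[
          \sum_{n=-\infty}^{\infty} z^n q^{n^2}
            = \prod_{n=1}^{\infty} (1 - q^{2n})\,(1 + zq^{2n-1})\,(1 + z^{-1}q^{2n-1}).
        \]
        % A false theta function in the sense of Rogers, whose signs
        % destroy the symmetry (and hence the product form):
        \[
          \sum_{n=0}^{\infty} (-1)^n q^{n(n+1)/2}.
        \]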

    Multivariate Fine-Grained Complexity of Longest Common Subsequence

    We revisit the classic combinatorial pattern matching problem of finding a longest common subsequence (LCS). For strings $x$ and $y$ of length $n$, a textbook algorithm solves LCS in time $O(n^2)$, but although much effort has been spent, no $O(n^{2-\varepsilon})$-time algorithm is known. Recent work indeed shows that such an algorithm would refute the Strong Exponential Time Hypothesis (SETH) [Abboud, Backurs, Vassilevska Williams + Bringmann, Künnemann FOCS'15]. Despite the quadratic-time barrier, for over 40 years an enduring scientific interest continued to produce fast algorithms for LCS and its variations. Particular attention was put into identifying and exploiting input parameters that yield strongly subquadratic time algorithms for special cases of interest, e.g., differential file comparison. This line of research was successfully pursued until 1990, at which time significant improvements came to a halt. In this paper, using the lens of fine-grained complexity, our goal is to (1) justify the lack of further improvements and (2) determine whether some special cases of LCS admit faster algorithms than currently known. To this end, we provide a systematic study of the multivariate complexity of LCS, taking into account all parameters previously discussed in the literature: the input size $n := \max\{|x|, |y|\}$, the length of the shorter string $m := \min\{|x|, |y|\}$, the length $L$ of an LCS of $x$ and $y$, the numbers of deletions $\delta := m - L$ and $\Delta := n - L$, the alphabet size, as well as the numbers of matching pairs $M$ and dominant pairs $d$. For any class of instances defined by fixing each parameter individually to a polynomial in terms of the input size, we prove a SETH-based lower bound matching one of three known algorithms. Specifically, we determine the optimal running time for LCS under SETH as $(n + \min\{d, \delta\Delta, \delta m\})^{1 \pm o(1)}$. [...] Comment: Presented at SODA'18. Full version. 66 pages.
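
    The textbook quadratic algorithm the abstract refers to is the classical LCS dynamic program; a minimal sketch (our naming), which also makes the deletion parameters $\delta$ and $\Delta$ concrete:

        def lcs_length(x, y):
            """Textbook O(|x| * |y|) dynamic program for the length L of a
            longest common subsequence -- the quadratic baseline whose
            SETH-optimality the paper pins down parameter by parameter."""
            la, lb = len(x), len(y)
            dp = [[0] * (lb + 1) for _ in range(la + 1)]
            for i in range(1, la + 1):
                for j in range(1, lb + 1):
                    if x[i - 1] == y[j - 1]:
                        dp[i][j] = dp[i - 1][j - 1] + 1
                    else:
                        dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
            return dp[la][lb]

        x, y = "dominant", "diamond"   # n = 8, m = 7 in the abstract's notation
        L = lcs_length(x, y)           # L = 4, e.g. "dian"
        print(L, len("diamond") - L, len("dominant") - L)  # L, delta, Delta -> 4 3 4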

    Remarks on Category-Based Routing in Social Networks

    It is well known that individuals can route messages on short paths through social networks, given only simple information about the target and using only local knowledge about the topology. Sociologists conjecture that people find routes greedily by passing the message to an acquaintance who has more in common with the target than themselves; e.g., if a dentist in Saarbrücken wants to send a message to a specific lawyer in Munich, he may forward it to someone who is a lawyer and/or lives in Munich. Modelling this setting, Eppstein et al. introduced the notion of category-based routing. The goal is to assign a set of categories to each node of a graph such that greedy routing is possible. By proving bounds on the number of categories a node has to be in, we can argue about the plausibility of the underlying sociological model. In this paper we substantially improve the upper bounds introduced by Eppstein et al. and prove new lower bounds. Comment: 21 pages.
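
    As a toy rendition of the greedy rule described above (the data structures, tie-breaking, and progress condition are our own illustrative choices, not the formal model of Eppstein et al.):

        def greedy_route(graph, categories, source, target, max_hops=100):
            """Forward the message to the neighbor sharing the most
            categories with the target -- the 'forward to someone who is
            a lawyer and/or lives in Munich' heuristic.  graph maps each
            node to its set of neighbors; categories maps each node to
            its set of category labels."""
            path, current = [source], source
            for _ in range(max_hops):
                if current == target:
                    return path
                overlap = lambda v: len(categories[v] & categories[target])
                best = max(graph[current], key=overlap)
                if overlap(best) <= overlap(current):
                    return None  # greedy is stuck: no neighbor has more in common
                path.append(best)
                current = best
            return None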

    Double series representations for Schur's partition function and related identities

    We prove new double summation hypergeometric $q$-series representations for several families of partitions, including those that appear in the famous product identities of Göllnitz, Gordon, and Schur. We give several different proofs for our results, using bijective partition mappings and modular diagrams, the theory of $q$-difference equations and recurrences, and the theories of summation and transformation for $q$-series. We also consider a general family of similar double series and highlight a number of other interesting special cases. Comment: 19 pages.
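
    Schur's classical partition identity, to which the paper's double series relate, can be checked numerically for small $n$. The sketch below counts both sides of the standard statement of Schur's 1926 theorem (gap-condition partitions versus partitions into distinct parts not divisible by 3); the formulation and code are ours, for illustration only.

        from functools import lru_cache

        def schur_gap_count(n):
            """Partitions of n whose parts increase by at least 3, and by
            at least 4 whenever the larger of two consecutive parts is a
            multiple of 3 (the gap side of Schur's 1926 theorem)."""
            @lru_cache(maxsize=None)
            def rec(remaining, prev):
                total = 1 if remaining == 0 else 0
                for q in range(prev + 3, remaining + 1):
                    if q % 3 == 0 and q - prev == 3:
                        continue  # consecutive multiples of 3 are forbidden
                    total += rec(remaining - q, q)
                return total
            # the first (smallest) part is unconstrained
            return sum(rec(n - q, q) for q in range(1, n + 1))

        def schur_distinct_count(n):
            """Partitions of n into distinct parts congruent to 1 or 2 mod 3."""
            @lru_cache(maxsize=None)
            def rec(remaining, prev):
                total = 1 if remaining == 0 else 0
                for q in range(prev + 1, remaining + 1):
                    if q % 3 != 0:
                        total += rec(remaining - q, q)
                return total
            return rec(n, 0)

        # The two counts agree, in accordance with Schur's theorem:
        assert all(schur_gap_count(n) == schur_distinct_count(n)
                   for n in range(1, 40))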