
    Linear-Space Approximate Distance Oracles for Planar, Bounded-Genus, and Minor-Free Graphs

    A (1 + ε)-approximate distance oracle for a graph is a data structure that supports approximate point-to-point shortest-path-distance queries. The most relevant measures for a distance-oracle construction are space, query time, and preprocessing time. Strong distance-oracle constructions are known for planar graphs (Thorup, JACM'04) and, subsequently, for minor-excluded graphs (Abraham and Gavoille, PODC'06). However, these require Ω(ε^{-1} n lg n) space for n-node graphs. We argue that a very low space requirement is essential. Since modern computer architectures involve hierarchical memory (caches, primary memory, secondary memory), a high memory requirement may in effect greatly increase the actual running time. Moreover, we would like data structures that can be deployed on small mobile devices, such as handhelds, which have relatively small primary memory. In this paper, for planar graphs, bounded-genus graphs, and minor-excluded graphs, we give distance-oracle constructions that require only O(n) space. The big O hides only a fixed constant, independent of ε and independent of the genus or the size of an excluded minor. The preprocessing times for our distance oracles are also faster than those of the previously known constructions. For planar graphs, the preprocessing time is O(n lg^2 n). However, our constructions have slower query times. For planar graphs, the query time is O(ε^{-2} lg^2 n). For our linear-space results, we can in fact ensure, for any δ > 0, that the space required is only 1 + δ times the space required just to represent the graph itself.
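    To make the space/query-time trade-off concrete, here is a minimal Python sketch of a classic landmark table (this is not the paper's construction, and the class name LandmarkOracle is ours): storing BFS distances from k landmarks uses O(kn) space, and a query returns the upper bound min over landmarks of d(u, l) + d(l, v).

        from collections import deque

        def bfs_dist(adj, src):
            """BFS distances from src in an unweighted graph given as
            a dict mapping each node to its list of neighbors."""
            dist, q = {src: 0}, deque([src])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v] = dist[u] + 1
                        q.append(v)
            return dist

        class LandmarkOracle:
            """Toy distance oracle: one distance table per landmark.
            Assumes a connected graph; answers are upper bounds on the
            true distance (by the triangle inequality), exact whenever
            some landmark lies on a shortest u-v path."""
            def __init__(self, adj, landmarks):
                self.tables = [bfs_dist(adj, l) for l in landmarks]

            def query(self, u, v):
                return min(t[u] + t[v] for t in self.tables)

        # Example: a 4-cycle with a single landmark at node 0.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
        print(LandmarkOracle(adj, [0]).query(0, 2))  # -> 2 (exact here)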

    Patching Colors with Tensors


    African Americans (Research Report #121)

    From the early 18th century to the present, African Americans have lived in Louisiana and the other Gulf states and have played an integral role in shaping the linguistic and cultural traditions of the region. The seventh report in the series discusses the experiences of African Americans in the region.

    Global Cardinality Constraints Make Approximating Some Max-2-CSPs Harder

    Assuming the Unique Games Conjecture, we show that existing approximation algorithms for some Boolean Max-2-CSPs with cardinality constraints are optimal. In particular, we prove that Max-Cut with cardinality constraints is UG-hard to approximate within ≈ 0.858, and that Max-2-Sat with cardinality constraints is UG-hard to approximate within ≈ 0.929. In both cases, the previous best hardness results were the same as the hardness of the corresponding unconstrained Max-2-CSP (≈ 0.878 for Max-Cut, and ≈ 0.940 for Max-2-Sat). The hardness for Max-2-Sat applies to monotone Max-2-Sat instances, meaning that we also obtain tight inapproximability for the Max-k-Vertex-Cover problem.
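    For concreteness, here is the cardinality-constrained Max-Cut problem in a standard formulation, written in LaTeX (a sketch for orientation only; the threshold ≈ 0.858 is the one quoted above):

        \[
          \mathrm{MaxCut}_k(G) \;=\; \max_{S \subseteq V,\; |S| = k}\;
            \bigl|\{\, \{u,v\} \in E \;:\; u \in S,\ v \notin S \,\}\bigr|
        \]
        % An algorithm is an alpha-approximation if, on every graph G = (V, E)
        % and every cardinality k, it outputs a set S with |S| = k whose cut
        % value is at least alpha * MaxCut_k(G). The abstract asserts that
        % achieving alpha ~ 0.858 is UG-hard.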

    “Como refugiados en su propio país”: La formación racial en Estados Unidos después del Katrina [“Like Refugees in Their Own Country”: Racial Formation in the United States after Katrina]

    The controversy that followed Hurricane Katrina and its representation by the media revealed unresolved racial issues in the contemporary United States. Present-day New Orleans has become an ideal site for the application of Michael Omi and Howard Winant’s ‘racial formation’ theory, which challenges essentialist visions of race by pointing to its sociohistorical construction. The present article uses this theoretical perspective to examine two pieces of fiction set in the post-Katrina U.S.: HBO’s TV series Treme, and Richard Ford’s short story “Leaving for Kenosha”. The analysis unveils key connections between race and class, ideology, politics, and the role of the media. Keywords: racial formation, Katrina, race, African Americans, mass media

    Hardness of Approximate Nearest Neighbor Search

    We prove conditional near-quadratic running-time lower bounds for approximate Bichromatic Closest Pair with Euclidean, Manhattan, Hamming, or edit distance. Specifically, unless the Strong Exponential Time Hypothesis (SETH) is false, for every δ > 0 there exists a constant ε > 0 such that computing a (1 + ε)-approximation to the Bichromatic Closest Pair requires n^{2-δ} time. In particular, this implies a near-linear query-time lower bound for Approximate Nearest Neighbor search with polynomial preprocessing time. Our reduction uses the Distributed PCP framework of [ARW'17], but obtains improved efficiency using Algebraic Geometry (AG) codes. Efficient PCPs from AG codes have been constructed in other settings before [BKKMS'16, BCGRS'17], but our construction is the first to yield new hardness results.
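    The quadratic baseline this lower bound addresses is the brute-force scan over all red-blue pairs. A minimal Python sketch (the function name is ours, for illustration): under SETH, even a (1 + ε)-approximate answer cannot be computed polynomially faster than this for some ε depending on δ.

        def bichromatic_closest_pair(A, B):
            """Exact bichromatic closest pair between point sets A and B
            (sequences of equal-length numeric tuples), by brute force in
            Theta(|A| * |B|) time, using squared Euclidean distance."""
            best, best_pair = float("inf"), None
            for a in A:
                for b in B:
                    d = sum((x - y) ** 2 for x, y in zip(a, b))
                    if d < best:
                        best, best_pair = d, (a, b)
            return best_pair, best

        # Example: the closest red-blue pair among small 2-D point sets.
        print(bichromatic_closest_pair([(0, 0), (5, 5)], [(1, 1), (9, 9)]))
        # -> (((0, 0), (1, 1)), 2)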

    Biomedical Question Answering: A Survey of Approaches and Challenges

    Automatic Question Answering (QA) has been successfully applied in various domains such as search engines and chatbots. Biomedical QA (BQA), as an emerging QA task, enables innovative applications to effectively perceive, access and understand complex biomedical knowledge. There have been tremendous developments of BQA in the past two decades, which we classify into 5 distinctive approaches: classic, information retrieval, machine reading comprehension, knowledge base and question entailment approaches. In this survey, we introduce available datasets and representative methods of each BQA approach in detail. Despite the developments, BQA systems are still immature and rarely used in real-life settings. We identify and characterize several key challenges in BQA that might lead to this issue, and discuss some potential future directions to explore.
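    As a toy illustration of the information-retrieval approach named above (all function names here are hypothetical, and real BQA systems use far stronger retrievers and readers), one can rank passages by term overlap with the question and then extract the best-matching sentence:

        def retrieve(question, passages, k=3):
            """Rank passages by term overlap with the question (toy scoring)."""
            q = set(question.lower().split())
            return sorted(passages,
                          key=lambda p: len(q & set(p.lower().split())),
                          reverse=True)[:k]

        def read(question, passage):
            """Pick the passage sentence sharing the most terms with the question."""
            q = set(question.lower().split())
            sentences = [s.strip() for s in passage.split(".") if s.strip()]
            return max(sentences, key=lambda s: len(q & set(s.lower().split())))

        def answer(question, corpus):
            """Retrieve-then-read pipeline: top passage, best sentence."""
            return read(question, retrieve(question, corpus)[0])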

    Random input helps searching predecessors

    A data structure problem consists of finite sets D of data, Q of queries, and A of query answers, together with a function f: D × Q → A. The data structure for a file X is "static" ("dynamic") if we do not (do) require quick updates as X changes. An important goal is to encode a file X ∈ D compactly, such that for each query y ∈ Q the answer f(X, y) ∈ A requires the minimum time to compute. This goal is trivial if the size of the data structure may be large, since it has been shown that f(X, y) can then be computed in O(1) time for the most important queries in the literature. Hence, this goal becomes interesting to study as a trade-off between the storage space and the query time, both measured as functions of the file size n = |X|. The ideal solution would be to use linear O(n) = O(|X|) space while retaining a constant O(1) query time. However, if f(X, y) computes the static predecessor search (find the largest x ∈ X with x ≤ y), then Ajtai [Ajt88] proved a negative result: using just n^{O(1)} = |X|^{O(1)} space, it is not possible to evaluate f(X, y) in O(1) time for all y ∈ Q. The proof exhibited a bad distribution of data D such that there exists a "difficult" query y* ∈ Q for which f(X, y*) requires ω(1) time. Essentially, [Ajt88] is an existential result, resolving the worst-case scenario. But [Ajt88] left open the question: do we typically, that is, with high probability (w.h.p.), encounter such "difficult" queries y ∈ Q when assuming reasonable distributions with respect to (w.r.t.) queries and data? Below we make reasonable assumptions w.r.t. the distribution of the queries y ∈ Q, as well as w.r.t. the distribution of the data X ∈ D. In two interesting scenarios studied in the literature, we resolve the typical (w.h.p.) query time.
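    The "random input helps" theme has a classical concrete instance: interpolation search. On keys drawn roughly uniformly at random, probing where the query should lie under that distribution answers a predecessor query in O(log log n) expected probes, versus O(log n) for binary search. A minimal Python sketch of this phenomenon (an illustration only, not the construction analyzed in the text):

        def predecessor(xs, y):
            """Largest x in the sorted integer list xs with x <= y, or None.

            Probes by interpolation: O(log log n) expected probes on
            uniformly random keys, O(n) worst case on adversarial ones
            (plain binary search would give O(log n) worst case)."""
            lo, hi = 0, len(xs) - 1
            if hi < 0 or y < xs[0]:
                return None          # no element is <= y
            if y >= xs[hi]:
                return xs[hi]
            while lo <= hi:
                if xs[hi] == xs[lo]:
                    mid = lo
                else:
                    # Guess the rank of y assuming uniformly spread keys.
                    mid = lo + (y - xs[lo]) * (hi - lo) // (xs[hi] - xs[lo])
                    mid = max(lo, min(mid, hi))
                if xs[mid] <= y:
                    if mid == hi or xs[mid + 1] > y:
                        return xs[mid]
                    lo = mid + 1
                else:
                    hi = mid - 1
            return None              # unreachable for sorted input

        # Example: predecessor(list(range(0, 100, 7)), 30) -> 28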