
    A Simple Algorithm for Approximating the Text-To-Pattern Hamming Distance

    The algorithmic task of computing the Hamming distance between a given pattern of length m and each location in a text of length n, both over a general alphabet Σ, is one of the most fundamental tasks in string algorithms. The fastest known runtime for exact computation is Õ(n√m). We recently introduced a complicated randomized algorithm that obtains a (1 ± ε) approximation for each location in the text in O((n/ε) log(1/ε) log n log m log |Σ|) total time, breaking a barrier that stood for 22 years. In this paper, we introduce an elementary and simple randomized algorithm that takes O((n/ε) log n log m) time.
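    As a point of reference for what an approximation guarantee here means, below is a minimal Monte Carlo sketch of the task (our own illustration, not the paper's algorithm; the function name and constants are ours). Uniformly sampling O(log n / ε²) pattern positions per alignment yields an estimate within an additive εm of the true Hamming distance with high probability, in O((n/ε²) log n) total time; the paper's contribution is a multiplicative (1 ± ε) guarantee in O((n/ε) log n log m) time.

        import math
        import random

        def approx_hamming_distances(text, pattern, eps, seed=0):
            """Estimate the Hamming distance between `pattern` and every
            length-m window of `text` by sampling pattern positions.
            Each estimate is within an additive eps*m of the truth with
            high probability (Chernoff bound). A naive baseline only."""
            n, m = len(text), len(pattern)
            rng = random.Random(seed)
            k = max(1, math.ceil(8 * math.log(max(n, 2)) / eps ** 2))
            estimates = []
            for i in range(n - m + 1):
                mism = sum(text[i + j] != pattern[j]
                           for j in (rng.randrange(m) for _ in range(k)))
                estimates.append(mism * m / k)  # scale the sample count up to m positions
            return estimates

    For example, approx_hamming_distances("abracadabra", "abra", eps=0.5) returns one estimate per alignment of the pattern against the text.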

    Prioritized Metric Structures and Embedding

    Metric data structures (distance oracles, distance labeling schemes, routing schemes) and low-distortion embeddings provide a powerful algorithmic methodology, which has been successfully applied to approximation algorithms [LLR], online algorithms [BBMN11], distributed algorithms [KKMPT12], and the computation of sparsifiers [ST04]. However, this methodology appears to have a limitation: the worst-case performance inherently depends on the cardinality of the metric, and one cannot specify in advance which vertices/points should enjoy better service (i.e., stretch/distortion, label size/dimension) than the worst-case guarantee provides. In this paper we alleviate this limitation by devising a suite of prioritized metric data structures and embeddings. We show that given a priority ranking (x_1, x_2, …, x_n) of the graph vertices (respectively, metric points), one can devise a metric data structure (respectively, embedding) in which the stretch (resp., distortion) incurred by any pair containing a vertex x_j depends on the rank j of that vertex. We also show that other important parameters, such as the label size and (in some sense) the dimension, may depend only on j. In some of our metric data structures (resp., embeddings) we achieve both prioritized stretch (resp., distortion) and prioritized label size (resp., dimension) simultaneously. The worst-case performance of our metric data structures and embeddings is typically asymptotically no worse than that of their non-prioritized counterparts. Comment: To appear at STOC 201
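    The flavor of a prioritized guarantee can be seen in a toy construction (our own illustration, not one of the paper's schemes; all names are ours). In the sketch below, the point of rank j stores exact distances to the j highest-ranked points, so every pair is answered with stretch 1 from the label of its lower-priority endpoint, and label size depends only on that endpoint's rank; the paper's results achieve far smaller, sublinear dependencies on j.

        def build_labels(dist, ranking):
            """Toy prioritized distance labeling: the point of rank j
            (0-indexed) stores exact distances to ranking[0..j], so its
            label has size O(j) -- the label size is prioritized."""
            return {v: {u: dist[u][v] for u in ranking[: j + 1]}
                    for j, v in enumerate(ranking)}

        def query(labels, rank, u, v):
            """Exact distance for any pair: the lower-priority endpoint's
            label always contains the higher-priority endpoint."""
            lo, hi = (u, v) if rank[u] >= rank[v] else (v, u)
            return labels[lo][hi]

        pts = ["a", "b", "c"]  # listed from highest to lowest priority
        dist = {"a": {"a": 0, "b": 2, "c": 5},
                "b": {"a": 2, "b": 0, "c": 4},
                "c": {"a": 5, "b": 4, "c": 0}}
        labels = build_labels(dist, pts)
        rank = {p: i for i, p in enumerate(pts)}
        assert query(labels, rank, "c", "a") == 5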

    Faster Deterministic Modular Subset Sum

    We consider the Modular Subset Sum problem: given a multiset X of integers from ℤ_m and a target integer t, decide if there exists a subset of X with a sum equal to t (mod m). Recent independent works by Cardinal and Iacono (SOSA'21) and Axiotis et al. (SOSA'21) provided simple and near-linear algorithms for this problem. Cardinal and Iacono gave a randomized algorithm that runs in O(m log m) time, while Axiotis et al. gave a deterministic algorithm that runs in O(m polylog m) time. Both results work by reduction to a text problem, which is solved using a dynamic strings data structure. In this work, we develop a simple data structure, designed specifically to handle the text problem that arises in the algorithms for Modular Subset Sum. Our data structure, which we call the shift-tree, is a simple variant of a segment tree. We provide both a hashing-based and a deterministic variant of shift-trees. We then apply our data structure to the Modular Subset Sum problem and obtain two algorithms. The first algorithm is Monte Carlo randomized and matches the O(m log m) runtime of the Las Vegas algorithm by Cardinal and Iacono. The second algorithm is fully deterministic and runs in O(m log m · α(m)) time, where α is the inverse Ackermann function.
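    For contrast with the near-linear algorithms above, a set-based dynamic program solves Modular Subset Sum in O(|X| · m) worst-case time (a textbook baseline, not the shift-tree algorithm; the function name is ours):

        def modular_subset_sum(xs, t, m):
            """Decide whether some subset of the multiset `xs` sums to
            t modulo m. `reachable` holds every residue realizable by a
            subset of the elements processed so far; each element can at
            most double it, giving O(len(xs) * m) time in the worst case."""
            reachable = {0}  # the empty subset
            for x in xs:
                reachable |= {(s + x) % m for s in reachable}
                if t % m in reachable:
                    return True
            return t % m in reachable

    For instance, modular_subset_sum([3, 5], 1, 7) returns True because 3 + 5 = 8 ≡ 1 (mod 7). The text-problem reduction and the shift-tree are what replace the per-element O(m) set scan with polylogarithmic updates.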

    Learning Reserve Prices in Second-Price Auctions

    Get PDF
    This paper establishes the sample complexity of Second-Price Auction with Anonymous Reserve, tight up to a logarithmic factor, for all value distribution families that have been considered in the literature. Compared to Myerson Auction, whose sample complexity was settled only recently by Guo, Huang and Zhang (STOC 2019), Anonymous Reserve requires far fewer samples to learn. We follow a framework similar to that of the Guo-Huang-Zhang work, but replace their information-theoretic argument with a direct proof.
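    The learning task itself is easy to state in code. Below is a sketch of empirical revenue maximization from samples (an illustrative baseline; the paper's contribution is the sample-complexity analysis, not this procedure, and all names are ours): try each sampled value as the anonymous reserve and keep the one with the best average revenue.

        def revenue(values, r):
            """Revenue of a second-price auction with anonymous reserve r
            on one profile of bidder values: the highest bidder wins iff
            her value clears r, and pays max(r, second-highest value)."""
            v = sorted(values, reverse=True)
            if not v or v[0] < r:
                return 0.0
            second = v[1] if len(v) > 1 else 0.0
            return max(r, second)

        def learn_reserve(sample_profiles):
            """Pick the reserve maximizing total empirical revenue over
            the samples; it suffices to try each sampled value as r,
            since empirical revenue only peaks at sampled values."""
            candidates = {v for profile in sample_profiles for v in profile}
            return max(candidates,
                       key=lambda r: sum(revenue(p, r) for p in sample_profiles))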