
    Geodesic continued fractions and LLL

    We discuss a proposal for a continued fraction-like algorithm to determine simultaneous rational approximations to $d$ real numbers $\alpha_1,\ldots,\alpha_d$. It combines an algorithm of Hermite and Lagarias with ideas from LLL-reduction. We dynamically LLL-reduce a quadratic form with parameter $t$ as $t \downarrow 0$. The new idea in this paper is that checking the LLL-conditions consists of solving linear equations in $t$.
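
    The sketch below is a minimal illustration of the classical connection this abstract builds on, not the paper's algorithm: it runs a plain (naive, floating-point) LLL once on the standard simultaneous-approximation lattice for a fixed small parameter t, whereas the paper reduces the parametrised form dynamically as t decreases to 0. All function names and the choice delta = 0.75 are assumptions for illustration.

```python
import math

def lll(B, delta=0.75):
    """Naive floating-point LLL on the rows of B: recomputes the Gram-Schmidt
    data from scratch after every change, so it is simple but slow."""
    B = [row[:] for row in B]
    n = len(B)

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def gso():
        Bs, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = B[i][:]
            for j in range(i):
                mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
                v = [a - mu[i][j] * b for a, b in zip(v, Bs[j])]
            Bs.append(v)
        return Bs, mu

    k = 1
    while k < n:
        Bs, mu = gso()
        for j in range(k - 1, -1, -1):        # size-reduce b_k against b_j
            q = round(mu[k][j])
            if q:
                B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                Bs, mu = gso()                 # refresh the GSO data
        # Lovász condition: ||b*_k||^2 >= (delta - mu_{k,k-1}^2) * ||b*_{k-1}||^2
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1
        else:
            B[k - 1], B[k] = B[k], B[k - 1]    # swap and step back
            k = max(k - 1, 1)
    return B

def simultaneous_approx(alphas, t):
    """Short vectors of this lattice have the form (q*t, q*a_1 - p_1, ...),
    so the first reduced row yields a common denominator q and numerators p_i."""
    d = len(alphas)
    basis = [[t] + list(alphas)]               # b_0 = (t, alpha_1, ..., alpha_d)
    for i in range(d):
        basis.append([0.0] * (i + 1) + [1.0] + [0.0] * (d - 1 - i))
    v = lll(basis)[0]                          # a short vector of the lattice
    q = round(v[0] / t)
    return q, [round(q * a - v[i + 1]) for i, a in enumerate(alphas)]

print(simultaneous_approx([math.sqrt(2), math.sqrt(3)], t=1e-6))
```

    Shrinking t and re-running from scratch would recover successively better approximations; the abstract's point is to avoid such restarts by tracking the reduction as $t \downarrow 0$, exploiting that the LLL conditions are linear in $t$.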

    PotLLL: A Polynomial Time Version of LLL With Deep Insertions

    Lattice reduction algorithms have numerous applications in number theory, algebra, and cryptanalysis. The most famous lattice reduction algorithm is the LLL algorithm, which computes a reduced basis with provable output quality in polynomial time. One early improvement of the LLL algorithm was LLL with deep insertions (DeepLLL). The output of this variant has higher quality in practice, but its running time seems to explode. Weaker variants of DeepLLL, where the insertions are restricted to blocks, behave nicely in practice with respect to running time; however, no proof of polynomial running time is known. In this paper we present PotLLL, a new variant of DeepLLL with provably polynomial running time. We compare the practical behavior of the new algorithm to classical LLL, BKZ, and blockwise variants of DeepLLL with respect to both output quality and running time. Comment: 17 pages, 8 figures; extended version of arXiv:1212.5100 [cs.CR]
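
    For context, a small self-contained sketch of the deep-insertion test that distinguishes DeepLLL from plain LLL: it finds the earliest position at which the new basis vector could profitably be inserted, instead of only considering a swap with its predecessor. This is the textbook DeepLLL criterion, not the potential-based rule that defines PotLLL; the interface and names are assumptions for illustration.

```python
def deep_insertion_index(B, k, delta=0.99):
    """DeepLLL test: earliest position i < k where inserting row b_k would make
    the projection of b_k shorter than delta * ||b*_i||^2."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    # Gram-Schmidt vectors of b_0, ..., b_{k-1}
    Bs = []
    for i in range(k):
        v = B[i][:]
        for w in Bs:
            coeff = dot(B[i], w) / dot(w, w)
            v = [a - coeff * b for a, b in zip(v, w)]
        Bs.append(v)

    c = dot(B[k], B[k])                         # ||b_k||^2
    for i, w in enumerate(Bs):
        if c < delta * dot(w, w):
            return i                            # deep-insert b_k in front of position i
        c -= dot(B[k], w) ** 2 / dot(w, w)      # now c = ||pi_{i+1}(b_k)||^2
    return k                                    # no insertion needed

# b_2 is much shorter than b_0 and b_1, so it is inserted at the very front,
# which a plain LLL swap (restricted to position k-1) could not do in one step.
B = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.1, 0.1, 0.1]]
print(deep_insertion_index(B, 2))               # -> 0
```

    PotLLL, as its name suggests, bases the insertion decision on the basis potential rather than on this depth test alone, which is what makes a polynomial running-time bound provable.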

    The Complexity of Distributed Edge Coloring with Small Palettes

    The complexity of distributed edge coloring depends heavily on the palette size as a function of the maximum degree $\Delta$. In this paper we explore the complexity of edge coloring in the LOCAL model in different palette size regimes.
    1. We simplify the round elimination technique of Brandt et al. and prove that $(2\Delta-2)$-edge coloring requires $\Omega(\log_\Delta \log n)$ time w.h.p. and $\Omega(\log_\Delta n)$ time deterministically, even on trees. The simplified technique is based on two ideas: the notion of an irregular running time and some general observations that transform weak lower bounds into stronger ones.
    2. We give a randomized edge coloring algorithm that can use palette sizes as small as $\Delta + \tilde{O}(\sqrt{\Delta})$, which is a natural barrier for randomized approaches. The running time of the algorithm is at most $O(\log\Delta \cdot T_{LLL})$, where $T_{LLL}$ is the complexity of a permissive version of the constructive Lovász local lemma.
    3. We develop a new distributed Lovász local lemma algorithm for tree-structured dependency graphs, which leads to a $(1+\epsilon)\Delta$-edge coloring algorithm for trees running in $O(\log\log n)$ time. This algorithm arises from two new results: a deterministic $O(\log n)$-time LLL algorithm for tree-structured instances, and a randomized $O(\log\log n)$-time graph shattering method for breaking the dependency graph into independent $O(\log n)$-size LLL instances.
    4. A natural approach to computing $(\Delta+1)$-edge colorings (Vizing's theorem) is to extend partial colorings by iteratively re-coloring parts of the graph. We prove that this approach may be viable, but in the worst case requires recoloring subgraphs of diameter $\Omega(\Delta \log n)$. This stands in contrast to distributed algorithms for Brooks' theorem, which exploit the existence of $O(\log_\Delta n)$-length augmenting paths.
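
    As a toy illustration of the natural randomized palette-based strategy alluded to above (and emphatically not the paper's $\Delta + \tilde{O}(\sqrt{\Delta})$ algorithm or its LOCAL-model implementation), here is a centralized simulation of one symmetric round: every uncolored edge proposes a random color from its remaining palette and keeps it only if no incident edge conflicts. All names are illustrative assumptions.

```python
import random
from collections import defaultdict

def one_round(edges, palettes, coloring):
    """edges: list of (u, v) pairs; palettes: dict edge -> set of colors;
    coloring: dict edge -> color (partial). One synchronous round, in place."""
    incident = defaultdict(list)                 # edges grouped by endpoint
    for u, v in edges:
        incident[u].append((u, v))
        incident[v].append((u, v))
    proposals = {e: random.choice(sorted(palettes[e]))
                 for e in edges if e not in coloring and palettes[e]}
    for e, c in proposals.items():
        u, v = e
        neighbours = [f for f in incident[u] + incident[v] if f != e]
        # keep the color only if no adjacent edge proposed or already holds it
        if all(proposals.get(f) != c and coloring.get(f) != c for f in neighbours):
            coloring[e] = c
            for f in neighbours:
                palettes[f].discard(c)           # shrink the neighbours' palettes

# toy usage: a triangle with palette {0, 1, 2} on every edge
edges = [(0, 1), (1, 2), (0, 2)]
palettes = {e: {0, 1, 2} for e in edges}
coloring = {}
while len(coloring) < len(edges):
    one_round(edges, palettes, coloring)
print(coloring)
```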

    Parallel algorithms and concentration bounds for the Lovász Local Lemma via witness DAGs

    The Lovász Local Lemma (LLL) is a cornerstone principle in the probabilistic method of combinatorics, and a seminal result of Moser & Tardos (2010) provides an efficient randomized algorithm to implement it. This can be parallelized to give an algorithm that uses polynomially many processors and runs in $O(\log^3 n)$ time on an EREW PRAM, stemming from $O(\log n)$ adaptive computations of a maximal independent set (MIS). Chung et al. (2014) developed faster local and parallel algorithms, potentially running in time $O(\log^2 n)$, but these algorithms require more stringent conditions than the LLL. We give a new parallel algorithm that works under essentially the same conditions as the original algorithm of Moser & Tardos but uses only a single MIS computation, thus running in $O(\log^2 n)$ time on an EREW PRAM. This can be derandomized to give an NC algorithm running in time $O(\log^2 n)$ as well, speeding up a previous NC LLL algorithm of Chandrasekaran et al. (2013). We also provide improved and tighter bounds on the run-times of the sequential and parallel resampling-based algorithms originally developed by Moser & Tardos. These apply to any problem instance in which the tighter Shearer LLL criterion is satisfied.
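
    For reference, a minimal sketch of the sequential Moser-Tardos resampling scheme that these parallel variants build on (not the paper's single-MIS parallel algorithm). The variable encoding and the small SAT example are assumptions for illustration.

```python
import random

def moser_tardos(num_vars, bad_events, max_rounds=10**6):
    """bad_events: list of (vars, predicate); predicate(x) is True iff the bad
    event occurs under the boolean assignment x (it may only read vars)."""
    x = [random.random() < 0.5 for _ in range(num_vars)]
    for _ in range(max_rounds):
        violated = next((ev for ev in bad_events if ev[1](x)), None)
        if violated is None:
            return x                          # no bad event occurs: done
        for v in violated[0]:                 # resample only that event's variables
            x[v] = random.random() < 0.5
    raise RuntimeError("did not converge; the LLL condition may be violated")

# toy usage: 3-SAT clauses as bad events ("all literals of the clause are false")
clauses = [[(0, True), (1, False), (2, True)], [(2, False), (3, True), (4, True)]]
bad_events = [
    (tuple(v for v, _ in cl),
     (lambda cl: lambda x: all(x[v] != want for v, want in cl))(cl))
    for cl in clauses
]
print(moser_tardos(5, bad_events))
```

    The parallel versions discussed in the abstract replace this one-event-at-a-time loop with rounds that resample a maximal independent set of violated events; the contribution described here is getting away with a single MIS computation overall.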