93 research outputs found

    Endre Szemerédi, Abel Prize 2012

    Get PDF
    This article presents a short description of the main mathematical contributions of Endre Szemerédi, Abel Prize 2012.

    Smallest Compact Formulation for the Permutahedron

    Get PDF
    In this note, we consider the permutahedron, the convex hull of all permutations of {1, 2, …, n}. We show how to obtain an extended formulation for this polytope from any sorting network. By using the optimal Ajtai–Komlós–Szemerédi sorting network, this extended formulation has Θ(n log n) variables and inequalities. Furthermore, from basic polyhedral arguments, we show that this is best possible (up to a multiplicative constant), since any extended formulation has at least Ω(n log n) inequalities. The results easily extend to the generalized permutahedron. National Science Foundation (U.S.) (Contract CCF-0829878); National Science Foundation (U.S.) (Contract CCF-1115849); United States. Office of Naval Research (Grant 0014-05-1-0148)
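    As a rough illustration (not code from the paper), the sketch below builds a wire-variable extended formulation from a simple odd-even transposition network standing in for AKS, assuming the comparator constraints x_c + x_d = x_a + x_b and x_c ≤ x_a, x_b ≤ x_d with the sorted outputs fixed to 1, …, n, and then sanity-checks it by maximizing a random linear objective with an LP; by the rearrangement inequality the optimum should match the best permutation.

```python
# Illustrative sketch: extended formulation of the permutahedron from a sorting
# network.  Assumptions (hedged): one variable per wire segment, each comparator
# contributes x_a + x_b = x_c + x_d and x_c <= x_a, x_b <= x_d, and the sorted
# outputs are fixed to 1..n.  An odd-even transposition network replaces AKS.
import numpy as np
from scipy.optimize import linprog

def odd_even_transposition_network(n):
    """Comparators (i, j), i < j, of a simple Theta(n^2) sorting network."""
    comps = []
    for rnd in range(n):
        comps += [(i, i + 1) for i in range(rnd % 2, n - 1, 2)]
    return comps

def permutahedron_lp(n):
    """Constraint matrices of the extended formulation; variables 0..n-1 are the inputs."""
    wire = list(range(n))            # variable currently carried by each line
    nv = n
    eqs, ubs = [], []
    for (i, j) in odd_even_transposition_network(n):
        a, b = wire[i], wire[j]
        c, d = nv, nv + 1            # c = "min" output, d = "max" output
        nv += 2
        eqs.append(({a: 1, b: 1, c: -1, d: -1}, 0))        # x_a + x_b = x_c + x_d
        ubs += [({c: 1, a: -1}, 0), ({a: 1, d: -1}, 0),    # x_c <= x_a <= x_d
                ({c: 1, b: -1}, 0), ({b: 1, d: -1}, 0)]    # x_c <= x_b <= x_d
        wire[i], wire[j] = c, d
    for k, v in enumerate(wire):                           # sorted outputs are 1..n
        eqs.append(({v: 1}, k + 1))
    def dense(rows):
        A = np.zeros((len(rows), nv)); b = np.zeros(len(rows))
        for r, (coefs, rhs) in enumerate(rows):
            for var, coef in coefs.items():
                A[r, var] = coef
            b[r] = rhs
        return A, b
    A_eq, b_eq = dense(eqs); A_ub, b_ub = dense(ubs)
    return A_eq, b_eq, A_ub, b_ub, nv

n = 5
A_eq, b_eq, A_ub, b_ub, nv = permutahedron_lp(n)
c = np.random.default_rng(0).standard_normal(n)
obj = np.zeros(nv); obj[:n] = -c                           # maximize c over the inputs
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(None, None))
best = sum(v * w for v, w in zip(sorted(c), range(1, n + 1)))  # rearrangement bound
print(-res.fun, best)    # the two values should agree up to LP tolerance
```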

    Fragile Complexity of Comparison-Based Algorithms

    Get PDF
    We initiate a study of algorithms with a focus on the computational complexity of individual elements, and introduce the fragile complexity of comparison-based algorithms as the maximal number of comparisons any individual element takes part in. We give a number of upper and lower bounds on the fragile complexity for fundamental problems, including Minimum, Selection, Sorting and Heap Construction. The results include both deterministic and randomized upper and lower bounds, and demonstrate a separation between the two settings for a number of problems. The depth of a comparator network is a straightforward upper bound on the worst-case fragile complexity of the corresponding fragile algorithm. We prove that fragile complexity is a different and strictly easier property than the depth of comparator networks, in the sense that for some problems a fragile complexity equal to the best network depth can be achieved with less total work, and that with randomization an even lower fragile complexity is possible.
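    To make the definition concrete, here is a toy experiment (not from the paper) that counts, per element, the comparisons incurred when finding the minimum: a linear scan can force one element into Θ(n) comparisons, while a balanced tournament caps every element at ⌈log₂ n⌉.

```python
# Toy illustration: fragile complexity = the maximum number of comparisons any
# single element participates in, measured for two minimum-finding strategies.
import math

def charge(cmps, i, j):
    cmps[i] += 1
    cmps[j] += 1

def min_linear_scan(a):
    cmps = [0] * len(a)
    best = 0
    for i in range(1, len(a)):
        charge(cmps, best, i)
        if a[i] < a[best]:
            best = i
    return max(cmps)          # worst element: up to n-1 comparisons

def min_tournament(a):
    cmps = [0] * len(a)
    alive = list(range(len(a)))
    while len(alive) > 1:
        nxt = []
        for k in range(0, len(alive) - 1, 2):
            i, j = alive[k], alive[k + 1]
            charge(cmps, i, j)
            nxt.append(i if a[i] < a[j] else j)
        if len(alive) % 2:
            nxt.append(alive[-1])
        alive = nxt
    return max(cmps)          # every element: at most ceil(log2 n) comparisons

n = 1024
a = list(range(n))            # an increasing input is the bad case for the scan
print(min_linear_scan(a), min_tournament(a), math.ceil(math.log2(n)))  # 1023 10 10
```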

    Sorting Short Integers

    Get PDF

    Fragile Complexity of Adaptive Algorithms

    Get PDF
    The fragile complexity of a comparison-based algorithm is f(n) if each input element participates in O(f(n)) comparisons. In this paper, we explore the fragile complexity of algorithms adaptive to various restrictions on the input, i.e. algorithms with a fragile complexity parameterized by a quantity other than the input size n. We show that searching for the predecessor in a sorted array has fragile complexity Θ(log k), where k is the rank of the query element, both in a randomized and a deterministic setting. For predecessor searches, we also show how to optimally reduce the amortized fragile complexity of the elements in the array. We also prove the following results: Selecting the kth smallest element has expected fragile complexity O(log log k) for the element selected. Deterministically finding the minimum element has fragile complexity Θ(log Inv) and Θ(log Runs), where Inv is the number of inversions in a sequence and Runs is the number of increasing runs in a sequence. Deterministically finding the median has fragile complexity O(log Runs + log log n) and Θ(log Inv). Deterministic sorting has fragile complexity Θ(log Inv), but its fragile complexity is Θ(log n) regardless of the number of runs.
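    The Θ(log k) upper bound for the query element can be realized, for instance, by exponential (galloping) search followed by a binary search in the located range; the sketch below assumes that approach and ignores the paper's amortization over the array elements.

```python
# Sketch: predecessor search whose query element takes part in O(log k)
# comparisons, where k is the rank of the query.  Galloping search is one
# standard way to achieve this bound; it is not claimed to be the paper's method.
from bisect import bisect_right

def predecessor(a, q):
    """Index of the largest a[i] <= q in sorted array a, or -1 if none exists."""
    if not a or a[0] > q:
        return -1
    hi = 1
    while hi < len(a) and a[hi] <= q:   # galloping: probe positions 1, 2, 4, ...
        hi *= 2
    hi = min(hi, len(a))
    # the predecessor lies in a[hi//2 : hi]; binary search adds O(log k) comparisons
    return bisect_right(a, q, hi // 2, hi) - 1

print(predecessor([2, 3, 5, 7, 11, 13], 10))   # -> 3 (value 7)
```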

    Expander Construction in VNC^1

    Get PDF
    We give a combinatorial analysis (using edge expansion) of a variant of the iterative expander construction due to Reingold, Vadhan, and Wigderson (2002), and show that this analysis can be formalized in the bounded arithmetic system VNC^1 (corresponding to "NC^1 reasoning"). As a corollary, we prove the assumption made by Jerabek (2011) that a construction of certain bipartite expander graphs can be formalized in VNC^1. This in turn implies that every proof in Gentzen's sequent calculus LK of a monotone sequent can be simulated in the monotone version of LK (MLK) with only polynomial blowup in proof size, strengthening the quasipolynomial simulation result of Atserias, Galesi, and Pudlak (2002).
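    As a small aside on the combinatorial quantity involved, edge expansion can be computed by brute force for tiny graphs; the toy function below is illustrative only and is unrelated to the bounded-arithmetic formalization.

```python
# Toy check: edge expansion h(G) = min over sets S with |S| <= |V|/2 of
# (edges leaving S) / |S|, computed by brute force for a small graph.
from itertools import combinations

def edge_expansion(n, edges):
    best = float("inf")
    for size in range(1, n // 2 + 1):
        for S in map(set, combinations(range(n), size)):
            cut = sum((u in S) != (v in S) for u, v in edges)
            best = min(best, cut / size)
    return best

cycle = [(i, (i + 1) % 8) for i in range(8)]   # 8-cycle
print(edge_expansion(8, cycle))                # 0.5: cut a contiguous half
```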

    Polynomial-Time Solvers for the Discrete ∞-Optimal Transport Problems

    Full text link
    In this note, we propose polynomial-time algorithms solving the Monge and Kantorovich formulations of the ∞-optimal transport problem in the discrete and finite setting. To the best of our knowledge, this is the first time efficient numerical methods have been proposed for these problems.
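    The paper's algorithms are not reproduced here, but one standard polynomial-time route to the discrete Monge form with uniform marginals (a bottleneck assignment) is a binary search over cost thresholds combined with a bipartite perfect-matching test; the sketch below assumes that reduction.

```python
# Sketch (not the paper's algorithm): the discrete Monge infinity-OT problem with
# uniform marginals asks for a permutation sigma minimizing max_i C[i, sigma(i)].
# Binary search over cost thresholds plus a perfect-matching feasibility test
# gives a polynomial-time solver.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_bipartite_matching

def bottleneck_assignment_value(C):
    C = np.asarray(C)
    values = np.unique(C)                 # candidate thresholds, sorted
    lo, hi = 0, len(values) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        mask = csr_matrix((C <= values[mid]).astype(int))
        match = maximum_bipartite_matching(mask, perm_type='column')
        if (match != -1).all():           # perfect matching within the threshold
            hi = mid
        else:
            lo = mid + 1
    return values[lo]

C = np.random.default_rng(1).integers(0, 100, size=(6, 6))
print(bottleneck_assignment_value(C))
```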

    Efficient Algorithms with Asymmetric Read and Write Costs

    Get PDF
    In several emerging technologies for computer memory (main memory), reading is significantly cheaper than writing. Such asymmetry in memory costs poses a fundamentally different model from the RAM for algorithm design. In this paper we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both cases using the (M, ω)-ARAM, in which there is a small (symmetric) memory of size M and a large unbounded (asymmetric) memory, both random access, and where reading from the large memory has unit cost, but writing has cost ω ≫ 1. For FFT and sorting networks we show a lower bound cost of Ω(ωn log_{ωM} n), which indicates that it is not possible to achieve asymptotic improvements with cheaper reads when ω is bounded by a polynomial in M. Moreover, there is an asymptotic gap of min(ω, log n / log(ωM)) between the cost of sorting networks and comparison sorting in the model. This contrasts with the RAM, and most other models, in which the asymptotic costs are the same. We also show a lower bound of Ω(ωn²/M) cost for computations on an n×n diamond DAG, which indicates no asymptotic improvement is achievable with fast reads. However, we show that for the minimum edit distance problem (and related problems), which would seem to be a diamond DAG, we can beat this lower bound with an algorithm of only O(ωn²/(M·min(ω^{1/3}, M^{1/2}))) cost. To achieve this we make use of a "path sketch" technique that is forbidden in a strict DAG computation. Finally, we show several interesting upper bounds for shortest path problems, minimum spanning trees, and other problems. A common theme in many of the upper bounds is that they require redundant computation and a tradeoff between reads and writes.
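    The cost model itself is easy to experiment with; the toy meter below (names and structure invented for illustration, not code from the paper) charges 1 per read and ω per write to the large memory, making the read/write trade-off concrete.

```python
# Toy illustration of the (M, omega)-ARAM cost model: reads from the large
# asymmetric memory cost 1, writes cost omega >> 1, and work in the small
# symmetric memory is free.  Purely illustrative; not the paper's code.
class AsymmetricMemory:
    def __init__(self, data, omega):
        self._data = list(data)
        self.omega = omega
        self.cost = 0
    def read(self, i):
        self.cost += 1
        return self._data[i]
    def write(self, i, value):
        self.cost += self.omega
        self._data[i] = value

def sum_write_heavy(mem, n):
    """Writes a running total back to large memory at every step: n reads + n writes."""
    total = 0
    for i in range(n):
        total += mem.read(i)
        mem.write(0, total)

def sum_read_only(mem, n):
    """Keeps the running total in (free) small memory and writes only the answer."""
    total = 0
    for i in range(n):
        total += mem.read(i)
    mem.write(0, total)

n, omega = 1000, 16
m1 = AsymmetricMemory(range(n), omega); sum_write_heavy(m1, n)
m2 = AsymmetricMemory(range(n), omega); sum_read_only(m2, n)
print(m1.cost, m2.cost)   # 17000 vs 1016: avoiding writes dominates the cost
```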