
    Finite Volume Spaces and Sparsification

    We introduce and study finite $d$-volumes, the high-dimensional generalization of finite metric spaces. Having developed a suitable combinatorial machinery, we define $\ell_1$-volumes and show that they contain Euclidean volumes and hypertree volumes. We show that they can approximate any $d$-volume with $O(n^d)$ multiplicative distortion. On the other hand, contrary to Bourgain's theorem for $d=1$, there exists a $2$-volume on $n$ vertices that cannot be approximated by any $\ell_1$-volume with distortion smaller than $\tilde{\Omega}(n^{1/5})$. We further address the problem of $\ell_1$-dimension reduction in the context of $\ell_1$-volumes, and show that this phenomenon does occur, although not to the same striking degree as it does for Euclidean metrics and volumes. In particular, we show that any $\ell_1$ metric on $n$ points can be $(1+\epsilon)$-approximated by a sum of $O(n/\epsilon^2)$ cut metrics, improving over the best previously known bound of $O(n \log n)$ due to Schechtman. In order to deal with dimension reduction, we extend the techniques and ideas introduced by Karger and Benczúr, and Spielman et al. in the context of graph sparsification, and develop general methods with a wide range of applications.
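
    The sparsification machinery invoked here descends from Benczúr-Karger cut sparsification: sample each edge with probability inversely proportional to how well connected its endpoints are, then reweight the survivors so that every cut is preserved in expectation. A minimal sketch, assuming the caller supplies per-edge connectivity estimates (the "strength" dictionary) and an oversampling constant c, which the theory sets to Theta(log n); this illustrates the generic sampling idea, not the paper's construction:

    import random

    def sparsify(edges, strength, eps, c=1.0):
        """Benczur-Karger-style cut sparsification sketch.

        edges    : list of (u, v, weight) triples
        strength : dict mapping (u, v) -> connectivity estimate k_e
        eps      : target accuracy (every cut within 1 +/- eps, w.h.p.)
        c        : oversampling constant (theory: c = Theta(log n))
        """
        sparse = []
        for (u, v, w) in edges:
            # keep an edge with probability ~ c * w / (eps^2 * k_e) ...
            p = min(1.0, c * w / (eps ** 2 * strength[(u, v)]))
            if random.random() < p:
                # ... and reweight it by 1/p, so cuts stay unbiased
                sparse.append((u, v, w / p))
        return sparse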

    Generative Models of Huge Objects

    This work initiates the systematic study of explicit distributions that are indistinguishable from a single exponential-size combinatorial object. In this, we extend the work of Goldreich, Goldwasser and Nussboim (SICOMP 2010), which focused on the implementation of huge objects that are indistinguishable from the uniform distribution while satisfying some global properties (which they coined truthfulness). Indistinguishability from a single object is motivated by the study of generative models in learning theory and regularity lemmas in graph theory. Problems that are well understood in the setting of pseudorandomness present significant challenges, and at times are impossible, when considering generative models of huge objects. We demonstrate the versatility of this study by providing a learning algorithm for huge indistinguishable objects in several natural settings, including: dense functions and graphs with a truthfulness requirement on the number of ones in the function or edges in the graphs, and a version of the weak regularity lemma for sparse graphs that satisfy some global properties. These and other results generalize basic pseudorandom objects as well as notions introduced in algorithmic fairness. The results rely on notions and techniques from a variety of areas, including learning theory, complexity theory, cryptography, and game theory.
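
    To make "implementing a huge object" concrete: in the Goldreich-Goldwasser-Nussboim setting, local queries to an exponential-size object are answered on demand by a keyed pseudorandom function, so the object is never materialized. A minimal sketch of that baseline idea, assuming HMAC-SHA256 as the PRF; it illustrates the setting only and is not one of this paper's constructions:

    import hmac, hashlib

    class HugeGraph:
        """Adjacency oracle for a graph on 2^n vertices with edge density p.
        Each query is answered locally from a PRF, so lookups stay cheap
        even though the object itself has exponential size."""

        def __init__(self, key: bytes, n: int, p: float):
            self.key, self.p = key, p
            self.nbytes = (n + 7) // 8        # bytes per vertex identifier

        def has_edge(self, u: int, v: int) -> bool:
            if u == v:
                return False
            lo, hi = sorted((u, v))           # undirected: canonical order
            msg = lo.to_bytes(self.nbytes, "big") + hi.to_bytes(self.nbytes, "big")
            tag = hmac.new(self.key, msg, hashlib.sha256).digest()
            r = int.from_bytes(tag[:8], "big") / 2 ** 64
            return r < self.p                 # pseudorandom Bernoulli(p)

    g = HugeGraph(b"secret key", n=128, p=0.01)   # a graph on 2^128 vertices
    print(g.has_edge(3, 2 ** 100 + 7))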

    The power of vertex sparsifiers in dynamic graph algorithms

    We introduce a new algorithmic framework for designing dynamic graph algorithms in minor-free graphs. It exploits the structure of such graphs together with vertex sparsification, a way to compress large graphs into small ones that preserve the relevant properties among a subset of vertices, which has previously been used mainly in the design of approximation algorithms. Using this framework, we obtain a Monte Carlo randomized fully dynamic algorithm for $(1+\epsilon)$-approximating the energy of electrical flows in $n$-vertex planar graphs with $\tilde{O}(r\epsilon^{-2})$ worst-case update time and $\tilde{O}((r + n/\sqrt{r})\epsilon^{-2})$ worst-case query time, for any $r$ larger than some constant. For $r = n^{2/3}$, this gives $\tilde{O}(n^{2/3}\epsilon^{-2})$ update time and $\tilde{O}(n^{2/3}\epsilon^{-2})$ query time. We also extend this algorithm to minor-free graphs with similar approximation and running time guarantees. Furthermore, we illustrate our framework on the all-pairs max flow and shortest path problems by giving corresponding dynamic algorithms in minor-free graphs with both sublinear update and query times. To the best of our knowledge, our results are the first to systematically establish such a connection between dynamic graph algorithms and vertex sparsification. We also present both upper and lower bounds for maintaining the energy of electrical flows in the incremental subgraph model, where updates consist of vertex activations only, which might be of independent interest.
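
    The choice $r = n^{2/3}$ above is the balance point of the two bounds. A short derivation, ignoring the $\epsilon^{-2}$ and polylogarithmic factors:

    \[
      \text{update} = \tilde{O}(r), \qquad
      \text{query}  = \tilde{O}\!\Bigl(r + \frac{n}{\sqrt{r}}\Bigr).
    \]
    % The query bound is dominated by whichever term is larger, so set
    \[
      r = \frac{n}{\sqrt{r}}
      \iff r^{3/2} = n
      \iff r = n^{2/3},
    \]
    % which yields \tilde{O}(n^{2/3}) worst-case update and query time.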

    The DLV System for Knowledge Representation and Reasoning

    This paper presents the DLV system, which is widely considered the state-of-the-art implementation of disjunctive logic programming, and addresses several aspects. As for problem solving, we provide a formal definition of its kernel language, function-free disjunctive logic programs (also known as disjunctive datalog), extended by weak constraints, which are a powerful tool to express optimization problems. We then illustrate the usage of DLV as a tool for knowledge representation and reasoning, describing a new declarative programming methodology which allows one to encode complex problems (up to $\Delta^P_3$-complete problems) in a declarative fashion. On the foundational side, we provide a detailed analysis of the computational complexity of the language of DLV, and by deriving new complexity results we chart a complete picture of the complexity of this language and important fragments thereof. Furthermore, we illustrate the general architecture of the DLV system, which has been influenced by these results. As for applications, we overview application front-ends which have been developed on top of DLV to solve specific knowledge representation tasks, and we briefly describe the main international projects investigating the potential of the system for industrial exploitation. Finally, we report on the thorough experimentation and benchmarking that has been carried out to assess the efficiency of the system. The experimental results confirm the solidity of DLV and highlight its potential for emerging application areas like knowledge management and information integration.
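
    For a taste of the kernel language's semantics, restricted to its negation-free fragment: there, the answer sets of a disjunctive program are exactly its minimal models. A brute-force sketch over a made-up two-rule ground program (DLV itself handles negation, weak constraints, and non-ground programs far more cleverly):

    from itertools import chain, combinations

    # A ground disjunctive program: rule (head, body) reads
    #   h1 v ... v hk :- b1, ..., bm.
    program = [
        ({"a", "b"}, set()),   # a v b.
        ({"c"}, {"a"}),        # c :- a.
    ]
    atoms = sorted(set().union(*(h | b for h, b in program)))

    def satisfies(m, rules):
        """m is a model iff every rule whose body holds has a true head atom."""
        return all(not body <= m or head & m for head, body in rules)

    subsets = chain.from_iterable(combinations(atoms, k)
                                  for k in range(len(atoms) + 1))
    models = [frozenset(s) for s in subsets if satisfies(set(s), program)]

    # Answer sets of a negation-free disjunctive program = minimal models.
    answer_sets = [m for m in models if not any(o < m for o in models)]
    print(answer_sets)   # the minimal models: {'b'} and {'a', 'c'}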

    Efficiently constructible huge graphs that preserve first order properties of random graphs

    We construct efficiently computable sequences of random-looking graphs that preserve properties of the canonical random graphs $G(2^n, p(n))$. We focus on first-order graph properties, namely properties that can be expressed by a formula $\varphi$ in the language where variables stand for vertices and the only relations are equality and adjacency (e.g., having an isolated vertex is a first-order property: $\exists x \forall y\, \neg\mathrm{edge}(x, y)$). Random graphs are known to have remarkable structure w.r.t. first-order properties, as indicated by the following 0/1 law: for a variety of choices of $p(n)$, any fixed first-order property $\varphi$ holds for $G(2^n, p(n))$ with probability tending either to 0 or to 1 as $n$ grows to infinity. We first observe that similar 0/1 laws are satisfied by $G(2^n, p(n))$ even w.r.t. sequences of formulas $\{\varphi_n\}_{n \in \mathbb{N}}$ with bounded quantifier depth, $\mathrm{depth}(\varphi_n) \le \frac{n}{\lg(1/p(n))}$. We also demonstrate that 0/1 laws do not hold for random graphs w.r.t. properties of significantly larger quantifier depth. For most choices of $p(n)$, we present efficient constructions of huge graphs with edge density nearly $p(n)$ that emulate $G(2^n, p(n))$ by satisfying $\Theta\!\left(\frac{n}{\lg(1/p(n))}\right)$-0/1 laws. We show both probabilistic constructions (which also have other properties, such as $k$-wise independence and being computationally indistinguishable from $G(N, p(n))$), and deterministic constructions where for each graph size we provide a specific graph that captures the properties of $G(2^n, p(n))$ for slightly smaller quantifier depths.
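
    The 0/1 law is easy to observe empirically for the abstract's own example property, "there exists an isolated vertex": for constant $p$ its probability in $G(n, p)$ tends to 0 as $n$ grows. A small Monte Carlo sketch, with trial counts and densities chosen purely for illustration:

    import random

    def has_isolated_vertex(n, p):
        """Sample G(n, p) once and test the first-order property
        'exists x forall y: not edge(x, y)'."""
        deg = [0] * n
        for u in range(n):
            for v in range(u + 1, n):
                if random.random() < p:
                    deg[u] += 1
                    deg[v] += 1
        return any(d == 0 for d in deg)

    # For constant p the estimated probability drops toward 0 as n grows:
    for n in (8, 16, 32, 64):
        trials = 200
        hits = sum(has_isolated_vertex(n, 0.3) for _ in range(trials))
        print(n, hits / trials)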

    Towards Next Generation Sequential and Parallel SAT Solvers

    This thesis focuses on improving SAT solving technology. The improvements concern two major subjects: sequential SAT solving and parallel SAT solving. To better understand sequential SAT algorithms, the abstract reduction system Generic CDCL is introduced. With Generic CDCL, the soundness of solving techniques can be modeled. Next, the conflict-driven clause learning algorithm is extended with three techniques, local look-ahead, local probing and all-UIP learning, that allow more global reasoning during search. These techniques improve the performance of the sequential SAT solver Riss. Then, the formula simplification techniques bounded variable addition, covered literal elimination and an advanced cardinality constraint extraction are introduced. By using these techniques, the reasoning of the overall SAT solving tool chain becomes stronger than plain resolution. When these three techniques are applied in the formula simplification tool Coprocessor before Riss is used to solve a formula, the performance improves further. Due to the increasing number of cores in CPUs, the scalable parallel SAT solving approach iterative partitioning has been implemented in Pcasso for the multi-core architecture. Related work on parallel SAT solving has been studied to extract the main ideas that can improve Pcasso. Besides parallel formula simplification with bounded variable elimination, the major extension is extended clause sharing with level-based clause tagging, which builds the basis for conflict-driven node killing. The latter makes it possible to better identify unsatisfiable search space partitions. Another improvement is to combine scattering and look-ahead into a superior search space partitioning function. In combination with Coprocessor, the introduced extensions increase the performance of the parallel solver Pcasso. The implemented system turns out to be scalable for the multi-core architecture; hence, iterative partitioning is interesting for future parallel SAT solvers. The implemented solvers participated in international SAT competitions. In 2013 and 2014 Pcasso showed a good performance. Riss in combination with Coprocessor won several first, second and third prizes, including two Kurt Gödel medals. Hence, the introduced algorithms improved modern SAT solving technology.
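
    Of the simplification techniques named above, the bounded variant of variable elimination is the simplest to state: replace all clauses containing a variable by their pairwise resolvents, and accept the step only if the clause count does not grow. A minimal sketch on clauses represented as sets of DIMACS-style integer literals; the bounding rule and data structures of a real tool such as Coprocessor are far more refined:

    def eliminate(clauses, x):
        """One bounded variable elimination step (Davis-Putnam resolution).

        clauses : set of frozensets of nonzero ints (literals)
        x       : the variable (positive int) to eliminate
        Returns the new clause set, or None if the step was rejected.
        """
        pos  = [c for c in clauses if x in c]
        neg  = [c for c in clauses if -x in c]
        rest = [c for c in clauses if x not in c and -x not in c]

        resolvents = set()
        for cp in pos:
            for cn in neg:
                r = (cp - {x}) | (cn - {-x})
                if not any(-lit in r for lit in r):   # drop tautologies
                    resolvents.add(frozenset(r))

        if len(resolvents) > len(pos) + len(neg):     # bounded: no growth
            return None
        return set(rest) | resolvents

    f = {frozenset({1, 2}), frozenset({-1, 3}), frozenset({-1, -2})}
    print(eliminate(f, 1))   # {frozenset({2, 3})}: variable 1 resolved away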

    The logic of random regular graphs
