
    On Hashing-Based Approaches to Approximate DNF-Counting

    Propositional model counting is a fundamental problem in artificial intelligence with a wide variety of applications, such as probabilistic inference, decision making under uncertainty, and probabilistic databases. Consequently, the problem is of theoretical as well as practical interest. When the constraints are expressed as DNF formulas, Monte Carlo-based techniques have been shown to provide a fully polynomial randomized approximation scheme (FPRAS). For CNF constraints, hashing-based approximation techniques have been demonstrated to be highly successful. Furthermore, it was shown that hashing-based techniques also yield an FPRAS for DNF counting without the use of Monte Carlo sampling. Our analysis, however, shows that the proposed hashing-based approach to DNF counting has poor time complexity compared to the Monte Carlo-based DNF counting techniques. Given the success of hashing-based techniques for CNF constraints, it is natural to ask: can hashing-based techniques provide an efficient FPRAS for DNF counting? In this paper, we provide a positive answer to this question. To this end, we introduce two novel algorithmic techniques, Symbolic Hashing and Stochastic Cell Counting, along with a new hash family of Row-Echelon hash functions. These innovations allow us to design a hashing-based FPRAS for DNF counting with complexity similar (up to polylog factors) to that of prior works. Furthermore, we expect these techniques to have potential applications beyond DNF counting.
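
    To make the Monte Carlo baseline concrete, below is a minimal Python sketch of the classical Karp-Luby-style estimator that underlies the Monte Carlo FPRAS mentioned above; it is background only, not the paper's hashing-based algorithm, and the clause encoding is an illustrative choice made here.

```python
import random

def karp_luby_dnf_count(clauses, n, samples=100_000):
    """Monte Carlo (Karp-Luby style) estimate of the number of satisfying
    assignments of a DNF formula.

    clauses: list of cubes, each a dict {var_index: bool} of required literals.
    n: total number of Boolean variables (indexed 0..n-1).
    """
    # Number of assignments satisfying each individual cube: 2^(n - width).
    weights = [2 ** (n - len(c)) for c in clauses]
    total = sum(weights)
    if total == 0:
        return 0

    hits = 0
    for _ in range(samples):
        # Pick a cube with probability proportional to its weight, then a
        # uniformly random assignment satisfying that cube.
        i = random.choices(range(len(clauses)), weights=weights)[0]
        assignment = {v: random.random() < 0.5 for v in range(n)}
        assignment.update(clauses[i])
        # Count the sample only if i is the *first* cube it satisfies, so
        # every model of the formula contributes exactly once in expectation.
        first = next(j for j, c in enumerate(clauses)
                     if all(assignment[v] == val for v, val in c.items()))
        hits += (first == i)

    return total * hits / samples

# Toy usage: F = (x0 AND x1) OR (NOT x2) over 3 variables has 5 models.
print(karp_luby_dnf_count([{0: True, 1: True}, {2: False}], n=3))
```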

    Fast Pseudorandom Functions Based on Expander Graphs

    We present direct constructions of pseudorandom function (PRF) families based on Goldreich's one-way function. Roughly speaking, we assume that non-trivial local mappings $f:\{0,1\}^n \rightarrow \{0,1\}^m$ whose input-output dependency graph forms an expander are hard to invert. We show that this one-wayness assumption yields PRFs with relatively low complexity. This includes weak PRFs which can be computed in linear time $O(n)$ on a RAM machine with $O(\log n)$ word size, or by a depth-3 circuit with unbounded fan-in AND and OR gates (an AC0 circuit), and standard PRFs that can be computed by a quasilinear-size circuit or by a constant-depth circuit with unbounded fan-in AND, OR and Majority gates (TC0). Our proofs are based on a new search-to-decision reduction for expander-based functions. This extends a previous reduction of the first author (STOC 2012), which was applicable to the special case of \emph{random} local functions. Additionally, we present a new family of highly efficient hash functions whose outputs on exponentially many inputs jointly form (with high probability) a good expander graph. These hash functions are based on the techniques of Miles and Viola (Crypto 2012). Although some of our reductions provide only relatively weak security guarantees, we believe that they yield a novel approach for constructing PRFs and therefore enrich the study of pseudorandomness.
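
    As background for the expander-based one-wayness assumption, here is a small sketch of how a Goldreich-style local function $f:\{0,1\}^n \rightarrow \{0,1\}^m$ is evaluated. The random neighborhoods and the particular 5-ary predicate are illustrative choices made here; a real instantiation would use an explicit expander as the dependency graph.

```python
import random

def goldreich_local_function(x, neighborhoods, predicate):
    """Evaluate a Goldreich-style local function: output bit j applies a fixed
    d-ary predicate to the input bits listed in neighborhoods[j].

    x: list of 0/1 input bits.
    neighborhoods: list of tuples of input indices (the bipartite dependency
        graph; for one-wayness it should be a good expander).
    predicate: function taking d bits and returning 0 or 1.
    """
    return [predicate(*(x[i] for i in idxs)) for idxs in neighborhoods]

# A commonly studied 5-ary predicate: XOR of three bits plus an AND of two.
def xor_and_predicate(a, b, c, d, e):
    return a ^ b ^ c ^ (d & e)

n, m, d = 16, 32, 5
rng = random.Random(0)
# Hypothetical: random neighborhoods stand in for an explicit expander here.
neighborhoods = [tuple(rng.sample(range(n), d)) for _ in range(m)]
x = [rng.randint(0, 1) for _ in range(n)]
print(goldreich_local_function(x, neighborhoods, xor_and_predicate))
```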

    On Strong Diameter Padded Decompositions

    Given a weighted graph G=(V,E,w), a partition of V is Delta-bounded if the diameter of each cluster is bounded by Delta. A distribution over Delta-bounded partitions is a beta-padded decomposition if every ball of radius gamma Delta is contained in a single cluster with probability at least e^{-beta * gamma}. The weak diameter of a cluster C is measured w.r.t. distances in G, while the strong diameter is measured w.r.t. distances in the induced graph G[C]. The decomposition is weak/strong according to the diameter guarantee. Previously, it was proven that K_r-free graphs admit weak decompositions with padding parameter O(r), while for strong decompositions only a padding parameter of O(r^2) was known. Furthermore, for the case of a graph G for which the induced shortest-path metric d_G has doubling dimension ddim, a weak O(ddim)-padded decomposition was constructed, which is also known to be tight. For the case of strong diameter, nothing was known. We construct strong O(r)-padded decompositions for K_r-free graphs, matching the state of the art for weak decompositions. Similarly, for graphs with doubling dimension ddim we construct a strong O(ddim)-padded decomposition, which is also tight. We use this decomposition to construct an (O(ddim), Õ(ddim))-sparse cover scheme for such graphs. Our new decompositions and cover have implications for approximating unique games, the construction of light and sparse spanners, and path-reporting distance oracles.
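
    For intuition about Delta-bounded partitions, the following is a minimal sketch of the classical CKR-style ball carving that produces a weak-diameter Delta-bounded partition of a weighted graph; it is standard background, not the strong-diameter construction of this paper, and it assumes the networkx library.

```python
import random
import networkx as nx

def ckr_partition(G, delta, seed=0):
    """CKR-style Delta-bounded (weak-diameter) partition: pick one random
    radius r in [delta/4, delta/2] and a random order of the vertices;
    each vertex joins the first vertex in the order within distance r of it,
    so every cluster has weak diameter at most 2r <= delta."""
    rng = random.Random(seed)
    r = rng.uniform(delta / 4, delta / 2)
    order = list(G.nodes)
    rng.shuffle(order)
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight="weight"))
    clusters = {}
    for v in G.nodes:
        # v is within distance 0 of itself, so a center always exists.
        center = next(u for u in order if dist[u][v] <= r)
        clusters.setdefault(center, []).append(v)
    return clusters

# Toy usage on an unweighted 8-cycle (missing weights are treated as 1).
print(ckr_partition(nx.cycle_graph(8), delta=3.0))
```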

    Circuit-ABE from LWE: Unbounded Attributes and Semi-adaptive Security

    We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed-length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial-size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth-d circuit can have as many as 2^d inputs). This is in contrast to previous LWE-based schemes, where the length of the public parameters has to grow linearly with the maximal attribute length. We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the “complexity leveraging” technique is not applicable for unbounded attributes.) We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and to have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.

    Funding: National Science Foundation (U.S.) (Award CNS-1350619); National Science Foundation (U.S.) (Grant CNS-1413964); United States-Israel Binational Science Foundation (Grant 712307).
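
    For readers unfamiliar with the underlying assumption, the following is the standard decisional LWE statement on which such schemes rest; this is a textbook formulation given for background, not taken from the paper.

```latex
% Decisional LWE (standard background statement): for uniformly random A,
% secret s, and error e drawn from an error distribution chi, the pair
% (A, As + e) is computationally indistinguishable from (A, u) with u uniform.
\[
  (\mathbf{A},\ \mathbf{A}\mathbf{s} + \mathbf{e})
  \;\approx_c\;
  (\mathbf{A},\ \mathbf{u}),
  \qquad
  \mathbf{A} \leftarrow \mathbb{Z}_q^{m \times n},\;
  \mathbf{s} \leftarrow \mathbb{Z}_q^{n},\;
  \mathbf{e} \leftarrow \chi^{m},\;
  \mathbf{u} \leftarrow \mathbb{Z}_q^{m}.
\]
```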

    Scheduling Problems over Network of Machines

    We consider scheduling problems in which jobs need to be processed through a (shared) network of machines. The network is given in the form of a graph whose edges represent the machines. We are also given a set of jobs, each specified by its processing time and a path in the graph. Every job needs to be processed in the order of the edges specified by its path. We assume that jobs can wait between machines and that preemption is not allowed; that is, once a job starts being processed on a machine, it must be completed without interruption. Every machine can process only one job at a time. The makespan of a schedule is the earliest time by which all the jobs have finished processing. The flow time (a.k.a. the completion time) of a job in a schedule is the difference in time between when it finishes processing on its last machine and when it begins processing on its first machine. The total flow time (or the sum of completion times) is the sum of the flow times (or completion times) of all jobs. Our focus is on finding schedules with the minimum sum of completion times or minimum makespan. In this paper, we develop several algorithms (both approximate and exact) for the problem, both on general graphs and when the underlying graph of machines is a tree. Even in the very special case when the underlying network is a simple star, the problem is very interesting, as it models biprocessor scheduling with applications to data migration.
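
    To make the objectives concrete, here is a small, hypothetical Python helper that evaluates a given (assumed feasible) schedule under exactly the definitions above: the makespan and the total flow time. The data layout is an illustrative choice made here, not the paper's.

```python
def makespan_and_total_flow_time(jobs, start_times):
    """Evaluate a schedule for jobs routed through a network of machines.

    jobs: dict job -> (processing_time, path), where path is the ordered
        list of machines (edges) the job must traverse.
    start_times: dict (job, machine) -> start time of the job on that machine.
    Non-preemptive processing is assumed: a job occupies a machine for its
    full processing time once started. (Feasibility is assumed, not checked.)
    """
    makespan = 0.0
    total_flow = 0.0
    for job, (p, path) in jobs.items():
        first_start = start_times[(job, path[0])]
        last_finish = start_times[(job, path[-1])] + p
        makespan = max(makespan, last_finish)
        total_flow += last_finish - first_start  # flow time of this job
    return makespan, total_flow

# Toy instance: two unit-time jobs sharing machine "e1" on a path network.
jobs = {"J1": (1.0, ["e1", "e2"]), "J2": (1.0, ["e1"])}
start = {("J1", "e1"): 0.0, ("J1", "e2"): 1.0, ("J2", "e1"): 1.0}
print(makespan_and_total_flow_time(jobs, start))  # (2.0, 3.0)
```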

    Proof Complexity of Propositional Model Counting

    Recently, the proof system MICE for the model counting problem #SAT was introduced by Fichte, Hecher and Roland (SAT'22). As demonstrated by Fichte et al., the system MICE can be used for proof logging for state-of-the-art #SAT solvers. We perform a proof-complexity study of MICE. For this we first simplify the rules of MICE and obtain a calculus MICE' that is polynomially equivalent to MICE. Our main result establishes an exponential lower bound on the number of proof steps in MICE' (and hence also in MICE) for a specific family of CNFs.
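
    As a reminder of what #SAT asks (independent of the MICE proof system itself), here is a minimal brute-force model counter for tiny CNFs; real #SAT solvers, for which MICE provides proof logging, use far more sophisticated techniques.

```python
from itertools import product

def brute_force_sharp_sat(clauses, n):
    """Count satisfying assignments (#SAT) of a CNF by enumeration.

    clauses: list of clauses, each a list of literals; literal +i / -i means
    variable i is true / false (variables are numbered 1..n). Exponential in
    n, so suitable only for tiny formulas.
    """
    count = 0
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in cl) for cl in clauses):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x3) over 3 variables has 4 models.
print(brute_force_sharp_sat([[1, 2], [-1, 3]], n=3))
```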

    Metric Embedding via Shortest Path Decompositions

    We study the problem of embedding shortest-path metrics of weighted graphs into $\ell_p$ spaces. We introduce a new embedding technique based on low-depth decompositions of a graph via shortest paths. The notion of Shortest Path Decomposition depth is inductively defined: a (weighted) path graph has shortest path decomposition (SPD) depth $1$. A general graph has SPD depth $k$ if it contains a shortest path whose deletion leads to a graph, each of whose components has SPD depth at most $k-1$. In this paper we give an $O(k^{\min\{\frac{1}{p},\frac{1}{2}\}})$-distortion embedding for graphs of SPD depth at most $k$. This result is asymptotically tight for any fixed $p>1$, while for $p=1$ it is tight up to second-order terms. As a corollary of this result, we show that graphs of pathwidth $k$ embed into $\ell_p$ with distortion $O(k^{\min\{\frac{1}{p},\frac{1}{2}\}})$. For $p=1$, this improves over the best previous bound of Lee and Sidiropoulos, which was exponential in $k$; moreover, for other values of $p$ it gives the first embeddings whose distortion is independent of the graph size $n$. Furthermore, we use the fact that planar graphs have SPD depth $O(\log n)$ to give a new proof that any planar graph embeds into $\ell_1$ with distortion $O(\sqrt{\log n})$. Our approach also gives new results for graphs with bounded treewidth, and for graphs excluding a fixed minor.
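
    The inductive definition of SPD depth suggests a simple greedy procedure. The sketch below (assuming the networkx library) computes only a heuristic upper bound on the SPD depth of a connected graph by repeatedly deleting a shortest path between two far-apart vertices; it is not the decomposition or embedding algorithm of the paper.

```python
import networkx as nx

def spd_depth_upper_bound(G):
    """Heuristic upper bound on Shortest Path Decomposition (SPD) depth for a
    connected weighted graph, following the inductive definition: delete a
    shortest path, recurse on the remaining components, and add one. The path
    is chosen greedily (between two far-apart vertices found by a double
    Dijkstra sweep), so this only upper-bounds the true SPD depth.
    """
    if G.number_of_nodes() == 0:
        return 0
    start = next(iter(G.nodes))
    dist = nx.single_source_dijkstra_path_length(G, start, weight="weight")
    a = max(dist, key=dist.get)
    dist_a = nx.single_source_dijkstra_path_length(G, a, weight="weight")
    b = max(dist_a, key=dist_a.get)
    path = nx.dijkstra_path(G, a, b, weight="weight")

    H = G.copy()
    H.remove_nodes_from(path)
    if H.number_of_nodes() == 0:
        return 1  # the whole graph was a single shortest path
    return 1 + max(spd_depth_upper_bound(H.subgraph(c).copy())
                   for c in nx.connected_components(H))

# Toy usage: a 3x3 grid graph (planar), where SPD depth relates to pathwidth.
print(spd_depth_upper_bound(nx.grid_2d_graph(3, 3)))
```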