
    Strongly Refuting Random CSPs Below the Spectral Threshold

    Random constraint satisfaction problems (CSPs) are known to exhibit threshold phenomena: given a uniformly random instance of a CSP with $n$ variables and $m$ clauses, there is a value of $m = \Omega(n)$ beyond which the CSP will be unsatisfiable with high probability. Strong refutation is the problem of certifying that no variable assignment satisfies more than a constant fraction of clauses; this is the natural algorithmic problem in the unsatisfiable regime (when $m/n = \omega(1)$). Intuitively, strong refutation should become easier as the clause density $m/n$ grows, because the contradictions introduced by the random clauses become more locally apparent. For CSPs such as $k$-SAT and $k$-XOR, there is a long-standing gap between the clause density at which efficient strong refutation algorithms are known, $m/n \ge \widetilde O(n^{k/2-1})$, and the clause density at which instances become unsatisfiable with high probability, $m/n = \omega(1)$. In this paper, we give spectral and sum-of-squares algorithms for strongly refuting random $k$-XOR instances with clause density $m/n \ge \widetilde O(n^{(k/2-1)(1-\delta)})$ in time $\exp(\widetilde O(n^{\delta}))$ or in $\widetilde O(n^{\delta})$ rounds of the sum-of-squares hierarchy, for any $\delta \in [0,1)$ and any integer $k \ge 3$. Our algorithms provide a smooth transition between the clause density at which polynomial-time algorithms are known at $\delta = 0$, and brute-force refutation at the satisfiability threshold when $\delta = 1$. We also leverage our $k$-XOR results to obtain strong refutation algorithms for SAT (or any other Boolean CSP) at similar clause densities. Our algorithms match the known sum-of-squares lower bounds due to Grigoriev and Schoenebeck, up to logarithmic factors. Additionally, we extend our techniques to give new results for certifying upper bounds on the injective tensor norm of random tensors.
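
    To make the spectral idea concrete, here is a minimal sketch in Python (assuming numpy) of the classical spectral certificate in the simplest case, 2-XOR; this is the kind of certificate the paper extends to $k \ge 3$ and to subexponential densities, not the paper's algorithm itself. Each clause $x_i x_j = \eta$ contributes $\eta$ to a symmetric matrix $A$; any $\pm 1$ assignment satisfies exactly $m/2 + x^{T}Ax/4$ clauses, so the largest eigenvalue of $A$ certifies an upper bound on the satisfiable fraction:

```python
# Illustrative spectral refutation for random 2-XOR (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(0)

def random_2xor(n, m):
    """m random clauses x_i * x_j = eta over x in {-1, +1}^n."""
    clauses = []
    for _ in range(m):
        i, j = rng.choice(n, size=2, replace=False)
        clauses.append((i, j, rng.choice([-1, 1])))
    return clauses

def refutation_certificate(n, clauses):
    """Certified upper bound on the fraction of simultaneously satisfiable clauses."""
    A = np.zeros((n, n))
    for i, j, eta in clauses:
        A[i, j] += eta
        A[j, i] += eta
    lam = np.linalg.eigvalsh(A)[-1]   # largest eigenvalue
    m = len(clauses)
    # satisfied(x) = m/2 + x^T A x / 4 and x^T A x <= lam * n, hence:
    return 0.5 + lam * n / (4 * m)

n = 200
for density in (5, 20, 80):          # clause density m/n
    bound = refutation_certificate(n, random_2xor(n, density * n))
    print(f"m/n = {density:3d}: at most {min(bound, 1.0):.3f} of clauses satisfiable")
```

    As the density grows, the largest eigenvalue of this random matrix concentrates near $2\sqrt{2m/n}$ with high probability, so the certified bound drifts down toward $1/2$: refutation becomes easier at higher densities, matching the intuition stated in the abstract.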

    Engineering Compressed Static Functions and Minimal Perfect Hash Functions

    \emph{Static functions} are data structures meant to store arbitrary mappings from finite sets to integers: given a universe of items $U$ and a set of $n \in \mathbb{N}$ pairs $(k_i, v_i)$, where $k_i \in S \subset U$, $|S| = n$, and $v_i \in \{0, 1, \ldots, m-1\}$, $m \in \mathbb{N}$, a static function retrieves $v_i$ given $k_i$ (usually, in constant time). When every key is mapped to a different value, this function is called a \emph{perfect hash function}, and when $n = m$ the data structure yields an injective numbering $S \to \{0, 1, \ldots, n-1\}$; this mapping is called a \emph{minimal perfect hash function} (MPHF). Big data has brought back one of the most critical challenges that computer scientists have been tackling for the last fifty years: analyzing large amounts of data that do not fit in main memory. While for small key sets these mappings can be easily implemented using hash tables, this solution does not scale well to bigger sets. Static functions and MPHFs can beat the information-theoretic lower bound for storing the set $S$ because they are allowed to return \emph{any} value if the queried key is not in the original key set. The classical construction technique for static functions achieves $O(nb)$ bits of space, where $b = \log m$, and the one for MPHFs $O(n)$ bits of space (always with constant access time). All these features make static functions and MPHFs powerful techniques when handling, for instance, large sets of strings, and they are essential building blocks of space-efficient data structures such as (compressed) full-text indexes, monotone MPHFs, Bloom filter-like data structures, and prefix-search data structures. The biggest challenge of these construction techniques is lowering the multiplicative constants hidden inside the asymptotic space bounds while keeping construction times feasible. In this thesis, we take advantage of recent results in the theory of random linear systems, regarding the ratio between the number of variables and the number of equations, and in perfect hash data structures, to achieve practical static functions with the lowest space bounds so far, and construction time comparable with widely used techniques. The new results, however, require solving linear systems that need more than the simple triangulation process used in current state-of-the-art solutions. The main challenge in making such structures usable is mitigating the cubic running time of Gaussian elimination at construction time. To this purpose, we introduce novel techniques based on \emph{broadword programming} and a heuristic derived from \emph{structured Gaussian elimination}. We obtain data structures that are significantly smaller than commonly used hypergraph-based constructions while maintaining or improving lookup times and still providing feasible construction times.

    We then apply these improvements to another kind of structure: \emph{compressed static functions}. The theoretical construction technique for this kind of data structure uses variable-length prefix-free codes to encode the set of values. Adopting this solution, we can reduce the space usage of each element to (essentially) the entropy of the list of output values of the function. The price is an even bigger linear system of equations, which increases the time required to build the structure. In this thesis, we present the first engineered implementation of compressed static functions; a toy sketch of the underlying linear-system construction follows below.
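
    The following toy sketch, under stated assumptions, shows the linear-system view of static functions: each key is hashed to $K$ table positions, the stored value must equal the XOR of the table words at those positions (one GF(2) equation per key, acting bitwise on the words), construction solves the system, and lookup XORs $K$ words. The hash, the set-based Gaussian elimination, and all parameters here are illustrative stand-ins for the broadword and structured-elimination techniques the thesis actually engineers:

```python
# Toy static function via a random GF(2) linear system (illustrative only).
import random

K = 3            # table positions per key
RATIO = 1.23     # table size / number of keys; above the K = 3 solvability threshold

def positions(key, seed, size):
    """K distinct pseudo-random table positions for a key (toy hash)."""
    return tuple(random.Random(f"{key}:{seed}").sample(range(size), K))

def build(pairs, seed):
    """Return (table, size), with table = None if this seed's system is unsolvable."""
    size = int(RATIO * len(pairs)) + K
    pivots = {}                                   # pivot column -> (columns, rhs)
    for key, value in pairs:
        cols, rhs = set(positions(key, seed, size)), value
        while cols:
            p = min(cols)
            if p not in pivots:                   # fresh pivot: store the row
                pivots[p] = (cols, rhs)
                break
            pcols, prhs = pivots[p]               # eliminate: XOR the two rows
            cols, rhs = cols ^ pcols, rhs ^ prhs
        else:
            if rhs != 0:                          # empty row, nonzero rhs: inconsistent
                return None, size                 # (rhs == 0 means a redundant row)
    W = [0] * size                                # free variables stay 0
    for p in sorted(pivots, reverse=True):        # back-substitution
        cols, rhs = pivots[p]
        W[p] = rhs
        for c in cols:
            if c != p:
                W[p] ^= W[c]
    return W, size

def lookup(W, seed, size, key):
    v = 0
    for p in positions(key, seed, size):
        v ^= W[p]
    return v

pairs = [(f"key{i}", i % 16) for i in range(1000)]    # toy key set, 4-bit values
seed = 0
W, size = build(pairs, seed)
while W is None:                                      # rehash until solvable
    seed += 1
    W, size = build(pairs, seed)
assert all(lookup(W, seed, size, k) == v for k, v in pairs)
```

    Note that lookups XOR only $K$ words regardless of $n$; the whole engineering effort described above goes into making the one-time system solve fast, since naive Gaussian elimination is cubic.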
    For example, we were able to store a function with geometrically distributed output with parameter $p = 0.5$ in just $2.28$ bits per key, independently of the key set, with a construction time double that of a state-of-the-art non-compressed function, which requires $\approx \log \log n$ bits per key (where $n$ is the number of keys), and with similar lookup time. We can also store a function with output following a Zipfian distribution with parameter $s = 2$ and $N = 10^6$ in just $2.75$ bits per key, whereas a non-compressed function would require more than $20$, with a threefold increase in construction time and significantly faster lookups.
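
    These per-key figures can be read against the entropy of the output distribution, which is (essentially) the space target a compressed static function aims for. A short sketch computing only those information-theoretic targets for the two distributions quoted above, not reproducing the thesis's measurements:

```python
# Entropy targets for the quoted distributions (information-theoretic only).
from math import log2

def entropy(probs):
    return sum(p * log2(1.0 / p) for p in probs if p > 0)

# Geometric, p = 0.5, over {0, 1, 2, ...}: P(v) = 2^-(v+1); H is exactly 2 bits,
# and the unary code (v zeros, then a one) is an optimal prefix-free code for it.
geo = [0.5 ** (v + 1) for v in range(64)]             # tail beyond 64 is negligible
unary = sum(p * (v + 1) for v, p in enumerate(geo))   # expected unary code length
print(f"geometric(0.5): H = {entropy(geo):.2f} bits, unary = {unary:.2f} bits")

# Zipf, s = 2, over ranks 1..N with N = 10^6: P(v) proportional to 1/v^2.
N = 10 ** 6
weights = [1.0 / (v * v) for v in range(1, N + 1)]
Z = sum(weights)
print(f"zipf(2, 1e6):   H = {entropy(w / Z for w in weights):.2f} bits")
```

    The quoted $2.28$ and $2.75$ bits per key sit modestly above these entropies, which is the point of the prefix-free-code construction: per-key space tracks the entropy of the values rather than the $\log m$ bits of a fixed-width encoding.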