
    A Simple Gap-Producing Reduction for the Parameterized Set Cover Problem

    Given an n-vertex bipartite graph I=(S,U,E), the goal of the set cover problem is to find a minimum-sized subset of S such that every vertex in U is adjacent to some vertex of this subset. It is NP-hard to approximate set cover to within a (1-o(1)) ln n factor [I. Dinur and D. Steurer, 2014]. If we use the size k of the optimum solution as the parameter, then the problem can be solved in n^{k+o(1)} time [Eisenbrand and Grandoni, 2004]. A natural question is: can we approximate set cover to within an o(ln n) factor in n^{k-epsilon} time? In a recent breakthrough result [Karthik et al., 2018], Karthik, Laekhanukit and Manurangsi showed that, assuming the Strong Exponential Time Hypothesis (SETH), for any computable function f, no f(k) * n^{k-epsilon}-time algorithm can approximate set cover to a factor below (log n)^{1/poly(k, e(epsilon))} for some function e. This paper presents a simple gap-producing reduction which, given a set cover instance I=(S,U,E) and two integers k < h <= (1-o(1)) (log|S| / log log|S|)^{1/k}, outputs a new set cover instance I'=(S,U',E') with |U'| = |U|^{h^k} * |S|^{O(1)} in |U|^{h^k} * |S|^{O(1)} time such that (i) if I has a k-sized solution, then so does I', and (ii) if I has no k-sized solution, then every solution of I' must contain at least h vertices. Setting h = (1-o(1)) (log|S| / log log|S|)^{1/k}, we show that, assuming SETH, for any computable function f, no f(k) * n^{k-epsilon}-time algorithm can distinguish between a set cover instance with a k-sized solution and one whose minimum solution size is at least (1-o(1)) * ((log n) / (log log n))^{1/k}. This improves the result of [Karthik et al., 2018].
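    For intuition, the parameterized decision problem underlying these results asks whether some k-sized subset of S covers every vertex in U. Below is a minimal brute-force sketch (illustrative Python with hypothetical names, not code from the paper) of the naive roughly |S|^k * |U|-time baseline that the n^{k+o(1)} algorithm and the lower bounds above are compared against.

```python
from itertools import combinations

def has_k_cover(S, U, adj, k):
    """Return True iff some k-sized subset of S covers every vertex in U.

    adj maps each s in S to the set of vertices of U adjacent to s.
    Exhaustive search over all C(|S|, k) subsets, i.e. roughly |S|^k * |U| time.
    """
    target = set(U)
    for subset in combinations(S, k):
        covered = set()
        for s in subset:
            covered |= adj[s]
        if target <= covered:
            return True
    return False

# Toy instance: S = {a, b, c}, U = {1, 2, 3}.
adj = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
print(has_k_cover(["a", "b", "c"], [1, 2, 3], adj, 2))  # True: {a, b} covers U
```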

    The Complexity of Public-Key Cryptography

    We survey the computational foundations for public-key cryptography. We discuss the computational assumptions that have been used as bases for public-key encryption schemes, and the types of evidence we have for the veracity of these assumptions. This survey/tutorial was published in the book Tutorials on the Foundations of Cryptography, dedicated to Oded Goldreich on his 60th birthday.

    Reliable and randomized data distribution strategies for large scale storage systems

    The ever-growing amount of data requires highly scalable storage solutions. The most flexible approach is to use storage pools that can be expanded and scaled down by adding or removing storage devices. To make this approach usable, it is necessary to provide a solution for locating data items in such a dynamic environment. This paper presents and evaluates the Random Slicing strategy, which incorporates lessons learned from table-based, rule-based, and pseudo-randomized hashing strategies and provides a simple and efficient strategy that scales up to exascale data. Random Slicing keeps a small table with information about previous storage system insert and remove operations, drastically reducing the required amount of randomness while delivering a perfect load distribution.
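    As a rough illustration of the lookup side of such a strategy, here is a minimal Python sketch of interval-based placement: the unit interval [0, 1) is split into slices owned by devices in proportion to their capacities, and each data item is hashed to a point in [0, 1) to find the owning device. The class name, device names, and hashing details are assumptions for illustration, not the paper's implementation.

```python
import hashlib
from bisect import bisect_right

class RandomSlicingSketch:
    """Illustrative interval-based lookup, in the spirit of Random Slicing.

    Each device owns a contiguous slice of [0, 1) sized proportionally to its
    capacity; a data item is hashed to a point in [0, 1) and stored on the
    device whose slice contains that point.
    """

    def __init__(self, capacities):
        # capacities: {device_id: relative capacity}
        total = sum(capacities.values())
        self.slices = []            # list of (start, end, device)
        start = 0.0
        for dev, cap in capacities.items():
            end = start + cap / total
            self.slices.append((start, end, dev))
            start = end
        self.starts = [s for s, _, _ in self.slices]

    def locate(self, key: str) -> str:
        """Return the device responsible for `key`."""
        digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        point = (digest % 10**12) / 10**12   # pseudo-random point in [0, 1)
        i = bisect_right(self.starts, point) - 1
        return self.slices[i][2]

# Devices with capacities 1 : 2 : 1 receive 25%, 50%, and 25% of the keys.
rs = RandomSlicingSketch({"disk0": 1.0, "disk1": 2.0, "disk2": 1.0})
print(rs.locate("object-42"))
```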