
    A New Multilayered PCP and the Hardness of Hypergraph Vertex Cover

    Given a $k$-uniform hypergraph, the E$k$-Vertex-Cover problem is to find the smallest subset of vertices that intersects every hyperedge. We present a new multilayered PCP construction that extends the Raz verifier. This enables us to prove that E$k$-Vertex-Cover is NP-hard to approximate within factor $(k-1-\epsilon)$ for any $k \geq 3$ and any $\epsilon > 0$. The result is essentially tight, as this problem can be easily approximated within factor $k$. Our construction makes use of the biased Long-Code and is analyzed using combinatorial properties of $s$-wise $t$-intersecting families of subsets.
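    The factor-$k$ approximation mentioned in the abstract is a simple greedy: repeatedly pick an uncovered hyperedge and put all of its $k$ vertices into the cover. The picked hyperedges are pairwise vertex-disjoint, so any optimal cover contains at least one vertex from each of them, which bounds the greedy cover by $k$ times the optimum. A minimal Python sketch (function name and input format are illustrative, not taken from the paper):

```python
def k_approx_vertex_cover(hyperedges):
    """Greedy factor-k approximation for Ek-Vertex-Cover.

    Repeatedly take an uncovered hyperedge and add all of its k vertices
    to the cover.  The chosen hyperedges are pairwise vertex-disjoint, so
    any optimal cover contains at least one vertex from each of them;
    hence |cover| <= k * OPT.
    """
    cover = set()
    for edge in hyperedges:
        if not (set(edge) & cover):   # edge still uncovered
            cover |= set(edge)        # add all k endpoints
    return cover


if __name__ == "__main__":
    # Small 3-uniform example (k = 3).
    edges = [(1, 2, 3), (3, 4, 5), (6, 7, 8), (1, 6, 9)]
    print(k_approx_vertex_cover(edges))
```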

    On the Combinatorial Version of the Slepian-Wolf Problem

    We study the following combinatorial version of the Slepian-Wolf coding scheme. Two isolated Senders are given binary strings $X$ and $Y$ respectively; the length of each string is equal to $n$, and the Hamming distance between the strings is at most $\alpha n$. The Senders compress their strings and communicate the results to the Receiver. Then the Receiver must reconstruct both strings $X$ and $Y$. The aim is to minimise the lengths of the transmitted messages. For an asymmetric variant of this problem (where one of the Senders transmits the input string to the Receiver without compression) with deterministic encoding, a nontrivial lower bound was found by A. Orlitsky and K. Viswanathan. In our paper we prove a new lower bound for schemes with syndrome coding, where at least one of the Senders uses linear encoding of the input string. For the combinatorial Slepian-Wolf problem with randomized encoding, the theoretical optimum of communication complexity was recently found by the first author, though effective protocols with optimal lengths of messages remained unknown. We close this gap and present a polynomial-time randomized protocol that achieves the optimal communication complexity.
    Comment: 20 pages, 14 figures. Accepted to IEEE Transactions on Information Theory (June 2018).
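    To make the syndrome-coding setting concrete, here is a toy Python sketch of the asymmetric variant: one Sender transmits $X$ in full, the other applies only a linear map given by a parity-check matrix $H$ (here the [7,4] Hamming code), and the Receiver brute-forces the low-weight difference between $X$ and $Y$. All names and parameters are illustrative assumptions; this is not the paper's efficient randomized protocol.

```python
import itertools
import numpy as np

# Parity-check matrix of the [7,4] Hamming code (minimum distance 3),
# used as the linear "syndrome" encoding of the second Sender.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome(v):
    """Linear encoding of v: its syndrome H.v over GF(2)."""
    return H.dot(v) % 2

def decode_y(x, syn_y, max_dist=1):
    """Receiver side: recover Y from X (sent in full) and the syndrome of Y.

    Brute-force search over error patterns of weight <= max_dist.  Because the
    code behind H has minimum distance 3 > 2 * max_dist, the answer is unique.
    Toy illustration of syndrome coding only; works for small n.
    """
    n = len(x)
    for w in range(max_dist + 1):
        for support in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            cand = (x + e) % 2
            if np.array_equal(syndrome(cand), syn_y):
                return cand
    return None

if __name__ == "__main__":
    x = np.array([1, 0, 1, 1, 0, 0, 1])
    y = x.copy()
    y[4] ^= 1                      # Hamming distance 1 <= alpha*n
    # Sender A transmits all 7 bits of X; Sender B transmits only the 3-bit syndrome of Y.
    y_hat = decode_y(x, syndrome(y))
    print("recovered Y:", y_hat, "correct:", np.array_equal(y_hat, y))
```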

    Derandomized Construction of Combinatorial Batch Codes

    Combinatorial Batch Codes (CBCs), a replication-based variant of Batch Codes introduced by Ishai et al. in STOC 2004, abstract the following data distribution problem: $n$ data items are to be replicated among $m$ servers in such a way that any $k$ of the $n$ data items can be retrieved by reading at most one item from each server, with the total amount of storage over the $m$ servers restricted to $N$. Given parameters $m$, $c$, and $k$, where $c$ and $k$ are constants, one of the challenging problems is to construct $c$-uniform CBCs (CBCs where each data item is replicated among exactly $c$ servers) which maximize the value of $n$. In this work, we present an explicit construction of $c$-uniform CBCs with $\Omega(m^{c-1+\frac{1}{k}})$ data items. The construction has the property that the servers are almost regular, i.e., the number of data items stored in each server is in the range $[\frac{nc}{m}-\sqrt{\frac{n}{2}\ln(4m)}, \frac{nc}{m}+\sqrt{\frac{n}{2}\ln(4m)}]$. The construction is obtained through a better analysis and derandomization of the randomized construction presented by Ishai et al. The analysis reveals the almost regularity of the servers, an aspect that so far has not been addressed in the literature. The derandomization leads to an explicit construction for a wide range of values of $c$ (for given $m$ and $k$) where no other explicit construction with similar parameters, i.e., with $n = \Omega(m^{c-1+\frac{1}{k}})$, is known. Finally, we discuss the possibility of parallel derandomization of the construction.
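    The defining CBC property (any $k$ items retrievable by reading at most one item from each server) amounts to requiring a system of distinct representatives for every $k$-subset of the items' server sets. The following Python sketch checks that condition for a small placement via bipartite matching; the function names and the toy placement are illustrative assumptions, not from the paper, and the check is exponential in $n$, so it is only a sanity check on small examples.

```python
from itertools import combinations

def has_sdr(sets):
    """Check for a system of distinct representatives (each item assigned to a
    different server) using the standard augmenting-path bipartite matching."""
    match = {}                              # server -> item index

    def try_assign(i, seen):
        for s in sets[i]:
            if s in seen:
                continue
            seen.add(s)
            if s not in match or try_assign(match[s], seen):
                match[s] = i
                return True
        return False

    return all(try_assign(i, set()) for i in range(len(sets)))

def is_cbc(placement, k):
    """Verify the CBC property: every k items can be retrieved by reading at
    most one item from each server.  placement[i] = set of servers holding item i."""
    return all(has_sdr([placement[i] for i in items])
               for items in combinations(range(len(placement)), k))

if __name__ == "__main__":
    # Toy 2-uniform placement of n = 4 items on m = 4 servers.
    placement = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
    print(is_cbc(placement, k=3))           # True: any 3 items fit on distinct servers
```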