
    Construction of asymptotically good low-rate error-correcting codes through pseudo-random graphs

    A novel technique, based on the pseudo-random properties of certain graphs known as expanders, is used to obtain simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling, and then regrouping the code coordinates. For any fixed (small) rate, and for a sufficiently large alphabet, the codes thus obtained lie above the Zyablov bound. Using these codes as outer codes in a concatenated scheme, a second asymptotically good construction is obtained which applies to small alphabets (say, GF(2)) as well. Although these concatenated codes lie below the Zyablov bound, they are still superior to previously known explicit constructions in the zero-rate neighborhood.
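
    The abstract does not give the construction in detail; the following is a minimal sketch, under assumptions of our own, of the generic "replicate, shuffle, regroup" step: code coordinates sit on the left side of a bipartite regular graph, are copied along its edges, and are regrouped at the right-side vertices into symbols over a larger alphabet. The toy graph below stands in for the explicit expander a real construction would use.

```python
# Sketch only (not the paper's construction): regroup code coordinates
# along the edges of a bipartite d-regular graph. right_neighbors is a
# toy 2-regular graph; a real construction would use an explicit expander.
def regroup(codeword, right_neighbors):
    """right_neighbors[v] lists the left coordinates feeding right vertex v."""
    return [tuple(codeword[u] for u in right_neighbors[v])
            for v in range(len(right_neighbors))]

right_neighbors = [[0, 1], [1, 2], [2, 3], [3, 0]]
print(regroup("abcd", right_neighbors))
# [('a', 'b'), ('b', 'c'), ('c', 'd'), ('d', 'a')]
```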

    Oblivious Transfer based on Key Exchange

    Key-exchange protocols have been overlooked as a possible means of implementing oblivious transfer (OT). In this paper we present a protocol for the mutual exchange of secrets, 1-out-of-2 OT, and coin flipping, similar to the Diffie-Hellman protocol, based on the idea of obliviously exchanging encryption keys. Since the Diffie-Hellman scheme is widely used, our protocol may provide a useful alternative to the conventional methods for implementing oblivious transfer and a useful primitive in building larger cryptographic schemes. Comment: 10 pages.
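
    The paper's own protocol is not reproduced in the abstract; as a point of comparison, here is a minimal sketch of the well-known Bellare-Micali 1-out-of-2 OT, which likewise builds OT from Diffie-Hellman-style public keys. All parameters are toy values chosen for readability, not security.

```python
import hashlib
import secrets

# Sketch of Bellare-Micali 1-out-of-2 OT (not this paper's protocol).
# Toy parameters: p is the Mersenne prime 2^127 - 1, and g = 3 is used
# as a generator without verification. Do not use in production.
p = 2**127 - 1
g = 3

def H(x: int) -> int:
    """Hash a group element down to a one-time pad for a short message."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest()[:8], "big")

# Sender publishes a group element c whose discrete log the receiver
# must not know.
c = pow(g, secrets.randbelow(p - 1), p)

# Receiver, with choice bit b, arranges pk0 * pk1 == c (mod p) while
# knowing the discrete log of pk_b only; it sends pk0.
b = 1
x = secrets.randbelow(p - 1)
pkb = pow(g, x, p)
pk0 = pkb if b == 0 else c * pow(pkb, -1, p) % p

# Sender recomputes pk1 from c and encrypts each message under the
# corresponding key (hashed ElGamal).
pk1 = c * pow(pk0, -1, p) % p
messages = [1111, 2222]
ciphertexts = []
for pk_i, m_i in ((pk0, messages[0]), (pk1, messages[1])):
    k = secrets.randbelow(p - 1)
    ciphertexts.append((pow(g, k, p), m_i ^ H(pow(pk_i, k, p))))

# Receiver can open only ciphertext b.
R, e = ciphertexts[b]
print(e ^ H(pow(R, x, p)))  # -> 2222
```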

    Locality of not-so-weak coloring

    Many graph problems are locally checkable: a solution is globally feasible if it looks valid in all constant-radius neighborhoods. This idea is formalized in the concept of locally checkable labelings (LCLs), introduced by Naor and Stockmeyer (1995). Recently, Chang et al. (2016) showed that in bounded-degree graphs, every LCL problem belongs to one of the following classes:
    - "Easy": solvable in $O(\log^* n)$ rounds with both deterministic and randomized distributed algorithms.
    - "Hard": requires at least $\Omega(\log n)$ rounds with deterministic and $\Omega(\log \log n)$ rounds with randomized distributed algorithms.
    Hence for any parameterized LCL problem, when we move from local problems towards global problems, there is some point at which the complexity suddenly jumps from easy to hard. For example, for vertex coloring in $d$-regular graphs it is now known that this jump is at precisely $d$ colors: coloring with $d+1$ colors is easy, while coloring with $d$ colors is hard. However, it is currently poorly understood where this jump takes place when one looks at defective colorings. To study this question, we define $k$-partial $c$-coloring as follows: nodes are labeled with numbers between $1$ and $c$, and every node is incident to at least $k$ properly colored edges. It is known that $1$-partial $2$-coloring (a.k.a. weak $2$-coloring) is easy for any $d \ge 1$. As our main result, we show that $k$-partial $2$-coloring becomes hard as soon as $k \ge 2$, no matter how large a $d$ we have. We also show that this is fundamentally different from $k$-partial $3$-coloring: no matter which $k \ge 3$ we choose, the problem is always hard for $d = k$ but it becomes easy when $d \gg k$. The same was known previously for partial $c$-coloring with $c \ge 4$, but the case of $c < 4$ was open.
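
    The definition above is easy to check mechanically. Here is a minimal verifier, with naming of our own, for whether a given labeling is a $k$-partial $c$-coloring:

```python
# Check the definition above: every node must be incident to at least
# k properly coloured edges (edges whose endpoints get distinct labels).
def is_k_partial_coloring(adj, color, k):
    """adj: dict node -> list of neighbours; color: dict node -> label in 1..c."""
    return all(sum(color[u] != color[v] for v in adj[u]) >= k for u in adj)

# Example: the 4-cycle, alternately coloured, is 2-partial 2-coloured.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(is_k_partial_coloring(cycle, {0: 1, 1: 2, 2: 1, 3: 2}, k=2))  # True
```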

    Exact bounds for distributed graph colouring

    We prove exact bounds on the time complexity of distributed graph colouring. If we are given a directed path that is properly coloured with $n$ colours, by prior work it is known that we can find a proper 3-colouring in $\frac{1}{2} \log^*(n) \pm O(1)$ communication rounds. We close the gap between upper and lower bounds: we show that for infinitely many $n$ the time complexity is precisely $\frac{1}{2} \log^* n$ communication rounds. Comment: 16 pages, 3 figures.
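
    The setting is the classic colour-reduction problem, usually attacked with the Cole-Vishkin bit trick. The sketch below is our illustration of one such reduction round on a directed path, not the paper's algorithm:

```python
# One round of Cole-Vishkin colour reduction on a directed path: each
# node finds the lowest bit position i where its colour differs from its
# successor's and adopts the new colour 2*i + (its own bit i). Repeating
# this O(log* n) times shrinks n colours down to a constant palette.
def cv_round(colors):
    """One synchronous round; colors[v] is v's colour, v's successor is v+1."""
    out = []
    for v, c in enumerate(colors):
        # The last node has no successor; it pretends one differs in bit 0.
        succ = colors[v + 1] if v + 1 < len(colors) else c ^ 1
        diff = c ^ succ
        i = (diff & -diff).bit_length() - 1   # lowest differing bit position
        out.append(2 * i + ((c >> i) & 1))
    return out

colors = [5, 9, 2, 7, 0, 3]          # a proper colouring of a 6-node path
while max(colors).bit_length() > 3:  # iterate until colours fit in 3 bits
    colors = cv_round(colors)
print(colors)                        # still proper, all colours < 8
```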

    How Long It Takes for an Ordinary Node with an Ordinary ID to Output?

    In the context of distributed synchronous computing, processors perform in rounds, and the time-complexity of a distributed algorithm is classically defined as the number of rounds before all computing nodes have output. Hence, this complexity measure captures the running time of the slowest node(s). In this paper, we are interested in the running time of the ordinary nodes, to be compared with the running time of the slowest nodes. The node-averaged time-complexity of a distributed algorithm on a given instance is defined as the average, taken over every node of the instance, of the number of rounds before that node outputs. We compare the node-averaged time-complexity with the classical one in the standard LOCAL model for distributed network computing. We show that there can be an exponential gap between the node-averaged time-complexity and the classical time-complexity, as witnessed by, e.g., leader election. Our first main result is a positive one, stating that, in fact, the two time-complexities behave the same for a large class of problems on very sparse graphs. In particular, we show that, for LCL problems on cycles, the node-averaged time complexity is of the same order of magnitude as the slowest-node time-complexity. In addition, in the LOCAL model, the time-complexity is computed as a worst case over all possible identity assignments to the nodes of the network. In this paper, we also investigate the ID-averaged time-complexity, when the number of rounds is averaged over all possible identity assignments. Our second main result is that the ID-averaged time-complexity is essentially the same as the expected time-complexity of randomized algorithms (where the expectation is taken over all possible random bits used by the nodes, and the number of rounds is measured for the worst-case identity assignment). Finally, we study the node-averaged ID-averaged time-complexity. Comment: (Submitted) Journal version.
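
    The two measures are easy to contrast on a made-up trace: classical complexity is the maximum output round over the nodes, node-averaged complexity is the mean. The numbers below are invented purely to illustrate the kind of gap the abstract mentions (e.g. leader election, where all but one node can output "non-leader" quickly):

```python
# Invented per-node output rounds, for illustration only: in a leader
# election, most nodes may announce "non-leader" after one round while
# one node keeps running.
output_round = [1, 1, 1, 1, 1, 1, 1, 16]

classical = max(output_round)                          # slowest node: 16
node_averaged = sum(output_round) / len(output_round)  # 2.875
print(classical, node_averaged)
```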

    Derandomization, Witnesses for Boolean Matrix Multiplication and Construction of Perfect Hash Functions


    The cryptographic power of misaligned reference frames

    Suppose that Alice and Bob define their coordinate axes differently, and the change of reference frame between them is given by a probability distribution $\mu$ over SO(3). We show that this uncertainty of reference frame is of no use for bit commitment when $\mu$ is uniformly distributed over a (sub)group of SO(3), but other choices of $\mu$ can give rise to a partially or even asymptotically secure bit commitment. Comment: 4 pages, LaTeX; v2 has a new reference.

    Cryptographic Randomized Response Techniques

    We develop cryptographically secure techniques to guarantee unconditional privacy for respondents to polls. Our constructions are efficient and practical, and are shown not to allow cheating respondents to affect the "tally" by more than their own vote, which will be given the exact same weight as that of other respondents. We demonstrate solutions to this problem based on both traditional cryptographic techniques and quantum cryptography. Comment: 21 pages.
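
    For background, the classical (non-cryptographic) randomized response technique that such schemes strengthen works as follows: each respondent answers truthfully with probability p and flips the answer otherwise, and the pollster unbiases the observed tally afterwards. A minimal sketch, with made-up parameters:

```python
import random

# Classical randomized response (Warner's technique, not this paper's
# cryptographic variant): answer truthfully with probability p, flip
# otherwise; the expected observed rate is p*t + (1-p)*(1-t), which the
# pollster inverts to estimate the true rate t.
def respond(truth: bool, p: float) -> bool:
    return truth if random.random() < p else not truth

def estimate(responses, p):
    observed = sum(responses) / len(responses)
    return (observed - (1 - p)) / (2 * p - 1)

random.seed(0)
p, true_rate, n = 0.75, 0.30, 100_000
answers = [respond(random.random() < true_rate, p) for _ in range(n)]
print(round(estimate(answers, p), 3))  # close to 0.30
```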

    Two-Source Dispersers for Polylogarithmic Entropy and Improved Ramsey Graphs

    In his 1947 paper that inaugurated the probabilistic method, Erdős proved the existence of $2\log n$-Ramsey graphs on $n$ vertices. Matching Erdős' result with a constructive proof is a central problem in combinatorics that has gained significant attention in the literature. The state-of-the-art result was obtained in the celebrated paper by Barak, Rao, Shaltiel and Wigderson [Ann. Math'12], who constructed a $2^{2^{(\log\log n)^{1-\alpha}}}$-Ramsey graph, for some small universal constant $\alpha > 0$. In this work, we significantly improve the result of Barak et al. and construct $2^{(\log\log n)^c}$-Ramsey graphs, for some universal constant $c$. In the language of theoretical computer science, our work resolves the problem of explicitly constructing two-source dispersers for polylogarithmic entropy.
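
    The Ramsey property itself is simple to state and, for tiny graphs, to verify by brute force; the sketch below checks it directly (the difficulty, of course, is constructing such graphs explicitly for large $n$):

```python
from itertools import combinations

# A graph is k-Ramsey if it contains no clique and no independent set
# of size k. Brute force, feasible only for tiny n.
def is_k_ramsey(n, edges, k):
    e = {frozenset(pair) for pair in edges}
    for S in combinations(range(n), k):
        pairs = [frozenset(pair) for pair in combinations(S, 2)]
        if all(p in e for p in pairs) or not any(p in e for p in pairs):
            return False
    return True

# The 5-cycle is the classical witness that R(3,3) > 5: it is 3-Ramsey.
c5 = [(i, (i + 1) % 5) for i in range(5)]
print(is_k_ramsey(5, c5, 3))  # True
```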

    Secret-Sharing for NP

    A computational secret-sharing scheme is a method that enables a dealer, who has a secret, to distribute this secret among a set of parties such that a "qualified" subset of parties can efficiently reconstruct the secret while any "unqualified" subset of parties cannot efficiently learn anything about the secret. The collection of "qualified" subsets is defined by a Boolean function. It has been a major open problem to understand which (monotone) functions can be realized by a computational secret-sharing scheme. Yao suggested a method for secret-sharing for any function that has a polynomial-size monotone circuit (a class which is strictly smaller than the class of monotone functions in P). Around 1990 Rudich raised the possibility of obtaining secret-sharing for all monotone functions in NP: in order to reconstruct the secret, a set of parties must be "qualified" and provide a witness attesting to this fact. Recently, Garg et al. (STOC 2013) put forward the concept of witness encryption, where the goal is to encrypt a message relative to a statement "$x \in L$" for a language $L$ in NP such that anyone holding a witness to the statement can decrypt the message; however, if $x$ is not in $L$, then it is computationally hard to decrypt. Garg et al. showed how to construct several cryptographic primitives from witness encryption and gave a candidate construction. One can show that computational secret-sharing implies witness encryption for the same language. Our main result is the converse: we give a construction of a computational secret-sharing scheme for any monotone function in NP assuming witness encryption for NP and one-way functions. As a consequence we get a completeness theorem for secret-sharing: a computational secret-sharing scheme for any single monotone NP-complete function implies a computational secret-sharing scheme for every monotone function in NP.
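
    To make the sharing idea concrete, here is a minimal sketch of the folklore monotone-formula scheme (Benaloh-Leichter style, a simpler relative of the circuit-based method attributed to Yao above; not this paper's construction): an AND gate XOR-splits its secret among its children, an OR gate hands each child a full copy.

```python
import secrets

# Share a secret along a monotone formula: AND XOR-splits the secret
# among its children, OR gives each child a full copy. A subset of
# parties can XOR its shares back together iff the formula accepts it.
def share(secret: int, formula, shares):
    op = formula[0]
    if op == "var":
        shares.setdefault(formula[1], []).append(secret)
    elif op == "or":
        for child in formula[1:]:
            share(secret, child, shares)
    elif op == "and":
        parts = [secrets.randbits(32) for _ in formula[2:]]
        first = secret
        for r in parts:
            first ^= r
        for child, s in zip(formula[1:], [first] + parts):
            share(s, child, shares)
    return shares

# (A and B) or C: {A, B} together, or C alone, reconstruct by XOR.
f = ("or", ("and", ("var", "A"), ("var", "B")), ("var", "C"))
sh = share(0xC0FFEE, f, {})
print(sh["A"][0] ^ sh["B"][0] == 0xC0FFEE, sh["C"][0] == 0xC0FFEE)  # True True
```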