Construction of asymptotically good low-rate error-correcting codes through pseudo-random graphs
A novel technique, based on the pseudo-random properties of certain graphs
known as expanders, is used to obtain new, simple, explicit constructions of
asymptotically good codes. In one of the constructions, the expanders are used
to enhance Justesen codes by replicating, shuffling, and then regrouping the
code coordinates. For any fixed (small) rate, and for a sufficiently large
alphabet, the codes thus obtained lie above the Zyablov bound. Using these
codes as outer codes in a concatenated scheme, a second asymptotically good
construction is obtained which applies to small alphabets (say, GF(2)) as
well. Although these concatenated codes lie below the Zyablov bound, they are
still superior to previously known explicit constructions in the zero-rate
neighborhood.
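For illustration, the regrouping step has a compact form: place the codeword
symbols on the left side of a bipartite graph and let each right-hand node
bundle the symbols of its neighbors into one symbol over a larger alphabet.
The sketch below (Python, names hypothetical) uses a random graph as a
stand-in for the paper's explicit expanders.

    import random

    def regroup(codeword, right_neighbors):
        # Right node j bundles the d coordinates listed in
        # right_neighbors[j] into one symbol over the alphabet Sigma^d.
        return [tuple(codeword[i] for i in right_neighbors[j])
                for j in range(len(right_neighbors))]

    n, d = 12, 3
    random.seed(0)
    # A random d-regular-ish bipartite graph as a placeholder expander.
    right_neighbors = [random.sample(range(n), d) for _ in range(n)]
    codeword = [random.randint(0, 1) for _ in range(n)]
    print(regroup(codeword, right_neighbors))  # 12 symbols over {0,1}^3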
Oblivious Transfer based on Key Exchange
Key-exchange protocols have been overlooked as a possible means for
implementing oblivious transfer (OT). In this paper we present a protocol for
mutual exchange of secrets, 1-out-of-2 OT, and coin flipping, similar to the
Diffie-Hellman protocol, using the idea of obliviously exchanging encryption
keys. Since the Diffie-Hellman scheme is widely used, our protocol may provide
a useful alternative to the conventional methods for implementing oblivious
transfer, and a useful primitive in building larger cryptographic schemes.
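The abstract leaves the protocol itself out; as a hedged illustration of the
general idea of obliviously exchanged keys, here is a minimal sketch in the
style of the classic Bellare-Micali 1-out-of-2 OT built from Diffie-Hellman
key pairs. It is not necessarily the paper's protocol, and the toy parameters
below are far too small to be secure.

    import random

    p = 2_147_483_647        # toy prime modulus (insecure, illustration only)
    g = 7                    # fixed base
    C = pow(g, 123_456, p)   # public element whose discrete log nobody knows

    def receiver_keys(choice_bit):
        # One real key pair plus one public key the receiver cannot use,
        # arranged so that pk0 * pk1 = C hides which key is which.
        x = random.randrange(2, p - 1)
        pk_real = pow(g, x, p)
        pk_other = (C * pow(pk_real, -1, p)) % p
        pks = (pk_real, pk_other) if choice_bit == 0 else (pk_other, pk_real)
        return x, pks

    def enc(pk, m):          # ElGamal encryption under pk
        k = random.randrange(2, p - 1)
        return pow(g, k, p), (m * pow(pk, k, p)) % p

    def dec(x, ct):          # ElGamal decryption with secret exponent x
        c1, c2 = ct
        return (c2 * pow(pow(c1, x, p), -1, p)) % p

    b = 1                                  # receiver's secret choice
    x, (pk0, pk1) = receiver_keys(b)
    assert (pk0 * pk1) % p == C            # sender's consistency check
    m0, m1 = 42, 99                        # sender's two messages
    ct0, ct1 = enc(pk0, m0), enc(pk1, m1)
    assert dec(x, (ct0, ct1)[b]) == (m0, m1)[b]  # only m_b is learned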
Locality of not-so-weak coloring
Many graph problems are locally checkable: a solution is globally feasible if
it looks valid in all constant-radius neighborhoods. This idea is formalized in
the concept of locally checkable labelings (LCLs), introduced by Naor and
Stockmeyer (1995). Recently, Chang et al. (2016) showed that in bounded-degree
graphs, every LCL problem belongs to one of the following classes:
- "Easy": solvable in rounds with both deterministic and
randomized distributed algorithms.
- "Hard": requires at least rounds with deterministic and
rounds with randomized distributed algorithms.
Hence for any parameterized LCL problem, when we move from local problems
towards global problems, there is some point at which complexity suddenly jumps
from easy to hard. For example, for vertex coloring in $d$-regular graphs it
is now known that this jump is at precisely $d$ colors: coloring with $d+1$
colors is easy, while coloring with $d$ colors is hard.
However, it is currently poorly understood where this jump takes place when
one looks at defective colorings. To study this question, we define
$k$-partial $c$-coloring as follows: nodes are labeled with numbers between
$1$ and $c$, and every node is incident to at least $k$ properly colored
edges.
It is known that $1$-partial $2$-coloring (a.k.a. weak $2$-coloring) is easy
for any $d$. As our main result, we show that $k$-partial $c$-coloring
becomes hard as soon as $k \ge 2$, no matter how large a $c$ we have.
We also show that this is fundamentally different from $d$-partial
$c$-coloring: no matter which $d$ we choose, the problem is always hard for
$c = d$ but it becomes easy when $c > d$. The same was known previously for
partial $c$-coloring with $k = d$, but the case of $k < d$ was open.
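Because the definition is purely local, it doubles as a few lines of code.
A minimal checker for the definition above (Python, graph encoding
hypothetical):

    def is_k_partial_c_coloring(adj, color, k, c):
        # adj: dict node -> list of neighbors; color: dict node -> 1..c.
        # Each node needs at least k incident properly colored edges.
        if any(not (1 <= col <= c) for col in color.values()):
            return False
        return all(
            sum(1 for u in adj[v] if color[u] != color[v]) >= k
            for v in adj
        )

    # A 4-cycle colored 1,1,2,2: every node has exactly one properly
    # colored incident edge, so this is a weak (1-partial) 2-coloring
    # but not a 2-partial 2-coloring.
    cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    col = {0: 1, 1: 1, 2: 2, 3: 2}
    assert is_k_partial_c_coloring(cycle, col, k=1, c=2)
    assert not is_k_partial_c_coloring(cycle, col, k=2, c=2)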
Exact bounds for distributed graph colouring
We prove exact bounds on the time complexity of distributed graph colouring.
If we are given a directed path that is properly coloured with $n$ colours,
by prior work it is known that we can find a proper 3-colouring in
$\frac{1}{2} \log^*(n) \pm O(1)$ communication rounds. We close the gap
between upper and lower bounds: we show that for infinitely many $n$ the time
complexity is precisely $\frac{1}{2} \log^*(n)$ communication rounds.
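The upper bounds in this line of work build on Cole-Vishkin-style color
reduction, where a node compares the binary representation of its color with
its predecessor's. A toy round of the classic technique (Python; a sketch of
the general method, not the paper's exact half-log-star algorithm):

    def cv_round(colors):
        # colors[i] is node i's color on a directed path; node 0 has no
        # predecessor, so it pretends to see a color differing in bit 0.
        new = []
        for i, col in enumerate(colors):
            pred = colors[i - 1] if i > 0 else col ^ 1
            pos = (col ^ pred).bit_length() - 1   # a differing bit position
            new.append(2 * pos + ((col >> pos) & 1))
        return new

    colors = list(range(100))      # a properly colored directed path
    while max(colors) > 5:         # O(log* n) iterations in general
        colors = cv_round(colors)
    assert all(a != b for a, b in zip(colors, colors[1:]))  # still proper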
How Long It Takes for an Ordinary Node with an Ordinary ID to Output?
In the context of distributed synchronous computing, processors perform in
rounds, and the time-complexity of a distributed algorithm is classically
defined as the number of rounds before all computing nodes have output. Hence,
this complexity measure captures the running time of the slowest node(s). In
this paper, we are interested in the running time of the ordinary nodes, to be
compared with the running time of the slowest nodes. The node-averaged
time-complexity of a distributed algorithm on a given instance is defined as
the average, taken over every node of the instance, of the number of rounds
before that node outputs. We compare the node-averaged time-complexity with the
classical one in the standard LOCAL model for distributed network computing. We
show that there can be an exponential gap between the node-averaged
time-complexity and the classical time-complexity, as witnessed by, e.g.,
leader election. Our first main result is a positive one, stating that, in
fact, the two time-complexities behave the same for a large class of problems
on very sparse graphs. In particular, we show that, for LCL problems on cycles,
the node-averaged time complexity is of the same order of magnitude as the
slowest node time-complexity.
In addition, in the LOCAL model, the time-complexity is computed as a worst
case over all possible identity assignments to the nodes of the network. In
this paper, we also investigate the ID-averaged time-complexity, when the
number of rounds is averaged over all possible identity assignments. Our second
main result is that the ID-averaged time-complexity is essentially the same as
the expected time-complexity of randomized algorithms (where the expectation is
taken over all possible random bits used by the nodes, and the number of rounds
is measured for the worst-case identity assignment).
Finally, we study the node-averaged ID-averaged time-complexity.
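Both measures are straightforward to compute from the round at which each
node outputs. A toy illustration of a leader-election-style gap (Python,
numbers hypothetical):

    def classical_complexity(output_round):
        return max(output_round)                  # slowest node

    def node_averaged_complexity(output_round):
        return sum(output_round) / len(output_round)

    # Hypothetical trace: almost every node announces "non-leader" after
    # one round, while a handful of nodes keep running much longer.
    n = 1_000_000
    rounds = [1] * (n - 10) + [n] * 10
    print(classical_complexity(rounds))       # 1000000
    print(node_averaged_complexity(rounds))   # about 11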
The cryptographic power of misaligned reference frames
Suppose that Alice and Bob define their coordinate axes differently, and the
change of reference frame between them is given by a probability distribution
mu over SO(3). We show that this uncertainty of reference frame is of no use
for bit commitment when mu is uniformly distributed over a (sub)group of SO(3),
but other choices of mu can give rise to a partially or even asymptotically
secure bit commitment.
Two-Source Dispersers for Polylogarithmic Entropy and Improved Ramsey Graphs
In his 1947 paper that inaugurated the probabilistic method, Erdős proved
the existence of $2 \log n$-Ramsey graphs on $n$ vertices. Matching Erdős'
result with a constructive proof is a central problem in combinatorics that
has gained significant attention in the literature. The state-of-the-art
result was obtained in the celebrated paper by Barak, Rao, Shaltiel and
Wigderson [Ann. Math'12], who constructed a
$2^{2^{(\log \log n)^{1-\alpha}}}$-Ramsey graph, for some small universal
constant $\alpha > 0$.
In this work, we significantly improve the result of Barak et al. and
construct $2^{(\log \log n)^{c}}$-Ramsey graphs, for some universal constant
$c$.
In the language of theoretical computer science, our work resolves the problem
of explicitly constructing two-source dispersers for polylogarithmic entropy.
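For context, the existence bound that the explicit constructions are chasing
comes from a one-line union bound; the standard textbook calculation (not
taken from the paper) in LaTeX form:

    % Color the edges of K_n uniformly at random. A 2log(n)-Ramsey graph
    % exists because the probability that some k-set is monochromatic is
    \[
      \Pr[\exists\, \text{monochromatic } k\text{-set}]
        \;\le\; \binom{n}{k}\, 2^{\,1-\binom{k}{2}}
        \;<\; 1
      \qquad \text{for } k = 2 \log n \text{ and large } n,
    \]
    % so some 2-coloring avoids all monochromatic k-sets, i.e., the
    % corresponding graph has no clique or independent set of size 2 log n.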
Cryptographic Randomized Response Techniques
We develop cryptographically secure techniques to guarantee unconditional
privacy for respondents to polls. Our constructions are efficient and
practical, and are shown not to allow cheating respondents to affect the
"tally" by more than their own vote, which will be given the exact same
weight as that of other respondents. We demonstrate solutions to this problem
based on both traditional cryptographic techniques and quantum cryptography.
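The classical baseline here is Warner-style randomized response: each
respondent answers truthfully only with probability $p > 1/2$, which makes
any single answer deniable while the tally remains estimable. A minimal
sketch of that baseline (Python; the paper's contribution is the
cryptographic enforcement layered on top of it):

    import random

    def respond(truth, p=0.75):
        # Tell the truth with probability p, lie otherwise.
        return truth if random.random() < p else not truth

    def estimate_yes_fraction(answers, p=0.75):
        # E[yes-rate] = p*f + (1-p)*(1-f); solve for the true fraction f.
        observed = sum(answers) / len(answers)
        return (observed - (1 - p)) / (2 * p - 1)

    secret_votes = [random.random() < 0.3 for _ in range(100_000)]
    answers = [respond(v) for v in secret_votes]
    print(estimate_yes_fraction(answers))   # close to the true 0.3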
Secret-Sharing for NP
A computational secret-sharing scheme is a method that enables a dealer, who
has a secret, to distribute this secret among a set of parties such that a
"qualified" subset of parties can efficiently reconstruct the secret while any
"unqualified" subset of parties cannot efficiently learn anything about the
secret. The collection of "qualified" subsets is defined by a Boolean function.
It has been a major open problem to understand which (monotone) functions can
be realized by a computational secret-sharing scheme. Yao suggested a method
for secret-sharing for any function that has a polynomial-size monotone circuit
(a class which is strictly smaller than the class of monotone functions in P).
Around 1990 Rudich raised the possibility of obtaining secret-sharing for all
monotone functions in NP: In order to reconstruct the secret a set of parties
must be "qualified" and provide a witness attesting to this fact.
Recently, Garg et al. (STOC 2013) put forward the concept of witness
encryption, where the goal is to encrypt a message relative to a statement "x
in L" for a language L in NP such that anyone holding a witness to the
statement can decrypt the message; however, if x is not in L, then it is
computationally hard to decrypt. Garg et al. showed how to construct several
cryptographic primitives from witness encryption and gave a candidate
construction.
One can show that computational secret-sharing implies witness encryption for
the same language. Our main result is the converse: we give a construction of a
computational secret-sharing scheme for any monotone function in NP assuming
witness encryption for NP and one-way functions. As a consequence we get a
completeness theorem for secret-sharing: computational secret-sharing scheme
for any single monotone NP-complete function implies a computational
secret-sharing scheme for every monotone function in NP.
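As background for the circuit-based method attributed to Yao above, the
simpler Benaloh-Leichter scheme for monotone formulas shows how an access
structure turns into shares: an AND gate splits the secret into XOR parts,
an OR gate hands the same secret to every branch. A minimal sketch (Python,
formula encoding hypothetical; this is the classical scheme, not the paper's
witness-encryption-based construction):

    import secrets
    from functools import reduce

    def share(secret, formula, shares=None):
        # formula: ("leaf", party) | ("and", f1, ...) | ("or", f1, ...)
        if shares is None:
            shares = {}
        op, *subs = formula
        if op == "leaf":
            shares.setdefault(subs[0], []).append(secret)
        elif op == "or":               # any single branch reconstructs
            for f in subs:
                share(secret, f, shares)
        elif op == "and":              # all branches must XOR together
            parts = [secrets.randbits(32) for _ in subs[:-1]]
            parts.append(reduce(lambda a, b: a ^ b, parts, secret))
            for f, part in zip(subs, parts):
                share(part, f, shares)
        return shares

    # Access structure (A and B) or C: qualified sets {A, B} and {C}.
    s = 0xDEADBEEF
    sh = share(s, ("or", ("and", ("leaf", "A"), ("leaf", "B")),
                   ("leaf", "C")))
    assert sh["A"][0] ^ sh["B"][0] == s   # A and B jointly recover s
    assert sh["C"][0] == s                # C alone recovers s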