Trees and Markov convexity
We show that an infinite weighted tree admits a bi-Lipschitz embedding into
Hilbert space if and only if it does not contain arbitrarily large complete
binary trees with uniformly bounded distortion. We also introduce a new metric
invariant called Markov convexity, and show how it can be used to compute the
Euclidean distortion of any metric tree up to universal factors.
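For readers encountering the invariant for the first time, the following sketch records the shape of the definition of Markov p-convexity as it appears in this line of work (reconstructed from memory of Lee, Naor, and Peres; consult the paper for the authoritative statement):

```latex
% Markov p-convexity (sketch; notation reconstructed, not verbatim).
% Let \{X_t\} be a Markov chain on a state space \Omega and let
% f : \Omega \to X map into the metric space (X,d). Write
% \tilde{X}_t(s) for the process that agrees with \{X_t\} up to time s
% and then evolves independently with the same transition probabilities.
% (X,d) is Markov p-convex with constant \Pi if, for every such chain,
\[
\sum_{k=0}^{\infty} \sum_{t \in \mathbb{Z}}
  \frac{1}{2^{kp}}\,
  \mathbb{E}\!\left[ d\!\left( f(X_t), f(\tilde{X}_t(t-2^k)) \right)^{p} \right]
\;\le\;
\Pi^{p} \sum_{t \in \mathbb{Z}}
  \mathbb{E}\!\left[ d\!\left( f(X_t), f(X_{t-1}) \right)^{p} \right].
\]
```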
Measured descent: A new embedding method for finite metrics
We devise a new embedding technique, which we call measured descent, based on
decomposing a metric space locally, at varying speeds, according to the density
of some probability measure. This provides a refined and unified framework for
the two primary methods of constructing Fréchet embeddings for finite metrics,
due to [Bourgain, 1985] and [Rao, 1999]. We prove that any n-point metric space
(X,d) embeds in Hilbert space with distortion O(\sqrt{\alpha_X \log n}), where
\alpha_X is a geometric estimate on the decomposability of X. As an immediate
corollary, we obtain an O(\sqrt{(\log \lambda_X) \log n}) distortion embedding,
where \lambda_X is the doubling constant of X. Since \lambda_X \le n, this
result recovers Bourgain's theorem, but when the metric space X is, in a sense,
"low-dimensional," improved bounds are achieved.
Our embeddings are volume-respecting for subsets of arbitrary size. One
consequence is the existence of (k, O(log n)) volume-respecting embeddings for
all 1 \leq k \leq n, which is the best possible, and answers positively a
question posed by U. Feige. Our techniques are also used to answer positively a
question of Y. Rabinovich, showing that any weighted n-point planar graph
embeds in l_\infty^{O(\log n)} with O(1) distortion. The O(\log n) bound on the
dimension is optimal, and improves upon the previously known bound of
O((\log n)^2).
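As a concrete illustration of the Fréchet-embedding template that measured descent refines, here is the classical Bourgain-style construction in which each coordinate is the distance to a random subset. This is our own sketch of the generic template, not the measured-descent algorithm itself; all names and parameters are ours.

```python
import math
import random

def frechet_embedding(points, dist, seed=0):
    """Bourgain-style Frechet embedding sketch: each coordinate of the
    embedding is the distance from a point to a random subset S, i.e.
    x -> d(x, S) = min_{s in S} d(x, s). Each such coordinate is
    1-Lipschitz; taking O(log^2 n) subsets at geometrically decreasing
    densities gives O(log n) distortion in expectation (Bourgain 1985)."""
    rng = random.Random(seed)
    n = len(points)
    k = max(1, int(math.log2(n)))
    subsets = []
    for scale in range(1, k + 1):          # subset density 2^{-scale}
        for _ in range(k):                 # repetitions per scale
            S = [p for p in points if rng.random() < 2.0 ** -scale]
            if not S:
                S = [rng.choice(points)]   # avoid empty subsets
            subsets.append(S)
    def embed(x):
        return [min(dist(x, s) for s in S) for S in subsets]
    return embed

# Toy usage on a path metric:
pts = list(range(8))
d = lambda a, b: abs(a - b)
f = frechet_embedding(pts, d)
print(f(0)[:5], f(7)[:5])
```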
Locality of not-so-weak coloring
Many graph problems are locally checkable: a solution is globally feasible if
it looks valid in all constant-radius neighborhoods. This idea is formalized in
the concept of locally checkable labelings (LCLs), introduced by Naor and
Stockmeyer (1995). Recently, Chang et al. (2016) showed that in bounded-degree
graphs, every LCL problem belongs to one of the following classes:
- "Easy": solvable in rounds with both deterministic and
randomized distributed algorithms.
- "Hard": requires at least rounds with deterministic and
rounds with randomized distributed algorithms.
Hence for any parameterized LCL problem, when we move from local problems
towards global problems, there is some point at which complexity suddenly jumps
from easy to hard. For example, for vertex coloring in -regular graphs it is
now known that this jump is at precisely colors: coloring with colors
is easy, while coloring with colors is hard.
However, it is currently poorly understood where this jump takes place when
one looks at defective colorings. To study this question, we define k-partial
c-coloring as follows: nodes are labeled with numbers between 1 and c, and
every node is incident to at least k properly colored edges.
It is known that 1-partial 2-coloring (a.k.a. weak 2-coloring) is easy for any
d \ge 1. As our main result, we show that k-partial 2-coloring becomes hard as
soon as k \ge 2, no matter how large a d we have.
We also show that this is fundamentally different from k-partial 3-coloring:
no matter which d we choose, the problem is always hard for k = d but it
becomes easy when k \le d - 1. The same was known previously for k-partial
c-coloring with c \ge 4, but the case of c = 3 was open
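To make the definition above concrete, here is a small brute-force checker for the k-partial c-coloring condition (a toy sketch; the graph representation and all names are ours):

```python
def is_k_partial_c_coloring(adj, color, k, c):
    """Check the k-partial c-coloring condition from the abstract:
    every node is labeled with a color in {1, ..., c}, and every node
    is incident to at least k properly colored edges (edges whose two
    endpoints receive different colors)."""
    for v, neighbors in adj.items():
        if not 1 <= color[v] <= c:
            return False
        properly_colored = sum(1 for u in neighbors if color[u] != color[v])
        if properly_colored < k:
            return False
    return True

# Toy usage: a 4-cycle with 2 alternating colors; every node has 2 properly
# colored incident edges, so this is a 2-partial 2-coloring (in fact proper).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
color = {0: 1, 1: 2, 2: 1, 3: 2}
print(is_k_partial_c_coloring(adj, color, k=2, c=2))  # True
```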
Exact bounds for distributed graph colouring
We prove exact bounds on the time complexity of distributed graph colouring.
If we are given a directed path that is properly coloured with n colours, by
prior work it is known that we can find a proper 3-colouring in
(1/2) \log^*(n) \pm O(1) communication rounds. We close the gap between upper
and lower bounds: we show that for infinitely many n the time complexity is
precisely (1/2) \log^*(n) communication rounds.
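The upper bound behind results of this kind comes from the classical colour-reduction technique of Cole and Vishkin (1986). Below is a minimal sketch of a single reduction step in our own notation; the paper's exact algorithm and constants differ.

```python
def cole_vishkin_step(my_color, successor_color):
    """One Cole-Vishkin colour-reduction step on a directed path: given
    my colour and my successor's colour (proper, so they differ), find
    the lowest bit position i where they differ and output 2*i plus my
    i-th bit. Distinct neighbours stay distinct, and a palette of size m
    shrinks to O(log m) in a single communication round."""
    diff = my_color ^ successor_color
    i = (diff & -diff).bit_length() - 1   # index of lowest differing bit
    return 2 * i + ((my_color >> i) & 1)

# Toy usage: a directed path coloured 5 -> 3 -> 9; one step gives the
# first two nodes new colours from a much smaller palette. (The last
# node has no successor and is handled separately in real algorithms.)
path = [5, 3, 9]
new = [cole_vishkin_step(path[j], path[j + 1]) for j in range(len(path) - 1)]
print(new)  # [2, 3] -- still a proper colouring of those nodes
```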
How Long It Takes for an Ordinary Node with an Ordinary ID to Output?
In the context of distributed synchronous computing, processors perform in
rounds, and the time-complexity of a distributed algorithm is classically
defined as the number of rounds before all computing nodes have output. Hence,
this complexity measure captures the running time of the slowest node(s). In
this paper, we are interested in the running time of the ordinary nodes, to be
compared with the running time of the slowest nodes. The node-averaged
time-complexity of a distributed algorithm on a given instance is defined as
the average, taken over every node of the instance, of the number of rounds
before that node outputs. We compare the node-averaged time-complexity with the
classical one in the standard LOCAL model for distributed network computing. We
show that there can be an exponential gap between the node-averaged
time-complexity and the classical time-complexity, as witnessed by, e.g.,
leader election. Our first main result is a positive one, stating that, in
fact, the two time-complexities behave the same for a large class of problems
on very sparse graphs. In particular, we show that, for LCL problems on cycles,
the node-averaged time complexity is of the same order of magnitude as the
slowest node time-complexity.
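In symbols, with T_v denoting the number of rounds before node v of an n-node instance G outputs, the two measures compared above are:

```latex
% Classical (slowest-node) vs. node-averaged time-complexity
% (notation ours, matching the definitions in the abstract).
\[
T_{\max}(G) = \max_{v \in V(G)} T_v ,
\qquad
T_{\mathrm{avg}}(G) = \frac{1}{n} \sum_{v \in V(G)} T_v .
\]
% The first result above: T_avg can be exponentially smaller than T_max
% (e.g. leader election), yet for LCL problems on cycles the two are of
% the same order of magnitude.
```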
In addition, in the LOCAL model, the time-complexity is computed as a worst
case over all possible identity assignments to the nodes of the network. In
this paper, we also investigate the ID-averaged time-complexity, when the
number of rounds is averaged over all possible identity assignments. Our second
main result is that the ID-averaged time-complexity is essentially the same as
the expected time-complexity of randomized algorithms (where the expectation is
taken over all possible random bits used by the nodes, and the number of rounds
is measured for the worst-case identity assignment).
Finally, we study the node-averaged ID-averaged time-complexity.
Nonlinear spectral calculus and super-expanders
Nonlinear spectral gaps with respect to uniformly convex normed spaces are
shown to satisfy a spectral calculus inequality that establishes their decay
along Cesàro averages. Nonlinear spectral gaps of graphs are also shown to
behave sub-multiplicatively under zigzag products. These results yield a
combinatorial construction of super-expanders, i.e., a sequence of 3-regular
graphs that does not admit a coarse embedding into any uniformly convex normed
space.
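For orientation, the nonlinear spectral gap this abstract refers to is usually defined as follows (a sketch following Mendel and Naor's work on nonlinear spectral calculus, written from memory, so treat it as indicative):

```latex
% Nonlinear spectral gap (sketch, following Mendel--Naor).
% For a symmetric stochastic n x n matrix A = (a_{ij}) and a metric
% space (X,d), let \gamma(A, d^2) be the least \gamma > 0 such that
% for all x_1, \dots, x_n \in X:
\[
\frac{1}{n^2} \sum_{i,j=1}^{n} d(x_i, x_j)^2
\;\le\;
\frac{\gamma}{n} \sum_{i,j=1}^{n} a_{ij}\, d(x_i, x_j)^2 .
\]
% A sequence of bounded-degree graphs is an expander with respect to X
% when \gamma(A_{G_n}, d^2) stays bounded; super-expanders are expanders
% with respect to every uniformly convex normed space.
```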
Distributed Computing in the Asynchronous LOCAL model
The LOCAL model is among the main models for studying locality in the
framework of distributed network computing. This model is however subject to
pertinent criticisms, including the facts that all nodes wake up
simultaneously, proceed in lock-step, and are failure-free. We show that
relaxing these hypotheses to some extent does not hurt local computing. In
particular, we show that, for any construction task T associated to a locally
checkable labeling (LCL), if T is solvable in t rounds in the LOCAL model,
then T remains solvable in O(t) rounds in the asynchronous LOCAL model.
This improves the result by Castañeda et al. [SSS 2016], which was restricted
to 3-coloring rings. More generally, the main contribution of this paper is
to show that, perhaps surprisingly, asynchrony and failures in the computations
do not restrict the power of the LOCAL model, as long as the communications
remain synchronous and failure-free.
Secret-Sharing for NP
A computational secret-sharing scheme is a method that enables a dealer, who
has a secret, to distribute this secret among a set of parties such that a
"qualified" subset of parties can efficiently reconstruct the secret while any
"unqualified" subset of parties cannot efficiently learn anything about the
secret. The collection of "qualified" subsets is defined by a Boolean function.
It has been a major open problem to understand which (monotone) functions can
be realized by computational secret-sharing schemes. Yao suggested a method
for secret-sharing for any function that has a polynomial-size monotone circuit
(a class which is strictly smaller than the class of monotone functions in P).
Around 1990 Rudich raised the possibility of obtaining secret-sharing for all
monotone functions in NP: In order to reconstruct the secret a set of parties
must be "qualified" and provide a witness attesting to this fact.
Recently, Garg et al. (STOC 2013) put forward the concept of witness
encryption, where the goal is to encrypt a message relative to a statement "x
in L" for a language L in NP such that anyone holding a witness to the
statement can decrypt the message; however, if x is not in L, then it is
computationally hard to decrypt. Garg et al. showed how to construct several
cryptographic primitives from witness encryption and gave a candidate
construction.
One can show that computational secret-sharing implies witness encryption for
the same language. Our main result is the converse: we give a construction of a
computational secret-sharing scheme for any monotone function in NP assuming
witness encryption for NP and one-way functions. As a consequence we get a
completeness theorem for secret-sharing: a computational secret-sharing scheme
for any single monotone NP-complete function implies a computational
secret-sharing scheme for every monotone function in NP.
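The data flow of "qualified subset plus witness reconstructs the secret" can be illustrated with a deliberately insecure toy. Everything here is our own stand-in, not the paper's scheme: real witness encryption hides the message computationally whenever the statement is false, which this mock does not.

```python
import hashlib
import secrets

def toy_witness_encrypt(statement_check, message):
    # Insecure stand-in for witness encryption: we only model the
    # interface of the primitive of Garg et al., not its security.
    return {"check": statement_check, "msg": message}

def toy_witness_decrypt(ciphertext, witness):
    # Decryption succeeds iff the witness satisfies the NP relation.
    if ciphertext["check"](witness):
        return ciphertext["msg"]
    raise ValueError("witness rejected")

def deal(secret, n, is_qualified):
    # Dealer: hand each party a random tag (its share), commit to the
    # tags, and witness-encrypt the secret under the statement
    # "the holder knows the tags of some qualified subset".
    tags = [secrets.token_hex(8) for _ in range(n)]
    commits = [hashlib.sha256(t.encode()).hexdigest() for t in tags]
    def check(witness):
        subset, wtags = witness
        return is_qualified(subset) and all(
            hashlib.sha256(wtags[i].encode()).hexdigest() == commits[i]
            for i in subset)
    return toy_witness_encrypt(check, secret), tags

# Usage: 2-out-of-3 threshold access (a monotone function). Parties 0
# and 2 pool their shares as the witness and reconstruct the secret.
ct, shares = deal("s3cret", 3, lambda S: len(S) >= 2)
subset = {0, 2}
print(toy_witness_decrypt(ct, (subset, {i: shares[i] for i in subset})))
```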
A new method for constructing small-bias spaces from Hermitian codes
We propose a new method for constructing small-bias spaces through a
combination of Hermitian codes. For a class of parameters, our multisets are
much faster to construct than those obtained from the traditional algebraic
geometric code construction. So, if speed is important, our construction is
competitive with all other known constructions in that parameter region. And
if speed is not a concern, the small-bias spaces of the present paper still
perform better than the ones related to norm-trace codes reported in [12].
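For reference, the small-bias property these constructions target can be checked by brute force on toy instances (our own illustration; exponential in n, so only for tiny examples):

```python
from itertools import product

def bias(sample, n):
    """Maximum bias of a multiset of n-bit strings: for every nonempty
    index set T, measure how far the parity of the bits indexed by T is
    from balanced over the sample. An eps-biased space has bias <= eps
    simultaneously for every nonempty T."""
    worst = 0.0
    for T in product([0, 1], repeat=n):
        if not any(T):
            continue  # skip the empty index set
        s = sum((-1) ** sum(x[i] * T[i] for i in range(n)) for x in sample)
        worst = max(worst, abs(s) / len(sample))
    return worst

# Toy usage: the full cube {0,1}^3 is 0-biased; a truncated multiset is not.
cube = [tuple(int(b) for b in f"{v:03b}") for v in range(8)]
print(bias(cube, 3))            # 0.0
print(bias(cube[:5], 3) > 0.0)  # True: dropping points introduces bias
```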