Communication with Imperfectly Shared Randomness
The communication complexity of many fundamental problems reduces greatly
when the communicating parties share randomness that is independent of the
inputs to the communication task. Natural communication processes (say, between humans), however, often involve large amounts of shared correlations among the communicating players, but rarely allow for perfect sharing of randomness. Can
the communication complexity benefit from shared correlations as well as it
does from shared randomness? This question was considered mainly in the context
of simultaneous communication by Bavarian et al. (ICALP 2014). In this work we
study this problem in the standard interactive setting and give some general
results. In particular, we show that every problem with communication complexity of k bits with perfectly shared randomness has a protocol using imperfectly shared randomness with complexity exp(k) bits. We also show that this is best possible by exhibiting a promise problem with complexity k bits with perfectly shared randomness which requires exp(k) bits when the randomness is imperfectly shared. Along the way, we also highlight some other
basic problems such as compression, and agreement distillation, where shared
randomness plays a central role and analyze the complexity of these problems in
the imperfectly shared randomness model.
The technical highlight of this work is the lower bound that goes into the
result showing the tightness of our general connection. This result builds on
the intuition that communication with imperfectly shared randomness needs to be
less sensitive to its random inputs than communication with perfectly shared
randomness. The formal proof invokes results about the small-set expansion of
the noisy hypercube and an invariance principle to convert this intuition to a
proof, thus giving a new application domain for these fundamental results.Comment: Updated some references and discussion w.r.t. previous wor
Communication Complexity of Permutation-Invariant Functions
Motivated by the quest for a broader understanding of communication
complexity of simple functions, we introduce the class of
"permutation-invariant" functions. A partial function is permutation-invariant if for every bijection
and every , it is the case that . Most of the commonly studied functions
in communication complexity are permutation-invariant. For such functions, we
present a simple complexity measure (computable in time polynomial in n given an implicit description of f) that describes their communication complexity
up to polynomial factors and up to an additive error that is logarithmic in the
input size. This gives a coarse taxonomy of the communication complexity of
simple functions. Our work highlights the role of the well-known lower bounds
of functions such as 'Set-Disjointness' and 'Indexing', while complementing
them with the relatively lesser-known upper bounds for 'Gap-Inner-Product'
(from the sketching literature) and 'Sparse-Gap-Inner-Product' (from the recent
work of Canonne et al. [ITCS 2015]). We also present consequences to the study
of communication complexity with imperfectly shared randomness where we show
that for total permutation-invariant functions, imperfectly shared randomness
results in only a polynomial blow-up in communication complexity after an additive O(log log n) overhead.
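As a quick illustration (ours, not the paper's), Set-Disjointness is permutation-invariant: applying the same bijection to both inputs cannot change whether their supports intersect. A brute-force check of the definition:

```python
import itertools, random

def set_disjointness(x, y):
    """1 iff the supports of x and y do not intersect."""
    return int(all(not (a and b) for a, b in zip(x, y)))

def apply_perm(v, pi):
    """Permute the coordinates of v according to the bijection pi."""
    return tuple(v[pi[i]] for i in range(len(v)))

n, rng = 4, random.Random(1)
pi = list(range(n))
for x in itertools.product((0, 1), repeat=n):
    for y in itertools.product((0, 1), repeat=n):
        rng.shuffle(pi)  # a fresh random bijection for each input pair
        assert set_disjointness(x, y) == \
            set_disjointness(apply_perm(x, pi), apply_perm(y, pi))
```

Such a function depends on (x, y) only through the counts |x|, |y| and |x AND y|, which is what makes a coarse, efficiently computable complexity measure plausible.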
Communication with Contextual Uncertainty
We introduce a simple model illustrating the role of context in communication
and the challenge posed by uncertainty of knowledge of context. We consider a
variant of distributional communication complexity where Alice gets some information x and Bob gets y, where (x, y) is drawn from a known distribution mu, and Bob wishes to compute some function g(x, y) (with high probability over mu). In our variant, Alice does not know g, but only knows some function f which is an approximation of g. Thus, the function
being computed forms the context for the communication, and knowing it
imperfectly models (mild) uncertainty in this context.
A naive solution would be for Alice and Bob to first agree on some common function h that is close to both f and g and then use a protocol for h to compute h(x, y). We show that any such agreement leads to a large overhead in communication, ruling out such a universal solution.
In contrast, we show that if g has a one-way communication protocol with complexity k in the standard setting, then it has a communication protocol with complexity O(k(1+I)) in the uncertain setting, where I denotes the mutual information between x and y. In the particular case where the
input distribution is a product distribution, the protocol in the uncertain
setting only incurs a constant factor blow-up in communication and error.
Furthermore, we show that the dependence on the mutual information is
required. Namely, we construct a class of functions along with a non-product distribution over (x, y) for which the communication complexity is a single bit in the standard setting but at least Omega(sqrt(n)) bits in the uncertain setting.
Comment: 20 pages + 1 title page
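The quantity I in the O(k(1+I)) bound is the mutual information of the input pair under mu. A small Python computation (ours) of this quantity shows why product distributions are the benign case:

```python
from math import log2

def mutual_information(joint):
    """I(X;Y) in bits for a joint pmf given as a dict {(x, y): prob}."""
    px, py = {}, {}
    for (x, y), pr in joint.items():
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
    return sum(pr * log2(pr / (px[x] * py[y]))
               for (x, y), pr in joint.items() if pr > 0)

product = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}  # I = 0: bound is O(k)
correlated = {(0, 0): 0.5, (1, 1): 0.5}                   # I = 1
print(mutual_information(product), mutual_information(correlated))  # 0.0 1.0
```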
The Price of Uncertain Priors in Source Coding
We consider the problem of one-way communication when the recipient does not
know exactly the distribution that the messages are drawn from, but has a
"prior" distribution that is known to be close to the source distribution, a
problem first considered by Juba et al. We consider the question of how much
longer the messages need to be in order to cope with the uncertainty about the
receiver's prior and the source distribution, respectively, as compared to the
standard source coding problem. We consider two variants of this uncertain
priors problem: the original setting of Juba et al. in which the receiver is
required to correctly recover the message with probability 1, and a setting
introduced by Haramaty and Sudan, in which the receiver is permitted to fail
with some probability epsilon. In both settings, we obtain lower bounds that
are tight up to logarithmically smaller terms. In the latter setting, we
furthermore present a variant of the coding scheme of Juba et al. whose overhead matches the lower bound up to lower-order terms, thus also establishing the nearly tight upper bound.
Comment: To appear in IEEE Transactions on Information Theory
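A toy sketch (ours; the actual schemes of Juba et al. and Haramaty-Sudan use shared random hash functions and a more careful length choice) of the hashing idea behind coding with uncertain priors: the sender transmits a short hash of the message, and the receiver tests candidates in decreasing order of its own prior.

```python
import hashlib, math

def h(msg, length):
    """First `length` bits of a hash of msg. A fixed hash is only a stand-in
    here; real schemes draw the hash from shared randomness."""
    digest = hashlib.sha256(msg.encode()).hexdigest()
    return bin(int(digest, 16))[2:].zfill(256)[:length]

def encode(x, p, slack):
    """Hash length scales with log(1/p(x)), plus slack bits to absorb the
    mismatch between the source distribution p and the receiver's prior."""
    length = math.ceil(math.log2(1 / p[x])) + slack
    return h(x, length), length

def decode(code, q):
    """Try candidates in decreasing order of the receiver's prior q and
    return the first whose hash matches."""
    bits, length = code
    for x in sorted(q, key=q.get, reverse=True):
        if h(x, length) == bits:
            return x

p = {"yes": 0.7, "no": 0.2, "maybe": 0.1}   # source distribution
q = {"yes": 0.6, "no": 0.3, "maybe": 0.1}   # receiver's close-but-wrong prior
print(decode(encode("no", p, slack=4), q))  # expect "no"; a fixed hash can collide
```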
The Power of Shared Randomness in Uncertain Communication
In a recent work (Ghazi et al., SODA 2016), the authors with Komargodski and Kothari initiated the study of communication with contextual uncertainty, a setup aiming to understand how efficient communication is possible when the communicating parties imperfectly share a huge context. In this setting, Alice is given a function f and an input string x, and Bob is given a function g and an input string y. The pair (x,y) comes from a known distribution mu and f and g are guaranteed to be close under this distribution. Alice and Bob wish to compute g(x,y) with high probability. The lack of agreement between Alice and Bob on the function that is being computed captures the uncertainty in the context. The previous work showed that any problem with one-way communication complexity k in the standard model (i.e., without uncertainty, in other words, under the promise that f=g) has public-coin communication at most O(k(1+I)) bits in the uncertain case, where I is the mutual information between x and y. Moreover, a lower bound of Omega(sqrt{I}) bits on the public-coin uncertain communication was also shown.
However, an important question that was left open is related to the power that public randomness brings to uncertain communication. Can Alice and Bob achieve efficient communication amid uncertainty without using public randomness? And how powerful are public-coin protocols in overcoming uncertainty? Motivated by these two questions:
- We prove the first separation between private-coin uncertain communication and public-coin uncertain communication. Namely, we exhibit a function class for which the communication in the standard model and the public-coin uncertain communication are O(1) while the private-coin uncertain communication is a growing function of n (the length of the inputs). This lower bound (proved with respect to the uniform distribution) is in sharp contrast with the case of public-coin uncertain communication which was shown by the previous work to be within a constant factor from the certain communication. This lower bound also implies the first separation between public-coin uncertain communication and deterministic uncertain communication. Interestingly, we also show that if Alice and Bob imperfectly share a sequence of random bits (a setup weaker than public randomness), then achieving a constant blow-up in communication is still possible.
- We improve the lower-bound of the previous work on public-coin uncertain communication. Namely, we exhibit a function class and a distribution (with mutual information I approx n) for which the one-way certain communication is k bits but the one-way public-coin uncertain communication is at least Omega(sqrt{k}*sqrt{I}) bits.
Our proofs introduce new problems in the standard communication complexity model and prove lower bounds for these problems. Both the problems and the lower bound techniques may be of general interest.
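For intuition about what public coins buy even without uncertainty, compare the textbook Equality protocols (classic constructions, unrelated to the function class of this paper): with a shared random string a single parity bit suffices per repetition, whereas a private-coin protocol must ship its randomness, e.g., a random-prime fingerprint of Theta(log n) bits.

```python
import random

def public_coin_eq(x, y, shared_r):
    """One public-coin round: compare parities of x and y on the random
    subset selected by shared_r. Errs with probability 1/2 when x != y."""
    px = sum(xi & ri for xi, ri in zip(x, shared_r)) % 2
    py = sum(yi & ri for yi, ri in zip(y, shared_r)) % 2
    return px == py

def private_coin_eq(x, y, rng):
    """Private coins: Alice sends (p, x mod p) for a random prime p = O(n^2),
    i.e., Theta(log n) bits. Errs with small probability when x != y."""
    n = len(x)
    primes = [q for q in range(2, max(16, 4 * n * n))
              if all(q % d for d in range(2, q))]
    p = rng.choice(primes)
    xv = int("".join(map(str, x)), 2)
    yv = int("".join(map(str, y)), 2)
    return xv % p == yv % p
```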
Communication over an Arbitrarily Varying Channel under a State-Myopic Encoder
We study the problem of communication over a discrete arbitrarily varying
channel (AVC) when a noisy version of the state is known non-causally at the
encoder. The state is chosen by an adversary which knows the coding scheme. A
state-myopic encoder observes this state non-causally, though imperfectly,
through a noisy discrete memoryless channel (DMC). We first characterize the
capacity of this state-dependent channel when the encoder-decoder share
randomness unknown to the adversary, i.e., the randomized coding capacity.
Next, we show that when only the encoder is allowed to randomize, the capacity
remains unchanged when positive. Interesting and well-known special cases of
the state-myopic encoder model are also presented.
Comment: 16 pages
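A minimal simulation (ours; it fixes the model only and implements no capacity-achieving code) of the state-myopic setup: the adversary fixes a state sequence knowing the coding scheme, the encoder observes the entire sequence in advance but only through a noisy DMC, and the channel output depends on both input and state.

```python
import random

rng = random.Random(7)

def bsc(bit, crossover):
    """Binary symmetric channel: flip the bit with the given probability."""
    return bit ^ int(rng.random() < crossover)

n = 16
message = [rng.randint(0, 1) for _ in range(n)]
state = [int(i % 3 == 0) for i in range(n)]  # adversary's state sequence
noisy_state = [bsc(s, 0.1) for s in state]   # encoder's non-causal, myopic view

# A naive encoder that pre-compensates using its noisy state estimate,
# over an additive-state channel: output = input XOR state.
codeword = [m ^ s_hat for m, s_hat in zip(message, noisy_state)]
received = [c ^ s for c, s in zip(codeword, state)]
errors = sum(r != m for r, m in zip(received, message))
print(f"residual bit errors caused by myopia: {errors}/{n}")
```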
Guessing a password over a wireless channel (on the effect of noise non-uniformity)
A string is sent over a noisy channel that erases some of its characters.
Knowing the statistical properties of the string's source and which characters
were erased, a listener that is equipped with an ability to test the veracity
of a string, one string at a time, wishes to fill in the missing pieces. Here
we characterize the influence of the stochastic properties of both the string's
source and the noise on the channel on the distribution of the number of
attempts required to identify the string, its guesswork. In particular, we
establish that the average noise on the channel is not a determining factor for
the average guesswork and illustrate simple settings where one recipient with,
on average, a better channel than another recipient, has higher average
guesswork. These results stand in contrast to those for the capacity of wiretap
channels and suggest the use of techniques such as friendly jamming with
pseudo-random sequences to exploit this guesswork behavior.
Comment: Asilomar Conference on Signals, Systems & Computers, 201
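A toy calculation (ours) of the headline phenomenon: for uniform bits, the expected number of guesses to recover m erased bits is (2^m + 1)/2, which is convex in m, so a channel that erases 0 or 4 characters equiprobably induces strictly more average guesswork than one that always erases 2, despite identical average noise.

```python
def avg_guesswork(m):
    """Expected guesses to identify m unknown uniform bits when candidates
    are tested one at a time in any fixed order: (2^m + 1) / 2."""
    return (2 ** m + 1) / 2

# Same average number of erasures (2 per string), different average guesswork:
always_two = avg_guesswork(2)                                   # 2.5
zero_or_four = 0.5 * avg_guesswork(0) + 0.5 * avg_guesswork(4)  # 4.75
print(always_two, zero_or_four)
```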