Dirty Paper Arbitrarily Varying Channel with a State-Aware Adversary
In this paper, we take an arbitrarily varying channel (AVC) approach to
examine the problem of writing on a dirty paper in the presence of an
adversary. We consider an additive white Gaussian noise (AWGN) channel with an
additive white Gaussian state, where the state is known non-causally to the
encoder and the adversary, but not the decoder. We determine the randomized
coding capacity of this AVC under the maximal probability of error criterion.
Interestingly, it is shown that the jamming adversary disregards its state
knowledge and chooses a white Gaussian channel input that is independent of the
state.
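To make the result concrete, the capacity expression consistent with the abstract's description can be sketched as follows. Here $P$, $\Lambda$, and $\sigma^2$ denote the transmitter's power, the adversary's power, and the AWGN variance; these symbols are our own notation, not taken from the paper. Since the adversary ignores the state and plays white Gaussian noise, and dirty-paper coding cancels the known state, one would expect the randomized coding capacity to equal the Costa capacity with the noise floor raised by the jammer's power:

```latex
% Costa's dirty-paper capacity (no adversary):
C_{\mathrm{DPC}} = \tfrac{1}{2}\log_2\!\left(1 + \frac{P}{\sigma^2}\right)
% With a power-$\Lambda$ adversary acting as additional white Gaussian noise
% (the form suggested by the abstract; notation is ours):
C = \tfrac{1}{2}\log_2\!\left(1 + \frac{P}{\sigma^2 + \Lambda}\right)
```

This is a reading of the abstract's claim, not a formula quoted from the paper itself.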
On AVCs with Quadratic Constraints
In this work we study an Arbitrarily Varying Channel (AVC) with quadratic
power constraints on the transmitter and a so-called "oblivious" jammer (along
with additional AWGN) under a maximum probability of error criterion, and no
private randomness between the transmitter and the receiver. This is in
contrast to similar AVC models under the average probability of error criterion
considered in [1], and models wherein common randomness is allowed [2] -- these
distinctions are important in some communication scenarios outlined below.
We consider the regime where the jammer's power constraint is smaller than
the transmitter's power constraint (in the other regime it is known no positive
rate is possible). For this regime we show the existence of stochastic codes
(with no common randomness between the transmitter and receiver) that enable
reliable communication at the same rate as when the jammer is replaced by
AWGN with the same power constraint. This matches known information-theoretic
outer bounds. In addition to being a stronger result than that in [1] (enabling
recovery of the results therein), our proof techniques are also somewhat more
direct, and hence may be of independent interest.
Comment: A shorter version of this work will be sent to ISIT 2013, Istanbul. 8
pages, 3 figures
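The headline rate claimed in this abstract, that an oblivious jammer with power below the transmitter's is no worse than AWGN of the same power, can be sketched numerically. The function below is an illustration of that claim, not code from the paper; the parameter names P, jammer_power, and sigma2 are our own.

```python
import math

def avc_capacity(P, jammer_power, sigma2):
    """Hedged sketch: achievable rate (bits/channel use) for the Gaussian AVC
    with transmitter power P, oblivious jammer power jammer_power, and ambient
    AWGN variance sigma2, per the abstract's claim that the jammer is no worse
    than extra AWGN when its power is below the transmitter's."""
    if jammer_power >= P:
        # In this regime the abstract notes no positive rate is possible.
        return 0.0
    # Same formula as AWGN capacity with the jammer's power added to the noise.
    return 0.5 * math.log2(1 + P / (sigma2 + jammer_power))
```

For example, with P = 3, jammer power 0.5, and noise variance 0.5, the effective SNR is 3 and the rate is 1 bit per channel use.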
Lattice Erasure Codes of Low Rank with Noise Margins
We consider the following generalization of an MDS code for
application to an erasure channel with additive noise. Like an MDS code, our
code is required to be decodable, in the absence of noise, from any set of
received symbols whose size equals the code's rank.
of noise. In addition, we require that the noise margin for every allowable
erasure pattern be as large as possible and that the code satisfy a power
constraint. In this paper we derive performance bounds and present a few
designs for low rank lattice codes for an additive noise channel with erasures.
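The notion of a worst-case noise margin over erasure patterns can be illustrated with a toy rank-1 lattice code. This is our own minimal example, not a construction from the paper: codewords are integer multiples of a single generator vector, so after puncturing the erased coordinates the minimum distance is the norm of the surviving part of the generator, and the noise margin is half of that.

```python
import itertools
import math

def noise_margin_rank1(g, num_erasures):
    """Hedged illustration (not the paper's design): worst-case noise margin
    of the rank-1 lattice code {a * g : a integer} when any num_erasures
    coordinates of the codeword may be erased.

    For each erasure pattern, the punctured code's minimum distance is the
    Euclidean norm of the surviving entries of g; the margin is half the
    smallest such distance over all patterns.
    """
    n = len(g)
    worst = math.inf
    for erased in itertools.combinations(range(n), num_erasures):
        kept = [g[i] for i in range(n) if i not in erased]
        worst = min(worst, math.sqrt(sum(x * x for x in kept)))
    return worst / 2.0
```

For the generator (3, 4, 0) with one erasure, the worst pattern erases the last coordinate's complementary mass, leaving norm 3, so the margin is 1.5. A good design would spread the generator's energy so that no single erasure pattern leaves a short surviving vector, subject to the power constraint.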
Universal decoders for channels with memory
Meir Feder and Amos Lapidoth. Caption title. Includes bibliographical references (p. 14-15).
Near-Optimal Algorithms for Differentially-Private Principal Components
Principal components analysis (PCA) is a standard tool for identifying good
low-dimensional approximations to data in high dimension. Many data sets of
interest contain private or sensitive information about individuals. Algorithms
which operate on such data should be sensitive to the privacy risks in
publishing their outputs. Differential privacy is a framework for developing
tradeoffs between privacy and the utility of these outputs. In this paper we
investigate the theory and empirical performance of differentially private
approximations to PCA and propose a new method which explicitly optimizes the
utility of the output. We show that the sample complexity of the proposed
method differs from the existing procedure in the scaling with the data
dimension, and that our method is nearly optimal in terms of this scaling. We
furthermore illustrate our results, showing that on real data there is a large
performance gap between the existing method and our method.
Comment: 37 pages, 8 figures; final version to appear in the Journal of
Machine Learning Research, preliminary version was at NIPS 2012
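To ground the setup, here is a minimal sketch of the input-perturbation style of differentially private PCA that the abstract compares against: add symmetric Gaussian noise to the empirical second-moment matrix, then take its top eigenvectors. This is an illustrative baseline, not the paper's proposed method; the function name and the noise calibration below are our own assumptions.

```python
import numpy as np

def private_pca_baseline(X, k, epsilon, delta, rng=None):
    """Hedged sketch of a noisy-covariance baseline for differentially
    private PCA (illustrative; not the paper's proposed algorithm, and the
    Gaussian-mechanism calibration here is a plausible choice, not the
    paper's). Rows of X are assumed to have Euclidean norm at most 1."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    A = X.T @ X / n  # empirical second-moment matrix
    # Replacing one row changes A by at most O(1/n) in Frobenius norm,
    # so scale Gaussian noise accordingly (illustrative calibration).
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / (n * epsilon)
    noise = rng.normal(scale=sigma, size=(d, d))
    noise = (noise + noise.T) / 2.0  # symmetrize so the spectrum stays real
    eigvals, eigvecs = np.linalg.eigh(A + noise)
    return eigvecs[:, -k:]  # top-k principal directions (ascending order)
```

The utility question the abstract studies is how large n must be, as a function of the dimension d, before the returned subspace is close to the true top-k subspace; the paper's contribution is a method with better scaling in d than this kind of baseline.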