Generalized List Decoding
This paper concerns itself with the question of list decoding for general
adversarial channels, e.g., bit-flip (XOR) channels, erasure channels, AND
(Z-) channels, OR channels, real adder channels, noisy typewriter channels,
etc. We precisely characterize when exponential-sized (or positive rate)
(L-1)-list decodable codes (where the list size L is a universal constant)
exist for such channels. Our criterion asserts that:
"For any given general adversarial channel, it is possible to construct
positive rate (L-1)-list decodable codes if and only if the set of completely
positive tensors of order-L with admissible marginals is not entirely
contained in the order-L confusability set associated to the channel."
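One way to render the quoted criterion symbolically (the notation below is ours, introduced for illustration; the paper's own symbols may differ): writing CP_L for the set of order-L completely positive tensors with admissible marginals and K_L(W) for the order-L confusability set of the channel W,

\[
\exists\ \text{positive-rate } (L-1)\text{-list decodable codes for } W
\iff
\mathcal{CP}_L \not\subseteq \mathcal{K}_L(W).
\]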
The sufficiency is shown via random code construction (combined with
expurgation or time-sharing). The necessity is shown by
1. extracting equicoupled subcodes (generalization of equidistant code) from
any large code sequence using hypergraph Ramsey's theorem, and
2. significantly extending the classic Plotkin bound in coding theory to list
decoding for general channels using duality between the completely positive
tensor cone and the copositive tensor cone. In the proof, we also obtain a new
fact regarding asymmetry of joint distributions, which may be of independent
interest.
Other results include
1. List decoding capacity with asymptotically large list size L for general
adversarial channels;
2. A tight list size bound for most constant composition codes
(generalization of constant weight codes);
3. Rederivation and demystification of Blinovsky's [Bli86] characterization
of the list decoding Plotkin points (threshold at which large codes are
impossible);
4. Evaluation of general bounds ([WBBJ]) for unique decoding in the error
correction code setting.
Vowel Harmony Viewed as Error-Correcting Code
Robustness reduces the risk of information loss. At present the notion of error-correcting codes (ECCs) is used to achieve robustness in technical fields only. Viewing fault-tolerant natural systems as systems equipped with error-correcting codes permits a formal comparison of natural and technical robustness.
Taking natural language (NL) as an example, we show differences between technical and natural error-correcting approaches. By picking a specific grammatical phenomenon which some NLs exhibit – vowel harmony (VH) – we show that (1) VH can be formalized as an ECC and (2) VH adds to the robustness of its NL. We provide empirical as well as formal evidence for this fact. (3) Consequently, the example of VH shows that the notion of an ECC serves as a suitable formal model not only for technical but also for natural robustness.
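The detection-and-correction view of vowel harmony can be sketched in a few lines. The following is a toy model of two-class (front/back) backness harmony in a Turkish-like language; the vowel classes follow the standard Turkish inventory, but the word forms used in the example are invented for illustration and are not claimed to be attested vocabulary or the authors' formalization.

```python
# Toy model: backness harmony as an error-detecting/correcting code.
# A harmonic word draws all its vowels from ONE class, like a repetition code.
FRONT = set("eiöü")
BACK = set("aıou")
TO_BACK = {"e": "a", "i": "ı", "ö": "o", "ü": "u"}  # front -> back counterpart
TO_FRONT = {v: k for k, v in TO_BACK.items()}       # back -> front counterpart

def vowels(word):
    return [ch for ch in word if ch in FRONT | BACK]

def harmony_ok(word):
    """Detection: a harmonic word uses vowels from only one class."""
    vs = vowels(word)
    return all(v in FRONT for v in vs) or all(v in BACK for v in vs)

def correct(word):
    """Correction by majority vote: map minority-class vowels to their
    counterparts in the majority class (ties are left unchanged)."""
    vs = vowels(word)
    back = sum(v in BACK for v in vs)
    front = len(vs) - back
    if back == front:
        return word
    table = TO_BACK if back > front else TO_FRONT
    return "".join(table.get(ch, ch) for ch in word)
```

For instance, `harmony_ok("masalar")` holds, the corrupted form `"masaler"` is flagged, and majority voting over its three vowels recovers `"masalar"` – exactly the detect-then-correct behavior of a repetition code.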
Zero-Rate Thresholds and New Capacity Bounds for List-Decoding and List-Recovery
In this work we consider the list-decodability and list-recoverability of arbitrary q-ary codes, for all integer values of q ≥ 2. A code is called (p,L)_q-list-decodable if every radius pn Hamming ball contains less than L codewords; (p,ℓ,L)_q-list-recoverability is a generalization where we place radius pn Hamming balls on every point of a combinatorial rectangle with side length ℓ and again stipulate that there be less than L codewords.
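The (p,L)_q-list-decodability condition above can be verified by exhaustive search for toy parameters. The sketch below is our own brute-force check, not from the paper, and is exponential in the block length n, so it is only meaningful for tiny examples.

```python
from itertools import product

def is_list_decodable(code, p, L, q):
    """Brute-force check of (p, L)_q-list-decodability: every Hamming ball
    of radius p*n in [q]^n must contain fewer than L codewords.
    Exponential in n -- illustrative use only."""
    n = len(next(iter(code)))
    radius = p * n
    for center in product(range(q), repeat=n):
        in_ball = sum(
            1 for c in code
            if sum(a != b for a, b in zip(c, center)) <= radius
        )
        if in_ball >= L:
            return False
    return True
```

For example, the binary repetition code {000, 111} is (1/3, 2)_2-list-decodable (its codewords are at distance 3, so no radius-1 ball can contain both) but not (1/3, 1)_2-list-decodable, since each codeword lies in its own ball.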
Our main contribution is to precisely calculate the maximum value of p for which there exist infinite families of positive rate (p,ℓ,L)_q-list-recoverable codes, the quantity we call the zero-rate threshold. Denoting this value by p_*, we in fact show that codes correcting a p_* + ε fraction of errors must have size O_ε(1), i.e., independent of n. Such a result is typically referred to as a "Plotkin bound." To complement this, a standard random code with expurgation construction shows that there exist positive rate codes correcting a p_* − ε fraction of errors. We also follow a classical proof template (typically attributed to Elias and Bassalygo) to derive from the zero-rate threshold other tradeoffs between rate and decoding radius for list-decoding and list-recovery.
Technically, proving the Plotkin bound boils down to demonstrating the Schur convexity of a certain function defined on the q-simplex as well as the convexity of a univariate function derived from it. We remark that an earlier argument claimed similar results for q-ary list-decoding; however, we point out that this earlier proof is flawed.
Multiple Packing: Lower Bounds via Infinite Constellations
We study the problem of high-dimensional multiple packing in Euclidean space.
Multiple packing is a natural generalization of sphere packing and is defined
as follows. Let N > 0 and let L ≥ 2 be an integer. A multiple packing is a
set C of points in R^n such that any point in R^n lies in the intersection of at most L − 1 balls of radius √(nN) around points in C. Given a well-known connection
with coding theory, multiple packings can be viewed as the Euclidean analog of
list-decodable codes, which are well-studied for finite fields. In this paper,
we derive the best known lower bounds on the optimal density of list-decodable
infinite constellations for constant L under a stronger notion called
average-radius multiple packing. To this end, we apply tools from
high-dimensional geometry and large deviation theory.

Comment: The paper arXiv:2107.05161 has been split into three parts with new
results added and significant revision. This paper is one of the three parts.
The other two are arXiv:2211.04408 and arXiv:2211.0440
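The multiple-packing condition above says that no point of R^n is within radius r of L or more points of C; the average-radius strengthening replaces the worst-case center by the centroid of each L-subset. A brute-force finite check of one common squared-distance variant of the average-radius condition (our own illustrative sketch, not the paper's construction) looks like this:

```python
import itertools

def avg_radius_ok(points, r, L):
    """Check an average-radius multiple packing condition by brute force:
    for every subset of L points, the average squared distance to the
    subset's centroid must exceed r**2.  (The centroid minimizes average
    squared distance, so it is the worst-case center for this criterion.)"""
    for subset in itertools.combinations(points, L):
        dim = len(subset[0])
        centroid = [sum(p[i] for p in subset) / L for i in range(dim)]
        avg_sq = sum(
            sum((p[i] - centroid[i]) ** 2 for i in range(dim))
            for p in subset
        ) / L
        if avg_sq <= r ** 2:
            return False
    return True
```

On the line, the points {0, 4, 8} pass this check with L = 2 for any r < 2 but fail at r = 2, since the pair {0, 4} has average squared distance exactly 4 from its centroid.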
Multiple Packing: Lower and Upper Bounds
We study the problem of high-dimensional multiple packing in Euclidean space.
Multiple packing is a natural generalization of sphere packing and is defined
as follows. Let N > 0 and let L ≥ 2 be an integer. A multiple packing is a
set C of points in R^n such that any point in R^n lies in the intersection of at most L − 1 balls of radius √(nN) around points in C. We study the multiple packing
problem for both bounded point sets whose points have norm at most √(nP)
for some constant P > 0 and unbounded point sets whose points are allowed to be
anywhere in R^n. Given a well-known connection with coding theory,
multiple packings can be viewed as the Euclidean analog of list-decodable
codes, which are well-studied for finite fields. In this paper, we derive
various bounds on the largest possible density of a multiple packing in both
bounded and unbounded settings. A related notion called average-radius multiple
packing is also studied. Some of our lower bounds exactly pin down the
asymptotics of certain ensembles of average-radius list-decodable codes, e.g.,
(expurgated) Gaussian codes and (expurgated) spherical codes. In particular,
our lower bound obtained from spherical codes is the best known lower bound on
the optimal multiple packing density and is the first lower bound that
approaches the known large-L limit under the average-radius notion of
multiple packing. To derive these results, we apply tools from high-dimensional
geometry and large deviation theory.

Comment: The paper arXiv:2107.05161 has been split into three parts with new
results added and significant revision. This paper is one of the three parts.
The other two are arXiv:2211.04408 and arXiv:2211.0440
Multiple Packing: Lower Bounds via Error Exponents
We derive lower bounds on the maximal rates for multiple packings in
high-dimensional Euclidean spaces. Multiple packing is a natural generalization
of the sphere packing problem. For any N > 0 and integer L ≥ 2, a
multiple packing is a set C of points in R^n such that
any point in R^n lies in the intersection of at most L − 1 balls
of radius √(nN) around points in C. We study this problem
for both bounded point sets whose points have norm at most √(nP) for some
constant P > 0 and unbounded point sets whose points are allowed to be anywhere
in R^n. Given a well-known connection with coding theory, multiple
packings can be viewed as the Euclidean analog of list-decodable codes, which
are well-studied for finite fields. We derive the best known lower bounds on
the optimal multiple packing density. This is accomplished by establishing a
curious inequality which relates the list-decoding error exponent for additive
white Gaussian noise channels, a quantity of average-case nature, to the
list-decoding radius, a quantity of worst-case nature. We also derive various
bounds on the list-decoding error exponent in both bounded and unbounded
settings which are of independent interest beyond multiple packing.

Comment: The paper arXiv:2107.05161 has been split into three parts with new
results added and significant revision. This paper is one of the three parts.
The other two are arXiv:2211.04407 and arXiv:2211.0440
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Information-theoretic limitations of distributed information processing
In a generic distributed information processing system, a number of agents connected by communication channels aim to accomplish a task collectively through local communications. The fundamental limits of distributed information processing problems depend not only on the intrinsic difficulty of the task, but also on the communication constraints due to the distributedness. In this thesis, we reveal these dependencies quantitatively under information-theoretic frameworks.
We consider three typical distributed information processing problems: decentralized parameter estimation, distributed function computation, and statistical learning under adaptive composition. For the first two problems, we derive converse results on the Bayes risk and the computation time, respectively. For the last problem, we first study the relationship between the generalization capability of a learning algorithm and its stability property measured by the mutual information between its input and output, and then derive achievability results on the generalization error of adaptively composed learning algorithms. In all cases, we obtain general results on the fundamental limits with respect to a general model of the problem, so that the results can be applied to various specific scenarios. Our information-theoretic analyses also provide general approaches to inferring global properties of a distributed information processing system from local properties of its components.
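A representative result of the input–output mutual information flavor mentioned above, stated here as it commonly appears in this line of work (not necessarily in the thesis's exact form): if the loss function is σ-sub-Gaussian, then an algorithm mapping an n-sample dataset S to a hypothesis W satisfies

\[
\bigl|\mathbb{E}[\mathrm{gen}(S, W)]\bigr|
\le
\sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
\]

so an algorithm that leaks little information about its input (small I(S; W)) is guaranteed to generalize well.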