Sharper Upper Bounds for Unbalanced Uniquely Decodable Code Pairs
Two sets A, B ⊆ {0,1}^n form a Uniquely Decodable Code Pair (UDCP) if every pair a ∈ A, b ∈ B yields a distinct sum a + b, where the addition is over Z^n. We show that every UDCP A, B, with |A| = 2^{(1−ε)n} and |B| = 2^{βn}, satisfies β ≤ 0.4228 + √ε. For sufficiently small ε, this bound significantly improves previous bounds by Urbanke and Li [Information Theory Workshop '98] and Ordentlich and Shayevitz [2014, arXiv:1412.8415], which upper bound β by 0.4921 and 0.4798, respectively, as ε approaches 0.
Comment: 11 pages; to appear at ISIT 2016
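To make the definition concrete, here is a brute-force check of the UDCP property, a minimal hypothetical Python sketch (not code from the paper): two sets of 0-1 vectors form a UDCP exactly when all |A|·|B| coordinate-wise integer sums are distinct.

```python
from itertools import product

def is_udcp(A, B):
    """Brute-force UDCP check: every pair (a, b) in A x B must
    yield a distinct coordinate-wise sum a + b over the integers."""
    sums = set()
    for a, b in product(A, B):
        s = tuple(x + y for x, y in zip(a, b))  # addition over Z^n
        if s in sums:
            return False  # two pairs collide: sums are not uniquely decodable
        sums.add(s)
    return True

# Example with n = 2: the receiver can recover (a, b) from a + b.
A = [(0, 0), (1, 1)]
B = [(0, 0), (0, 1), (1, 0)]
print(is_udcp(A, B))  # True: all 6 sums are distinct
```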
Zero-error communication over adder MAC
The adder MAC is a simple noiseless multiple-access channel (MAC), where if h users send messages x_1, …, x_h ∈ {0,1}^n, then the receiver receives x_1 + … + x_h, with addition over Z. Communication over the noiseless adder MAC has been studied for more than fifty years. There are two models of particular interest: uniquely decodable code tuples, and B_h-codes. In spite of the similarities between these two models, the lower and upper bounds on the optimal sum rate of uniquely decodable code tuples asymptotically match as the number of users goes to infinity, while there is a gap of a factor of two between the lower and upper bounds on the optimal rate of B_h-codes. The best currently known B_h-codes for h ≥ 2 are constructed using random coding. In this work, we study variants of the random coding method and related problems, in the hope of achieving B_h-codes with better rate. Our contributions include the following. (1) We prove that changing the underlying distribution used in random coding cannot improve the rate. (2) We determine the rate of a list-decoding version of B_h-codes achieved by the random coding method. (3) We study several related problems about Rényi entropy.
Comment: An updated version of the author's master thesis
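As an illustration of the channel model and the B_h property, the following minimal Python sketch (our own, not from the thesis) computes the adder-MAC output and brute-forces the B_h check for a small codebook:

```python
from itertools import combinations_with_replacement

def adder_mac_output(words):
    """Noiseless adder MAC: the receiver sees the coordinate-wise
    integer sum of the transmitted binary words."""
    return tuple(sum(bits) for bits in zip(*words))

def is_bh_code(C, h):
    """Brute-force B_h check: all multisets of h codewords from C must
    give distinct channel outputs, enabling zero-error decoding."""
    seen = set()
    for tup in combinations_with_replacement(C, h):
        out = adder_mac_output(tup)
        if out in seen:
            return False
        seen.add(out)
    return True

# Toy B_2 code of length 3: all 6 pairwise (multiset) sums are distinct.
C = [(0, 0, 0), (0, 1, 1), (1, 0, 1)]
print(adder_mac_output([(0, 1, 1), (1, 0, 1)]))  # (1, 1, 2)
print(is_bh_code(C, 2))                          # True
```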
A Note on the Probability of Rectangles for Correlated Binary Strings
Consider two sequences of n independent and identically distributed fair coin tosses, X = (X_1, …, X_n) and Y = (Y_1, …, Y_n), which are ρ-correlated for each j, i.e. P[X_j = Y_j] = (1 + ρ)/2. We study the question of how large (small) the probability P[X ∈ A, Y ∈ B] can be among all sets A, B ⊆ {0,1}^n of a given cardinality. For sets of size |A| = |B| = 2^{Θ(n)} it is well known that the largest (smallest) probability is approximately attained by concentric (anti-concentric) Hamming balls, and this can be proved via the hypercontractive inequality (reverse hypercontractivity). Here we consider the case of |A| = |B| = 2^{o(n)}. By applying a recent extension of the hypercontractive inequality of Polyanskiy-Samorodnitsky (J. Functional Analysis, 2019), we show that Hamming balls of the same size approximately maximize P[X ∈ A, Y ∈ B] in the regime of ρ → 1. We also prove a similar tight lower bound, i.e. show that for ρ → 1 the pair of opposite Hamming balls approximately minimizes the probability P[X ∈ A, Y ∈ B].
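The correlation model is easy to simulate. Below is a small Monte Carlo sketch in Python (illustrative only, not from the paper) that samples ρ-correlated strings and estimates P[X ∈ A, Y ∈ B] when A = B is a Hamming ball:

```python
import random

def sample_correlated_pair(n, rho):
    """X is n fair coin flips; each Y_j equals X_j with probability
    (1 + rho) / 2, independently, so P[X_j = Y_j] = (1 + rho) / 2."""
    X = [random.randint(0, 1) for _ in range(n)]
    Y = [x if random.random() < (1 + rho) / 2 else 1 - x for x in X]
    return X, Y

def in_ball(x, center, radius):
    """Hamming ball membership: distance from center at most radius."""
    return sum(a != b for a, b in zip(x, center)) <= radius

# Monte Carlo estimate of P[X in A, Y in B] for a ball around 0^n.
n, rho, radius, trials = 20, 0.5, 4, 100_000
center = [0] * n
hits = sum(
    in_ball(X, center, radius) and in_ball(Y, center, radius)
    for X, Y in (sample_correlated_pair(n, rho) for _ in range(trials))
)
print(hits / trials)
```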
Quickest Sequence Phase Detection
A phase detection sequence is a length-N cyclic sequence, such that the location of any length-K contiguous subsequence can be determined from a noisy observation of that subsequence. In this paper, we derive bounds on the minimal possible K in the limit of N → ∞, and describe some sequence constructions. We further consider multiple phase detection sequences, where the location of any length-K contiguous subsequence of each sequence can be determined simultaneously from a noisy mixture of those subsequences. We study the optimal trade-offs between the lengths of the sequences, and describe some sequence constructions. We compare these phase detection problems to their natural channel coding counterparts, and show a strict separation between the fundamental limits in the multiple-sequence case. Both adversarial and probabilistic noise models are addressed.
Comment: To appear in the IEEE Transactions on Information Theory
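In the noiseless special case, a classical de Bruijn sequence already has the phase-detection property: every length-K window occurs exactly once per cycle, so its content determines its location. A brief Python illustration of that baseline (our own sketch, not one of the paper's constructions):

```python
def de_bruijn(alphabet_size, order):
    """Classic FKM construction of a de Bruijn sequence: a cyclic
    sequence of length alphabet_size**order in which every word of
    length `order` occurs exactly once."""
    a = [0] * (alphabet_size * order)
    seq = []

    def db(t, p):
        if t > order:
            if order % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, alphabet_size):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 4)                  # binary, every 4-window unique
doubled = s + s[:3]                  # unroll the cycle for window reads
window = doubled[5:9]
locations = [i for i in range(len(s)) if doubled[i:i + 4] == window]
print(locations)                     # exactly one match: [5]
```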
Faster space-efficient algorithms for Subset Sum, k-Sum, and related problems
We present randomized algorithms that solve subset sum and knapsack instances with n items in O∗(2^{0.86n}) time, where the O∗(·) notation suppresses factors polynomial in the input size, and polynomial space, assuming random read-only access to exponentially many random bits. These results can be extended to solve binary integer programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits.

Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2)-time algorithm if no value occurs too often in the same list.
Faster Space-Efficient Algorithms for Subset Sum, k-Sum and Related Problems
We present space-efficient Monte Carlo algorithms that solve Subset Sum and Knapsack instances with n items using O∗(2^{0.86n}) time and polynomial space, where the O∗(·) notation suppresses factors polynomial in the input size. Both algorithms assume random read-only access to random bits. Modulo this mild assumption, this resolves a long-standing open problem in exact algorithms for NP-hard problems. These results can be extended to solve Binary Linear Programming on n variables with few constraints in a similar running time. We also show that for any constant k ≥ 2, random instances of k-Sum can be solved using O(n^{k−0.5} polylog(n)) time and O(log n) space, without the assumption of random access to random bits. Underlying these results is an algorithm that determines whether two given lists of length n with integers bounded by a polynomial in n share a common value. Assuming random read-only access to random bits, we show that this problem can be solved using O(log n) space significantly faster than the trivial O(n^2)-time algorithm if no value occurs too often in the same list.
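The common-value problem underlying both versions of this paper sits between two simple baselines, sketched below in Python for orientation (illustrative only; the paper's O(log n)-space algorithm is more involved and relies on random read-only bits):

```python
def common_value_quadratic(xs, ys):
    """Trivial O(n^2)-time check using only O(1) extra words of space."""
    return any(x == y for x in xs for y in ys)

def common_value_hashing(xs, ys):
    """O(n) expected time, but O(n) extra space; the paper's
    O(log n)-space algorithm improves on this time/space tradeoff."""
    return not set(xs).isdisjoint(ys)

xs, ys = [3, 1, 4, 1, 5], [9, 2, 6, 5, 3]
print(common_value_quadratic(xs, ys), common_value_hashing(xs, ys))  # True True
```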
Sharper Upper Bounds for Unbalanced Uniquely Decodable Code Pairs
Two sets of 0-1 vectors of fixed length form a uniquely decodable code pair if their Cartesian product is of the same size as their sumset, where the addition is pointwise over the integers. For the size of the sumset of such a pair, van Tilborg has given an upper bound in the general case. Urbanke and Li, and later Ordentlich and Shayevitz, have given better bounds in the unbalanced case, that is, when either of the two sets is sufficiently large. Improvements to the latter bounds are presented.
Multipermutation Codes in the Ulam Metric for Nonvolatile Memories
We address the problem of multipermutation code design in the Ulam metric for novel storage applications. Multipermutation codes are suitable for flash memory, where cell charges may share the same rank. Changes in the charges of cells manifest themselves as errors whose effects on the retrieved signal may be measured via the Ulam distance. As part of our analysis, we study multipermutation codes in the Hamming metric, known as constant composition codes. We then present bounds on the size of multipermutation codes and their capacity, for both the Ulam and the Hamming metrics. Finally, we present constructions and accompanying decoders for multipermutation codes in the Ulam metric.
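For sequences that are rearrangements of one another, the Ulam (translocation) distance equals the sequence length minus the length of a longest common subsequence, since every symbol outside an LCS must be moved and moving them suffices. A short illustrative Python sketch (not code from the paper), applying this to multipermutations with repeated ranks:

```python
def lcs_length(s, t):
    """Longest common subsequence length via dynamic programming."""
    dp = [[0] * (len(t) + 1) for _ in range(len(s) + 1)]
    for i, a in enumerate(s):
        for j, b in enumerate(t):
            dp[i + 1][j + 1] = dp[i][j] + 1 if a == b else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(s)][len(t)]

def ulam_distance(s, t):
    """Minimum number of 'delete one symbol, reinsert anywhere' moves
    (translocations) turning s into t: len(s) - LCS(s, t)."""
    return len(s) - lcs_length(s, t)

# Two multipermutations of the multiset {1,1,2,2,3,3}; repeated ranks
# model flash cells whose charges share the same level.
print(ulam_distance([1, 1, 2, 2, 3, 3], [2, 1, 1, 2, 3, 3]))  # 1
```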
Some new developments in image compression
This study is divided into two parts. The first part involves an investigation of near-lossless compression of digitized images using the entropy-coded DPCM method with a large number of quantization levels. Through the investigation, a new scheme that combines both lossy and lossless DPCM methods into a common framework is developed. This new scheme uses known results on the design of predictors and quantizers that incorporate properties of human visual perception. In order to enhance the compression performance of the scheme, an adaptively generated source model with multiple contexts is employed for the coding of the quantized prediction errors, rather than a memoryless model as in the conventional DPCM method. Experiments show that the scheme can provide compression ratios in the range from 4 to 11 with a peak SNR of about 50 dB for 8-bit medical images. Also, the use of multiple contexts is found to improve compression performance by about 25% to 35%.

The second part of the study is devoted to the problem of lossy image compression using tree-structured vector quantization. As a result of the study, a new design method for codebook generation is developed together with four different implementation algorithms. In the new method, an unbalanced tree-structured vector codebook is designed in a greedy fashion under the constraint of a rate-distortion trade-off, which can then be used to implement a variable-rate compression system. From experiments, it is found that the new method can achieve very good rate-distortion performance while being computationally efficient. Also, due to the tree structure of the codebook, the new method is amenable to progressive transmission applications.
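The near-lossless DPCM idea in the first part can be illustrated with a minimal 1-D encode/decode loop using a previous-sample predictor and a uniform quantizer, a hypothetical Python sketch (not the thesis's perceptually tuned design):

```python
def dpcm_encode(samples, step):
    """DPCM with a previous-sample predictor and a uniform quantizer.
    The encoder tracks the decoder's reconstruction, so the per-sample
    error never exceeds step / 2 (near-lossless for small steps)."""
    recon_prev, codes = 0, []
    for s in samples:
        e = s - recon_prev            # prediction error
        q = round(e / step)           # quantizer index (entropy-coded in practice)
        codes.append(q)
        recon_prev += q * step        # mirror the decoder's state
    return codes

def dpcm_decode(codes, step):
    recon, prev = [], 0
    for q in codes:
        prev += q * step
        recon.append(prev)
    return recon

samples = [100, 102, 105, 104, 110]
codes = dpcm_encode(samples, step=2)
print(dpcm_decode(codes, step=2))     # each value within +/- 1 of the input
```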