
    On the bit error rate of repeated error-correcting codes

    Classically, error-correcting codes are studied with respect to performance metrics such as minimum distance (combinatorial) or probability of bit/block error over a given stochastic channel. In this paper, a different metric is considered. It is assumed that the block code is used to repeatedly encode user data. The resulting stream is subject to adversarial noise of given power, and the decoder is required to reproduce the data with the minimal possible bit-error rate. This setup may be viewed as combinatorial joint source-channel coding. Two basic results are shown for the achievable noise-distortion tradeoff: the optimal performance for decoders that are informed of the noise power, and global bounds for decoders operating in complete oblivion (with respect to noise level). General results are applied to the Hamming [7, 4, 3] code, for which it is demonstrated (among other things) that no oblivious decoder exists that attains optimality for all noise levels simultaneously. (National Science Foundation (U.S.) Grant CCF-13-18620)
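    As a concrete illustration of the code discussed above, the sketch below encodes 4 data bits with the Hamming [7, 4, 3] code and decodes by brute-force nearest-codeword search. This is plain minimum-distance decoding, not the paper's noise-informed or oblivious decoders, and the systematic parity layout is one conventional choice, not taken from the paper.

```python
from itertools import product

def hamming74_encode(d):
    """Encode 4 data bits with the Hamming [7, 4, 3] code (systematic form)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

# All 16 codewords, indexed by codeword tuple -> data bits.
CODEBOOK = {tuple(hamming74_encode(list(d))): d for d in product([0, 1], repeat=4)}

def decode_nearest(r):
    """Minimum-distance decoding: data bits of the codeword closest to r."""
    best = min(CODEBOOK, key=lambda c: sum(a != b for a, b in zip(c, r)))
    return list(CODEBOOK[best])

data = [1, 0, 1, 1]
c = hamming74_encode(data)
r = c[:]
r[2] ^= 1                     # one flipped bit: within the unique-decoding radius
assert decode_nearest(r) == data
```

Since the minimum distance is 3, any single bit flip is corrected; with two or more adversarial flips, nearest-codeword decoding may return the wrong data, which is exactly the regime the paper's noise-distortion tradeoff quantifies.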

    On the String Consensus Problem and the Manhattan Sequence Consensus Problem

    In the Manhattan Sequence Consensus problem (MSC problem) we are given k integer sequences, each of length l, and we are to find an integer sequence x of length l (called a consensus sequence) such that the maximum Manhattan distance of x from each of the input sequences is minimized. For binary sequences Manhattan distance coincides with Hamming distance, hence in this case the string consensus problem (also called the string center problem or closest string problem) is a special case of MSC. Our main result is a practically efficient O(l)-time algorithm solving MSC for k ≤ 5 sequences; its practicality has been verified experimentally. It improves upon the quadratic algorithm by Amir et al. (SPIRE 2012) for the string consensus problem for k = 5 binary strings. As in Amir et al.'s algorithm, we use a column-based framework, but we replace the implied general integer linear programming by its easy special cases, exploiting combinatorial properties of MSC for k ≤ 5. We also show that for a general parameter k any instance can be reduced in linear time to a kernel of size k!, so the problem is fixed-parameter tractable. Nevertheless, for k ≥ 4 this kernel is still too large for any naive solution to be feasible in practice. Comment: accepted to SPIRE 201
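    A naive reference implementation makes the MSC objective concrete. The brute force below is exponential in l and is in no way the paper's O(l) algorithm; it relies only on the standard observation that some optimal consensus has each coordinate inside the coordinate-wise min/max box of the inputs.

```python
from itertools import product

def manhattan(x, s):
    """Manhattan (L1) distance between two equal-length integer sequences."""
    return sum(abs(a - b) for a, b in zip(x, s))

def msc_brute_force(seqs):
    """Exhaustive MSC: try every candidate x whose j-th coordinate lies
    between the min and max of the inputs' j-th coordinates (moving a
    coordinate into that interval never increases any distance, so an
    optimal consensus exists in the box). Illustration only."""
    l = len(seqs[0])
    ranges = [range(min(s[j] for s in seqs), max(s[j] for s in seqs) + 1)
              for j in range(l)]
    best_x, best_r = None, float("inf")
    for x in product(*ranges):
        r = max(manhattan(x, s) for s in seqs)
        if r < best_r:
            best_x, best_r = list(x), r
    return best_x, best_r
```

On the binary instance [[0, 1], [1, 0]] the returned radius equals the Hamming consensus radius 1, matching the special-case relationship stated in the abstract.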

    On adversarial joint source channel coding

    In a joint source-channel coding scheme, a single mapping performs both the tasks of data compression and channel coding in a combined way, rather than performing them separately. Usually, for simple iid sources and channels, separation of the tasks is information-theoretically optimal. In an adversarial joint source-channel coding scenario, instead of a stochastic channel, an adversary introduces a bounded number of errors/erasures. It has been shown recently that, even in the simplest cases of such adversarial models, separation is suboptimal and characterizing the fundamental limits is difficult. In this paper, we study several properties of such adversarial joint source-channel schemes. We show optimality of separation in some situations, provide simple joint schemes that beat separation in others, and give new bounds on the rate of such coding.

    Hilbert geometry of the Siegel disk: The Siegel-Klein disk model

    We study the Hilbert geometry induced by the Siegel disk domain, an open bounded convex set of complex square matrices of operator norm strictly less than one. This Hilbert geometry yields a generalization of the Klein disk model of hyperbolic geometry, henceforth called the Siegel-Klein disk model to differentiate it from the classical Siegel upper plane and disk domains. In the Siegel-Klein disk, geodesics are by construction always unique and Euclidean straight, allowing one to design efficient geometric algorithms and data structures from computational geometry. For example, we show how to approximate the smallest enclosing ball of a set of complex square matrices in the Siegel disk domains: we compare two generalizations of the iterative core-set algorithm of Badoiu and Clarkson (BC) in the Siegel-Poincar\'e disk and in the Siegel-Klein disk, and we demonstrate that geometric computing in the Siegel-Klein disk allows one (i) to bypass the time-costly recentering operations to the disk origin required at each iteration of the BC algorithm in the Siegel-Poincar\'e disk model, and (ii) to compute fast numerical approximations of the Siegel-Klein distance with guaranteed lower and upper bounds derived from nested Hilbert geometries. Comment: 42 pages, 7 figures
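    For readers unfamiliar with it, the Euclidean version of the Badoiu-Clarkson core-set iteration that the abstract generalizes can be sketched as follows; the Siegel-Poincar\'e and Siegel-Klein variants replace Euclidean distances and straight-line interpolation with their geodesic counterparts, which is where the recentering cost discussed above arises.

```python
import math

def farthest(points, c):
    """Point of the set farthest (in Euclidean distance) from center c."""
    return max(points, key=lambda p: math.dist(p, c))

def bc_meb(points, iters=1000):
    """Badoiu-Clarkson iteration for the Euclidean minimum enclosing ball.
    Start at any input point; at step i, move the center a 1/(i+1) fraction
    toward the current farthest point. After t steps the center is within
    O(1/sqrt(t)) of optimal (relative to the optimal radius)."""
    c = list(points[0])
    for i in range(1, iters + 1):
        p = farthest(points, c)
        c = [ci + (pi - ci) / (i + 1) for ci, pi in zip(c, p)]
    radius = math.dist(c, farthest(points, c))
    return c, radius
```

The appeal of the Siegel-Klein model in the abstract is precisely that this update stays a simple interpolation along a Euclidean-straight geodesic, with no per-iteration recentering to the disk origin.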

    On Chebyshev Radius of a Set in Hamming Space and the Closest String Problem

    The Chebyshev radius of a set in a metric space is defined to be the radius of the smallest ball containing the set. This quantity is closely related to the covering radius of the set and, in particular for sets in Hamming space, is extensively studied in computational biology. This paper investigates some basic properties of radii of sets in n-dimensional Hamming space, provides a linear programming relaxation, and gives tight bounds on the integrality gap. This results in a simple polynomial-time approximation algorithm that attains the performance of the best known such algorithms with a shorter running time.
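    For intuition, the Chebyshev radius of a small set of binary strings can be computed exactly by exhaustive search over all 2^n candidate centers. This exponential brute force is for illustration of the definition only, in contrast to the polynomial-time LP-based approximation the abstract describes.

```python
from itertools import product

def hamming(a, b):
    """Hamming distance between two equal-length binary tuples."""
    return sum(x != y for x, y in zip(a, b))

def chebyshev_radius(strings):
    """Exact Chebyshev radius of a set in n-dimensional Hamming space:
    minimize, over all 2^n candidate centers, the maximum Hamming
    distance to the set. Exponential brute force -- illustration only."""
    n = len(strings[0])
    return min(max(hamming(c, s) for s in strings)
               for c in product([0, 1], repeat=n))
```

Note that the minimizing center need not belong to the set, which is what separates the Chebyshev radius from the closest-string-within-the-set variant of the problem.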