
    On Pseudocodewords and Improved Union Bound of Linear Programming Decoding of HDPC Codes

    In this paper, we present an improved union bound on the Linear Programming (LP) decoding performance of binary linear codes transmitted over an additive white Gaussian noise channel. The bounding technique is based on a second-order Bonferroni-type inequality from probability theory, and the bound is minimized using Prim's minimum spanning tree algorithm. The bound calculation requires the fundamental cone generators of a given parity-check matrix rather than only their weight spectrum, yet involves relatively low computational complexity. It is targeted at high-density parity-check (HDPC) codes, where the number of generators is extremely large and the generators are spread densely in Euclidean space. We explore the generator density and compare different parity-check matrix representations; this density affects the improvement of the proposed bound over the conventional LP union bound. The paper also presents a complete pseudo-weight distribution of the fundamental cone generators for the BCH[31,21,5] code.
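
    The second-order Bonferroni bound described here is the classical Hunter-type bound: the union probability is upper-bounded by the sum of the single-event terms minus the pairwise terms collected along a spanning tree, and the bound is tightest for the maximum-weight spanning tree of the pairwise intersection probabilities, i.e., a minimum spanning tree on negated weights, as found by Prim's algorithm. A minimal Python sketch of that computation follows, with generic event probabilities standing in for the pseudocodeword error terms; the function and argument names are illustrative, not from the paper.

```python
import numpy as np

def hunter_bound(p_single, p_pair):
    """Second-order Bonferroni (Hunter-type) upper bound on P(union of A_i).

    p_single : length-n array of single-event probabilities P(A_i)
    p_pair   : (n, n) symmetric array of pairwise terms P(A_i and A_j)

    The bound  sum_i P(A_i) - sum_{(i,j) in T} P(A_i and A_j)  is tightest
    when T is a maximum-weight spanning tree of the pairwise terms; Prim's
    algorithm on negated weights finds exactly that tree.
    """
    p_single = np.asarray(p_single, dtype=float)
    p_pair = np.asarray(p_pair, dtype=float)
    n = len(p_single)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best = p_pair[0].copy()   # best[j]: heaviest edge linking j to the tree
    tree_weight = 0.0
    for _ in range(n - 1):
        # Prim step: attach the vertex with the heaviest connecting edge.
        j = int(np.argmax(np.where(in_tree, -np.inf, best)))
        tree_weight += best[j]
        in_tree[j] = True
        best = np.maximum(best, p_pair[j])
    return p_single.sum() - tree_weight
```

    In the decoding application, p_single would presumably hold the individual error-event probabilities of the fundamental cone generators and p_pair their joint error probabilities.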

    Permutation Decoding and the Stopping Redundancy Hierarchy of Cyclic and Extended Cyclic Codes

    We introduce the notion of the stopping redundancy hierarchy of a linear block code as a measure of the trade-off between performance and complexity of iterative decoding over the binary erasure channel. We derive lower and upper bounds for the stopping redundancy hierarchy via Lovász's Local Lemma and Bonferroni-type inequalities, and specialize them for codes with cyclic parity-check matrices. Based on the observed properties of parity-check matrices with good stopping redundancy characteristics, we develop a novel decoding technique, termed automorphism group decoding, that combines iterative message passing and permutation decoding. We also present bounds on the smallest number of permutations an automorphism group decoder needs in order to correct any set of erasures up to a prescribed size. Simulation results demonstrate that for a large number of algebraic codes, the performance of the new decoding method is close to that of maximum likelihood decoding.
    Comment: 40 pages, 6 figures, 10 tables, submitted to IEEE Transactions on Information Theory
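
    As a rough illustration of how iterative message passing and permutation decoding can be combined on the binary erasure channel, here is a hedged Python sketch: a peeling decoder resolves erasures one check at a time, and when it stalls on a stopping set, the received word is re-decoded under permutations drawn from the code's automorphism group (for a cyclic code, e.g., the cyclic shifts). This is a schematic of the idea under those assumptions, not the paper's algorithm; all names are illustrative.

```python
import numpy as np

def peel(H, y):
    """Iterative (peeling) decoder for the binary erasure channel.
    H: (m, n) binary parity-check matrix; y: received word with values
    0/1 and -1 marking erasures. Resolves erasures one check at a time."""
    y = y.copy()
    progress = True
    while progress and (y == -1).any():
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = idx[y[idx] == -1]
            if len(erased) == 1:                  # check with one erasure:
                known = idx[y[idx] != -1]
                y[erased[0]] = y[known].sum() % 2 # solve it by parity
                progress = True
    return y

def automorphism_group_decode(H, y, perms):
    """Re-decode y under permutations from the code's automorphism group,
    undoing the permutation on success. perms: list of index arrays,
    identity first being a natural choice."""
    for pi in perms:
        z = peel(H, y[pi])
        if not (z == -1).any():
            out = np.empty_like(z)
            out[pi] = z           # map back to the original coordinates
            return out
    return None                   # stopping set not resolved by any pi
```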

    Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control

    We introduce a framework for calibrating machine learning models so that their predictions satisfy explicit, finite-sample statistical guarantees. Our calibration algorithm works with any underlying model and (unknown) data-generating distribution and does not require model refitting. The framework addresses, among other examples, false discovery rate control in multi-label classification, intersection-over-union control in instance segmentation, and the simultaneous control of the type-1 error of outlier detection and confidence set coverage in classification or regression. Our main insight is to reframe the risk-control problem as multiple hypothesis testing, enabling techniques and mathematical arguments different from those in the previous literature. We use our framework to provide new calibration methods for several core machine learning tasks, with detailed worked examples in computer vision and tabular medical data.
    Comment: Code available at https://github.com/aangelopoulos/lt
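
    To make the multiple-hypothesis-testing reframing concrete, here is a minimal Python sketch of one simple instantiation: each candidate parameter lambda is assigned a Hoeffding p-value for the null hypothesis that its risk exceeds the target level alpha, and a Bonferroni correction over the grid controls the family-wise error rate at delta. The paper offers sharper p-values and testing procedures; this variant and all names are illustrative assumptions.

```python
import numpy as np

def ltt_calibrate(loss_fn, lambdas, cal_data, alpha, delta):
    """Learn then Test, Bonferroni variant (sketch).

    For each candidate lambda, test H0: risk(lambda) > alpha with a
    Hoeffding p-value on held-out calibration data, and keep the lambdas
    whose nulls are rejected at family-wise error level delta.

    loss_fn(lam, x) must return a loss in [0, 1] for one calibration point.
    """
    n = len(cal_data)
    valid = []
    for lam in lambdas:
        r_hat = np.mean([loss_fn(lam, x) for x in cal_data])
        # Hoeffding p-value for H0: E[loss] > alpha
        p = np.exp(-2.0 * n * max(0.0, alpha - r_hat) ** 2)
        if p <= delta / len(lambdas):     # Bonferroni over the grid
            valid.append(lam)
    # every lambda returned controls risk <= alpha with prob. >= 1 - delta
    return valid
```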

    The correlation space of Gaussian latent tree models and model selection without fitting

    We provide a complete description of the possible covariance matrices consistent with a Gaussian latent tree model for any tree. We then present techniques for utilising these constraints to assess whether observed data are compatible with that Gaussian latent tree model. Our method does not require first fitting such a tree. We demonstrate the usefulness of the inverse-Wishart distribution for performing preliminary assessments of tree compatibility using semialgebraic constraints. Using results from Drton et al. (2008), we then provide the moments required for test statistics assessing adherence to these equality constraints. These are shown to be effective even for small sample sizes and can easily be adjusted to test either the entire model or only certain macrostructures hypothesized within the tree. We illustrate our exploratory tetrad analysis with a linguistic application and our confirmatory tetrad analysis with a biological application.
    Comment: 15 pages
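
    For intuition about the equality constraints involved: in the simplest latent tree, a single latent variable with observed leaves, the model forces the tetrad differences of the covariance matrix to vanish. A hedged Python sketch that merely evaluates sample tetrads, without the distributional calibration from Drton et al. (2008) that the paper relies on, might look like this; the function name is illustrative.

```python
import numpy as np
from itertools import combinations

def tetrad_differences(S):
    """Sample tetrad differences for every quadruple of variables under a
    sample covariance matrix S. In a single-latent-variable (star tree)
    Gaussian model the population tetrads vanish, so large sample values
    flag incompatibility with that tree."""
    p = S.shape[0]
    out = {}
    for i, j, k, l in combinations(range(p), 4):
        out[(i, j, k, l)] = (
            S[i, j] * S[k, l] - S[i, k] * S[j, l],  # pairing (ij)(kl) vs (ik)(jl)
            S[i, j] * S[k, l] - S[i, l] * S[j, k],  # pairing (ij)(kl) vs (il)(jk)
        )
    return out

# usage: S = np.cov(X, rowvar=False); inspect quadruples with large |tetrad|
```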

    Target Enumeration via Euler Characteristic Integrals

    We solve the problem of counting the total number of observable targets (e.g., persons, vehicles, landmarks) in a region using local counts performed by a network of sensors, each of which measures the number of nearby targets but neither their identities nor any positional information. We formulate and solve several such problems based on the types of sensors and the mobility of the targets. The main contribution of this paper is the adaptation of a topological sheaf integration theory, integration with respect to Euler characteristic, to yield complete solutions to these problems.
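
    The key identity behind this approach is that, for a nonnegative integer-valued sensor field h, the Euler integral decomposes over excursion sets, ∫ h dχ = Σ_{s≥1} χ({h ≥ s}); when every target's support is contractible (χ = 1), this integral equals the number of targets. A minimal Python sketch on a pixel grid, computing χ of a binary image from its cubical complex (vertices − edges + faces), is given below; it illustrates the identity and is not the paper's implementation.

```python
import numpy as np

def euler_characteristic(B):
    """Euler characteristic of the cubical complex of a binary image B:
    chi = #vertices - #edges + #filled pixels."""
    P = np.pad(B.astype(bool), 1)        # zero-pad so the boundary is closed
    faces = P.sum()
    # an edge or vertex of the grid is present if any incident pixel is filled
    edges = (P[:-1] | P[1:]).sum() + (P[:, :-1] | P[:, 1:]).sum()
    verts = (P[:-1, :-1] | P[:-1, 1:] | P[1:, :-1] | P[1:, 1:]).sum()
    return int(verts - edges + faces)

def euler_integral(h):
    """Euler integral of a nonnegative integer field h on a pixel grid:
    integral of h d(chi) = sum over s >= 1 of chi({h >= s})."""
    return sum(euler_characteristic(h >= s) for s in range(1, int(h.max()) + 1))

# Contractible supports count once each, even where they overlap:
h = np.zeros((8, 8), dtype=int)
h[1:3, 1:3] += 1                         # target 1
h[4:7, 4:7] += 1                         # target 2
h[5:6, 5:6] += 1                         # target 3, inside target 2's support
print(euler_integral(h))                 # -> 3 targets
```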