
    LP-decodable multipermutation codes

    In this paper, we introduce a new way of constructing and decoding multipermutation codes. Multipermutations are permutations of a multiset, which may contain duplicate entries. We first introduce a new class of matrices called multipermutation matrices and characterize their convex hull. Based on this characterization, we propose a new class of codes that we term LP-decodable multipermutation codes. We then derive two LP decoding algorithms: we first formulate an LP decoding problem for memoryless channels, and then derive an LP algorithm that minimizes the Chebyshev distance. Finally, we present a numerical example of our algorithm.

    Comment: This work was supported by NSF and NSERC. To appear at the 2014 Allerton Conference.
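
    The building blocks are easy to make concrete. The paper's characterization is not reproduced verbatim here; by analogy with the Birkhoff polytope, one natural description of the convex hull is the set of nonnegative matrices with unit row sums whose column sums equal the multiplicity vector of the multiset. A minimal Python sketch under that assumption, with helper names of our own choosing:

```python
import numpy as np

def multipermutation_matrix(x, symbols):
    """Binary matrix M with M[k, i] = 1 iff position k of x holds symbols[i]."""
    index = {s: i for i, s in enumerate(symbols)}
    M = np.zeros((len(x), len(symbols)))
    for k, s in enumerate(x):
        M[k, index[s]] = 1.0
    return M

def in_hull(M, r, tol=1e-9):
    # Assumed Birkhoff-style membership test: nonnegative entries,
    # unit row sums, and column sums equal to the multiplicity vector r.
    return bool((M >= -tol).all()
                and np.allclose(M.sum(axis=1), 1.0)
                and np.allclose(M.sum(axis=0), r))

# The multiset {1, 1, 2, 3} has multiplicity vector r = (2, 1, 1).
M1 = multipermutation_matrix([1, 2, 1, 3], symbols=[1, 2, 3])
M2 = multipermutation_matrix([3, 1, 2, 1], symbols=[1, 2, 3])
print(in_hull(M1, [2, 1, 1]))                   # True: a vertex
print(in_hull(0.5 * M1 + 0.5 * M2, [2, 1, 1]))  # True: a convex combination
```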

    Hardware Based Projection onto The Parity Polytope and Probability Simplex

    This paper is concerned with the adaptation to hardware of methods for Euclidean norm projection onto the parity polytope and probability simplex. We first refine recent efforts to develop efficient methods of projection onto the parity polytope. The resulting algorithm can be configured to have either average computational complexity $\mathcal{O}(d)$ or worst-case complexity $\mathcal{O}(d \log d)$ on a serial processor, where $d$ is the dimension of the projection space. We show how to adapt our projection routine to hardware. Our projection method uses a subroutine that involves another Euclidean projection, this time onto the probability simplex. We therefore explain how to adapt a well-known simplex projection algorithm to hardware. The hardware implementations of both projection algorithms achieve area scalings of $\mathcal{O}(d (\log d)^2)$ at a delay of $\mathcal{O}((\log d)^2)$. Finally, we present numerical results in which we evaluate the fixed-point accuracy and resource scaling of these algorithms when targeting a modern FPGA.
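
    The simplex subroutine in this literature is typically the classic sort-and-threshold projection (Held et al., 1974; Duchi et al., 2008); that this is the "well-known algorithm" the paper adapts is our reading. A serial software sketch follows; the $\mathcal{O}((\log d)^2)$ hardware delay suggests the sort is realized as a sorting network, though that too is an inference:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}.

    Classic sort-and-threshold method; O(d log d) on a serial machine.
    """
    v = np.asarray(v, dtype=float)
    u = np.sort(v)[::-1]                         # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * ks > css - 1.0)[0][-1]  # last index passing the test
    theta = (css[rho] - 1.0) / (rho + 1.0)       # water-filling threshold
    return np.maximum(v - theta, 0.0)

print(project_simplex([0.6, 0.9, -0.3]))         # -> [0.35 0.65 0.  ]
```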

    Pathological classification of equine recurrent laryngeal neuropathy

    Recurrent Laryngeal Neuropathy (RLN) is a highly prevalent and predominantly left‐sided degenerative disorder of the recurrent laryngeal nerves (RLn) of tall horses that causes inspiratory stridor during exercise because of intrinsic laryngeal muscle paresis. The associated laryngeal dysfunction and exercise intolerance in athletic horses commonly lead to surgical intervention, retirement or euthanasia, with associated financial and welfare implications. Despite speculation, there is a lack of consensus and conflicting evidence on the primary classification of RLN, either as a distal ("dying back") axonopathy or as a primary myelinopathy, and either as a (bilateral) mononeuropathy or a polyneuropathy; this uncertainty hinders etiological and pathophysiological research. In this review, we discuss the neuropathological changes and electrophysiological deficits reported in the RLn of affected horses, and the evidence for correct classification of the disorder. In so doing, we summarize and reveal the limitations of much historical research on RLN and propose future directions that might best help identify the etiology and pathophysiology of this enigmatic disorder.

    Efficient learning of neighbor representations for boundary trees and forests

    We introduce a semiparametric approach to neighbor-based classification. We build on the recently proposed Boundary Trees algorithm by Mathy et al. (2015), which enables fast neighbor-based classification, regression and retrieval in large datasets. While boundary trees use a Euclidean measure of similarity, the Differentiable Boundary Tree algorithm by Zoran et al. (2017) was introduced to learn low-dimensional representations of complex input data, on which semantic similarity can be calculated to train boundary trees. As its authors point out, the differentiable boundary tree approach has a few limitations that prevent it from scaling to large datasets. In this paper, we introduce Differentiable Boundary Sets, an algorithm that overcomes the computational issues of the differentiable boundary tree scheme while also improving its classification accuracy and data representability. Our algorithm is efficiently implementable with existing tools and offers a significant reduction in training time. We test and compare the algorithms on the well-known MNIST handwritten digits dataset and the newer Fashion-MNIST dataset by Xiao et al. (2017).

    Comment: 9 pages, 2 figures.
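
    As background, the underlying (non-differentiable) boundary tree of Mathy et al. (2015) works as follows, on our reading: a query descends greedily toward whichever node is closest to the input, and training inserts a point only when the current tree mislabels it. A minimal sketch; the class and the max_children parameter are our naming:

```python
import numpy as np

class BoundaryTree:
    """Minimal sketch of a boundary tree (after Mathy et al., 2015)."""

    def __init__(self, max_children=50):
        self.nodes = []        # (point, label) pairs
        self.children = []     # children[i] = indices of node i's children
        self.max_children = max_children

    def _closest(self, x):
        # Greedy descent: move to the nearest child until the current
        # node is itself the nearest candidate.
        i = 0
        while True:
            candidates = list(self.children[i])
            if len(candidates) < self.max_children:
                candidates.append(i)       # a non-full node competes too
            j = min(candidates,
                    key=lambda k: np.linalg.norm(self.nodes[k][0] - x))
            if j == i:
                return i
            i = j

    def predict(self, x):
        return self.nodes[self._closest(x)][1] if self.nodes else None

    def train(self, x, y):
        if self.nodes:
            i = self._closest(x)
            if self.nodes[i][1] == y:
                return                     # tree already answers correctly
            self.children[i].append(len(self.nodes))
        self.nodes.append((x, y))
        self.children.append([])

tree = BoundaryTree()
tree.train(np.array([0.0, 0.0]), 'a')
tree.train(np.array([1.0, 1.0]), 'b')
print(tree.predict(np.array([0.2, 0.1])))  # -> 'a'
```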

    Decomposition Methods for Large Scale LP Decoding

    When binary linear error-correcting codes are used over symmetric channels, a relaxed version of the maximum likelihood decoding problem can be stated as a linear program (LP). This LP decoder can be used to decode error-correcting codes at bit-error rates comparable to state-of-the-art belief propagation (BP) decoders, but with significantly stronger theoretical guarantees. However, LP decoding, when implemented with standard LP solvers, does not easily scale to the block lengths of modern error-correcting codes. In this paper we draw on decomposition methods from optimization theory, specifically the Alternating Direction Method of Multipliers (ADMM), to develop efficient distributed algorithms for LP decoding. The key enabling technical result is a "two-slice" characterization of the geometry of the parity polytope, the convex hull of all codewords of a single parity-check code. This new characterization simplifies the representation of points in the polytope. Using this simplification, we develop an efficient algorithm for Euclidean norm projection onto the parity polytope. This projection is required by ADMM and allows us to use LP decoding, with all its theoretical guarantees, to decode large-scale error-correcting codes efficiently. We present numerical results for LDPC codes of lengths of more than 1000. The waterfall region of LP decoding is seen to begin at a slightly higher signal-to-noise ratio than for sum-product BP; however, no error floor is observed for LP decoding, unlike for BP. Our implementation of LP decoding using ADMM executes as fast as our baseline sum-product BP decoder, is fully parallelizable, and can be seen to implement a type of message passing with a particularly simple schedule.

    Comment: 35 pages, 11 figures. An early version of this work appeared at the 49th Annual Allerton Conference, September 2011. This version to appear in IEEE Transactions on Information Theory.
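
    The two-slice characterization is the paper's own route to the projection. A compact way to see what the projection computes is the clip-and-cut-search reduction refined in later work (including the hardware paper above): clip to the unit cube, identify the unique odd-set facet the clipped point can violate, and, if it is violated, flip the facet's coordinates so the problem becomes a probability simplex projection. A sketch of that reduction, not of the two-slice algorithm itself:

```python
import numpy as np

def project_simplex(v):
    # Sort-based projection onto {x : x >= 0, sum(x) = 1} (see earlier sketch).
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    return np.maximum(v - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def project_parity_polytope(v):
    """Euclidean projection onto PP_d = conv{x in {0,1}^d : sum(x) even}."""
    v = np.asarray(v, dtype=float)
    f = v > 0.5                        # coordinates that round to 1
    if f.sum() % 2 == 0:
        # Force |f| odd by toggling the coordinate closest to 1/2; a point
        # in the cube can violate at most this one odd-set facet.
        i = np.argmin(np.abs(v - 0.5))
        f[i] = not f[i]
    u = np.clip(v, 0.0, 1.0)
    if u[f].sum() - u[~f].sum() <= f.sum() - 1.0:
        return u                       # box projection already lies in PP_d
    # Otherwise project onto that facet: flipping the f-coordinates turns
    # the facet into the probability simplex; project there, then flip back.
    w = np.where(f, 1.0 - v, v)
    t = project_simplex(w)
    return np.where(f, 1.0 - t, t)

print(project_parity_polytope(np.array([0.9, 0.9, 0.9])))  # -> [0.667 ...]
```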

    The Sender-Excited Secret Key Agreement Model: Capacity, Reliability and Secrecy Exponents

    We consider the secret key generation problem in which sources are randomly excited by the sender and there is a noiseless public discussion channel. Our setting is thus similar to recent works on channels with action-dependent states, where the channel state may be influenced by some of the parties involved. We derive single-letter expressions for the secret key capacity through a type of source emulation analysis. We also derive lower bounds on the achievable reliability and secrecy exponents, i.e., the exponential rates of decay of the probability of decoding error and of the information leakage. These exponents allow us to determine a set of strongly achievable secret key rates. For degraded eavesdroppers the maximum strongly achievable rate equals the secret key capacity; our exponents can also be specialized to previously known results. In deriving our strong achievability results we introduce a coding scheme that combines wiretap coding (to excite the channel) and key extraction (to distill keys from residual randomness). The secret key capacity is naturally seen to be a combination of both source- and channel-type randomness. Through examples we illustrate a fundamental interplay between the portions of the secret key rate due to each type of randomness. We also illustrate inherent tradeoffs between the achievable reliability and secrecy exponents. Our new scheme also naturally accommodates rate limits on the public discussion; we show that under rate constraints we are able to achieve larger rates than those attainable through a pure source emulation strategy.

    Comment: 18 pages, 8 figures; Submitted to the IEEE Transactions on Information Theory; Revised in Oct 201