
    A Scaling Law to Predict the Finite-Length Performance of Spatially-Coupled LDPC Codes

    Spatially-coupled LDPC codes are known to have excellent asymptotic properties. Much less is known about their finite-length performance. We propose a scaling law to predict the error probability of finite-length spatially-coupled ensembles when transmission takes place over the binary erasure channel. We discuss how the parameters of the scaling law are connected to fundamental quantities appearing in the asymptotic analysis of these ensembles, and we verify that the predictions of the scaling law fit the simulation data well over a wide range of parameters. The ultimate goal of this line of research is to develop analytic tools for the design of spatially-coupled LDPC codes under practical constraints.
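
    For reference, below is a minimal Python sketch of the Q-function form that finite-length scaling laws for (uncoupled) LDPC block ensembles over the BEC are known to take, P_B ≈ Q(√n (ε* − β n^(−2/3) − ε) / α); the paper develops the analogous law for spatially-coupled ensembles, whose exact form it derives. The threshold ε* and the parameters α, β below are illustrative placeholders, not values from the paper.

```python
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def block_error_probability(n: int, eps: float, eps_star: float,
                            alpha: float, beta: float = 0.0) -> float:
    """Scaling-law prediction of the block error probability over the BEC:
    P_B(n, eps) ~ Q(sqrt(n) * (eps_star - beta * n**(-2/3) - eps) / alpha),
    where eps_star is the BP threshold of the ensemble and alpha, beta are
    scaling parameters tied to its asymptotic (density-evolution) analysis.
    """
    shifted = eps_star - beta * n ** (-2.0 / 3.0)
    return q_function(math.sqrt(n) * (shifted - eps) / alpha)

# Illustrative placeholder values, not fitted parameters from the paper.
for n in (1024, 4096, 16384):
    print(n, block_error_probability(n, eps=0.45, eps_star=0.48,
                                     alpha=0.55, beta=0.6))
```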

    On Distributed Storage Allocations for Memory-Limited Systems

    In this paper we consider distributed storage allocation problems with memory constraints. Firstly, we propose a tractable relaxation of the problem of optimal symmetric allocations from [1]. The approximated problem is based on the Gaussian Q-function, and its solution approaches the solution of the initial problem as the number of storage nodes in the network grows. Secondly, exploiting this relaxation, we formulate and solve the storage allocation problem for memory-limited distributed storage systems (DSS) with arbitrary memory profiles. Finally, we discuss the extension to the case of multiple data objects stored in the DSS.
    Comment: Submitted to IEEE GLOBECOM'1
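
    A minimal sketch of the Q-function relaxation idea, assuming the standard symmetric-allocation model of [1]: a total budget spread evenly over m nodes, each node accessed independently with probability p, with recovery succeeding once the accessed amount reaches 1. The exact recovery probability is a binomial tail; the relaxation replaces it with a Gaussian tail. The function name and all numbers are illustrative.

```python
import math

def q_function(x: float) -> float:
    """Gaussian tail probability Q(x) = P(Z > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def symmetric_success_probability(m: int, budget: float, p: float) -> float:
    """Q-function relaxation of the recovery probability of a symmetric
    allocation: the budget is spread evenly over m nodes, each accessed
    independently with probability p, and recovery needs at least
    k = ceil(m / budget) of the m nodes to respond.  The exact probability
    is the binomial tail P(Bin(m, p) >= k); the relaxation replaces it
    with the Gaussian tail Q((k - m*p) / sqrt(m*p*(1 - p))).
    """
    k = math.ceil(m / budget)                  # nodes needed for recovery
    mean, std = m * p, math.sqrt(m * p * (1.0 - p))
    return q_function((k - mean) / std)

# Sweep m to pick the best symmetric allocation (illustrative numbers).
best_m = max(range(1, 51),
             key=lambda m: symmetric_success_probability(m, 5.0, 0.6))
print(best_m, symmetric_success_probability(best_m, 5.0, 0.6))
```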

    Nonlinear Stability of Riemann Ellipsoids with Symmetric Configurations

    We apply geometric techniques to obtain necessary and sufficient conditions for the existence and nonlinear stability of self-gravitating Riemann ellipsoids having at least two equal axes.

    Tree-structure Expectation Propagation for Decoding LDPC codes over Binary Erasure Channels

    Expectation Propagation (EP) generalizes Belief Propagation (BP) in two ways. First, it can be used with any exponential-family distribution over the cliques in the graph. Second, it can impose additional constraints on the marginal distributions. We use this second property to impose pair-wise marginal distribution constraints on some check nodes of the LDPC Tanner graph. These additional constraints allow the received codeword to be decoded when the BP decoder gets stuck. In this paper, we first present the new decoding algorithm, whose complexity is identical to that of the BP decoder, and we then prove that it is able to decode codewords with a larger fraction of erasures as the block size tends to infinity. The proposed algorithm can also be understood as a simplification of the Maxwell decoder, but without its computational complexity. We also illustrate that the new algorithm outperforms the BP decoder for finite block sizes.
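
    For context, a minimal sketch of the standard BP (peeling) decoder over the BEC that the proposed tree-structured EP decoder builds on; the EP step that imposes pair-wise constraints when peeling stalls is only indicated in a comment, not implemented. The parity-check matrix and received word are toy examples.

```python
import numpy as np

def peel_decode(H: np.ndarray, y: list) -> list:
    """Standard BP (peeling) decoder over the BEC: repeatedly find a check
    node with exactly one erased neighbour and recover that bit as the XOR
    of the check's known bits.  y[i] is 0, 1, or None (erasure).  The TEP
    decoder of the paper continues from the point where this loop stalls,
    by imposing pair-wise marginal constraints on selected check nodes.
    """
    y = list(y)
    progress = True
    while progress and any(b is None for b in y):
        progress = False
        for row in H:                              # one check node per row
            erased = [j for j in np.flatnonzero(row) if y[j] is None]
            if len(erased) == 1:                   # degree-one check: peel it
                known = [y[j] for j in np.flatnonzero(row) if y[j] is not None]
                y[erased[0]] = int(np.bitwise_xor.reduce(known)) if known else 0
                progress = True
    return y

# Toy parity-check matrix and a received word with two erasures (illustrative).
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
print(peel_decode(H, [0, None, 1, None, 1]))
```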

    Boosting Handwriting Text Recognition in Small Databases with Transfer Learning

    In this paper we deal with the offline handwriting text recognition (HTR) problem with reduced training datasets. Recent HTR solutions based on artificial neural networks achieve remarkable results on reference databases. These deep neural networks combine convolutional layers (CNN) and long short-term memory recurrent units (LSTM). In addition, connectionist temporal classification (CTC) is the key to avoiding segmentation at the character level, which greatly facilitates the labeling task. One of the main drawbacks of CNN-LSTM-CTC (CLC) solutions is that they need a considerable amount of transcribed text for every type of calligraphy, typically on the order of a few thousand lines. Furthermore, in some scenarios the text to transcribe is not that long, e.g. in the Washington database, and the CLC typically overfits with this reduced number of training samples. Our proposal is based on transfer learning (TL) from the parameters learned on a bigger database. We first investigate, for a reduced and fixed number of training samples (350 lines), how the learning from a large database, the IAM, can be transferred to the learning of the CLC on a reduced database, Washington. We focus on which layers of the network need not be re-trained. We conclude that the best solution is to re-train all the CLC parameters, initialized to the values obtained after training the CLC on the larger database. We also investigate results when the training size is further reduced. The differences in the character error rate (CER) are more remarkable when training with just 350 lines: a CER of 3.3% is achieved with TL, while we obtain a CER of 18.2% when training from scratch. As a byproduct, the learning times are considerably reduced. Similarly good results are obtained on the Parzival database when trained with this reduced number of lines and this new approach.
    Comment: ICFHR 2018 Conference
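
    A minimal PyTorch sketch of the transfer-learning recipe the abstract concludes works best: initialize all CLC parameters from the model trained on the large database (IAM) and re-train the whole network on the small one (Washington). The CRNN architecture, charset size, and checkpoint filename are illustrative assumptions, not the paper's exact configuration.

```python
import os
import torch
from torch import nn

class CRNN(nn.Module):
    """Hypothetical CNN-LSTM (CLC) line recognizer; the paper's exact
    architecture may differ in depth and layer widths."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(input_size=128 * 16, hidden_size=256,
                           num_layers=2, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * 256, n_classes)  # n_classes includes CTC blank

    def forward(self, x):                     # x: (B, 1, 64, W) grayscale lines
        f = self.cnn(x)                       # (B, 128, 16, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (B, W/4, 128*16) frame sequence
        out, _ = self.rnn(f)
        return self.head(out)                 # per-frame class scores for CTC

model = CRNN(n_classes=80)                    # 80 is an illustrative charset size

# Initialize *all* CLC parameters from the large-database (IAM) training run,
# then re-train the whole network on the small database; "crnn_iam.pt" is a
# hypothetical checkpoint path.
if os.path.exists("crnn_iam.pt"):
    model.load_state_dict(torch.load("crnn_iam.pt"))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # all layers train
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)  # expects (T, B, C) log-probs
```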