
    Redundancy Allocation of Partitioned Linear Block Codes

    Most memories suffer from both permanent defects and intermittent random errors. Partitioned linear block codes (PLBC) were proposed by Heegard to efficiently mask stuck-at defects and correct random errors. A PLBC has two separate redundancy parts, one for defects and one for random errors. In this paper, we investigate the allocation of redundancy between these two parts. We study the optimal redundancy allocation using simulations, and the results show that PLBC can significantly reduce the probability of decoding failure in a memory with defects. In addition, we derive an upper bound on the probability of decoding failure of PLBC and estimate the optimal redundancy allocation from this upper bound. The estimated redundancy allocation matches the optimal redundancy allocation well.
    Comment: 5 pages, 2 figures, to appear in IEEE International Symposium on Information Theory (ISIT), Jul. 201
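    As a rough illustration of the allocation question, the sketch below sweeps the split of a fixed redundancy budget between defect masking and random-error correction and estimates the probability of decoding failure by Monte Carlo. The capability model (l redundancy bits mask up to l stuck-at cells, r parity bits correct up to r // 2 random errors) and all the numbers are simplifying assumptions chosen for illustration, not the bound derived in the paper.

import numpy as np

# Toy Monte Carlo sweep of redundancy allocation for an (n, k) code whose
# total redundancy r_total = n - k is split into l bits for defect masking
# and r_total - l bits for random-error correction.
# Assumed capability model (illustrative only): l bits mask up to l
# stuck-at cells; r parity bits correct up to r // 2 random errors.

rng = np.random.default_rng(0)

n, k = 1024, 900                    # hypothetical code length and dimension
r_total = n - k
p_defect, p_error = 0.005, 0.001    # hypothetical defect / error rates
trials = 20000

def failure_prob(l):
    """Estimate P(decoding failure) when l redundancy bits go to defects."""
    t = (r_total - l) // 2                       # assumed error-correction radius
    defects = rng.binomial(n, p_defect, trials)  # stuck-at cells per word
    errors = rng.binomial(n, p_error, trials)    # random errors per word
    return ((defects > l) | (errors > t)).mean()

best = min(range(r_total + 1), key=failure_prob)
print(f"best split under the toy model: {best} defect bits, "
      f"{r_total - best} error bits, P_fail ~ {failure_prob(best):.4f}")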

    Coding Scheme for 3D Vertical Flash Memory

    Recently introduced 3D vertical flash memory is expected to be a disruptive technology since it overcomes the scaling challenges of conventional 2D planar flash memory by stacking up cells in the vertical direction. However, 3D vertical flash memory suffers from a new problem known as fast detrapping, a rapid charge loss. In this paper, we propose a scheme to compensate for the effect of fast detrapping by intentional inter-cell interference (ICI). In order to properly control the intentional ICI, our scheme relies on a coding technique that incorporates the side information of fast detrapping during the encoding stage. This technique is closely connected to the well-known problem of coding for a memory with defective cells. Numerical results show that the proposed scheme can effectively address the problem of fast detrapping.
    Comment: 7 pages, 9 figures, accepted to ICC 2015. arXiv admin note: text overlap with arXiv:1410.177
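    The coding technique for memories with defective cells that the scheme builds on can be illustrated with a small brute-force "additive encoding" toy: a few redundancy bits give the encoder enough freedom to make the stored word agree with the known stuck-at cells, while the decoder recovers the message without knowing where the defects were. The mixing matrix A, the brute-force search, and the toy sizes are illustrative assumptions, not the construction used in the paper.

import itertools
import numpy as np

rng = np.random.default_rng(1)

k, l = 8, 4                           # message bits / masking-redundancy bits
n = k + l
A = rng.integers(0, 2, size=(k, l))   # hypothetical binary mixing matrix

def encode(m, stuck):
    """Find a stored word that agrees with every stuck-at cell.

    m     : length-k binary message
    stuck : dict {cell index: stuck value}, known only to the encoder
    """
    for d in itertools.product([0, 1], repeat=l):
        d = np.array(d)
        x = np.concatenate([(m + A @ d) % 2, d])
        if all(x[i] == v for i, v in stuck.items()):
            return x
    return None                       # too many defects for this redundancy

def decode(x):
    """Recover the message without knowing the defect locations."""
    d = x[k:]
    return (x[:k] + A @ d) % 2

m = rng.integers(0, 2, size=k)
stuck = {2: 1, 7: 0, 10: 1}           # three stuck-at cells
x = encode(m, stuck)
if x is None:
    print("masking failed: not enough redundancy for these defects")
else:
    print("decoded correctly:", np.array_equal(decode(x), m))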

    Herding as a Learning System with Edge-of-Chaos Dynamics

    Herding defines a deterministic dynamical system at the edge of chaos. It generates a sequence of model states and parameters by alternating parameter perturbations with state maximizations, where the sequence of states can be interpreted as "samples" from an associated MRF model. Herding differs from maximum likelihood estimation in that the sequence of parameters does not converge to a fixed point, and it differs from an MCMC posterior sampling approach in that the sequence of states is generated deterministically. Herding may be interpreted as a "perturb and map" method where the parameter perturbations are generated by a deterministic nonlinear dynamical system rather than drawn randomly from a Gumbel distribution. This chapter studies the distinct statistical characteristics of the herding algorithm and shows that the fast convergence rate of the controlled moments may be attributed to edge-of-chaos dynamics. The herding algorithm can also be generalized to models with latent variables and to a discriminative learning setting. The perceptron cycling theorem ensures that the fast moment matching property is preserved in the more general framework.
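    A minimal sketch of the herding update for the simplest fully observed case, with binary states and feature map phi(s) = s, so the state maximization decomposes per coordinate; the target moments below are arbitrary numbers chosen for illustration. The running average of the generated states matches the targets at the fast O(1/T) rate discussed in the chapter, rather than the O(1/sqrt(T)) rate of random sampling.

import numpy as np

# Herding with feature map phi(s) = s on binary states s in {0,1}^d.
# State maximization: s = argmax_s <w, s>, which decomposes per coordinate.
# Parameter perturbation: w <- w + mu - phi(s), a deterministic update.

mu = np.array([0.3, 0.7, 0.5, 0.9])    # target moments (illustrative values)
w = mu.copy()                          # a common initialization
T = 10000

running_sum = np.zeros_like(mu)
for t in range(T):
    s = (w > 0).astype(float)          # state maximization step
    w = w + mu - s                     # deterministic parameter perturbation
    running_sum += s

print("target moments :", mu)
print("herded averages:", running_sum / T)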

    Scalable Neural Network Decoders for Higher Dimensional Quantum Codes

    Machine learning has the potential to become an important tool in quantum error correction as it allows the decoder to adapt to the error distribution of a quantum chip. An additional motivation for using neural networks is the fact that they can be evaluated by dedicated hardware which is very fast and consumes little power. Machine learning has been previously applied to decode the surface code. However, these approaches are not scalable, as the training has to be redone for every system size, which becomes increasingly difficult. In this work, the existence of local decoders for higher dimensional codes leads us to use a low-depth convolutional neural network to locally assign a likelihood of error on each qubit. For noiseless syndrome measurements, numerical simulations show that the decoder has a threshold of around 7.1% when applied to the 4D toric code. When the syndrome measurements are noisy, the decoder performs better for larger code sizes when the error probability is low. We also give theoretical and numerical analysis to show how a convolutional neural network is different from the 1-nearest neighbor algorithm, which is a baseline machine learning method.
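    As a schematic of the idea of a low-depth convolutional network that locally assigns an error likelihood to each qubit, the sketch below maps a syndrome lattice to a per-site error probability with a few convolutional layers. It uses a 2D lattice, made-up layer sizes, and random inputs purely for illustration (the paper works with higher dimensional codes such as the 4D toric code), and it assumes PyTorch is available.

import torch
import torch.nn as nn

# Low-depth, fully convolutional network: the syndrome enters as a
# single-channel lattice and the output is a per-site probability that
# the corresponding qubit carries an error. Because every layer is a
# convolution, the same trained weights apply to any lattice size,
# which is the scalability argument in the abstract.
decoder = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
    nn.Sigmoid(),                                  # likelihood of error per qubit
)

syndrome = torch.randint(0, 2, (8, 1, 16, 16)).float()   # batch of toy syndromes
error_probs = decoder(syndrome)                           # shape (8, 1, 16, 16)

# Training would minimize binary cross-entropy against the true error pattern
# (here an all-zero placeholder, since no real data is generated in this sketch).
loss = nn.BCELoss()(error_probs, torch.zeros_like(error_probs))
print(error_probs.shape, loss.item())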

    Joint Source-channel Coding Using Machine Learning Techniques

    Most modern communication systems rely on separate source encoding and channel encoding schemes to transmit data. Despite the long-lasting success of separate schemes, joint source-channel coding schemes have been proven to outperform separate schemes in applications such as video communications. The task of this research is to develop a joint source-channel coding scheme that mitigates some of the limitations of current separate coding schemes. My research will attempt to leverage recent advances in machine/deep learning techniques to develop resilient schemes that do not depend on explicit codes for compression and error correction but instead automatically learn end-to-end mappings for source signals. The success of the developed scheme will depend on its ability to correctly reconstruct an input vector under varying channel conditions.
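    A minimal sketch of the end-to-end idea described above: an encoder network maps the source vector directly to channel symbols, a noise layer models the channel, and a decoder network reconstructs the source, so that compression and error protection are learned jointly rather than designed as separate codes. The layer sizes, the Gaussian toy source, and the AWGN channel model are illustrative assumptions, and PyTorch is assumed to be available.

import torch
import torch.nn as nn

src_dim, ch_uses, snr_db = 32, 16, 10.0   # illustrative sizes and channel SNR

# Encoder maps the source straight to channel symbols; decoder maps the
# noisy received symbols back to a source estimate. No explicit source
# or channel code appears anywhere in the pipeline.
encoder = nn.Sequential(nn.Linear(src_dim, 64), nn.ReLU(), nn.Linear(64, ch_uses))
decoder = nn.Sequential(nn.Linear(ch_uses, 64), nn.ReLU(), nn.Linear(64, src_dim))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

noise_std = (10 ** (-snr_db / 10)) ** 0.5          # AWGN level for unit-power symbols

for step in range(2000):
    x = torch.randn(128, src_dim)                  # toy Gaussian source batch
    tx = encoder(x)
    tx = tx / tx.pow(2).mean().sqrt()              # average power constraint
    rx = tx + noise_std * torch.randn_like(tx)     # AWGN channel
    x_hat = decoder(rx)
    loss = nn.functional.mse_loss(x_hat, x)        # end-to-end distortion
    opt.zero_grad(); loss.backward(); opt.step()

print("final reconstruction MSE:", loss.item())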