
    Improved Successive Cancellation Flip Decoding of Polar Codes Based on Error Distribution

    Polar codes are a class of linear block codes that provably achieve channel capacity and have been selected as a coding scheme for 5th-generation (5G) wireless communication standards. Successive-cancellation (SC) decoding of polar codes has mediocre error-correction performance at short to moderate codeword lengths; the SC-Flip decoding algorithm is one of the solutions proposed to overcome this issue. On the other hand, SC-Flip has higher implementation complexity than SC due to the required log-likelihood ratio (LLR) selection and sorting process. Moreover, it requires a high number of iterations to reach good error-correction performance. In this work, we propose two techniques to improve the SC-Flip decoding algorithm for low-rate codes, based on the observation of channel-induced error distributions. The first is a fixed index selection (FIS) scheme that avoids the substantial implementation cost of LLR selection and sorting at no cost in error-correction performance. The second is an enhanced index selection (EIS) criterion that improves the error-correction performance of SC-Flip decoding. A reduction of 24.6% in the implementation cost of logic elements is estimated with the FIS approach, while simulation results show that EIS improves error-correction performance by up to 0.42 dB at a target FER of 10^{-4}. Comment: This version of the manuscript corrects an error in the previous arXiv version, as well as the published version in IEEE Xplore under the same title, which has DOI 10.1109/WCNCW.2018.8368991. The corrections include all the simulations of SC-Flip-based and SC-Oracle decoders, along with the associated in-text comments.
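
    To make the candidate-selection trade-off concrete, here is a minimal sketch (assumed interfaces and names, not the authors' implementation) contrasting the baseline SC-Flip selection, which sorts the decision LLRs of the information bits after a failed CRC check, with the FIS scheme, which replaces the sort with a list fixed offline from the channel-induced error distribution:

        import numpy as np

        def flip_candidates_sorted(llrs, info_set, t):
            """Baseline SC-Flip: after a failed CRC, pick the t information
            indices with the least reliable (smallest-magnitude) LLRs."""
            mags = np.abs(llrs[info_set])
            order = np.argsort(mags)            # the run-time sorting cost FIS removes
            return [info_set[i] for i in order[:t]]

        def flip_candidates_fixed(fixed_list, t):
            """FIS: the candidate list is precomputed offline from observed
            error distributions, so no LLR selection/sorting logic is needed."""
            return list(fixed_list[:t])

    Trading the run-time sort for a stored list is the source of the estimated savings in logic elements.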

    On the role of synaptic stochasticity in training low-precision neural networks

    Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes. Here we show that a neural network model with stochastic binary weights naturally gives prominence to exponentially rare dense regions of solutions with a number of desirable properties such as robustness and good generalization performance, while typical solutions are isolated and hard to find. Binary solutions of the standard perceptron problem are obtained from a simple gradient descent procedure on a set of real values parametrizing a probability distribution over the binary synapses. Both analytical and numerical results are presented. An algorithmic extension aimed at training discrete deep neural networks is also investigated. Comment: 7 pages + 14 pages of supplementary material.
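
    The training procedure can be illustrated with a minimal sketch (the loss, parametrization, and hyperparameters below are illustrative assumptions, not the paper's exact setup): real parameters theta define independent binary weights with means tanh(theta), gradient descent is performed on a perceptron loss evaluated at the mean weights, and a binary solution is read out as sign(theta):

        import numpy as np

        rng = np.random.default_rng(0)
        N, P, lr, steps = 201, 100, 0.05, 2000
        X = rng.choice([-1.0, 1.0], size=(P, N))   # random binary patterns
        y = rng.choice([-1.0, 1.0], size=P)        # random target labels

        theta = 0.1 * rng.standard_normal(N)
        for _ in range(steps):
            m = np.tanh(theta)                     # mean of the binary weights
            margins = y * (X @ m) / np.sqrt(N)     # perceptron stabilities
            viol = margins < 0                     # misclassified patterns
            # perceptron-loss gradient w.r.t. theta, via dm/dtheta = 1 - m**2
            grad = -(y[viol, None] * X[viol]).sum(0) / np.sqrt(N) * (1 - m**2)
            theta -= lr * grad

        w_binary = np.sign(theta)                  # binarized solution
        errors = np.sum(y * (X @ w_binary) <= 0)
        print(f"training errors of the binarized solution: {errors}/{P}")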

    The chemistry of comets: An annotated bibliography

    Annotated bibliography on the chemistry of comets: free radicals, photochemistry, photolysis, and spectral analysis.

    Finite-size scaling and deconfinement transition: the case of 4D SU(2) pure gauge theory

    A recently introduced method for determining the critical indices of the deconfinement transition in gauge theories, already tested in the case of 3D SU(3) pure gauge theory, is applied here to 4D SU(2) pure gauge theory. The method is inspired by universality and based on the finite-size scaling behavior of the expectation value of simple lattice operators, such as the plaquette. We obtain an accurate determination of the critical index ν, in agreement with the prediction of the Svetitsky-Yaffe conjecture. Comment: 11 pages, 3 eps figures.
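
    The scaling analysis can be sketched as a data-collapse fit (illustrative only; the function names and the crude master-curve smoothing are assumptions, not the paper's analysis code): measurements of an observable such as the plaquette at couplings β on lattices of linear size L are rescaled with x = (β − β_c) L^{1/ν}, and ν is chosen to best collapse all sizes onto a single curve:

        import numpy as np
        from scipy.optimize import minimize_scalar

        def collapse_cost(nu, datasets, beta_c):
            """datasets: list of (L, beta_array, obs_array). Returns the spread
            of all points around a common curve in the scaling variable."""
            x = np.concatenate([(b - beta_c) * L**(1.0 / nu) for L, b, _ in datasets])
            y = np.concatenate([obs for _, _, obs in datasets])
            order = np.argsort(x)
            y = y[order]
            # crude master curve: running average over neighbouring rescaled points
            smooth = np.convolve(y, np.ones(5) / 5, mode="valid")
            return float(np.mean((y[2:-2] - smooth) ** 2))

        def fit_nu(datasets, beta_c, lo=0.3, hi=1.5):
            res = minimize_scalar(lambda nu: collapse_cost(nu, datasets, beta_c),
                                  bounds=(lo, hi), method="bounded")
            return res.x

    Since the Svetitsky-Yaffe conjecture places the 4D SU(2) deconfinement transition in the 3D Ising universality class, the fitted exponent is expected to come out near the 3D Ising value ν ≈ 0.63.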