
    Neural networks, error-correcting codes, and polynomials over the binary n-cube

    Several ways of relating error-correcting codes to neural networks are presented. Performing maximum-likelihood decoding of a linear block error-correcting code is shown to be equivalent to finding a global maximum of the energy function of a certain neural network. Given a linear block code, a neural network can be constructed in such a way that every codeword corresponds to a local maximum. The connection between maximization of polynomials over the n-cube and error-correcting codes is also investigated; the results suggest that decoding techniques can be a useful tool for solving such maximization problems. The results are generalized to both nonbinary and nonlinear codes.
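
    To make the stated equivalence concrete, here is a minimal brute-force sketch, not the paper's network construction: for a toy (6,3) linear code (the generator matrix is an illustrative assumption), maximum-likelihood decoding on a binary symmetric channel is posed as maximizing a correlation "energy" over codewords mapped to ±1 coordinates.

```python
# Minimal sketch: ML decoding of a linear block code as energy maximization
# over the +/-1 n-cube. The (6,3) code below is an illustrative assumption.
import itertools
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])  # generator matrix of a toy (6,3) code

def codewords(G):
    k, _ = G.shape
    return [np.mod(np.array(m) @ G, 2)
            for m in itertools.product([0, 1], repeat=k)]

def to_spin(bits):
    # map {0,1} -> {+1,-1}: Hamming distance becomes negative correlation
    return 1 - 2 * np.asarray(bits)

def ml_decode(received):
    # ML decoding on a BSC minimizes Hamming distance, i.e. maximizes the
    # "energy" <spin(c), spin(r)> over all codewords c
    r = to_spin(received)
    return max(codewords(G), key=lambda c: to_spin(c) @ r)

print(ml_decode([1, 0, 0, 1, 0, 0]))  # one flip away from codeword 100110
```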

    A Computational Framework for Efficient Error Correcting Codes Using an Artificial Neural Network Paradigm.

    The quest for an efficient computational approach to neural connectivity problems has undergone significant evolution in the last few years. The current best systems are far from equaling human performance, especially when a program of instructions is executed sequentially as in a von Neumann computer. Neural net models, on the other hand, are natural candidates for parallel processing, since they explore many competing hypotheses simultaneously using massively parallel nets composed of many computational elements connected by links with variable weights. Neural-network modeling must therefore be complemented by insight into how to embed error-correcting algorithms so as to gain the advantage of parallel computation. In this dissertation, we construct a neural network for single-error detection and correction in linear codes. We then present an error-detecting paradigm in the framework of neural networks, considering the problem of error detection for systematic unidirectional codes assumed to contain double or triple errors. The generalization of the network construction to error-detecting codes is discussed with a heuristic algorithm. We also describe models for the construction, detection, and correction of t-EC/d-ED/AUED (t-Error Correcting/d-Error Detecting/All Unidirectional Error Detecting) codes, a more general class of codes in the error-correcting paradigm.
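
    As a minimal sketch of the computation such a single-error corrector must realize (plain syndrome decoding, not the dissertation's actual network), here is correction of one bit error in the (7,4) Hamming code; the specific codeword is an assumed example.

```python
# Minimal sketch: single-error correction in the (7,4) Hamming code via
# syndrome decoding -- the function a neural single-error corrector computes.
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix; column i is
              [0, 1, 1, 0, 0, 1, 1],   # the binary representation of i+1
              [0, 0, 0, 1, 1, 1, 1]])

def correct_single_error(word):
    s = np.mod(H @ word, 2)                  # syndrome of the received word
    pos = int(s[0] + 2 * s[1] + 4 * s[2])    # nonzero syndrome = error position
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1                   # flip the erroneous bit
    return word

cw = np.array([1, 0, 1, 1, 0, 1, 0])   # an assumed valid codeword
noisy = cw.copy()
noisy[4] ^= 1                          # inject a single bit error
assert (correct_single_error(noisy) == cw).all()
```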

    Exploring Quantum Neural Networks for the Discovery and Implementation of Quantum Error-Correcting Codes

    We investigate the use of Quantum Neural Networks for discovering and implementing quantum error-correcting codes. Our research showcases the efficacy of Quantum Neural Networks through the successful implementation of the bit-flip quantum error-correcting code using a Quantum Autoencoder, effectively correcting bit-flip errors in arbitrary logical qubit states. Additionally, we employ Quantum Neural Networks to restore states impacted by amplitude damping by utilizing an approximate 4-qubit error-correcting codeword. Our models required modification of the initially proposed Quantum Neural Network structure to avoid barren plateaus of the cost function and to improve training time. Moreover, we propose a strategy that leverages Quantum Neural Networks to discover new encoding protocols tailored to specific quantum channels, exemplified by learning to generate logical qubits explicitly for the bit-flip channel. Our modified Quantum Neural Networks consistently outperformed the standard implementations across all tasks.
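
    For orientation, here is a plain classical statevector simulation of the 3-qubit bit-flip code that the Quantum Autoencoder learns to implement: encode a|000> + b|111>, flip one qubit, read the Z1Z2/Z2Z3 stabilizer syndromes, and apply the correcting X gate. This is an assumed pedagogical sketch, not the paper's Quantum Neural Network.

```python
# Minimal classical simulation of the 3-qubit bit-flip code (assumed sketch).
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

a, b = 0.6, 0.8                            # arbitrary logical amplitudes
psi = np.zeros(8); psi[0], psi[7] = a, b   # a|000> + b|111>

noisy = kron3(I, X, I) @ psi               # bit flip on qubit 2

def expect(op, state):
    return float(state @ op @ state)

s1 = expect(kron3(Z, Z, I), noisy)   # Z1Z2 syndrome: -1 iff qubits 1,2 disagree
s2 = expect(kron3(I, Z, Z), noisy)   # Z2Z3 syndrome: -1 iff qubits 2,3 disagree
# syndrome table: (-1,-1) -> flip q2, (-1,+1) -> flip q1, (+1,-1) -> flip q3
fix = {(-1, -1): kron3(I, X, I), (-1, 1): kron3(X, I, I), (1, -1): kron3(I, I, X)}
recovered = fix[(round(s1), round(s2))] @ noisy
assert np.allclose(recovered, psi)   # arbitrary logical state restored
```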

    Use of Autoregressive Predictor in Echo State Neural Networks

    "Echo State" neural networks (ESN), which are a special case of recurrent neural networks, are studied with the goal to achieve their greater predictive ability by the correction of their output signal. In this paper standard ESN was supplemented by a new correcting neural network which has served as an autoregressive predictor. The main task of this special neural network was output signal correction and therefore also a decrease of the prediction error. The goal of this paper was to compare the results achieved by this new approach with those achieved by original one-step learning algorithm. This approach was tested in laser fluctuations and air temperature prediction. Its prediction error decreased substantially in comparison to the standard approach

    Nearly extensive sequential memory lifetime achieved by coupled nonlinear neurons

    Many cognitive processes rely on the ability of the brain to hold sequences of events in short-term memory. Recent studies have revealed that such memory can be read out from the transient dynamics of a network of neurons. However, the memory performance of such a network in buffering past information has only been rigorously estimated in networks of linear neurons. When signal gain is kept low, so that neurons operate primarily in the linear part of their response nonlinearity, the memory lifetime is bounded by the square root of the network size. In this work, I demonstrate that it is possible to achieve a memory lifetime almost proportional to the network size, "an extensive memory lifetime", when the nonlinearity of neurons is appropriately utilized. The analysis of neural activity revealed that the nonlinear dynamics prevented the accumulation of noise by partially removing it at each time step. With this error-correcting mechanism, I demonstrate that a memory lifetime of order $N/\log N$ can be achieved. (21 pages, 5 figures; accepted for publication in Neural Computation)
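
    The error-correcting intuition can be illustrated with a deliberately tiny toy, far simpler than the network model analyzed in the paper: propagate one bit through repeated noisy steps, once linearly and once through a saturating nonlinearity that re-thresholds the signal at every step.

```python
# Toy illustration (assumed, not the paper's model): per-step noise
# accumulates under linear propagation, while a saturating nonlinearity
# (sign) removes it at each step, acting as an error corrector.
import numpy as np

rng = np.random.default_rng(1)
steps, sigma = 50, 0.2
s = rng.choice([-1.0, 1.0])            # one bit to remember

lin = nonlin = s
for _ in range(steps):
    noise = sigma * rng.standard_normal()
    lin = lin + noise                  # linear: noise variance grows ~ steps
    nonlin = np.sign(nonlin + noise)   # nonlinear: re-thresholded every step

print(abs(lin - s), abs(nonlin - s))   # nonlinear copy stays exactly +/-1
```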