
    Error-correcting codes and neural networks


    Neural networks, error-correcting codes, and polynomials over the binary n-cube

    Several ways of relating the concept of error-correcting codes to the concept of neural networks are presented. Performing maximum-likelihood decoding in a linear block error-correcting code is shown to be equivalent to finding a global maximum of the energy function of a certain neural network. Given a linear block code, a neural network can be constructed in such a way that every codeword corresponds to a local maximum. The connection between maximization of polynomials over the n-cube and error-correcting codes is also investigated; the results suggest that decoding techniques can be a useful tool for solving such maximization problems. The results are generalized to both nonbinary and nonlinear codes.
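    To make the stated equivalence concrete, here is a minimal sketch (not the paper's construction): maximum-likelihood decoding on a binary symmetric channel is carried out by maximizing a bipolar correlation "energy" over the codewords, i.e. over a subset of the binary n-cube. The (7,4) Hamming generator matrix and the energy E(c) = <1-2r, 1-2c> are illustrative assumptions.

```python
import itertools
import numpy as np

# Illustrative generator matrix of the (7,4) Hamming code in systematic form.
G = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def codewords(G):
    """Enumerate all codewords of the linear code generated by G (over GF(2))."""
    k = G.shape[0]
    for m in itertools.product([0, 1], repeat=k):
        yield (np.array(m) @ G) % 2

def ml_decode(r, G):
    """ML decoding on a binary symmetric channel: maximize the bipolar
    correlation E(c) = <1-2r, 1-2c> over the code, a subset of the n-cube."""
    r_bipolar = 1 - 2 * np.asarray(r)          # map bits {0,1} -> {+1,-1}
    best, best_energy = None, -np.inf
    for c in codewords(G):
        energy = r_bipolar @ (1 - 2 * c)       # agreements minus disagreements
        if energy > best_energy:
            best, best_energy = c, energy
    return best

msg = np.array([1, 0, 1, 1])
sent = (msg @ G) % 2
received = sent.copy()
received[2] ^= 1                               # one bit flip on the channel
assert (ml_decode(received, G) == sent).all()  # the single error is corrected
```

    The abstract's point is that this same energy can be realized as the energy function of a neural network, so that decoding becomes an energy-maximization dynamics rather than the exhaustive scan used above.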

    CodNN -- Robust Neural Networks From Coded Classification

    Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution, and yet their intrinsic properties remain a mystery. In particular, it is widely known that DNNs are highly sensitive to noise, whether adversarial or random. This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving. In this paper we construct robust DNNs via error correcting codes. In our approach, either the data or the internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed. Since DNNs can be seen as a layered concatenation of classification tasks, our research begins with the core task of classifying noisy coded inputs and progresses towards robust DNNs. We focus on binary data and linear codes. Our main result is that the prevalent parity code can guarantee robustness for a large family of DNNs, which includes the recently popularized binarized neural networks. Further, we show that the coded classification problem has a deep connection to Fourier analysis of Boolean functions. In contrast to existing solutions in the literature, our results do not rely on altering the training process of the DNN, and provide mathematically rigorous guarantees rather than experimental evidence.
    Comment: To appear in ISIT '2
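    As a hedged illustration of the coded-classification idea (using a repetition code as a simpler stand-in for the paper's parity-code construction), the sketch below encodes a ±1 input, injects bit flips, and checks that a binarized neuron's decision survives the noise whenever each code block keeps a majority of clean copies. All sizes and names here are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_repetition(x, r=3):
    """Protect each +/-1 input bit with an r-fold repetition code."""
    return np.repeat(x, r)

def decode_repetition(y, r=3):
    """Majority-vote each block of r noisy copies back to one +/-1 bit."""
    return np.sign(y.reshape(-1, r).sum(axis=1))

def binarized_neuron(x, w):
    """A single binarized neuron: the sign of a weighted sum of +/-1 inputs."""
    return np.sign(w @ x)

n, r = 15, 3                                   # odd n keeps the sum nonzero
w = rng.choice([-1.0, 1.0], size=n)            # fixed binarized weights
x = rng.choice([-1.0, 1.0], size=n)            # clean +/-1 input

noisy = encode_repetition(x, r)
noisy[np.array([0, 5, 9]) * r] *= -1           # channel noise: one flip in three blocks

# The decision on the decoded input matches the clean decision, because no
# block lost a majority of its copies.
assert binarized_neuron(decode_repetition(noisy, r), w) == binarized_neuron(x, w)
```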

    A Computational Framework for Efficient Error Correcting Codes Using an Artificial Neural Network Paradigm.

    The quest for an efficient computational approach to neural connectivity problems has undergone a significant evolution in the last few years. The current best systems are far from equaling human performance, especially when a program of instructions is executed sequentially as in a von Neumann computer. Neural net models, on the other hand, are natural candidates for parallel processing, since they explore many competing hypotheses simultaneously using massively parallel nets composed of many computational elements connected by links with variable weights. Neural-network modeling must therefore be complemented by insight into how to embed error-correcting algorithms so as to gain the advantage of parallel computation. In this dissertation, we construct a neural network for single error detection and correction in linear codes, and then present an error-detecting paradigm in the framework of neural networks. We consider error detection for systematic unidirectional codes under the assumption of double or triple errors. The generalization of the network construction to error-detecting codes is discussed along with a heuristic algorithm. We also describe models for the construction, detection, and correction of t-EC/d-ED/AUED (t-Error Correcting/d-Error Detecting/All Unidirectional Error Detecting) codes, which are more general codes within the error-correcting paradigm.
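    The dissertation's network construction is not reproduced here; as a conventional baseline for the same task, the sketch below performs single error detection and correction in a (7,4) Hamming code by syndrome decoding: one binary matrix multiply plus a position lookup, the kind of fixed parallel computation a neural implementation would map onto. The specific H and codeword are illustrative.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column i (1-indexed) is the
# binary representation of i, so a nonzero syndrome spells out the error position.
H = np.array([
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
])

def correct_single_error(r):
    """Detect and correct at most one bit flip in the received word r."""
    s = (H @ r) % 2                        # 3-bit syndrome
    pos = 4 * s[0] + 2 * s[1] + s[2]       # syndrome read as a binary number
    if pos:                                # nonzero -> bit `pos` (1-indexed) flipped
        r = r.copy()
        r[pos - 1] ^= 1
    return r

sent = np.array([0, 1, 1, 0, 0, 1, 1])     # a valid codeword (H @ sent % 2 == 0)
received = sent.copy()
received[4] ^= 1                           # inject one error at position 5
assert (correct_single_error(received) == sent).all()
```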

    Use of Autoregressive Predictor in Echo State Neural Networks

    "Echo State" neural networks (ESN), which are a special case of recurrent neural networks, are studied with the goal to achieve their greater predictive ability by the correction of their output signal. In this paper standard ESN was supplemented by a new correcting neural network which has served as an autoregressive predictor. The main task of this special neural network was output signal correction and therefore also a decrease of the prediction error. The goal of this paper was to compare the results achieved by this new approach with those achieved by original one-step learning algorithm. This approach was tested in laser fluctuations and air temperature prediction. Its prediction error decreased substantially in comparison to the standard approach