
    The Computational Power of Neural Networks and Representations of Numbers in Non-Integer Bases

    We briefly survey the basic concepts and results concerning the computational power of neural networks, which basically depends on the information content of weight parameters. In particular, recurrent neural networks with integer, rational, and arbitrary real weights are classified within the Chomsky and finer complexity hierarchies. Then we refine the analysis between integer and rational weights by investigating an intermediate model of integer-weight neural networks with an extra analog rational-weight neuron (1ANN). We show a representation theorem which characterizes the classification problems solvable by 1ANNs, by using so-called cut languages. Our analysis reveals an interesting link to an active research field on non-standard positional numeral systems with non-integer bases. Within this framework, we introduce a new concept of quasi-periodic numbers which is used to classify the computational power of 1ANNs within the Chomsky hierarchy.
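    To make the non-integer-base idea concrete, here is a small illustrative sketch (not taken from the paper) of the greedy expansion of a number in a non-integer base β; the choice of β as the golden ratio and the digit count are arbitrary example values.

```python
# Illustrative sketch (not from the paper): greedy expansion of x in a
# non-integer base beta, i.e. digits a_k with x = sum_k a_k * beta**(-k).

def greedy_beta_expansion(x, beta, n_digits=20):
    """Return the first n_digits of the greedy expansion of x in base beta (x in [0, 1))."""
    digits = []
    for _ in range(n_digits):
        x *= beta
        d = int(x)          # greedy digit: the largest integer not exceeding beta * x
        digits.append(d)
        x -= d              # keep the fractional remainder
    return digits

if __name__ == "__main__":
    beta = (1 + 5 ** 0.5) / 2                 # golden ratio, a classic non-integer base
    digits = greedy_beta_expansion(0.5, beta)
    print(digits)                             # eventually periodic digit patterns relate to quasi-periodicity
    # Reconstruct an approximation of 0.5 from the digits:
    print(sum(d * beta ** -(k + 1) for k, d in enumerate(digits)))
```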

    Discontinuities in recurrent neural networks

    This paper studies the computational power of various discontinuous real computational models that are based on the classical analog recurrent neural network (ARNN). This ARNN consists of a finite number of neurons; each neuron computes a polynomial net function and a sigmoid-like continuous activation function. The authors introduce …
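    As a rough illustration of the underlying model (an assumption-level sketch, not the paper's construction), one ARNN step applies a saturated-linear "sigmoid-like" activation to a net function of the current state and input; the affine net function and the example weights below are illustrative choices.

```python
import numpy as np

# Minimal sketch of one update step of an analog recurrent neural network (ARNN).
# The affine net function and saturated-linear activation follow the classical model;
# the random weights are arbitrary illustrative values, not from the paper.

def sat_linear(z):
    """Saturated-linear activation: a standard 'sigmoid-like' continuous function."""
    return np.clip(z, 0.0, 1.0)

def arnn_step(state, inputs, W, U, b):
    """x(t+1) = sigma(W x(t) + U u(t) + b) with real-valued weights."""
    return sat_linear(W @ state + U @ inputs + b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 4, 2                          # 4 neurons, 2 input lines (example sizes)
    W = rng.normal(size=(n, n))          # recurrent weights (real-valued)
    U = rng.normal(size=(n, m))          # input weights
    b = rng.normal(size=n)               # biases
    x = np.zeros(n)
    for t in range(5):
        x = arnn_step(x, np.array([1.0, 0.0]), W, U, b)
    print(x)
```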

    Neural computation of arithmetic functions

    A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be computed efficiently in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and the sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
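    For concreteness, here is a minimal sketch of the gate model itself (an illustration, not the paper's O(log n)-bit-weight construction): a linear threshold gate outputs 1 exactly when a weighted sum of its binary inputs reaches a threshold, and with naive power-of-two weights a single gate can compare two n-bit numbers.

```python
# Minimal sketch of the linear threshold gate model (illustrative weights only;
# the paper's results use threshold elements with O(log n)-bit weight accuracy).

def threshold_gate(weights, threshold, inputs):
    """Output 1 iff sum_i w_i * x_i >= threshold, with binary inputs x_i."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def geq(x_bits, y_bits):
    """Single gate deciding x >= y for two n-bit numbers given as MSB-first bit lists."""
    n = len(x_bits)
    pow2 = [2 ** (n - 1 - i) for i in range(n)]
    weights = pow2 + [-p for p in pow2]   # +2^i for bits of x, -2^i for bits of y
    return threshold_gate(weights, 0, x_bits + y_bits)

if __name__ == "__main__":
    print(geq([1, 0, 1, 1], [1, 0, 1, 0]))   # 11 >= 10 -> 1
    print(geq([0, 1, 1, 1], [1, 0, 0, 0]))   # 7 >= 8  -> 0
```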

    Prediction of the Atomization Energy of Molecules Using Coulomb Matrix and Atomic Composition in a Bayesian Regularized Neural Networks

    Exact calculation of electronic properties of molecules is a fundamental step for intelligent and rational compound and materials design. The intrinsically graph-like and non-vectorial nature of molecular data generates a unique and challenging machine learning problem. In this paper we embrace a learning-from-scratch approach where the quantum mechanical electronic properties of molecules are predicted directly from the raw molecular geometry, similar to some recent works. But, unlike these previous endeavors, our study suggests a benefit from combining the molecular geometry embedded in the Coulomb matrix with the atomic composition of molecules. Using the new combined features in Bayesian regularized neural networks, our results improve well-known results from the literature on the QM7 dataset, from a mean absolute error of 3.51 kcal/mol down to 3.0 kcal/mol.
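    To illustrate the feature construction, here is a sketch of the standard Coulomb-matrix definition combined with a simple atomic-composition count vector; the water-like geometry and the element list below are made-up illustrative inputs, not QM7 data or the authors' exact pipeline.

```python
import numpy as np

# Sketch of the two feature blocks the abstract combines: the Coulomb matrix built
# from nuclear charges Z and Cartesian coordinates R (standard definition), and an
# atomic-composition count vector. The molecule below is an illustrative example.

def coulomb_matrix(Z, R):
    """C[i,i] = 0.5 * Z_i**2.4 ; C[i,j] = Z_i * Z_j / |R_i - R_j| for i != j."""
    Z = np.asarray(Z, dtype=float)
    R = np.asarray(R, dtype=float)
    n = len(Z)
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                C[i, j] = 0.5 * Z[i] ** 2.4
            else:
                C[i, j] = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j])
    return C

def composition_vector(Z, elements=(1, 6, 7, 8, 16)):
    """Counts of each element type (H, C, N, O, S here) as an extra feature block."""
    Z = list(Z)
    return np.array([Z.count(e) for e in elements], dtype=float)

if __name__ == "__main__":
    Z = [8, 1, 1]                                              # O, H, H
    R = [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]
    C = coulomb_matrix(Z, R)
    features = np.concatenate([C[np.triu_indices(len(Z))], composition_vector(Z)])
    print(features.shape, features)
```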