8,871 research outputs found

    Fast arithmetic computing with neural networks

    Get PDF
    The authors introduce a restricted model of a neuron which is more practical as a model of computation than the classical model of a neuron, and define a model of neural networks as a feedforward network of such neurons. Whereas any logic circuit of polynomial size (in n) that computes the product of two n-bit numbers requires unbounded delay, such computations can be done in a neural network with constant delay. The authors improve some known results by showing that the product of two n-bit numbers and sorting of n n-bit numbers can both be computed by a polynomial-size neural network using only four unit delays, independent of n. Moreover, the weights of each threshold element in the neural networks require only O(log n)-bit (instead of n-bit) accuracy.
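    As a rough illustration of the underlying model (not the paper's construction), the sketch below implements a single linear threshold gate in Python and uses it to compare two n-bit numbers with weights of the form ±2^i, i.e. n-bit weight accuracy; the point of the result above is that for multiplication and sorting, O(log n)-bit weights suffice. The function names are illustrative.

    ```python
    # Minimal sketch of the linear threshold gate model (names are illustrative).
    # A gate outputs 1 iff the weighted sum of its inputs reaches the threshold.

    def threshold_gate(inputs, weights, threshold):
        """Linear threshold element: 1 if sum(w_i * x_i) >= threshold, else 0."""
        return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

    def greater_or_equal(x_bits, y_bits):
        """Decide X >= Y for two n-bit numbers (MSB first) with one threshold gate.
        The weights +/- 2^i need n-bit accuracy; the surveyed result shows that
        O(log n)-bit weights suffice for multiplication and sorting."""
        n = len(x_bits)
        weights = [2 ** (n - 1 - i) for i in range(n)] + \
                  [-(2 ** (n - 1 - i)) for i in range(n)]
        return threshold_gate(x_bits + y_bits, weights, 0)

    # Example: 13 >= 11 -> 1
    print(greater_or_equal([1, 1, 0, 1], [1, 0, 1, 1]))
    ```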

    Neural computation of arithmetic functions

    Get PDF
    A neuron is modeled as a linear threshold gate, and the network architecture considered is the layered feedforward network. It is shown how common arithmetic functions such as multiplication and sorting can be efficiently computed in a shallow neural network. Some known results are improved by showing that the product of two n-bit numbers and sorting of n n-bit numbers can be computed by a polynomial-size neural network using only four and five unit delays, respectively. Moreover, the weights of each threshold element in the neural networks require O(log n)-bit (instead of n-bit) accuracy. These results can be extended to more complicated functions such as multiple products, division, rational functions, and approximation of analytic functions.
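    The sketch below shows the standard rank-counting idea behind such shallow sorting circuits: all pairwise comparisons form one parallel layer and each element's rank is a simple sum of comparison bits in the next. It assumes distinct inputs and uses an ordinary comparison as a stand-in for a first-layer threshold gate; it is not the paper's specific five-delay construction.

    ```python
    # Rank-counting sort: the constant-depth idea used by shallow sorting networks.
    # Assumes distinct input values for simplicity.

    def compare_ge(x, y):
        # Stand-in for a first-layer threshold gate deciding x >= y.
        return 1 if x >= y else 0

    def rank_sort(values):
        n = len(values)
        # Layer 1: all pairwise comparison bits, computable in parallel.
        ge = [[compare_ge(values[i], values[j]) for j in range(n)] for i in range(n)]
        # Layer 2: rank of element i = number of other elements it dominates.
        ranks = [sum(ge[i][j] for j in range(n) if j != i) for i in range(n)]
        out = [None] * n
        for i, r in enumerate(ranks):
            out[r] = values[i]
        return out

    print(rank_sort([6, 2, 9, 4]))  # [2, 4, 6, 9]
    ```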

    Progress on Polynomial Identity Testing - II

    Full text link
    We survey the area of algebraic complexity theory, with the focus on the problem of polynomial identity testing (PIT). We discuss the key ideas that have gone into the results of the last few years. (Comment: 17 pages, 1 figure, survey)
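    For context, here is a minimal sketch of the randomized black-box identity test given by the Schwartz-Zippel lemma, the baseline that derandomization results in PIT aim to improve on; the function names and parameters are illustrative.

    ```python
    import random

    def probably_zero(poly, num_vars, degree_bound, trials=20):
        """Randomized black-box polynomial identity test (Schwartz-Zippel).
        `poly` is any callable evaluating the polynomial at an integer point.
        If the polynomial is nonzero of degree <= d, a random point from a set S
        is a non-root with probability >= 1 - d/|S|; repeating drives the error
        down. A return value of False is always correct."""
        sample_space = 100 * degree_bound + 1  # |S| much larger than the degree
        for _ in range(trials):
            point = [random.randint(0, sample_space) for _ in range(num_vars)]
            if poly(*point) != 0:
                return False   # definitely not the zero polynomial
        return True            # identically zero with high probability

    # (x + y)^2 - (x^2 + 2xy + y^2) is identically zero:
    print(probably_zero(lambda x, y: (x + y) ** 2 - (x * x + 2 * x * y + y * y), 2, 2))
    # x*y - x is not:
    print(probably_zero(lambda x, y: x * y - x, 2, 2))
    ```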

    Variational quantum simulation of general processes

    Full text link
    Variational quantum algorithms have been proposed to solve static and dynamic problems of closed many-body quantum systems. Here we investigate variational quantum simulation of three general types of tasks: generalised time evolution with a non-Hermitian Hamiltonian, linear algebra problems, and open quantum system dynamics. The algorithm for generalised time evolution provides a unified framework for variational quantum simulation. In particular, we show its application in solving linear systems of equations and matrix-vector multiplications by converting these algebraic problems into generalised time evolution. Meanwhile, assuming a tensor product structure of the matrices, we also propose another variational approach for these two tasks by combining variational real and imaginary time evolution. Finally, we introduce variational quantum simulation for open system dynamics. We variationally implement the stochastic Schrödinger equation, which consists of dissipative evolution and stochastic jump processes. We numerically test the algorithm with a six-qubit 2D transverse-field Ising model under dissipation. (Comment: 18 pages)
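    A purely classical toy analogue of one idea mentioned above, recasting a linear system as a dissipative time evolution whose fixed point is the solution, is sketched below; it is not the paper's variational quantum algorithm, only an illustration of the conversion under the assumption of a positive-definite matrix.

    ```python
    import numpy as np

    # Classical toy analogue: solve A x = b by integrating dx/dt = -(A x - b).
    # For positive-definite A the trajectory converges to the solution; a
    # variational ansatz would instead track this evolution on a quantum device.

    rng = np.random.default_rng(0)
    M = rng.standard_normal((4, 4))
    A = M @ M.T + 4 * np.eye(4)   # positive definite so the evolution converges
    b = rng.standard_normal(4)

    x = np.zeros(4)
    dt = 0.01                     # small explicit Euler step
    for _ in range(5000):
        x = x - dt * (A @ x - b)  # one step of the dissipative "time evolution"

    print(np.allclose(x, np.linalg.solve(A, b), atol=1e-6))
    ```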

    Why and When Can Deep -- but Not Shallow -- Networks Avoid the Curse of Dimensionality: a Review

    Get PDF
    The paper characterizes classes of functions for which deep learning can be exponentially better than shallow learning. Deep convolutional networks are a special case of these conditions, though weight sharing is not the main reason for their exponential advantage.
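    An illustrative example (not taken from the paper) of the compositional function class in question: each constituent depends on only two variables, so a deep network mirroring the binary tree only needs to approximate low-dimensional functions, whereas a shallow network must approximate the full high-dimensional function directly.

    ```python
    # Hypothetical compositional function on 8 variables built from 2-variable
    # constituents arranged as a binary tree; any smooth local function works.

    def h(a, b):
        return (a * b + a - b) / 2.0

    def compositional_f(x1, x2, x3, x4, x5, x6, x7, x8):
        return h(h(h(x1, x2), h(x3, x4)),
                 h(h(x5, x6), h(x7, x8)))

    print(compositional_f(1, 2, 3, 4, 5, 6, 7, 8))
    ```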