    Recurrent backpropagation and the dynamical approach to adaptive neural computation

    Error backpropagation in feedforward neural network models is a popular learning algorithm with roots in nonlinear estimation and optimization. It is routinely used to calculate error gradients in nonlinear systems with hundreds of thousands of parameters. However, the classical architecture for backpropagation has severe restrictions. The extension of backpropagation to networks with recurrent connections is reviewed. It is now possible to efficiently compute the error gradients for networks that have temporal dynamics, which opens applications to a host of problems in system identification and control.
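
    As an illustration of the gradient computation the abstract refers to, here is a minimal NumPy sketch of backpropagation through time for a small recurrent network. The tanh recurrence, linear readout, and squared-error loss are illustrative assumptions, not the architecture from the paper.

```python
import numpy as np

# Minimal sketch: error gradients in a recurrent network via
# backpropagation through time (tanh recurrence, linear readout,
# squared-error loss -- all assumptions for illustration).

rng = np.random.default_rng(0)
T, n_in, n_h = 5, 3, 4                      # sequence length, input size, hidden size
W = rng.normal(scale=0.1, size=(n_h, n_h))  # recurrent weights
U = rng.normal(scale=0.1, size=(n_h, n_in)) # input weights
V = rng.normal(scale=0.1, size=(1, n_h))    # readout weights

x = rng.normal(size=(T, n_in))              # input sequence
target = np.array([1.0])

# Forward pass: store hidden states for the backward sweep.
h = [np.zeros(n_h)]
for t in range(T):
    h.append(np.tanh(W @ h[-1] + U @ x[t]))
y = V @ h[-1]
loss = 0.5 * np.sum((y - target) ** 2)

# Backward pass: propagate the error gradient through time.
dW, dU = np.zeros_like(W), np.zeros_like(U)
dV = np.outer(y - target, h[-1])            # gradient w.r.t. readout weights
dh = V.T @ (y - target)                     # gradient w.r.t. final hidden state
for t in reversed(range(T)):
    dz = (1.0 - h[t + 1] ** 2) * dh         # through the tanh nonlinearity
    dW += np.outer(dz, h[t])
    dU += np.outer(dz, x[t])
    dh = W.T @ dz                           # pass gradient to the earlier time step
```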

    Face Recognition Using Complex Valued Backpropagation

    Face recognition remains an active area of biometric research. This study discusses the Complex-Valued Backpropagation algorithm for face recognition. Complex-Valued Backpropagation modifies the Real-Valued Backpropagation algorithm so that the weights and activation functions are complex-valued. The dataset used in this study consists of 250 images classified into 5 classes. The performance of face recognition using Complex-Valued Backpropagation is also compared with the Real-Valued Backpropagation algorithm. Experimental results show that Complex-Valued Backpropagation performs better than Real-Valued Backpropagation.
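
    To make the idea concrete, below is a minimal sketch of one gradient step for a single complex-valued layer with a "split" tanh activation (tanh applied separately to the real and imaginary parts). The layer shape, activation, and loss are illustrative assumptions; the paper's exact network is not reproduced here.

```python
import numpy as np

# One gradient step for a complex-valued linear layer with split-tanh
# activation. Shapes, activation, and loss are illustrative assumptions.

rng = np.random.default_rng(1)
n_in, n_out, lr = 4, 2, 0.1

W = (rng.normal(size=(n_out, n_in)) + 1j * rng.normal(size=(n_out, n_in))) * 0.1
x = rng.normal(size=n_in) + 1j * rng.normal(size=n_in)    # complex input
t = rng.normal(size=n_out) + 1j * rng.normal(size=n_out)  # complex target

# Forward pass with complex weights and split activation.
z = W @ x
a = np.tanh(z.real) + 1j * np.tanh(z.imag)

# Real-valued loss: 0.5 * sum |a - t|^2.
e = a - t
loss = 0.5 * np.sum(np.abs(e) ** 2)

# Backward pass: differentiate through the split activation per component.
delta = e.real * (1 - np.tanh(z.real) ** 2) \
        + 1j * (e.imag * (1 - np.tanh(z.imag) ** 2))

# Weight gradient uses the conjugated input -- the complex analogue of
# the real-valued outer-product rule.
W -= lr * np.outer(delta, np.conj(x))
```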

    Backpropagation training in adaptive quantum networks

    We introduce a robust, error-tolerant adaptive training algorithm for generalized learning paradigms in high-dimensional superposed quantum networks, or \emph{adaptive quantum networks}. The formalized procedure applies standard backpropagation training across a coherent ensemble of discrete topological configurations of individual neural networks, each of which is formally merged into an appropriate linear superposition within a predefined, decoherence-free subspace. Quantum parallelism facilitates simultaneous training and revision of the system within this coherent state space, resulting in accelerated convergence to a stable network attractor under subsequent iteration of the implemented backpropagation algorithm. Parallel evolution of linearly superposed networks incorporating backpropagation training provides quantitative, numerical indications for optimization of both single-neuron activation functions and optimal reconfiguration of whole-network quantum structure. Comment: Talk presented at "Quantum Structures - 2008", Gdansk, Poland.
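
    The quantum procedure itself cannot be run classically, but the ensemble structure it describes can be caricatured: the sketch below applies the same backpropagation step to several networks with different discrete topologies (here, hidden widths) and combines their outputs with normalized weights standing in for superposition amplitudes. This is a classical stand-in under stated assumptions, not the paper's algorithm.

```python
import numpy as np

# Classical toy only: backpropagation applied identically across an
# ensemble of discrete network topologies, with softmax weights as a
# stand-in for superposition amplitudes. All details are assumptions.

rng = np.random.default_rng(2)
X = rng.normal(size=(32, 3))                # toy inputs
y = (X.sum(axis=1) > 0).astype(float)       # toy binary targets

def init(width):
    return {"W1": rng.normal(scale=0.5, size=(width, 3)),
            "W2": rng.normal(scale=0.5, size=width)}

def forward(p, X):
    h = np.tanh(X @ p["W1"].T)              # hidden activations
    return h, 1 / (1 + np.exp(-h @ p["W2"]))

ensemble = [init(w) for w in (2, 4, 8)]     # discrete topological configurations
amps = np.ones(len(ensemble)) / len(ensemble)

for step in range(200):
    losses = []
    for p in ensemble:                      # identical backprop step per member
        h, out = forward(p, X)
        err = out - y                       # grad of cross-entropy w.r.t. logits
        p["W2"] -= 0.1 * h.T @ err / len(X)
        dh = np.outer(err, p["W2"]) * (1 - h ** 2)
        p["W1"] -= 0.1 * dh.T @ X / len(X)
        losses.append(np.mean(err ** 2))    # squared-error proxy for tracking
    # Reweight members toward lower loss (softmax stand-in for amplitudes).
    amps = np.exp(-np.array(losses) / 0.1)
    amps /= amps.sum()

# Combined prediction: amplitude-weighted mixture of member outputs.
pred = sum(a * forward(p, X)[1] for a, p in zip(amps, ensemble))
```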

    Jet Analysis by Neural Networks in High Energy Hadron-Hadron Collisions

    We study the possibility of employing neural networks to simulate jet clustering procedures in high energy hadron-hadron collisions. We concentrate our analysis on the Fermilab Tevatron energy and on the $k_\bot$ algorithm. We consider both a supervised multilayer feed-forward network trained by the backpropagation algorithm and unsupervised learning, where the neural network autonomously organizes the events into clusters. Comment: 9 pages, latex, 2 figures not included.
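
    For reference, the $k_\bot$ algorithm that the networks are trained to mimic clusters particles by repeatedly merging the pair with the smallest distance $d_{ij} = \min(k_{ti}^2, k_{tj}^2)\,\Delta R_{ij}^2 / R^2$, or promoting particle $i$ to a jet when its beam distance $d_{iB} = k_{ti}^2$ is smallest. A minimal sketch of these distance measures, assuming the inclusive formulation with radius parameter R (the paper's exact variant may differ):

```python
import numpy as np

# kt (k_perp) distance measures that drive sequential jet clustering.
# The inclusive formulation with radius parameter R is an assumption.

def kt_distances(kt, y, phi, R=1.0):
    """Pairwise kt distances d_ij and beam distances d_iB."""
    dphi = np.abs(phi[:, None] - phi[None, :])
    dphi = np.minimum(dphi, 2 * np.pi - dphi)        # wrap azimuthal angle
    dR2 = (y[:, None] - y[None, :]) ** 2 + dphi ** 2 # rapidity-azimuth separation
    d_ij = np.minimum(kt[:, None] ** 2, kt[None, :] ** 2) * dR2 / R ** 2
    d_iB = kt ** 2                                   # distance to the beam
    return d_ij, d_iB

# Toy usage: two nearby soft-ish particles and one far-away hard one.
kt = np.array([10.0, 8.0, 1.5])
y = np.array([0.0, 0.1, 2.0])
phi = np.array([0.0, 0.05, 3.0])
d_ij, d_iB = kt_distances(kt, y, phi)
```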