
    A new adaptive backpropagation algorithm based on Lyapunov stability theory for neural networks

    A new adaptive backpropagation (BP) algorithm based on Lyapunov stability theory for neural networks is developed in this paper. A candidate Lyapunov function V(k) of the tracking error between the output of the neural network and the desired reference signal is chosen first, and the weights of the network are then updated, from the output layer to the input layer, so that ΔV(k) = V(k) - V(k-1) < 0. By Lyapunov stability theory, the output tracking error then converges asymptotically to zero. Unlike gradient-based BP training algorithms, the new Lyapunov adaptive BP algorithm does not search for the global minimum along the cost-function surface in weight space; instead, it constructs an energy surface with a single global minimum through the adaptive adjustment of the weights as time goes to infinity. Even when the neural network is subject to bounded input disturbances, their effects can be eliminated and asymptotic error convergence can still be obtained. The new Lyapunov adaptive BP algorithm is then applied to the design of an adaptive filter in a simulation example, which shows fast error convergence and strong robustness to large bounded input disturbances.
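    The following is a minimal sketch of the Lyapunov-style idea described above, applied to a single linear adaptive filter: the weights are updated so that the a-posteriori tracking error is a contraction of the previous error, which makes V(k) = e(k)^2 strictly decreasing. The contraction factor kappa, the normalized update direction, and the regularizer eps are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

# Hedged sketch of a Lyapunov-style update for a single linear adaptive
# filter y(k) = w^T x(k).  `kappa`, the normalized update direction, and
# `eps` are illustrative assumptions, not the paper's exact construction.

def lyapunov_filter_step(w, x, d, e_prev, kappa=0.5, eps=1e-8):
    """Choose w(k) so the a-posteriori error satisfies e(k) = kappa * e(k-1),
    hence V(k) = e(k)^2 < V(k-1) whenever e(k-1) != 0."""
    e_prior = d - w @ x                        # error before updating the weights
    target = kappa * e_prev                    # error we want after the update
    w_new = w + x * (e_prior - target) / (x @ x + eps)
    return w_new, d - w_new @ x                # a-posteriori error ~= target

# Toy usage: track d(k) = w_true^T x(k) plus a small bounded disturbance.
rng = np.random.default_rng(0)
w_true = np.array([0.7, -1.2, 0.4])
w, e = np.zeros(3), 1.0
for k in range(50):
    x = rng.normal(size=3)
    d = w_true @ x + 0.01 * rng.uniform(-1, 1)   # bounded input disturbance
    w, e = lyapunov_filter_step(w, x, d, e)
print("final |tracking error|:", abs(e))
```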

    Efficient approaches for escaping higher order saddle points in non-convex optimization

    Local search heuristics for non-convex optimization are popular in applied machine learning. In general, however, it is hard to guarantee that such algorithms even converge to a local minimum, because of the complicated saddle-point structures that arise in high dimensions. Many functions have degenerate saddle points at which first- and second-order derivatives cannot distinguish them from local optima. In this paper we use higher-order derivatives to escape these saddle points: we design the first efficient algorithm guaranteed to converge to a third-order local optimum (existing techniques are at most second order). We also show that it is NP-hard to extend this further to finding fourth-order local optima.
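    As a toy illustration of why higher-order information is needed (a hedged sketch, not the algorithm from the paper): at the origin of f(x, y) = x^3 + y^2 the gradient vanishes and the Hessian is positive semidefinite, so first- and second-order tests are inconclusive, yet probing the cubic term along the Hessian's null direction exposes a descent direction.

```python
import numpy as np

# Hedged sketch (not the paper's algorithm): first- and second-order tests are
# inconclusive at a degenerate saddle, but a third-order probe along the
# Hessian's null direction finds descent.  The function, step size t, and the
# probing scheme are illustrative assumptions.

def f(z):
    x, y = z
    return x**3 + y**2           # at (0, 0): gradient = 0, Hessian = diag(0, 2)

def grad(z):
    x, y = z
    return np.array([3 * x**2, 2 * y])

def hess(z):
    x, y = z
    return np.array([[6 * x, 0.0], [0.0, 2.0]])

z0 = np.array([0.0, 0.0])
g, H = grad(z0), hess(z0)
eigvals, eigvecs = np.linalg.eigh(H)
print("gradient:", g)
print("Hessian eigenvalues:", eigvals)   # [0, 2]: looks like a (degenerate) minimum

# Probe the cubic term along each (near-)null eigendirection of the Hessian.
t = 1e-2
for lam, u in zip(eigvals, eigvecs.T):
    if abs(lam) < 1e-8:
        for s in (u, -u):
            if f(z0 + t * s) < f(z0) - 1e-12:
                print("third-order descent direction:", s)   # here: the -x direction
```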

    Spontaneous symmetry breaking and the formation of columnar structures in the primary visual cortex II --- Local organization of orientation modules

    Self-organization of the orientation-wheels observed in the visual cortex is discussed from the viewpoint of topology. We argue, in a generalized model of Kohonen's feature mappings, that the existence of the orientation-wheels is a consequence of the Riemann-Hurwitz formula from topology. Along the same lines, we estimate the partition function of the model and show that, regardless of the total number N of orientation modules per hypercolumn, the modules self-organize, without fine-tuning of parameters, into a definite number of orientation-wheels per hypercolumn if N is large.
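    A minimal sketch of a Kohonen-type orientation map is given below, assuming a square cortical sheet and the standard (cos 2θ, sin 2θ) encoding of orientation; the grid size, learning-rate and neighbourhood schedules are illustrative choices, not the generalized model analysed in the paper.

```python
import numpy as np

# Hedged sketch: a minimal Kohonen feature map over a 2D cortical sheet whose
# feature is stimulus orientation, encoded as (cos 2θ, sin 2θ).  Grid size and
# the learning-rate / neighbourhood schedules are illustrative assumptions.

rng = np.random.default_rng(1)
L = 32                                             # cortical sheet is L x L units
W = rng.normal(scale=0.1, size=(L, L, 2))          # orientation feature per unit
ii, jj = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")

for step in range(20000):
    theta = rng.uniform(0, np.pi)                            # random oriented stimulus
    v = np.array([np.cos(2 * theta), np.sin(2 * theta)])
    d2 = ((W - v) ** 2).sum(axis=-1)
    bi, bj = np.unravel_index(d2.argmin(), d2.shape)         # best-matching unit
    sigma = 4.0 * np.exp(-step / 8000)                       # shrinking neighbourhood
    eta = 0.1 * np.exp(-step / 8000)                         # shrinking learning rate
    h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma**2))
    W += eta * h[..., None] * (v - W)

pref = 0.5 * np.arctan2(W[..., 1], W[..., 0])      # preferred-orientation map;
print(pref.shape)                                  # pinwheel-like singularities emerge
```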