
    A Metric for Evaluating Neural Input Representation in Supervised Learning Networks

    Supervised learning has long been attributed to several feed-forward neural circuits within the brain, with particular attention paid to the cerebellar granular layer. The focus of this study is to evaluate the input activity representation of these feed-forward neural networks. The activity of cerebellar granule cells is conveyed by parallel fibers and translated into Purkinje cell activity, which constitutes the sole output of the cerebellar cortex. The learning process at this parallel-fiber-to-Purkinje-cell connection makes each Purkinje cell sensitive to a set of specific cerebellar states, which are roughly determined by the granule-cell activity during a certain time window. Each Purkinje cell becomes sensitive to particular neural input states and, consequently, the network operates as a function that generates a desired output for each provided input by means of supervised learning. However, not every set of Purkinje cell responses can be assigned to an arbitrary set of input states, owing to limitations inherent to the network's neurobiological substrate; that is, not all input-output mappings can be learned. A key limiting factor is the representation of the input states through granule-cell activity. The quality of this representation (e.g., in terms of heterogeneity) determines the capacity of the network to learn a varied set of outputs. Assessing the quality of this representation is useful when developing and studying models of these networks, in order to identify the neuron or network characteristics that enhance it. In this study we present an algorithm for quantitatively evaluating the level of compatibility/interference among a set of given cerebellar states according to their representation (granule-cell activation patterns), without the need to actually conduct simulations and network training. The algorithm input is a real-number matrix that codifies the activity level of every considered granule cell in each state. The capability of this representation to generate a varied set of outputs is evaluated geometrically, resulting in a real number that assesses the goodness of the representation.
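The abstract does not specify the geometric evaluation itself, so the sketch below is purely illustrative: it takes the same kind of input (a states × granule-cells real-number matrix) and computes one plausible geometric proxy for heterogeneity, the ratio of the smallest to the largest singular value. The function name and the choice of metric are assumptions, not the paper's algorithm.

```python
import numpy as np

def representation_quality(activity):
    """Illustrative geometric score for a states x granule-cells matrix.

    NOTE: hypothetical proxy, not the paper's metric. It measures how far
    the state vectors are from linear dependence via the ratio of smallest
    to largest singular value: 0 means some states fully interfere
    (linearly dependent rows), values near 1 mean a heterogeneous,
    well-separated representation.
    """
    activity = np.asarray(activity, dtype=float)
    s = np.linalg.svd(activity, compute_uv=False)
    if s[0] == 0.0:  # all-zero activity: no usable representation
        return 0.0
    return float(s[-1] / s[0])

# Two orthogonal states: maximally distinguishable representation.
print(representation_quality([[1.0, 0.0], [0.0, 1.0]]))
# Two identical states: complete interference between states.
print(representation_quality([[1.0, 0.0], [1.0, 0.0]]))
```

As in the abstract, the score is a single real number computed from the activity matrix alone, with no simulation or training required.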

    Active disturbance cancellation in nonlinear dynamical systems using neural networks

    A proposal for the use of a time-delay CMAC neural network for disturbance cancellation in nonlinear dynamical systems is presented. Appropriate modifications to the CMAC training algorithm are derived that allow convergent adaptation for a variety of secondary signal paths. Analytical bounds on the maximum learning gain are presented which guarantee convergence of the algorithm and provide insight into the necessary reduction in learning gain as a function of the system parameters. Effectiveness of the algorithm is evaluated through mathematical analysis, simulation studies, and experimental application of the technique on an acoustic duct laboratory model.
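For readers unfamiliar with the baseline the abstract modifies, here is a minimal sketch of a plain (Albus-style) CMAC trained by an LMS rule with learning gain beta, on scalar inputs. All names, sizes, and the sine target are illustrative assumptions; the paper's contribution (time-delay structure, secondary-path handling, and the derived gain bounds) is not reproduced here.

```python
import numpy as np

def active_cells(x, n_weights, g):
    """Indices of the g overlapping receptive fields active for x in [0, 1)."""
    base = int(x * (n_weights - g))
    return list(range(base, base + g))

def train_cmac(xs, targets, n_weights=64, g=8, beta=0.5, epochs=200):
    """LMS-style CMAC training; beta is the learning gain.

    The output error at each sample is shared equally across the g active
    cells. This plain update converges for roughly 0 < beta < 2; the paper
    derives tighter, system-dependent bounds once a secondary signal path
    sits between the network output and the measured error.
    """
    w = np.zeros(n_weights)
    for _ in range(epochs):
        for x, d in zip(xs, targets):
            idx = active_cells(x, n_weights, g)
            w[idx] += (beta / g) * (d - w[idx].sum())
    return w

def cmac_output(w, x, g=8):
    return w[active_cells(x, len(w), g)].sum()

# Learn one period of a sine on [0, 1).
xs = np.linspace(0.0, 1.0, 20, endpoint=False)
w = train_cmac(xs, np.sin(2.0 * np.pi * xs))
```

The point of the abstract's gain bounds is visible even in this sketch: making `beta` too large makes the per-sample correction overshoot and the weights diverge rather than converge.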

    A wavelet-based CMAC for enhanced multidimensional learning

    The CMAC (Cerebellar Model Articulation Controller) neural network has been successfully used in control systems and other applications for many years. The network structure is modular and associative, allowing for rapid learning convergence and ease of implementation in either hardware or software. The rate of convergence of the network is determined largely by the choice of the receptive field shape and the generalization parameter. This research contains a rigorous analysis of the rate of convergence of the standard CMAC, as well as the rate of convergence of networks using other receptive field shapes. The effects of decimation from state space to weight space are examined in detail. This analysis shows CMAC to be an adaptive lowpass filter, where the filter dynamics are governed by the generalization parameter. A more general CMAC is derived using wavelet-based receptive fields and a controllable decimation scheme that is capable of convergence at any frequency within the Nyquist limits. The flexible decimation structure facilitates the optimization of computation for complex multidimensional problems. The stability of the wavelet-based CMAC is also examined.
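To make the receptive-field comparison concrete, the sketch below contrasts the standard CMAC's binary rectangular field with one common wavelet shape, the "Mexican hat" (negative second derivative of a Gaussian). The specific wavelet is an assumption for illustration; the paper's analysis covers receptive field shapes more generally.

```python
import math

def rect_field(x, center, width):
    """Binary rectangular receptive field of the standard (Albus) CMAC."""
    return 1.0 if abs(x - center) <= width / 2.0 else 0.0

def mexican_hat_field(x, center, width):
    """An illustrative wavelet receptive field (the 'Mexican hat').

    Unlike the rectangular field, its response is graded and changes
    sign away from the center, which is what lets a wavelet-based CMAC
    represent higher-frequency content instead of acting purely as a
    lowpass filter.
    """
    t = (x - center) / width
    return (1.0 - t * t) * math.exp(-0.5 * t * t)

# Inside its support the rectangular field is flat; the wavelet field
# peaks at the center, crosses zero at |x - center| = width, and is
# negative beyond that.
print(rect_field(0.2, 0.0, 1.0), mexican_hat_field(0.2, 0.0, 1.0))
```

The lowpass interpretation in the abstract follows the same intuition: averaging a target over a flat field of width governed by the generalization parameter smooths away frequency content that a sign-changing wavelet field can retain.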