
    The combination of circle topology and leaky integrator neurons remarkably improves the performance of echo state network on time series prediction.

    Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning. Compared with the traditional random reservoir structure and classical sigmoid units, a simple circle topology and leaky integrator neurons offer clear advantages for reservoir computing. In this paper, we propose a new ESN model that combines a circle reservoir structure with leaky integrator units. Comparing the prediction capability on the Mackey-Glass chaotic time series of four ESN models (the classical ESN, the circle ESN, the traditional leaky integrator ESN, and the circle leaky integrator ESN), we find that the circle leaky integrator ESN performs significantly better than the other ESNs, reducing the predictive error by roughly two orders of magnitude. Moreover, this model approximates nonlinear dynamics and resists noise better than the conventional ESN and the ESNs with only a circle structure or only leaky integrator neurons. Our results show that combining circle topology with leaky integrator neurons markedly increases dynamical diversity while decreasing the correlation of reservoir states, which together account for the significant improvement in the computational performance of the ESN on time series prediction.
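
    As a minimal sketch of the kind of model described above, the following Python fragment builds a leaky integrator ESN whose reservoir matrix encodes a circle (ring) topology. The reservoir size, leak rate, spectral radius, ridge parameter, and the placeholder input signal are illustrative assumptions, not the values or data used in the paper.

    ```python
    import numpy as np

    # Minimal circle leaky-integrator ESN sketch (illustrative parameters only).
    rng = np.random.default_rng(0)
    n_res, leak, spectral_radius = 200, 0.3, 0.9      # assumed values

    # Circle topology: each reservoir unit feeds only its successor on a ring.
    W = np.zeros((n_res, n_res))
    for i in range(n_res):
        W[(i + 1) % n_res, i] = 1.0
    W *= spectral_radius       # ring eigenvalues lie on a circle of this radius

    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))

    def run_reservoir(u_seq):
        """Collect reservoir states for a 1-D input sequence using leaky integration."""
        x = np.zeros(n_res)
        states = []
        for u in u_seq:
            pre = np.tanh(W_in @ np.array([u]) + W @ x)
            x = (1.0 - leak) * x + leak * pre          # leaky integrator update
            states.append(x.copy())
        return np.array(states)

    # Train a ridge-regression readout to predict the next value of the series.
    u_train = np.sin(np.arange(1000) * 0.05)           # placeholder signal, not Mackey-Glass
    X = run_reservoir(u_train[:-1])
    y = u_train[1:]
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
    print("training MSE:", np.mean((X @ W_out - y) ** 2))
    ```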

    A priori data-driven multi-clustered reservoir generation algorithm for echo state network.

    Echo state networks (ESNs) with a multi-clustered reservoir topology achieve better reservoir computing performance and robustness than those with a random reservoir topology. However, such ESNs have a complex reservoir topology, which makes reservoir generation difficult. This study focuses on the reservoir generation problem when the ESN is used in environments where sufficient a priori data are available. Accordingly, an a priori data-driven multi-cluster reservoir generation algorithm is proposed. The a priori data are used to evaluate candidate reservoirs by computing the prediction precision and standard deviation of the resulting ESNs. Reservoirs are produced with a clustering method, and a candidate replaces the previous reservoir only if it evaluates better. The final reservoir is obtained when its evaluation score reaches a preset requirement. Prediction experiments on the Mackey-Glass chaotic time series show that the proposed generation algorithm gives ESNs extra prediction precision and increases the structural complexity of the network. Further experiments also reveal the values of the number of clusters and of the time window size that yield optimal performance. The information entropy of the reservoir reaches its maximum when the ESN attains its greatest precision.
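
    The generate-evaluate-replace loop described above can be sketched as follows. The clustering routine, connection probabilities, evaluation callback (e.g. validation error on the a priori data), and stopping threshold are all assumptions made for illustration, not the algorithm's actual settings.

    ```python
    import numpy as np

    def make_clustered_reservoir(n_res, n_clusters, rng,
                                 p_intra=0.2, p_inter=0.01):
        """Randomly generate a multi-clustered reservoir: dense links inside each
        cluster, sparse links between clusters (illustrative probabilities)."""
        labels = rng.integers(0, n_clusters, size=n_res)
        same = labels[:, None] == labels[None, :]
        prob = np.where(same, p_intra, p_inter)
        W = (rng.random((n_res, n_res)) < prob) * rng.uniform(-1, 1, (n_res, n_res))
        radius = max(abs(np.linalg.eigvals(W)))
        return W / radius * 0.9 if radius > 0 else W

    def generate_reservoir(evaluate, n_res=200, n_clusters=5,
                           target_score=0.05, max_iters=50, seed=0):
        """Keep proposing clustered reservoirs; a candidate replaces the current
        one only if the a priori data give it a better (lower) evaluation score."""
        rng = np.random.default_rng(seed)
        best_W, best_score = None, np.inf
        for _ in range(max_iters):
            W = make_clustered_reservoir(n_res, n_clusters, rng)
            score = evaluate(W)             # e.g. validation NRMSE on a priori data
            if score < best_score:
                best_W, best_score = W, score
            if best_score <= target_score:  # preset requirement reached
                break
        return best_W, best_score
    ```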

    FPGA Implementation of Self-Organized Spiking Neural Network Controller for Mobile Robots

    A spiking neural network, a computational model that processes information with spikes, is a good candidate for a mobile robot controller. In this paper, we present a novel mechanism for controlling mobile robots based on a self-organized spiking neural network (SOSNN) and introduce a method for implementing this SOSNN on an FPGA. The spiking neuron used is the Izhikevich model. A key feature of the controller is that it can simulate unconditioned reflexes (avoiding obstacles using infrared sensor signals) and conditioned reflexes (making correct choices in a multiple T-maze) through spike timing-dependent plasticity (STDP) learning and dopamine-receptor modulation. Experimental results show that the proposed controller is effective and easy to implement. The FPGA implementation method builds up the network from generic blocks designed in the MATLAB Simulink environment. The main characteristics of this solution are on-chip learning, high reconfigurability, and operation under real-time constraints. An extended analysis has been carried out on the hardware resources used to implement the whole SOSNN, as well as each individual component block.
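
    The abstract names the Izhikevich neuron and STDP as the core of the controller. The fragment below sketches both in plain Python; the regular-spiking neuron parameters, the injected current, and the STDP learning rates and time constants are illustrative assumptions rather than the values used in the paper or on the FPGA.

    ```python
    import numpy as np

    # Izhikevich neuron, regular-spiking parameters (assumed), 1 ms time step.
    a, b, c, d = 0.02, 0.2, -65.0, 8.0

    def izhikevich_step(v, u, I, dt=1.0):
        """Advance membrane potential v and recovery variable u by one step;
        returns the new state and whether the neuron spiked."""
        v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u = u + dt * a * (b * v - u)
        spiked = v >= 30.0
        if spiked:                      # spike-and-reset rule of the Izhikevich model
            v, u = c, u + d
        return v, u, spiked

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                    tau_plus=20.0, tau_minus=20.0, w_max=1.0):
        """Pair-based STDP: potentiate if the presynaptic spike precedes the
        postsynaptic spike, depress otherwise (illustrative constants)."""
        dt = t_post - t_pre
        if dt >= 0:
            w += a_plus * np.exp(-dt / tau_plus)
        else:
            w -= a_minus * np.exp(dt / tau_minus)
        return float(np.clip(w, 0.0, w_max))

    # Drive one neuron with a constant current and print its spike times.
    v, u, spikes = c, b * c, []
    for t in range(200):
        v, u, spiked = izhikevich_step(v, u, I=10.0)
        if spiked:
            spikes.append(t)
    print("spike times (ms):", spikes)
    ```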

    Inner-Learning Mechanism Based Control Scheme for Manipulator with Multitasking and Changing Load

    With the rapid development of robot technology and its applications, manipulators will face complex tasks and dynamic environments, which pose two control challenges: multitasking and changing load. In this paper, a novel multicontroller strategy is presented to meet these challenges. The proposed controller consists of three parts: subcontrollers, an inner-learning mechanism, and switching rules. Each subcontroller is designed with self-learning ability so that it can adapt to a changing load under a specific task. When a new task arrives, the switching rule selects the most suitable subcontroller as the working controller for the current task, replacing the previous one. The inner-learning mechanism makes the idle subcontrollers learn from the working controller when the load changes, so that switching causes a smaller tracking error than a traditional switched controller. Simulation experiments on a two-degree-of-freedom manipulator demonstrate the effectiveness of the proposed method.
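
    The switch-and-share structure of this scheme can be summarized in a short Python sketch. The toy PD-style subcontroller, its self-tuning rule, the blend-based inner learning, and the task-keyed switching rule below are simplified assumptions used only to illustrate the architecture, not the controller design of the paper.

    ```python
    import numpy as np

    class SubController:
        """Toy adaptive PD-like subcontroller (purely illustrative)."""
        def __init__(self, kp=20.0, kd=2.0):
            self.params = np.array([kp, kd])

        def compute(self, state, reference):
            err, derr = reference - state[0], -state[1]
            return self.params[0] * err + self.params[1] * derr

        def self_learn(self, state, reference, rate=1e-3):
            # Crude self-tuning: nudge the gains in proportion to the tracking error.
            err = reference - state[0]
            self.params += rate * abs(err) * np.array([1.0, 0.1])

        def learn_from(self, other, mix=0.05):
            # Inner learning: blend in a fraction of the working controller's gains.
            self.params += mix * (other.params - self.params)

    class MultiController:
        """One subcontroller per task, a switching rule, and inner learning."""
        def __init__(self, subcontrollers):
            self.subs = subcontrollers     # dict: task id -> SubController
            self.working_task = None

        def control(self, task_id, state, reference):
            if task_id != self.working_task:   # switching rule: use the task's own controller
                self.working_task = task_id
            working = self.subs[self.working_task]
            torque = working.compute(state, reference)
            working.self_learn(state, reference)       # adapt to the changing load
            for tid, sub in self.subs.items():         # inner-learning mechanism
                if tid != self.working_task:
                    sub.learn_from(working)
            return torque

    ctrl = MultiController({"reach": SubController(), "lift": SubController()})
    print(ctrl.control("reach", state=np.array([0.0, 0.0]), reference=0.5))
    ```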

    The NRMSE84 comparisons of prediction performance for four networks.


    The testMSEs comparisons of prediction performance for four networks.


    Comparison of testMSEs of LI-ESN and C-LI-ESN for the learning task τ = 30.

    The results are obtained from 20 independent realizations.