
    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to let the net remember significant events from far back in the input sequence, which allows it to solve long-time-lag tasks where other RNN approaches fail. In this work we performed experiments with LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTMs have been compared; they include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behaviour of five controllers of the Central Nervous System has to be modelled. We compare the growing LSTM results against other neural network approaches and against our earlier work applying conventional LSTM to the task at hand.
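
    As a rough illustration of the growing idea above (not the authors' exact GLSTM), the Python sketch below shows a plain LSTM step plus a grow operation that appends new memory blocks and can mark the earlier ones as frozen; the sizes, initialisation, and freezing scheme are illustrative assumptions.

import numpy as np

# Minimal sketch, assuming a plain single-layer LSTM whose hidden state can be
# enlarged block by block; the initialisation, naming, and freezing scheme are
# illustrative assumptions, not the paper's exact GLSTM.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingLSTM:
    def __init__(self, n_in, n_hidden, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.n_in, self.n_hidden = n_in, n_hidden
        # One weight matrix and bias per gate: input, forget, output, cell candidate.
        self.W = {g: 0.1 * self.rng.standard_normal((n_hidden, n_in + n_hidden))
                  for g in ("i", "f", "o", "g")}
        self.b = {g: np.zeros(n_hidden) for g in ("i", "f", "o", "g")}
        self.frozen = np.zeros(n_hidden, dtype=bool)  # mask consulted by a training loop (not shown)

    def step(self, x, h, c):
        """One LSTM time step: gates decide what each memory cell keeps."""
        z = np.concatenate([x, h])
        i = sigmoid(self.W["i"] @ z + self.b["i"])    # input gate
        f = sigmoid(self.W["f"] @ z + self.b["f"])    # forget gate
        o = sigmoid(self.W["o"] @ z + self.b["o"])    # output gate
        g = np.tanh(self.W["g"] @ z + self.b["g"])    # cell candidate
        c = f * c + i * g                             # memory cell update
        h = o * np.tanh(c)
        return h, c

    def grow(self, n_new=1, freeze_old=True):
        """Append n_new memory blocks; optionally mark earlier blocks as frozen."""
        if freeze_old:
            self.frozen[:] = True
        old_h, self.n_hidden = self.n_hidden, self.n_hidden + n_new
        for gate in self.W:
            W_old = self.W[gate]
            W_new = 0.1 * self.rng.standard_normal((self.n_hidden, self.n_in + self.n_hidden))
            W_new[:old_h, :self.n_in] = W_old[:, :self.n_in]                   # keep old input weights
            W_new[:old_h, self.n_in:self.n_in + old_h] = W_old[:, self.n_in:]  # keep old recurrent weights
            self.W[gate] = W_new
            self.b[gate] = np.concatenate([self.b[gate], np.zeros(n_new)])
        self.frozen = np.concatenate([self.frozen, np.zeros(n_new, dtype=bool)])

    After grow(), the hidden and cell state vectors passed to step() have to be re-created at the new size; a training loop (not shown here) would skip updates for the rows flagged in frozen.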

    MLP and Elman recurrent neural network modelling for the TRMS

    This paper presents a detailed investigation of system identification using artificial neural networks (ANNs). The main goal of this work is to emphasise the potential benefits of this architecture for real system identification. Among the most prevalent networks are multi-layered perceptron NNs trained with the Levenberg-Marquardt (LM) algorithm and Elman recurrent NNs. These methods are used for the identification of a twin rotor multi-input multi-output system (TRMS). The TRMS can be perceived as a static test rig for an air vehicle with formidable control challenges. Modelling of the nonlinear aerodynamic function is therefore needed and is carried out in both the time and frequency domains based on observed input and output data. Experimental results obtained from a laboratory set-up confirm the viability and effectiveness of the proposed methodology.
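
    As a minimal sketch of the recurrent half of this setup, the Python snippet below builds an Elman network whose context units store the previous hidden activations and uses it for one-step-ahead prediction; the layer sizes, the toy excitation signal, and the absence of a training loop are assumptions rather than the paper's actual setup.

import numpy as np

# Minimal sketch, assuming an Elman network used for one-step-ahead prediction
# of a single plant output from the plant input; sizes and data are toy values.

class ElmanNet:
    def __init__(self, n_in, n_hidden, n_out, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W_xh = 0.1 * rng.standard_normal((n_hidden, n_in))
        self.W_ch = 0.1 * rng.standard_normal((n_hidden, n_hidden))  # context -> hidden
        self.W_hy = 0.1 * rng.standard_normal((n_out, n_hidden))
        self.context = np.zeros(n_hidden)  # copy of the previous hidden activations

    def step(self, x):
        h = np.tanh(self.W_xh @ x + self.W_ch @ self.context)
        self.context = h.copy()            # fed back on the next time step
        return self.W_hy @ h

# Identification setup: predict the plant output y(t) from the excitation u(t).
net = ElmanNet(n_in=1, n_hidden=8, n_out=1)
u = np.sin(0.1 * np.arange(200))           # toy input signal standing in for TRMS data
y_pred = np.array([net.step(np.array([ut]))[0] for ut in u])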

    Neuro-Controller Design by Using the Multifeedback Layer Neural Network and the Particle Swarm Optimization

    In the present study, a novel neuro-controller is suggested for hard disk drive (HDD) systems as well as other nonlinear dynamic systems, using the Multifeedback-Layer Neural Network (MFLNN) proposed in recent years. In neuro-controller design problems, derivative-based training methods such as back-propagation and Levenberg-Marquardt (LM) require the reference values of the neural network's output or the Jacobian of the dynamic system during training; the connection weights of the MFLNN employed in the present work are therefore updated with the Particle Swarm Optimization (PSO) algorithm, which does not need such information. The PSO method is modified with some alterations to improve on the performance of standard PSO. The MFLNN-PSO controller is first applied to different nonlinear dynamical systems and afterwards to an HDD as a real system. Simulation results demonstrate the effectiveness of the proposed controller on the control of dynamic and HDD systems.
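
    The core of this gradient-free idea can be sketched as follows: every PSO particle holds a complete parameter vector for the controller and is scored only by a closed-loop cost, so no Jacobian of the plant is needed. The toy plant, the two-parameter stand-in controller, and all constants below are assumptions, not the paper's MFLNN or its modified PSO.

import numpy as np

# Minimal sketch of gradient-free controller tuning with standard PSO.

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions = candidate parameter vectors
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        costs = np.array([cost(p) for p in x])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = x[better], costs[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

def tracking_cost(theta):
    """Squared tracking error of a toy first-order plant under a PI-like control law."""
    kp, ki = theta
    y, integ, err_sum = 0.0, 0.0, 0.0
    for _ in range(100):
        e = 1.0 - y                # step reference of 1.0
        integ += e
        u = kp * e + ki * integ    # stand-in for the neuro-controller output
        y += 0.1 * (-y + u)        # simple first-order plant
        err_sum += e * e
    return err_sum

best_params, best_cost = pso(tracking_cost, dim=2)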

    The Integration of Connectionism and First-Order Knowledge Representation and Reasoning as a Challenge for Artificial Intelligence

    Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning paradigms like logic programming in such a way that the strengths of either paradigm are retained. Current state-of-the-art research, however, falls far short of this ultimate goal. As one of the main obstacles to be overcome we perceive the question of how symbolic knowledge can be encoded by means of connectionist systems: satisfactory answers to this will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems. (In Proceedings of INFORMATION'2004, Tokyo, Japan, to appear; 12 pages.)
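
    As a concrete, if only propositional, illustration of how symbolic knowledge can be encoded in a connectionist system, the classical core method computes the immediate-consequence operator T_P of a definite logic program with a two-layer threshold network; the toy program in the sketch below is an assumption chosen for illustration, and the first-order case discussed in the paper is considerably harder.

import numpy as np

# Minimal sketch of the propositional core method: one hidden unit per clause,
# one output unit per atom, all units simple thresholds.

atoms = ["p", "q", "r"]
clauses = [("q", []), ("r", ["q"]), ("p", ["q", "r"])]   # program: q.  r :- q.  p :- q, r.

idx = {a: i for i, a in enumerate(atoms)}
W_hidden = np.zeros((len(clauses), len(atoms)))
thresh = np.zeros(len(clauses))
for c, (head, body) in enumerate(clauses):
    for b in body:
        W_hidden[c, idx[b]] = 1.0
    thresh[c] = len(body)              # a clause unit fires iff its whole body is true

W_out = np.zeros((len(atoms), len(clauses)))
for c, (head, _) in enumerate(clauses):
    W_out[idx[head], c] = 1.0          # an atom unit fires if any clause with that head fires

def t_p(interpretation):
    """One application of T_P, computed by the threshold network."""
    hidden = (W_hidden @ interpretation >= thresh).astype(float)
    return (W_out @ hidden >= 1.0).astype(float)

# Iterating the network from the empty interpretation reaches the least model {q, r, p}.
i = np.zeros(len(atoms))
for _ in range(len(atoms) + 1):
    i = t_p(i)
print(dict(zip(atoms, i)))             # {'p': 1.0, 'q': 1.0, 'r': 1.0}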