
    How hard is it to cross the room? -- Training (Recurrent) Neural Networks to steer a UAV

    This work explores the feasibility of steering a drone with a (recurrent) neural network, based on input from a forward-looking camera, in the context of a high-level navigation task. We set up a generic framework for training a network to perform navigation tasks based on imitation learning. It can be applied to both aerial and land vehicles. As a proof of concept we apply it to a UAV (Unmanned Aerial Vehicle) in a simulated environment, learning to cross a room containing a number of obstacles. So far, only feedforward neural networks (FNNs) have been used for UAV control. To cope with more complex tasks, we propose the use of recurrent neural networks (RNNs) instead, and successfully train an LSTM (Long Short-Term Memory) network to control a UAV. Vision-based control is a sequential prediction problem, known for its highly correlated input data. This correlation makes training a network hard, especially an RNN. To overcome this issue, we investigate an alternative sampling method during training, namely window-wise truncated backpropagation through time (WW-TBPTT). Furthermore, end-to-end training requires a large amount of data, which is often not available. We therefore compare the performance of retraining only the fully connected (FC) and LSTM control layers with that of networks trained end-to-end. Performing the relatively simple task of crossing a room already reveals important guidelines and good practices for training neural control networks. Different visualizations help to explain the behavior learned. Comment: 12 pages, 30 figures
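
    The abstract names WW-TBPTT but does not spell it out. Below is a minimal sketch of window-wise truncated backpropagation through time, assuming PyTorch, illustrative dimensions, and random tensors standing in for camera features and expert steering commands (none of these names come from the paper):

```python
import torch
import torch.nn as nn

# Illustrative dimensions; the paper's feature extractor and action
# space are not specified in the abstract.
FEAT_DIM, HIDDEN, N_ACTIONS, WINDOW = 128, 64, 3, 20

model = nn.LSTM(FEAT_DIM, HIDDEN, batch_first=True)
head = nn.Linear(HIDDEN, N_ACTIONS)
opt = torch.optim.Adam(list(model.parameters()) + list(head.parameters()), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_ww_tbptt(features, labels, epochs=10):
    """Window-wise truncated BPTT: sample fixed-length windows at random
    offsets so consecutive mini-batches are less correlated, and truncate
    the gradient at each window boundary."""
    T = features.shape[0]
    for _ in range(epochs):
        starts = torch.randperm(T - WINDOW)[: (T - WINDOW) // WINDOW]
        for s in starts:
            s = int(s)
            x = features[s : s + WINDOW].unsqueeze(0)   # (1, WINDOW, FEAT_DIM)
            y = labels[s : s + WINDOW]                  # (WINDOW,)
            out, _ = model(x)                           # fresh zero state per window
            logits = head(out.squeeze(0))               # (WINDOW, N_ACTIONS)
            loss = loss_fn(logits, y)
            opt.zero_grad()
            loss.backward()
            opt.step()

# Toy usage with random data standing in for camera features / expert actions.
feats = torch.randn(1000, FEAT_DIM)
acts = torch.randint(0, N_ACTIONS, (1000,))
train_ww_tbptt(feats, acts, epochs=2)
```

    Sampling windows at random offsets decorrelates consecutive mini-batches, while cutting the gradient at each window boundary keeps backpropagation through time tractable on long flight sequences.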

    Dynamic recurrent neural networks for stable adaptive control of wing rock motion

    Wing rock is a self-sustaining limit cycle oscillation (LCO) which occurs as the result of nonlinear coupling between the dynamic response of the aircraft and the unsteady aerodynamic forces. In this thesis, a dynamic recurrent RBF (Radial Basis Function) network control methodology is proposed to control the wing rock motion. A concept based on the properties of the Preisach hysteresis model is used in the design of the dynamic neural networks: the structure and memory mechanism of the Preisach model are analogous to the parallel connectivity and memory formation in RBF neural networks. The proposed dynamic recurrent neural network can add or prune neurons in the hidden layer according to growth criteria based on the ensemble-average memory formation of the Preisach model. The recurrent feature of the RBF network deals with the dynamic nonlinearities and endows the network with the temporal memory of the hysteresis model. The control of wing rock is a tracking problem: the trajectory starts from non-zero initial conditions and tends to zero as time goes to infinity. In the proposed neural control structure, the recurrent dynamic RBF network performs identification in order to approximate the unknown nonlinearities of the physical system from input-output data obtained from the wing rock phenomenon. The design of the RBF networks, together with the network controllers, is carried out in the discrete-time domain. The recurrent RBF networks employ two separate adaptation schemes: the RBF centres and widths are adjusted by an Extended Kalman Filter to obtain a minimal network size, while the output-layer weights are updated using an algorithm derived from Lyapunov stability analysis to ensure stable closed-loop control. The robustness of the recurrent RBF networks is also addressed. The effectiveness of the proposed dynamic recurrent neural control methodology is demonstrated through simulations that suppress the wing rock motion in an AFTI/F-16 testbed aircraft with a delta-wing configuration. The potential implementation as well as the practicality of the control methodology are also discussed.
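
    The EKF adaptation of centres and widths and the Lyapunov-derived weight update are the core of the thesis; neither is reproduced here. As a minimal sketch of the underlying identification structure only, here is a Gaussian RBF network with fixed centres and a simple error-driven online weight update (the dimensions and the toy plant are illustrative, not from the thesis):

```python
import numpy as np

class GaussianRBF:
    """Minimal Gaussian RBF network for online identification.
    Centres and widths are fixed in this sketch; the thesis adapts them
    with an Extended Kalman Filter and updates the output weights with a
    Lyapunov-derived law, neither of which is reproduced here."""

    def __init__(self, centers, width, lr=0.05):
        self.c = np.asarray(centers, dtype=float)   # (n_neurons, n_inputs)
        self.width = width
        self.w = np.zeros(len(self.c))              # output-layer weights
        self.lr = lr

    def _phi(self, x):
        # Gaussian activations over the distance to each centre.
        d2 = np.sum((self.c - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * self.width ** 2))

    def predict(self, x):
        return self.w @ self._phi(x)

    def update(self, x, target):
        """One online step: move the weights along the identification error."""
        phi = self._phi(x)
        err = target - self.w @ phi
        self.w += self.lr * err * phi
        return err

# Toy usage: identify a scalar nonlinearity from streaming (x, y) pairs.
rng = np.random.default_rng(0)
grid = np.linspace(-1, 1, 15).reshape(-1, 1)
net = GaussianRBF(grid, width=0.2)
for _ in range(2000):
    x = rng.uniform(-1, 1, size=1)
    net.update(x, np.sin(3 * x[0]))                 # stand-in plant nonlinearity
print(net.predict(np.array([0.5])))
```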

    Online Training of a Generalized Neuron with Particle Swarm Optimization

    Neural networks are used in a wide range of fields, including signal and image processing, modeling and control, and pattern recognition. Among the most common types of neural networks are the multilayer perceptron (MLP) and the recurrent neural network. Most of these networks consist of a large number of neurons and hidden layers, which results in long training times. A Generalized Neuron (GN) has a compact structure and overcomes the problem of long training time. Due to its simple structure and lower memory requirements, the GN is attractive for hardware implementations. This paper presents the online training of a GN with the Particle Swarm Optimization (PSO) algorithm. A comparative study of the GN and the MLP, both trained online with PSO, is presented for function approximation. GN-based identification of the Static VAR Compensator (SVC) dynamics in a 12-bus FACTS benchmark power system, trained online with PSO, is also presented.
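
    A minimal sketch of PSO-based training for function approximation follows. The Generalized Neuron's exact summation/product structure is not given in the abstract, so a plain sigmoidal neuron stands in for it, and the training here is batch rather than online; all names and constants are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def neuron(params, x):
    """Stand-in model: a single sigmoidal neuron with an output gain."""
    w, b, v = params[:-2], params[-2], params[-1]
    return v / (1.0 + np.exp(-(x @ w + b)))

def fitness(params, X, y):
    return np.mean((neuron(params, X) - y) ** 2)

def pso(X, y, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Standard global-best PSO over the model's parameter vector."""
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([fitness(p, X, y) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy function-approximation task.
X = rng.uniform(-2, 2, (200, 1))
y = np.tanh(X[:, 0])
best, err = pso(X, y, dim=3)   # one input weight + bias + output gain
print(err)
```

    Because PSO only needs fitness evaluations, no gradients, the same loop applies to a GN or any other compact model by swapping the stand-in neuron for the structure of interest.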

    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events far back in the input sequence, solving long-time-lag tasks where other RNN approaches fail. In this work we perform experiments with LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training the growing LSTM have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behavior of five controllers of the Central Nervous System has to be modelled. We compare the GLSTM results against other neural network approaches and against our earlier work applying conventional LSTM to the task at hand.
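
    The paper's GLSTM variants are only named, not specified. Below is a minimal sketch of one of them, cascade growth with full freezing of previous weights, assuming PyTorch and illustrative dimensions:

```python
import torch
import torch.nn as nn

class CascadeGLSTM(nn.Module):
    """Growing LSTM sketch: new LSTM blocks are appended in cascade and
    previously trained blocks are frozen. The paper also studies fully
    connected growth and partial freezing, not shown here."""

    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.blocks = nn.ModuleList([nn.LSTM(in_dim, hidden, batch_first=True)])
        self.head = nn.Linear(hidden, out_dim)
        self.hidden = hidden

    def grow(self, freeze_previous=True):
        # Optionally freeze everything trained so far...
        if freeze_previous:
            for p in self.blocks.parameters():
                p.requires_grad_(False)
        # ...then cascade a new block onto the previous block's output sequence.
        self.blocks.append(nn.LSTM(self.hidden, self.hidden, batch_first=True))

    def forward(self, x):
        for blk in self.blocks:
            x, _ = blk(x)          # each block consumes the previous hidden sequence
        return self.head(x)

# Toy usage: grow once, then optimize only the new block and the head.
net = CascadeGLSTM(in_dim=4, hidden=16, out_dim=1)
net.grow(freeze_previous=True)
x, y = torch.randn(8, 50, 4), torch.randn(8, 50, 1)
opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-3)
loss = nn.functional.mse_loss(net(x), y)
opt.zero_grad()
loss.backward()
opt.step()
```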

    On the validity of memristor modeling in the neural network literature

    An analysis of the literature shows that there are two types of non-memristive models that have been widely used in the modeling of so-called "memristive" neural networks. Here, we demonstrate that such models have nothing in common with the concept of memristive elements: they describe either non-linear resistors or certain bi-state systems, all of which are devices without memory. Therefore, the results presented in a significant number of publications are at least questionable, if not completely irrelevant to the actual field of memristive neural networks.
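
    For reference, the distinction the abstract relies on can be stated compactly using Chua and Kang's standard definition of a memristive system (added here for context; the notation is not taken from the paper):

```latex
% Current-controlled memristive system: the internal state x(t)
% is what gives the device its memory.
v(t) = R\bigl(x(t), i(t)\bigr)\, i(t), \qquad \dot{x}(t) = f\bigl(x(t), i(t)\bigr)

% A non-linear resistor, by contrast, has no state variable and hence no memory:
v(t) = R\bigl(i(t)\bigr)\, i(t)
```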