155 research outputs found

    Competition and collaboration in cooperative coevolution of Elman recurrent neural networks for time-series prediction

    Collaboration enables weak species to survive in an environment where different species compete for limited resources. Cooperative coevolution (CC) is a nature-inspired optimization method that divides a problem into subcomponents and evolves them in genetic isolation. Problem decomposition is an important aspect of using CC for neuroevolution: CC employs different problem decomposition methods to break the neural network training problem into subcomponents, and different decomposition methods have features that are helpful at different stages of the evolutionary process. Since multiple subpopulations are used to represent the problem, CC needs adaptation, collaboration, and competition. This paper presents a competitive CC method for training recurrent neural networks for chaotic time-series prediction. Two instances of the competitive method are proposed, each employing a different problem decomposition method to enforce island-based competition. The results show that the proposed methods improve performance in most cases when compared with standalone CC and other methods from the literature.
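
    A minimal sketch of the island-based competition idea, assuming a toy error function and a simple mutate-and-select step in place of the actual CC runs with different decompositions; all names and sizes below are illustrative, not the paper's implementation:

```python
# Minimal sketch of island-based competition in cooperative coevolution.
# Each "island" would normally run CC with a different problem decomposition;
# here a toy (1+1)-style step stands in for one CC generation, and the sphere
# function stands in for the recurrent-network training error (assumptions).
import numpy as np

rng = np.random.default_rng(0)
DIM = 20  # stand-in for the number of network weights


def training_error(w):
    """Toy stand-in for the RNN prediction error on a time series."""
    return float(np.sum(w ** 2))


def evolve_one_generation(best, step=0.1):
    """One mutation-and-select step, standing in for a CC generation."""
    child = best + step * rng.normal(size=best.shape)
    return child if training_error(child) < training_error(best) else best


# Two islands, imagined to use different decomposition methods.
islands = [rng.normal(size=DIM) for _ in range(2)]

for generation in range(200):
    islands = [evolve_one_generation(b) for b in islands]
    if generation % 20 == 0:  # competition: weaker island adopts the leader
        errors = [training_error(b) for b in islands]
        winner = int(np.argmin(errors))
        for i in range(len(islands)):
            if i != winner:
                islands[i] = islands[winner].copy()

print("best error:", min(training_error(b) for b in islands))
```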

    Cooperative coevolution of Elman recurrent neural networks for chaotic time series prediction

    Cooperative coevolution decomposes a problem into subcomponents and employs evolutionary algorithms to solve them. Cooperative coevolution has been effective for evolving neural networks. The problem decomposition method used in cooperative coevolution determines how a neural network is decomposed and encoded, which affects performance. A good problem decomposition method should provide enough diversity and also group interacting variables, which in a neural network are the synapses. Neural networks have shown promising results in chaotic time series prediction. This work employs two problem decomposition methods for training Elman recurrent neural networks on chaotic time series problems. The Mackey-Glass, Lorenz and Sunspot time series are used to demonstrate the performance of the cooperative neuro-evolutionary methods. The results show improvement in accuracy when compared to some of the methods from the literature.
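
    One commonly used decomposition is neuron level, where all weights feeding into a given neuron form one subcomponent. A minimal sketch for an Elman network; the sizes and the flat weight layout are illustrative assumptions, not the encoding used in the paper:

```python
# Minimal sketch of neuron-level problem decomposition for an Elman network:
# all weights feeding into a given hidden or output neuron form one
# subcomponent, expressed as index groups into a flat weight vector.
import numpy as np


def elman_decomposition(n_in, n_hidden, n_out):
    """Return a list of index arrays, one subcomponent per neuron."""
    subcomponents, offset = [], 0
    # Hidden neuron: its input weights, recurrent (context) weights and bias.
    for _ in range(n_hidden):
        size = n_in + n_hidden + 1
        subcomponents.append(np.arange(offset, offset + size))
        offset += size
    # Output neuron: its hidden-to-output weights and bias.
    for _ in range(n_out):
        size = n_hidden + 1
        subcomponents.append(np.arange(offset, offset + size))
        offset += size
    return subcomponents, offset  # offset == total number of weights


groups, n_weights = elman_decomposition(n_in=1, n_hidden=5, n_out=1)
print(len(groups), "subcomponents over", n_weights, "weights")
```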

    Cooperative neuro-evolution of Elman recurrent networks for tropical cyclone wind-intensity prediction in the South Pacific region

    Climate change issues are continuously on the rise, and the need to build models and software systems for the management of natural disasters such as cyclones is increasing. Cyclone wind-intensity prediction seeks efficient models to forecast wind intensification in tropical cyclones, which can be used as a means of taking precautionary measures. If the wind intensity is determined with high precision a few hours prior, evacuation and further precautionary measures can take place. Neural networks have become popular as efficient tools for forecasting. Recent work on neuro-evolution of Elman recurrent neural networks showed promising performance on benchmark problems. This paper employs a cooperative coevolution method for training Elman recurrent neural networks for cyclone wind-intensity prediction in the South Pacific region. The results show very promising prediction performance across different parameters of the time series data reconstruction.
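
    The reconstruction parameters referred to above are typically the embedding dimension and time lag of a time-delay embedding. A minimal sketch; the placeholder signal, dimension and lag values are illustrative assumptions:

```python
# Minimal sketch of time-series data reconstruction (time-delay embedding):
# embedding dimension D and time lag T are the reconstruction parameters.
import numpy as np


def embed(series, dimension, lag):
    """Build (input window, next value) pairs via a time-delay embedding."""
    X, y = [], []
    for t in range((dimension - 1) * lag, len(series) - 1):
        X.append([series[t - i * lag] for i in range(dimension - 1, -1, -1)])
        y.append(series[t + 1])
    return np.array(X), np.array(y)


wind_intensity = np.sin(np.linspace(0, 20, 500))  # placeholder signal
X, y = embed(wind_intensity, dimension=4, lag=2)
print(X.shape, y.shape)  # one-step-ahead training pairs for the network
```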

    Shape and Time Distortion Loss for Training Deep Time Series Forecasting Models

    This paper addresses the problem of time series forecasting for non-stationary signals and multiple future steps prediction. To handle this challenging task, we introduce DILATE (DIstortion Loss including shApe and TimE), a new objective function for training deep neural networks. DILATE aims at accurately predicting sudden changes, and explicitly incorporates two terms supporting precise shape and temporal change detection. We introduce a differentiable loss function suitable for training deep neural nets, and provide a custom back-prop implementation for speeding up optimization. We also introduce a variant of DILATE, which provides a smooth generalization of temporally-constrained Dynamic Time Warping (DTW). Experiments carried out on various non-stationary datasets reveal the very good behaviour of DILATE compared to models trained with the standard Mean Squared Error (MSE) loss function, and also to DTW and variants. DILATE is also agnostic to the choice of the model, and we highlight its benefit for training fully connected networks as well as specialized recurrent architectures, showing its capacity to improve over state-of-the-art trajectory forecasting approaches.
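
    The shape term of such a loss builds on a soft (differentiable) variant of DTW. Below is an illustrative numpy sketch of a soft-DTW forward recursion, not the authors' implementation; the temporal distortion term and the custom backward pass mentioned above are omitted, and the smoothing parameter gamma is an assumption:

```python
# Illustrative sketch of the shape ingredient of a DILATE-style loss:
# a soft-DTW forward recursion with softmin smoothing parameter gamma.
import numpy as np


def soft_min(values, gamma):
    """Smoothed minimum: -gamma * log(sum(exp(-v / gamma)))."""
    values = np.asarray(values) / -gamma
    m = values.max()
    return -gamma * (m + np.log(np.exp(values - m).sum()))


def soft_dtw(prediction, target, gamma=0.1):
    """Soft-DTW between two 1-D sequences (shape term)."""
    n, m = len(prediction), len(target)
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (prediction[i - 1] - target[j - 1]) ** 2
            R[i, j] = cost + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]


pred = np.array([0.0, 0.1, 0.8, 1.0, 0.2])
true = np.array([0.0, 0.0, 0.9, 1.0, 0.1])
print("shape term (soft-DTW):", soft_dtw(pred, true))
```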

    Application of cooperative neuro-evolution of Elman recurrent networks for a two-dimensional cyclone track prediction for the South Pacific region

    This paper presents a two-dimensional time series prediction approach for cyclone track prediction using cooperative neuro-evolution of Elman recurrent networks in the South Pacific region. The latitude and longitude of cyclone tracks over the cyclone lifetime, for the past three decades, are taken into consideration to build a robust forecasting system. The proposed method performs one-step-ahead prediction of the cyclone position, which is essentially a two-dimensional time series prediction problem. The results show that the Elman recurrent network is able to achieve very good accuracy in predicting the tracks, which can be used as a means of taking precautionary measures.
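
    A minimal sketch of how such a two-dimensional one-step-ahead problem can be framed as supervised windows of past (latitude, longitude) pairs; the window length and the synthetic track below are illustrative assumptions:

```python
# Minimal sketch of framing cyclone-track forecasting as two-dimensional
# one-step-ahead prediction: each input is a window of past (latitude,
# longitude) pairs and the target is the next position.
import numpy as np


def make_track_dataset(track, window):
    """track: array of shape (T, 2) with (latitude, longitude) per time step."""
    X, y = [], []
    for t in range(window, len(track)):
        X.append(track[t - window:t].ravel())  # flattened past positions
        y.append(track[t])                     # next (lat, lon) to predict
    return np.array(X), np.array(y)


t = np.linspace(0, 1, 100)
synthetic_track = np.column_stack([-15 + 5 * t, 170 - 20 * t])  # toy path
X, y = make_track_dataset(synthetic_track, window=4)
print(X.shape, y.shape)  # (96, 8) inputs, (96, 2) targets
```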

    Modified Neuron-Synapse level problem decomposition method for Cooperative Coevolution of Feedforward Networks for Time Series Prediction

    Complex problems have been solved efficiently by decomposing them into subcomponents. Even combining distinct problem decomposition methods has shown good results in time series prediction. Applying different problem decomposition methods at different stages of evolving a network can combine their strengths to better solve the problem at hand. Hybrid versions of two distinct problem decomposition methods have shown promising results in the past. In this paper, a modified version of the recently introduced neuron-synapse level problem decomposition is proposed, using feedforward neural networks for time series prediction. The results show that the proposed modified model achieves better results on more datasets than its previous version. In some cases the proposed method also outperforms several other methods from the literature.
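
    The abstract does not detail the modification itself, so the sketch below only shows the cooperative-coevolution evaluation step that all such decomposition methods share: an individual from one subpopulation is scored by combining it with the current best individuals of the other subcomponents. The toy error function and the three-group decomposition are assumptions:

```python
# Minimal sketch of the cooperative-coevolution evaluation step: assemble a
# full weight vector from one candidate and the best of the other groups.
import numpy as np


def network_error(weights):
    """Toy stand-in for the feedforward network's prediction error."""
    return float(np.sum(weights ** 2))


def cooperative_fitness(individual, component_index, best_of_each,
                        groups, n_weights):
    """Combine the candidate with the best collaborators and score it."""
    full = np.zeros(n_weights)
    for k, idx in enumerate(groups):
        full[idx] = individual if k == component_index else best_of_each[k]
    return network_error(full)


# Three subcomponents of four weights each (illustrative decomposition).
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
best_of_each = [np.random.randn(4) for _ in groups]
candidate = np.random.randn(4)
print(cooperative_fitness(candidate, component_index=1,
                          best_of_each=best_of_each,
                          groups=groups, n_weights=12))
```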

    Problem Decomposition and Adaptation in Cooperative Neuro-Evolution

    One way to train neural networks is to use evolutionary algorithms such as cooperative coevolution - a method that decomposes the network's learnable parameters into subsets, called subcomponents. Cooperative coevolution gains advantage over other methods by evolving particular subcomponents independently from the rest of the network. Its success depends strongly on how the problem decomposition is carried out. This thesis suggests new forms of problem decomposition, based on a novel and intuitive choice of modularity, and examines in detail at what stage and to what extent the different decomposition methods should be used. The new methods are evaluated by training feedforward networks to solve pattern classification tasks, and by training recurrent networks to solve grammatical inference problems. Efficient problem decomposition methods group interacting variables into the same subcomponents. We examine the methods from the literature and provide an analysis of the nature of the neural network optimization problem in terms of interacting variables. We then present a novel problem decomposition method that groups interacting variables and that can be generalized to neural networks with more than a single hidden layer. We then incorporate local search into cooperative neuro-evolution. We present a memetic cooperative coevolution method that takes into account the cost of employing local search across several sub-populations. The optimization process changes during evolution in terms of diversity and interacting variables. To address this, we examine the adaptation of the problem decomposition method during the evolutionary process. The results in this thesis show that the proposed methods improve performance in terms of optimization time, scalability and robustness. As a further test, we apply the problem decomposition and adaptive cooperative coevolution methods for training recurrent neural networks on chaotic time series problems. The proposed methods show better performance in terms of accuracy and robustness.

    Theory and Practice of Computing with Excitable Dynamics

    Reservoir computing (RC) is a promising paradigm for time series processing. In this paradigm, the desired output is computed by combining measurements of an excitable system that responds to time-dependent exogenous stimuli. The excitable system is called a reservoir and measurements of its state are combined using a readout layer to produce a target output. The power of RC is attributed to an emergent short-term memory in dynamical systems and has been analyzed mathematically for both linear and nonlinear dynamical systems. The theory of RC treats only the macroscopic properties of the reservoir, without reference to the underlying medium it is made of. As a result, RC is particularly attractive for building computational devices using emerging technologies whose structure is not exactly controllable, such as self-assembled nanoscale circuits. RC has lacked a formal framework for performance analysis and prediction that goes beyond memory properties. To provide such a framework, here a mathematical theory of memory and information processing in ordered and disordered linear dynamical systems is developed. This theory analyzes the optimal readout layer for a given task. The focus of the theory is a standard model of RC, the echo state network (ESN). An ESN consists of a fixed recurrent neural network that is driven by an external signal. The dynamics of the network are then combined linearly with readout weights to produce the desired output. The readout weights are calculated using linear regression. Using an analysis of regression equations, the readout weights can be calculated using only the statistical properties of the reservoir dynamics, the input signal, and the desired output. The readout layer weights can be calculated from a priori knowledge of the desired function to be computed and the weight matrix of the reservoir. This formulation explicitly depends on the input weights, the reservoir weights, and the statistics of the target function. This formulation is used to bound the expected error of the system for a given target function. The effects of input-output correlation and complex network structure in the reservoir on the computational performance of the system have been mathematically characterized. Far from the chaotic regime, ordered linear networks exhibit a homogeneous decay of memory in different dimensions, which keeps the input history coherent. As disorder is introduced in the structure of the network, memory decay becomes inhomogeneous along different dimensions, causing decoherence in the input history and degradation in task-solving performance. Close to the chaotic regime, the ordered systems show loss of temporal information in the input history, and therefore inability to solve tasks. However, by introducing disorder and therefore heterogeneous decay of memory, the temporal information of the input history is preserved and the task-solving performance is recovered. Thus, for systems at the edge of chaos, disordered structure may enhance temporal information processing. Although the current framework only applies to linear systems, in principle it can be used to describe the properties of physical reservoir computing, e.g., photonic RC using short coherence-length light.
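
    A minimal echo state network sketch matching the description above: a fixed random recurrent reservoir driven by an input signal, with readout weights fitted by (ridge) linear regression. The reservoir size, spectral-radius scaling, ridge coefficient and toy signal are illustrative assumptions:

```python
# Minimal echo state network: fixed random reservoir, linear-regression readout.
import numpy as np

rng = np.random.default_rng(1)
N_RES, RIDGE = 200, 1e-6

# Fixed input and reservoir weights, scaled to a spectral radius below 1.
W_in = rng.uniform(-0.5, 0.5, size=(N_RES, 1))
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))


def run_reservoir(u):
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(N_RES)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in[:, 0] * u_t)
        states.append(x.copy())
    return np.array(states)


# One-step-ahead prediction of a toy signal via ridge-regression readout.
u = np.sin(np.arange(1000) * 0.1)
X, y = run_reservoir(u)[:-1], u[1:]
W_out = np.linalg.solve(X.T @ X + RIDGE * np.eye(N_RES), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```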

    A Novel Approach for Modeling Neural Responses to Joint Perturbations Using the NARMAX Method and a Hierarchical Neural Network

    The human nervous system is an ensemble of connected neuronal networks. Modeling and system identification of the human nervous system help us understand how the brain processes sensory input and controls responses at the systems level. This study proposes an advanced approach, based on a hierarchical neural network and a non-linear system identification method, to model neural activity in the nervous system in response to an external somatosensory input. The proposed approach incorporates basic concepts of the Non-linear AutoRegressive Moving Average model with eXogenous input (NARMAX) and neural networks to acknowledge non-linear closed-loop neural interactions. Different from the commonly used polynomial NARMAX method, the proposed approach replaces the polynomial non-linear terms with a hierarchical neural network. The hierarchical neural network is built on known neuroanatomical connections and corresponding transmission delays in neural pathways. The proposed method is applied to an experimental dataset in which cortical activities from ten young able-bodied individuals were extracted from electroencephalographic signals while mechanical perturbations were applied to their wrist joints. The results yielded by the proposed method were compared with those obtained by the polynomial NARMAX and Volterra methods, evaluated by the variance accounted for (VAF). Both the proposed and polynomial NARMAX methods yielded much better modeling results than the Volterra model. Furthermore, the proposed method modeled cortical responses with a mean VAF of 69.35% for a three-step-ahead prediction, which is significantly better than the VAF from a polynomial NARMAX model (mean VAF 47.09%). This study provides a novel approach for precise modeling of cortical responses to sensory input. The results indicate that incorporating knowledge of neuroanatomical connections in building a realistic model greatly improves the performance of system identification of the human nervous system.
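
    A minimal sketch of the NARMAX-with-neural-network idea described above: the prediction of y(t) is a non-linear function of lagged outputs and lagged exogenous inputs, with a small neural network in place of the polynomial expansion. The lag orders, network size and synthetic signals are illustrative assumptions, and training of the weights is omitted:

```python
# Sketch of a NARMAX-style regressor with a neural network non-linearity.
import numpy as np

rng = np.random.default_rng(2)
NY, NU = 3, 3          # output and input lag orders
HIDDEN = 16            # hidden units replacing the polynomial terms


def narmax_regressors(y, u, ny=NY, nu=NU):
    """Stack [y(t-1..t-ny), u(t-1..t-nu)] rows and the targets y(t)."""
    start = max(ny, nu)
    X = [np.concatenate([y[t - ny:t][::-1], u[t - nu:t][::-1]])
         for t in range(start, len(y))]
    return np.array(X), y[start:]


# Untrained one-hidden-layer network standing in for the non-linear map.
W1 = rng.normal(size=(NY + NU, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 1))


def predict(X):
    return (np.tanh(X @ W1) @ W2).ravel()


u = rng.normal(size=500)                     # e.g. wrist-perturbation input
y = np.convolve(u, [0.5, 0.3, 0.1])[:500]    # toy "cortical" response
X, targets = narmax_regressors(y, u)
print(X.shape, predict(X).shape, targets.shape)
```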