
    Innovative Methods in the Prediction and Analysis of Solar-Terrestrial Time Series

    The aim of this thesis is to explore the application of feed-forward neural networks, and other numerical methods, to the prediction and analysis of solar-terrestrial time series. The three time series under scrutiny are the sunspot number, the 10.7 cm solar flux and the geomagnetic Kp index. Each time series will be predicted and examined on time scales of days, months and years. As the work of the thesis unfolds, new perspectives on the time series of interest will be afforded, fuelling the prediction initiatives of the later chapters. New techniques for analysing time series are proposed and applied, as well as some new methods of using neural networks to make predictions.

    Chapter 1 reviews the three main fields of interest. The first field is the statistical theory of time series modelling. The basic concepts and terminology are introduced, followed by a review of various time series models and prediction schemes. The more recent topic of neural networks is the second reviewed field. Again the basic ideas are introduced, and the defining equations of feed-forward neural networks are stated, along with a complete description of the training algorithm known as back propagation. To link these first two fields, I suggest how the neural network can be viewed as a statistical time series model. Next, the current understanding of the solar-terrestrial environment is reviewed, starting with an overview of solar activity, with particular attention paid to the phenomena associated with the solar cycle. The terrestrial environment is then discussed, focussing on how the Sun and its activity affect the Earth's magnetic field. Finally, a selection of past attempts at predicting solar-terrestrial time series is described and discussed.

    Chapter 2 documents the analysis of the three time series. The work of this chapter is concerned with matters such as: the accumulation and formatting of the data; the search for periodicities; the nature of any periodicities; the non-stationarity of sunspot number; the stationary aspects of sunspot number; the auto-correlation of the time series; the cross-correlation of the time series, especially in relation to the Sun's influence on the Earth; and the use of the wavelet transform in analysing time series. Apart from being of intrinsic interest, this work provides a familiarity with the data that will directly and indirectly fuel the prediction initiatives of the following chapters.

    Chapter 3 is an exploration of feed-forward neural networks and back propagation. In this chapter some simple FFNNs performing simple tasks are investigated, as well as the (not so simple) training algorithm, back propagation. For several cases it is shown what networks can, and cannot, be expected to do because of limitations in the number of neurons or the activation function used. It is also shown that back propagation is not always reliable as a training algorithm, as it sometimes fails completely to train networks to perform tasks that they should, in theory, be able to perform. An important new method, that of analytic training, is also introduced. This method shows how to "train" a neural network to perform any analytic function by constructing and solving a set of linear equations.

    Chapter 4 further bridges the gap between neural network methods and the statistical models of time series. In this chapter, various artificial time series are predicted using neural networks, and in a few cases analytic training is used to prescribe the minimum requirements for a network to be able to predict a given class and order of statistical time series model. The ability to compare theory and practice makes the results of this chapter very interesting. At the end of the chapter, the problem of delayed prediction is highlighted. Delayed predictions are predictions in which events in the time series (such as peaks or troughs) are predicted late. (Abstract shortened by ProQuest.)
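    The sliding-window scheme the thesis applies, a feed-forward network trained by back propagation to map the last few values of a series onto the next one, can be sketched as follows. This is a minimal NumPy illustration on a synthetic cycle-like series; the series, window length and network size are assumptions, not the thesis's data or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cyclic series standing in for a solar-terrestrial index (assumption).
t = np.arange(400)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(t.size)

# Sliding-window inputs: predict x[t] from the previous `lag` values.
lag = 10
X = np.stack([series[i:i + lag] for i in range(series.size - lag)])
y = series[lag:]

# One-hidden-layer feed-forward network trained by back propagation.
n_hidden = 8
W1 = 0.1 * rng.standard_normal((lag, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal(n_hidden)
b2 = 0.0

losses = []
lr = 0.01
for epoch in range(200):
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Back propagation: gradients of the mean-squared error.
    gW2 = h.T @ err / len(y)
    gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ gh / len(y)
    gb1 = gh.mean(axis=0)
    W1 -= lr * gW1
    b1 -= lr * gb1
    W2 -= lr * gW2
    b2 -= lr * gb2
```

    Plain gradient descent suffices here; the point is only the window-to-next-value structure, and the training loss falls steadily on this toy problem.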

    Deep learning delay coordinate dynamics for chaotic attractors from partial observable data

    A common problem in time series analysis is to predict dynamics with only scalar or partial observations of the underlying dynamical system. For data on a smooth compact manifold, Takens' theorem proves that a time-delayed embedding of the partial state is diffeomorphic to the attractor, although for chaotic and highly nonlinear systems learning these delay coordinate mappings is challenging. We utilize deep artificial neural networks (ANNs) to learn discrete time maps and continuous time flows of the partial state. Given training data for the full state, we also learn a reconstruction map. Thus, predictions of a time series can be made from the current state and several previous observations, with embedding parameters determined from time series analysis. The state space for time evolution is of comparable dimension to reduced-order manifold models. These are advantages over recurrent neural network models, which require a high-dimensional internal state or additional memory terms and hyperparameters. We demonstrate the capacity of deep ANNs to predict chaotic behavior from a scalar observation on a manifold of dimension three via the Lorenz system. We also consider multivariate observations on the Kuramoto-Sivashinsky equation, where the observation dimension required for accurately reproducing dynamics increases with the manifold dimension via the spatial extent of the system.
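    The delay-coordinate construction at the heart of this abstract can be illustrated in a few lines on the Lorenz example it cites. Here a 1-nearest-neighbour lookup stands in for the deep ANN, and the delay, embedding dimension and step size are illustrative choices, not the paper's.

```python
import numpy as np

# Integrate the Lorenz system (standard parameters) with an RK4 step.
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt = 0.01
s = np.array([1.0, 1.0, 1.0])
traj = []
for _ in range(6000):
    k1 = lorenz(s)
    k2 = lorenz(s + dt / 2 * k1)
    k3 = lorenz(s + dt / 2 * k2)
    k4 = lorenz(s + dt * k3)
    s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    traj.append(s)
x = np.array(traj)[1000:, 0]     # scalar observation, transient discarded

# Delay-coordinate embedding: v(t) = (x(t), x(t - tau), x(t - 2*tau)).
tau, dim = 10, 3
N = x.size - tau * (dim - 1) - 1
V = np.stack([x[tau * (dim - 1) + i - tau * np.arange(dim)] for i in range(N)])
target = x[tau * (dim - 1) + 1 : tau * (dim - 1) + 1 + N]   # one step ahead

# A 1-nearest-neighbour predictor stands in for the learned delay map.
query, truth = V[-1], target[-1]
pred = target[np.argmin(np.linalg.norm(V[:-1] - query, axis=1))]
```

    On a densely sampled attractor the nearest delay vector carries the dynamics forward, which is exactly the structure the paper's networks learn as a smooth map.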

    PNNARMA model: an alternative to phenomenological models in chemical reactors

    This paper is focused on the development of non-linear neural models able to provide appropriate predictions when acting as process simulators. Parallel identification models can be used for this purpose. However, this work shows that when the parameters of parallel identification models are estimated using multilayer feed-forward networks, the approximation of dynamic systems may not be suitable. The solution proposed in this work consists of building up parallel models using a particular recurrent neural network. This network makes it possible to identify the parameter sets of the parallel model in order to generate process simulators. Hence, it is possible to guarantee better dynamic predictions. The dynamic behaviour of the heat transfer fluid temperature in a jacketed chemical reactor has been selected as a case study. The results suggest that parallel models based on the recurrent neural network proposed in this work can be seen as an alternative to phenomenological models for simulating the dynamic behaviour of the heating/cooling circuits.
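    The point that distinguishes a parallel model, namely that as a simulator it must feed back its own predictions rather than measured outputs, can be sketched with a linear-in-parameters one-step model in place of the paper's recurrent network. The toy plant and the feature choice are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the jacketed-reactor temperature dynamics.
def plant(y, u):
    return 0.8 * y + 0.2 * np.tanh(u)

u = rng.uniform(-2, 2, 300)
y = np.zeros(301)
for k in range(300):
    y[k + 1] = plant(y[k], u[k])

# Identify a one-step model y[k+1] = f(y[k], u[k]) by least squares on the
# features (y, tanh(u)) -- a linear-in-parameters stand-in for the RNN.
Phi = np.column_stack([y[:-1], np.tanh(u)])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

# Parallel (free-run) simulation: the model feeds back its OWN predictions,
# which is exactly the situation a process simulator faces.
y_sim = np.zeros(301)
for k in range(300):
    y_sim[k + 1] = theta @ np.array([y_sim[k], np.tanh(u[k])])
```

    Because the toy plant lies inside the model class, the free-run trajectory tracks the process exactly; with an imperfect one-step model, the feedback of prediction errors is what degrades parallel simulation, which is the failure mode the paper's recurrent training addresses.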

    Applications of recurrent neural networks in batch reactors. Part I: NARMA modelling of the dynamic behaviour of the heat transfer fluid

    This paper is focused on the development of nonlinear models, using artificial neural networks, able to provide appropriate predictions when acting as process simulators. The dynamic behaviour of the heat transfer fluid temperature in a jacketed chemical reactor has been selected as a case study. Different structures of NARMA (Non-linear ARMA) models have been studied. The experimental results have allowed a comparison to be made between the different neural approaches and a first-principles model. The best neural results are obtained using a parallel model structure based on a recurrent neural network architecture, which guarantees better dynamic approximations than currently employed neural models. The results suggest that parallel models built with recurrent networks can be seen as an alternative to phenomenological models for simulating the dynamic behaviour of heating/cooling circuits, which change from installation to installation.

    Phase models and clustering in networks of oscillators with delayed coupling

    We consider a general model for a network of oscillators with time-delayed, circulant coupling. We use the theory of weakly coupled oscillators to reduce the system of delay differential equations to a phase model where the time delay enters as a phase shift. We use the phase model to study the existence and stability of cluster solutions. Cluster solutions are phase-locked solutions where the oscillators separate into groups. Oscillators within a group are synchronized, while those in different groups are phase-locked with respect to each other. We give model-independent existence and stability results for symmetric cluster solutions. We show that the presence of the time delay can lead to the coexistence of multiple stable clustering solutions. We apply our analytical results to a network of Morris-Lecar neurons and compare these results with numerical continuation and simulation studies.
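    The reduction the abstract describes, with the delay entering the phase model only as a phase shift, is visible in the smallest case of two identical oscillators: writing alpha = omega * tau for the shift, the phase difference obeys dphi/dt = -2K cos(alpha) sin(phi), so the two-cluster (anti-phase) state is stable whenever cos(alpha) < 0. A minimal simulation, with illustrative parameter values:

```python
import numpy as np

# Two identical phase oscillators; the delay tau appears only as the
# phase shift alpha = omega * tau in the reduced phase model.
omega, K = 1.0, 0.5
alpha = np.pi          # cos(alpha) < 0: anti-phase (2-cluster) state is stable

theta = np.array([0.3, 0.0])   # start near the (now unstable) in-phase state
dt = 0.01
for _ in range(5000):
    d0 = omega + K * np.sin(theta[1] - theta[0] - alpha)
    d1 = omega + K * np.sin(theta[0] - theta[1] - alpha)
    theta = theta + dt * np.array([d0, d1])

# Phase difference converges to pi: the oscillators split into two clusters.
phase_diff = (theta[0] - theta[1]) % (2 * np.pi)
```

    Sweeping alpha through pi/2 flips the stability between the in-phase and anti-phase states, which is the delay-induced coexistence and exchange of stable clusterings the paper analyses in general circulant networks.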

    Modelling and control of chaotic processes through their Bifurcation Diagrams generated with the help of Recurrent Neural Network models: Part 1—simulation studies

    Many real-world processes tend to be chaotic and do not lend themselves to satisfactory analytical modelling. It is shown here that for such chaotic processes, represented through short chaotic noisy time series, a multi-input multi-output recurrent neural network model can be built which is capable of capturing the process trends and predicting future values from any given starting condition. It is further shown that the recurrent neural network model achieves this capability when it is trained to a very low value of mean squared error. Such a model can then be used for constructing the bifurcation diagram of the process, leading to the determination of desirable operating conditions. Further, this multi-input multi-output model makes the process accessible for control using open-loop/closed-loop approaches, bifurcation control, etc. All these studies have been carried out using a low-dimensional discrete chaotic system, the Hénon map, as a representative of some real-world processes.
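    The bifurcation-diagram construction that the paper automates with a trained recurrent model can be done directly for the Hénon map itself. The sketch below sweeps the parameter a at fixed b = 0.3 and collects post-transient iterates, one column of the diagram per a; the grid and iteration counts are illustrative choices.

```python
import numpy as np

B = 0.3  # fixed Hénon parameter b

def henon_attractor(a, n_transient=1000, n_keep=200):
    """Iterate the Hénon map x' = 1 - a*x^2 + y, y' = b*x and return
    post-transient x values: one column of the bifurcation diagram."""
    x, y = 0.0, 0.0
    for _ in range(n_transient):
        x, y = 1 - a * x ** 2 + y, B * x
    xs = []
    for _ in range(n_keep):
        x, y = 1 - a * x ** 2 + y, B * x
        xs.append(x)
    return np.array(xs)

# Sweep a: plotting each column against a gives the bifurcation diagram,
# from a stable fixed point through period doubling into chaos near a = 1.4.
a_grid = np.linspace(0.0, 1.4, 281)
diagram = {a: henon_attractor(a) for a in a_grid}
```

    Running the identified recurrent model in the same free-iteration mode, instead of the exact map, is precisely how the paper generates the diagram and reads off desirable operating conditions.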