50,257 research outputs found

    Investigating the Predictability of Chaotic Time-Series Data using Reservoir Computing, Deep-Learning and Machine-Learning on the Short-, Medium- and Long-Term Pricing of Bitcoin and Ethereum.

    This study investigates the predictability of chaotic time-series data using reservoir computing (Echo State Network), deep learning (LSTM), and machine learning (Linear, Bayesian, ElasticNetCV, Random Forest, and XGBoost regression, and a neural network) on the short-term (1-day-out), medium-term (5-day-out), and long-term (30-day-out) pricing of Bitcoin and Ethereum. A range of machine-learning tools is used to perform feature selection by permutation importance over technical indicators for each cryptocurrency, ensuring the best dataset per currency while reducing noise within the models. The predictability of the two chaotic time series is then compared to identify the best-fitting model. The models are fine-tuned: hyperparameters, the network design within the LSTM, and the reservoir size within the Echo State Network are adjusted to improve accuracy and speed. This research highlights the effect of trends within a cryptocurrency on predictive models; the models are optimized with hyperparameter tuning and evaluated across the two currencies. It is found that the datasets for the two cryptocurrencies differ, owing to their different permutation importances, but this does not affect the overall predictability of the models: the same models are the top performers for short- and medium-term predictions. This research confirms that although chaotic data can yield positive results for short- and medium-term prediction, for long-term prediction technical-analysis-based prediction is not sufficient.
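    The feature-selection step described above can be sketched with a generic permutation-importance routine: shuffle one feature column at a time and measure how much the model's error grows. This is a minimal illustration, not the study's actual pipeline; the toy model, data, and metric are assumptions.

```python
import numpy as np

def permutation_importance(X, y, predict, metric, n_repeats=5, seed=0):
    """Importance of each feature = mean increase in error when that
    feature's column is randomly shuffled (breaking its link to y)."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # permute one feature in place
            scores.append(metric(y, predict(Xp)))
        importances[j] = np.mean(scores) - baseline
    return importances

# Toy demonstration: the target depends only on the first feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] + 0.01 * rng.normal(size=200)
w, *_ = np.linalg.lstsq(X, y, rcond=None)  # simple linear stand-in model
mse = lambda t, p: float(np.mean((t - p) ** 2))
imp = permutation_importance(X, y, lambda A: A @ w, mse)
```

    Low-importance features (those whose shuffling barely changes the error) can then be dropped per cryptocurrency, which is how noise reduction per dataset would follow.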

    Comparative analysis of neural networks techniques to forecast Global Horizontal Irradiance

    Due to the continuously increasing importance of renewable energy sources as an alternative to fossil fuels, to counter air pollution and global warming, the prediction of Global Horizontal Irradiance (GHI), one of the main parameters determining the solar energy production of photovoltaic systems, is an attractive topic nowadays. Solar irradiance is determined by deterministic factors (i.e., the position of the sun) and stochastic factors (i.e., the presence of clouds). Since the stochastic element is difficult to model, this problem can benefit from machine-learning techniques, like artificial neural networks. This work proposes a methodology to forecast GHI over short-term (from 15 to 60 min) and mid-term (from 60 to 120 min) time horizons. For this purpose, we designed, optimised, and compared four neural network architectures for time-series forecasting, respectively based on: i) Non-Linear Autoregressive, ii) Feed-Forward, iii) Long Short-Term Memory, and iv) Echo State Network models. The original dataset, consisting of GHI values sampled every 15 min, was pre-processed by applying different filtering techniques. Our analysis compares the performance of the proposed neural networks, identifying the best in terms of error rate and forecast horizon. It highlights that the clear-sky index is the preferred filtering technique, giving large improvements in dataset pre-processing, and that the Echo State Network gives the best accuracy.
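    The clear-sky-index filtering mentioned above normalises measured GHI by a clear-sky model's output, so the deterministic diurnal trend is removed and the network only has to learn the stochastic (cloud) component. A minimal sketch, assuming a hypothetical sine-shaped clear-sky curve in place of a real clear-sky model:

```python
import numpy as np

def clear_sky_index(ghi, ghi_clear, eps=1.0):
    """k* = measured GHI / modeled clear-sky GHI.
    Values near 1 mean clear sky; values < 1 indicate cloud attenuation.
    Near sunrise/sunset (ghi_clear ~ 0) the index is set to 0."""
    return np.where(ghi_clear > eps, ghi / np.maximum(ghi_clear, eps), 0.0)

# Toy 15-min day profile: sine-shaped clear-sky curve, uniform 40% cloud cover.
t = np.linspace(0.0, np.pi, 48)          # sunrise..sunset in 15-min steps
ghi_clear = 900.0 * np.sin(t)            # hypothetical clear-sky model, W/m^2
ghi = 0.6 * ghi_clear                    # "measured" GHI under clouds
k_star = clear_sky_index(ghi, ghi_clear)
```

    The forecaster is then trained on k_star instead of raw GHI, and predictions are multiplied back by the clear-sky curve.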

    Robust Reservoir Computing Approaches for Predicting Cardiac Electrical Dynamics

    Computational modeling of cardiac electrophysiological signaling is of vital importance in understanding, preventing, and treating life-threatening arrhythmias. Traditionally, mathematical models incorporating physical principles have been used to study cardiac dynamical systems and can generate mechanistic insights, but their predictions are often quantitatively inaccurate due to model complexity, the lack of observability in the system, and variability within individuals and across the population. In contrast, machine-learning techniques can learn directly from training data, which in this context are time series of observed state variables, without prior knowledge of the system dynamics. The reservoir computing framework, a learning paradigm derived from recurrent neural network concepts and most commonly realized as an echo state network (ESN), offers a streamlined training process and holds promise to deliver more accurate predictions than mechanistic models. Accordingly, this research aims to develop robust ESN-based forecasting approaches for nonlinear cardiac electrodynamics, and thus presents the first application of machine-learning, and deep-learning techniques in particular, for modeling the complex electrical dynamics of cardiac cells and tissue. To accomplish this goal, we completed a set of three projects. (i) We compared the performance of available mainstream techniques for prediction with that of the baseline ESN approach along with several new ESN variants we proposed, including a physics-informed hybrid ESN. (ii) We proposed a novel integrated approach, the autoencoder echo state network (AE-ESN), that can accurately forecast the long-term future dynamics of cardiac electrical activity. This technique takes advantage of the best characteristics of both gated recurrent neural networks and ESNs by integrating a long short-term memory (LSTM) autoencoder into the ESN framework to improve reliability and robustness. 
(iii) We extended the long-term prediction of cardiac electrodynamics from a single cardiac cell to the tissue level, where, in addition to the temporal information, the data include spatial dimensions and diffusive coupling. Building on the main design idea of the AE-ESN, a convolutional autoencoder was equipped with an ESN to create the Conv-ESN technique, which can process spatiotemporal data and effectively capture the temporal dependencies between data samples. Using these techniques, we forecast cardiac electrodynamics for a variety of datasets obtained in both in silico and in vitro experiments. We found that the proposed integrated approaches provide robust and computationally efficient techniques that can successfully predict the dynamics of electrical activity in cardiac cells and tissue with higher prediction accuracy than mainstream deep-learning approaches commonly used for predicting temporal data. On the application side, our approaches provide accurate forecasts over clinically useful time periods that could allow prediction of electrical problems with sufficient time for intervention and thus may support new types of treatments for some kinds of heart disease.

    Product Reservoir Computing: Time-Series Computation with Multiplicative Neurons

    Echo state networks (ESNs), a type of reservoir computing (RC) architecture, are efficient and accurate artificial neural systems for time-series processing and learning. An ESN consists of a core of recurrent neural networks, called a reservoir, with a small number of tunable parameters to generate a high-dimensional representation of an input, and a readout layer that is easily trained using regression to produce a desired output from the reservoir states. Certain computational tasks involve real-time calculation of high-order time correlations, which requires a nonlinear transformation either in the reservoir or in the readout layer. A traditional ESN employs a reservoir with sigmoid or tanh neurons. In contrast, some types of biological neurons obey response curves that can be described as a product unit rather than a sum and threshold. Inspired by this class of neurons, we introduce an RC architecture with a reservoir of product nodes for time-series computation. We find that the product RC shows many properties of standard ESNs, such as short-term memory and nonlinear capacity. On standard benchmarks for chaotic prediction tasks, the product RC matches the performance of a standard nonlinear ESN while being more amenable to mathematical analysis. Our study provides evidence that such networks are powerful in highly nonlinear tasks owing to the high-order statistics generated by the recurrent product-node reservoir.
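    The standard ESN recipe this abstract builds on (a fixed random tanh reservoir plus a regression-trained readout) can be sketched in a few lines. This is a generic illustration, not the paper's product-node variant; the reservoir size, spectral radius, and sine-wave task are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                            # reservoir size
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # spectral radius < 1 (echo state property)
Win = rng.uniform(-0.5, 0.5, size=N)               # fixed random input weights

def run_reservoir(u):
    """Drive the untrained tanh reservoir with input series u, collect states."""
    x = np.zeros(N)
    states = []
    for ut in u:
        x = np.tanh(W @ x + Win * ut)              # sum-and-squash neuron update
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave: only the readout is trained.
u = np.sin(np.linspace(0, 8 * np.pi, 400))
X = run_reservoir(u[:-1])[50:]                     # drop a 50-step washout
y = u[1:][50:]                                     # next-step targets
ridge = 1e-6
Wout = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)  # ridge readout
pred = X @ Wout
```

    The product-node variant would replace the tanh sum-and-threshold update with multiplicative units while keeping this same train-only-the-readout structure.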

    Training Echo State Networks with Regularization through Dimensionality Reduction

    In this paper we introduce a new framework to train an Echo State Network to predict real-valued time series. The method consists of projecting the output of the internal layer of the network onto a space of lower dimensionality before training the output layer to learn the target task. Notably, we enforce a regularization constraint that leads to better generalization capabilities. We evaluate the performance of our approach on several benchmark tests, using different techniques to train the readout of the network, and achieve superior predictive performance with the proposed framework. Finally, we provide insight into the effectiveness of the implemented mechanics through a visualization of the trajectory in phase space, relying on the methodologies of nonlinear time-series analysis. By applying our method to well-known chaotic systems, we provide evidence that the lower-dimensional embedding retains the dynamical properties of the underlying system better than the full-dimensional internal states of the network.
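    The core idea (project reservoir states to a lower-dimensional space, then fit the readout there) can be sketched with a PCA projection via SVD. The state matrix here is random stand-in data, and the choice of PCA and of k=10 components are assumptions for illustration; the paper's exact reduction technique may differ.

```python
import numpy as np

def pca_project(S, k):
    """Project collected reservoir states S (time x neurons) onto their
    top-k principal components; restricting the readout to this subspace
    acts as a regulariser on the output layer."""
    mean = S.mean(axis=0)
    U, sig, Vt = np.linalg.svd(S - mean, full_matrices=False)
    return (S - mean) @ Vt[:k].T, Vt[:k], mean

rng = np.random.default_rng(0)
S = rng.normal(size=(300, 50))                 # stand-in for ESN internal states
y = S @ rng.normal(size=50)                    # toy real-valued target
Z, components, mean = pca_project(S, k=10)     # reduced states
Wout, *_ = np.linalg.lstsq(Z, y, rcond=None)   # readout trained on Z, not S
```

    At prediction time, new reservoir states are centred with the stored mean, projected with the same components, and passed through Wout.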

    Integer Echo State Networks: Hyperdimensional Reservoir Computing

    We propose an approximation of Echo State Networks (ESNs) that can be efficiently implemented on digital hardware based on the mathematics of hyperdimensional computing. The reservoir of the proposed Integer Echo State Network (intESN) is a vector containing only n-bit integers (where n<8 is normally sufficient for satisfactory performance). The recurrent matrix multiplication is replaced with an efficient cyclic shift operation. The intESN architecture is verified with typical tasks in reservoir computing: memorizing a sequence of inputs, classifying time series, and learning dynamic processes. Such an architecture results in dramatic improvements in memory footprint and computational efficiency, with minimal performance loss.
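    The update the abstract describes (cyclic shift in place of the recurrent matrix multiply, plus a bipolar hyperdimensional input code, with clipping to keep the state in a small integer range) can be sketched as follows. The state dimension, clipping bound, and random bipolar encoding are assumptions for the demo, not the paper's exact parameters.

```python
import numpy as np

def intesn_step(x, u_hd, kappa=7):
    """One intESN update (a sketch): cyclically shift the integer state,
    add the bipolar hyperdimensional code of the current input, and clip
    so every entry stays a small n-bit integer (here |x_i| <= 7)."""
    return np.clip(np.roll(x, 1) + u_hd, -kappa, kappa)

rng = np.random.default_rng(0)
D = 1000                                       # hyperdimensional state size
x = np.zeros(D, dtype=np.int8)                 # integer reservoir state
for _ in range(20):                            # drive with 20 random inputs
    u_hd = rng.choice([-1, 1], size=D).astype(np.int8)  # bipolar input code
    x = intesn_step(x, u_hd).astype(np.int8)
```

    Because the state is an int8 vector updated only by shifts, additions, and clipping, the memory footprint and per-step cost are far below those of a dense floating-point reservoir.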