
    Financial time series prediction using spiking neural networks

    In this paper, a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, is used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are well suited to non-stationary data such as financial series. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison, three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-to-Noise Ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting, which in turn indicates the potential of such networks over traditional systems in difficult-to-manage non-stationary environments. © 2014 Reid et al
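The trading metrics named in this abstract, Annualised Return and Maximum Drawdown, can be computed from a daily equity curve. A minimal sketch follows; the function names and the toy equity curve are illustrative, not from the paper.

```python
def annualised_return(equity, periods_per_year=252):
    """Geometric annualised return from an equity curve."""
    total = equity[-1] / equity[0]
    years = (len(equity) - 1) / periods_per_year
    return total ** (1 / years) - 1

def max_drawdown(equity):
    """Largest peak-to-trough decline, as a fraction of the peak."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

curve = [100, 105, 103, 110, 99, 112, 120]   # toy daily equity values
print(round(max_drawdown(curve), 4))          # trough 99 after peak 110 -> 0.1
```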

    A Comprehensive Survey on Pi-Sigma Neural Network for Time Series Prediction

    Time series prediction has received much attention because of its impact on a vast range of real-life applications. This paper presents a survey of time series applications using the Higher Order Neural Network (HONN) model. The basic motivation for using a HONN is its ability to expand the input space, making it more efficient at solving complex problems and giving it strong learning ability for time series forecasting. The Pi-Sigma Neural Network (PSNN) indirectly captures the capabilities of higher-order networks by using product cells as output units, with fewer weights. The goal of this research is to make the reader aware of the PSNN for time series prediction and to highlight some of the benefits and challenges of using it. Possible fields of PSNN application are compared with existing methods, and future directions exploiting the properties of error feedback and recurrent networks are also explored
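The product-cell idea behind the PSNN can be sketched in a few lines: a K-th order unit takes the product of K learnable linear ("sigma") sums and squashes it with an activation. This is a minimal illustration, not the survey's notation; weights and inputs are made up.

```python
import math

def pi_sigma_forward(x, weights, biases):
    """K-th order pi-sigma output: the product of K linear 'sigma'
    sums is passed through a sigmoid. Only the sigma-layer weights
    would be trained; the product ('pi') layer has no weights."""
    product = 1.0
    for w, b in zip(weights, biases):
        product *= sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-product))

# Second-order unit on a 2-dimensional input (illustrative values)
y = pi_sigma_forward([2.0, 3.0],
                     weights=[[1.0, 0.0], [0.0, 1.0]],
                     biases=[0.0, 0.0])
```

Because the only learnable weights sit in the single sigma layer, the number of parameters grows linearly with the order K rather than combinatorially, which is the "fewer weights" advantage the abstract mentions.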

    FPGA implementation of a LSTM Neural Network

    This work aims to produce a custom hardware implementation of a Long Short-Term Memory neural network. The Python model, as well as the Verilog description and RTL synthesis, are complete; only the benchmarking and the integration of a learning system remain
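A software reference model of the kind a hardware LSTM design is checked against can be very small. Below is a hedged sketch of a single LSTM time step with scalar input and state; the gate layout and weight format are generic assumptions, not this thesis's design.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM time step for scalar input and state. W maps each of
    the input (i), forget (f), output (o) gates and the candidate
    cell (g) to its (input weight, recurrent weight, bias) triple."""
    i = sigmoid(W['i'][0] * x + W['i'][1] * h + W['i'][2])
    f = sigmoid(W['f'][0] * x + W['f'][1] * h + W['f'][2])
    o = sigmoid(W['o'][0] * x + W['o'][1] * h + W['o'][2])
    g = math.tanh(W['g'][0] * x + W['g'][1] * h + W['g'][2])
    c_new = f * c + i * g          # cell state: gated memory update
    h_new = o * math.tanh(c_new)   # hidden state: gated output
    return h_new, c_new

# Run a short input sequence through the cell (illustrative weights)
W = {k: (0.5, 0.5, 0.0) for k in 'ifog'}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.2]:
    h, c = lstm_step(x, h, c, W)
```

In a flow like the one described, such a model produces golden outputs per time step that the Verilog simulation is compared against.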

    A neural network face detector design using bit-width reduced FPU in FPGA

    This thesis implemented a field programmable gate array (FPGA)-based face detector using a neural network (NN) and a bit-width reduced floating-point unit (FPU). An NN was used to easily separate face data from non-face data in the detector. The NN performs time-consuming repetitive calculations; this problem was addressed with an FPGA device and a bit-width reduced FPU. Reducing the floating-point bit-width provided significant savings in hardware resources such as area and power. An analytical error model, based on the maximum relative representation error (MRRE) and the average relative representation error (ARRE), was developed to obtain the maximum and average output errors of the bit-width reduced FPUs. After the development of the analytical error model, the bit-width reduced FPUs and an NN were designed using MATLAB and VHDL. Finally, the analytical (MATLAB) results were compared with the experimental (VHDL) results, and the two showed conformity of shape. It was also found that while maintaining 94.1% detection accuracy, reducing the bit-width from 32 bits to 16 bits reduced the size of the memory and arithmetic units by 50% and the total power consumption by 14.7%
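The representation-error idea behind the MRRE can be demonstrated numerically: truncating a value's significand to m bits bounds its relative representation error. A hedged sketch, assuming plain truncation (with `math.frexp` the significand lies in [0.5, 1), so the relative error is bounded by 2**(1 - m)); the function name and sampling range are illustrative, not the thesis's model.

```python
import math
import random

def truncate_mantissa(x, m_bits):
    """Keep only m_bits of significand precision (truncation).
    With frexp's significand in [0.5, 1), the relative error is
    bounded by 2**-m_bits / 0.5 = 2**(1 - m_bits)."""
    if x == 0.0:
        return 0.0
    mant, exp = math.frexp(x)                    # x = mant * 2**exp
    scaled = math.floor(mant * (1 << m_bits))
    return math.ldexp(scaled / (1 << m_bits), exp)

# Empirically measure the worst relative error over random samples
random.seed(0)
observed_mrre = max(
    abs((truncate_mantissa(v, 10) - v) / v)
    for v in (random.uniform(0.1, 100.0) for _ in range(10000)))
print(observed_mrre < 2 ** -9)   # True: observed error within the bound
```

This is the kind of check that lets the analytical (MATLAB-style) bound be compared against measured behaviour before committing a bit-width to hardware.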

    Machine Learning Based Data Driven Modelling of Time Series of Power Plant Data

    Accurate modeling and simulation of data collected from a power plant system are important factors in the strategic planning and maintenance of the unit. Several non-linearities and multivariable couplings are associated with real-world plants. Therefore, it becomes almost impossible to model the system using conventional mathematical equations. Statistical models such as ARIMA and ARMA are potential solutions, but their linear nature cannot fit a system with non-linear, multivariate time series data very well. Recently, deep learning methods such as Artificial Neural Networks (ANNs) have been extensively applied for time series forecasting. ANNs, in contrast to stochastic models such as ARIMA, can uncover the non-linearities present underneath the data. In this thesis, we analyze the real-time temperature data obtained from a nuclear power plant, and discover the patterns and characteristics of the sensory data. Principal Component Analysis (PCA) followed by Linear Discriminant Analysis (LDA) is used to extract features from the time series data; k-means clustering is applied to label the data instances. A finite state machine representation formulated from the clustered data is then used to model the behaviour of nuclear power plants using system states and state transitions. Dependent and independent parameters of the system are defined based on the correlation among them. Various forecasting models are then applied over the multivariate time-stamped data. We discuss thoroughly the implementation of a key neural network architecture, Long Short-Term Memory networks (LSTMs). An LSTM can capture non-linear relationships in a dynamic system using its memory connections, which also helps it counter the problem of back-propagated error decay through memory blocks. Polynomial regression is applied to represent the working of the plant by defining an association between independent and dependent parameters.
    This representation is then used to forecast dependent variates based on the observed values of independent variates. The principle of sensitivity analysis is used to optimise the number of parameters used for prediction; it helps in making a compromise between the number of parameters used and the level of accuracy achieved in forecasting. The objective of this thesis is to examine the feasibility of the above-mentioned forecasting techniques in the modeling of a complex time series of data, predicting system parameters such as Reactor Temperature and Linear Power based on past information. It also carries out a comparative analysis of the forecasts obtained in each approach
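The labelling and state-machine steps described above can be sketched compactly: cluster sensor readings with k-means, then count transitions between consecutive cluster labels to get a finite-state-machine view. The 1-D toy data and function names below are illustrative assumptions, not the thesis's pipeline.

```python
def kmeans_1d(values, centers, iters=20):
    """Plain 1-D k-means; returns the final centers and labels."""
    labels = []
    for _ in range(iters):
        labels = [min(range(len(centers)),
                      key=lambda j: (v - centers[j]) ** 2)
                  for v in values]
        for j in range(len(centers)):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return centers, labels

def transition_counts(labels, k):
    """Finite-state-machine view: how often state b follows state a."""
    table = [[0] * k for _ in range(k)]
    for a, b in zip(labels, labels[1:]):
        table[a][b] += 1
    return table

temps = [300, 301, 299, 350, 352, 351, 300, 302]   # toy sensor trace
centers, labels = kmeans_1d(temps, [290.0, 360.0])
fsm = transition_counts(labels, k=2)
print(labels)   # [0, 0, 0, 1, 1, 1, 0, 0]
```

Normalising each row of the transition table would give the empirical probability of moving from one plant state to another, which is what the state-transition model forecasts from.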

    Predicting the Daily Return Direction of the Stock Market using Hybrid Machine Learning Algorithms

    Big data analytic techniques associated with machine learning algorithms are playing an increasingly important role in various application fields, including stock market investment. However, few studies have focused on forecasting daily stock market returns, especially when using powerful machine learning techniques, such as deep neural networks (DNNs), to perform the analyses. DNNs employ various deep learning algorithms based on the combination of network structure, activation function, and model parameters, with their performance depending on the format of the data representation. This paper presents a comprehensive big data analytics process to predict the daily return direction of the SPDR S&P 500 ETF (ticker symbol: SPY) based on 60 financial and economic features. DNNs and traditional artificial neural networks (ANNs) are then deployed over the entire preprocessed but untransformed dataset, along with two datasets transformed via principal component analysis (PCA), to predict the daily direction of future stock market index returns. While controlling for overfitting, a pattern in the classification accuracy of the DNNs is detected and demonstrated as the number of hidden layers increases gradually from 12 to 1000. Moreover, a set of hypothesis testing procedures is implemented on the classification, and the simulation results show that the DNNs using the two PCA-represented datasets give significantly higher classification accuracy than those using the entire untransformed dataset, as well as several other hybrid machine learning algorithms. In addition, the trading strategies guided by the DNN classification process based on PCA-represented data perform slightly better than the others tested, including in a comparison against two standard benchmarks
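The target variable in a study like this is simply the sign of the next day's return, and classifiers are scored on how often they call it correctly. A minimal sketch, with made-up close prices and function names:

```python
def direction_labels(closes):
    """1 if the next day's close is higher than today's, else 0."""
    return [1 if nxt > cur else 0
            for cur, nxt in zip(closes, closes[1:])]

def accuracy(pred, true):
    """Fraction of days on which the predicted direction was right."""
    return sum(p == t for p, t in zip(pred, true)) / len(true)

closes = [100.0, 101.0, 100.5, 102.0, 101.0]   # toy SPY-like closes
labels = direction_labels(closes)
print(labels)                          # [1, 0, 1, 0]
print(accuracy([1, 0, 1, 1], labels))  # 0.75
```

The PCA step in the paper would transform the 60 input features before a classifier is trained against labels of exactly this form.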

    Recurrent error-based ridge polynomial neural networks for time series forecasting

    Time series forecasting has attracted much attention due to its impact on many practical applications. Neural networks (NNs) have been attracting widespread interest as a promising tool for time series forecasting. The majority of NNs employ only autoregressive (AR) inputs (i.e., lagged time series values) when forecasting time series. Moving-average (MA) inputs (i.e., errors), however, have not been adequately considered. The use of MA inputs, which can be achieved by feeding forecasting errors back as extra network inputs, alongside AR inputs helps to produce more accurate forecasts. Among the numerous existing NN architectures, higher order neural networks (HONNs), which have a single layer of learnable weights, were considered in this research work as they have demonstrated an ability to deal with time series forecasting and have a simple architecture. Based on two HONN models, namely the feedforward ridge polynomial neural network (RPNN) and the recurrent dynamic ridge polynomial neural network (DRPNN), two recurrent error-based models were proposed. These models were called the ridge polynomial neural network with error feedback (RPNN-EF) and the ridge polynomial neural network with error-output feedbacks (RPNN-EOF). Extensive simulations covering ten time series were performed. Besides RPNN and DRPNN, a pi-sigma neural network and a Jordan pi-sigma neural network were used for comparison. Simulation results showed that introducing error feedback to the models led to significant forecasting performance improvements. Furthermore, it was found that the proposed models outperformed many state-of-the-art models. It was concluded that the proposed models have the capability to efficiently forecast time series and that practitioners could benefit from using these forecasting models
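The moving-average (error-feedback) mechanism described above can be sketched with a deliberately simple linear predictor standing in for the ridge polynomial network: alongside the lagged value (AR input), the previous forecast error is fed back as an extra input. Weights and data are illustrative only.

```python
def forecast_with_error_feedback(series, ar_weight, ma_weight):
    """One-step-ahead forecasts from the last value (AR input) plus
    the previous forecast error fed back as an MA input."""
    preds, prev_err = [], 0.0
    for t in range(1, len(series)):
        pred = ar_weight * series[t - 1] + ma_weight * prev_err
        preds.append(pred)
        prev_err = series[t] - pred   # error becomes the next MA input
    return preds

preds = forecast_with_error_feedback([1.0, 2.0, 3.0, 4.0],
                                     ar_weight=1.0, ma_weight=0.5)
print(preds)   # [1.0, 2.5, 3.25]
```

In RPNN-EF the same feedback path feeds the polynomial network's input layer, so the learnable weights see both the lagged values and the recent errors.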