12 research outputs found

    Higher order neural networks for financial time series prediction

    Get PDF
    Neural networks have been shown to be a promising tool for forecasting financial time series. Numerous studies and applications of neural networks in business have demonstrated their advantage over classical methods that do not include artificial intelligence. What makes this use of neural networks attractive to financial analysts and traders is that governments and companies can benefit from it when making investment and trading decisions. However, when the number of inputs to the model and the number of training examples become extremely large, the training procedure for ordinary neural network architectures becomes tremendously slow and unduly tedious. To overcome such time-consuming operations, this research focuses on various Higher Order Neural Networks (HONNs), which have a single layer of learnable weights and therefore reduced network complexity. To predict the upcoming trends of univariate financial time series signals, three HONN models were used: the Pi-Sigma Neural Network, the Functional Link Neural Network, and the Ridge Polynomial Neural Network, as well as the Multilayer Perceptron. Furthermore, a novel neural network architecture was constructed which adds a feedback connection to the feedforward Ridge Polynomial Neural Network. The proposed network combines the properties of both higher order and recurrent neural networks, and is called the Dynamic Ridge Polynomial Neural Network (DRPNN). Extensive simulations covering ten financial time series were performed, and the forecasting performance of the various feedforward HONN models, the Multilayer Perceptron and the novel DRPNN was compared. Simulation results indicate that HONNs, particularly the DRPNN, in most cases demonstrated advantages in capturing chaotic movement in the financial signals, with an improvement in profit return over the other network models.
The relative superiority of the DRPNN over other networks lies not just in its ability to attain a high profit return, but also in its ability to model the training set with fast learning and convergence. The network offers fast training and shows considerable promise as a forecasting tool. It is concluded that the DRPNN does have the capability to forecast the financial markets, and that individual investors could benefit from the use of this forecasting tool.
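The "single layer of learnable weights" idea behind HONNs such as the Pi-Sigma network can be sketched as follows. This is an illustrative forward pass only (NumPy, with hypothetical shapes and names), not the thesis's implementation: a layer of learnable weighted sums feeds a fixed product unit, so only one weight matrix is trained.

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Pi-Sigma forward pass: sigmoid of a product of learnable sums.

    x: input vector, shape (n,)
    W: summing-layer weights, shape (K, n)  -- the only trainable weights
    b: summing-layer biases, shape (K,)
    """
    # Summing layer: each of K units computes a weighted sum (learnable).
    h = W @ x + b
    # Product layer: fixed, with no weights - multiply the K sums together.
    net = np.prod(h)
    # Sigmoid output, e.g. for predicting an up/down trend.
    return 1.0 / (1.0 + np.exp(-net))
```

Because the product layer carries no weights, training reduces to updating `W` and `b`, which is what makes such networks cheaper to train than a comparable multilayer perceptron.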

    Corporation robots

    Get PDF
    Nowadays, various robots are built to perform multiple tasks. Multiple robots working together to perform a single task becomes important. One of the key elements for multiple robots to work together is the robot need to able to follow another robot. This project is mainly concerned on the design and construction of the robots that can follow line. In this project, focuses on building line following robots leader and slave. Both of these robots will follow the line and carry load. A Single robot has a limitation on handle load capacity such as cannot handle heavy load and cannot handle long size load. To overcome this limitation an easier way is to have a groups of mobile robots working together to accomplish an aim that no single robot can do alon

    Modelling commodity value at risk with Psi Sigma neural networks using open–high–low–close data

    Get PDF
    The motivation for this paper is to investigate the use of a promising class of neural network models, Psi Sigma (PSI), when applied to the task of forecasting the one-day ahead value at risk (VaR) of the oil Brent and gold bullion series using open–high–low–close data. In order to benchmark our results, we also consider VaR forecasts from two different neural network designs, the multilayer perceptron and the recurrent neural network, a genetic programming algorithm, an extreme value theory model along with some traditional techniques such as an ARMA-Glosten, Jagannathan, and Runkle (1,1) model and the RiskMetrics volatility. The forecasting performance of all models for computing the VaR of the Brent oil and the gold bullion is examined over the period September 2001–August 2010, using the last year and a half of data for out-of-sample testing. The evaluation of our models is done by using a series of backtesting algorithms such as the Christoffersen tests, the violation ratio and our proposed loss function that considers not only the number of violations but also their magnitude. Our results show that the PSI outperforms all other models in forecasting the VaR of gold and oil at both the 5% and 1% confidence levels, providing an accurate number of independent violations with small magnitude.
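The violation ratio used in the backtesting above compares the observed frequency of VaR exceedances with the frequency expected at the chosen confidence level. A minimal sketch (NumPy; the function name and sign conventions are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def violation_ratio(returns, var_forecasts, alpha=0.05):
    """Ratio of observed to expected VaR violations.

    returns:       realised daily returns, shape (T,)
    var_forecasts: one-day-ahead VaR forecasts as (negative) return
                   quantiles, shape (T,)
    alpha:         VaR tail probability (0.05 for 95% VaR)
    """
    # A violation occurs when the realised return falls below the VaR forecast.
    violations = returns < var_forecasts
    # Observed violation rate divided by the expected rate alpha;
    # a ratio near 1.0 suggests a well-calibrated VaR model.
    return violations.mean() / alpha
```

Tests such as Christoffersen's additionally check that violations are independent over time, not merely correct in number, and the paper's proposed loss function also weighs their magnitude.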

    Generalised correlation higher order neural networks, neural network operation and Levenberg-Marquardt training on field programmable gate arrays

    Get PDF
    Higher Order Neural Networks (HONNs) were introduced in the late 1980s as a solution to the increasing complexity within Neural Networks (NNs). Like NNs, HONNs excel at pattern recognition, classification and optimisation, particularly for non-linear systems, in varied applications such as communication channel equalisation, real-time intelligent control, and intrusion detection. This research introduced new HONNs called Generalised Correlation Higher Order Neural Networks. As an extension to ordinary first order NNs and HONNs, they are based on interlinked arrays of correlators with known relationships, and provide the NN with a more extensive view by introducing interactions between the data as an input to the NN model. All studies included two data sets to generalise the applicability of the findings. The research investigated the performance of HONNs in the estimation of short term returns of two financial data sets, the FTSE 100 and NASDAQ. The new models were compared against several financial models and ordinary NNs. Two new HONNs, the Correlation HONN (C-HONN) and the Horizontal HONN (Horiz-HONN), outperformed all other models tested in terms of the Akaike Information Criterion (AIC). The work also investigated HONNs for camera calibration and image mapping. HONNs were compared against NNs and standard analytical methods in terms of mapping performance for three cases: 3D-to-2D mapping, a hybrid model combining HONNs with an analytical model, and 2D-to-3D inverse mapping. This study considered two types of data, planar data and co-planar (cube) data. To our knowledge this is the first study comparing HONNs against NNs and analytical models for camera calibration. HONNs were able to transform the reference grid onto the correct camera coordinates and vice versa, an aspect that the standard analytical model fails to perform with the type of data used.
HONN 3D-to-2D mapping had a calibration error lower than the parametric model by up to 24% for plane data and 43% for cube data. The hybrid model also had a lower calibration error than the parametric model, by 12% for plane data and 34% for cube data; however, the hybrid model did not outperform the fully non-parametric models. Using HONNs for inverse mapping from 2D-to-3D outperformed NNs by up to 47% in the case of cube data mapping. This thesis is also concerned with the operation and training of NNs in limited precision, specifically on Field Programmable Gate Arrays (FPGAs). Our findings demonstrate the feasibility of on-line, real-time, low-latency training on limited precision electronic hardware such as Digital Signal Processors (DSPs) and FPGAs. The thesis also investigated the effects of limited precision on the Back Propagation (BP) and Levenberg-Marquardt (LM) optimisation algorithms. Two new HONNs were compared against NNs for estimating the discrete XOR function and an optical waveguide sidewall roughness dataset in order to find the Minimum Precision for Lowest Error (MPLE) at which training and operation are still possible. The new findings show that, compared to NNs, HONNs require more precision to reach a similar performance level, and that the 2nd order LM algorithm requires at least 24 bits of precision. The final investigation implemented and demonstrated the LM algorithm on FPGAs for the first time to our knowledge, using it to train a Neural Network and to estimate camera calibration parameters. The LM algorithm trained an NN to model the XOR function in only 13 iterations from zero initial conditions, with a speed-up in excess of 3 x 10^6 compared to an implementation in software.
Camera calibration was also demonstrated on FPGAs; compared to the software implementation, the FPGA implementation led to an increase in the mean squared error and standard deviation of only 17.94% and 8.04% respectively, but increased the calibration speed by a factor of 1.41 x 10^6.
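For reference, a single Levenberg-Marquardt update solves (J^T J + lambda*I) dw = J^T e, where J is the Jacobian of the network errors e; this damped Gauss-Newton solve is the linear-algebra core that the FPGA work above must carry out in limited precision. A minimal floating-point sketch (NumPy, illustrative only, not the thesis's hardware implementation):

```python
import numpy as np

def lm_step(J, e, lam):
    """One Levenberg-Marquardt weight update.

    J:   Jacobian of residuals w.r.t. weights, shape (m, n)
    e:   residual (error) vector, shape (m,)
    lam: damping factor lambda
    """
    # Damped normal equations: (J^T J + lam*I) dw = J^T e.
    A = J.T @ J + lam * np.eye(J.shape[1])
    # Solve for the weight change dw rather than forming an explicit inverse.
    return np.linalg.solve(A, J.T @ e)
```

The damping term interpolates between gradient descent (large lambda) and Gauss-Newton (small lambda); accumulating J^T J is where a limited-precision implementation is most sensitive, consistent with the 24-bit requirement reported above.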

    Higher order neural networks for financial time series prediction

    No full text
    EThOS - Electronic Theses Online Service, United Kingdom