
    Forecasting Stock Exchange Data using Group Method of Data Handling Neural Network Approach

    The increasing uncertainty of the natural world has motivated computer scientists to seek the best approaches to technological problems. Nature-inspired problem-solving approaches include meta-heuristic methods focused on evolutionary computation and swarm intelligence. One problem with significant impact is forecasting an exchange index, which is a serious concern given the growth and decline of stocks and the many reports of lost financial resources or profitability. When the exchange comprises an extensive set of diverse stocks, appropriate concepts and mechanisms for physical security, network security, encryption, and permissions should safeguard it and support prediction of its future needs. This study aimed to show that group method of data handling (GMDH)-type neural networks can be used efficiently, applying them to the classification of numerical results; such modelling serves to display the precision of GMDH-type neural networks. Following the US withdrawal from the Joint Comprehensive Plan of Action in April 2018, the behaviour of the stock exchange data stream changed such that common algorithms could no longer predict it correctly or fit the network satisfactorily. This paper demonstrates that the group method of data handling is likely to improve inductive self-organising approaches for addressing severe real-world problems such as the Iranian financial market crisis. A new trajectory would be used to verify the consistency of the obtained equations and hence the validity of the models.
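    The abstract does not spell out the GMDH construction, but the classical scheme fits a partial quadratic (Ivakhnenko) polynomial to every pair of inputs and keeps only the neurons that score well on held-out data. A minimal sketch of one such layer, assuming the standard quadratic neuron rather than the paper's exact setup (all names here are illustrative):

```python
# Sketch of one GMDH layer with classical quadratic (Ivakhnenko) neurons.
import numpy as np

def poly_features(x1, x2):
    """Quadratic polynomial basis for a pair of input variables."""
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_neuron(x1, x2, y):
    """Least-squares fit of one polynomial neuron; returns its coefficients."""
    A = poly_features(x1, x2)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def gmdh_layer(X_train, y_train, X_val, y_val, keep=2):
    """Fit a neuron for every input pair, rank by validation MSE (the
    external criterion), and return the best neurons plus their outputs,
    which would feed the next layer."""
    n = X_train.shape[1]
    candidates = []
    for i in range(n):
        for j in range(i + 1, n):
            coef = fit_neuron(X_train[:, i], X_train[:, j], y_train)
            pred_val = poly_features(X_val[:, i], X_val[:, j]) @ coef
            mse = np.mean((pred_val - y_val) ** 2)
            candidates.append((mse, i, j, coef))
    candidates.sort(key=lambda c: c[0])
    best = candidates[:keep]
    outputs = np.column_stack([poly_features(X_train[:, i], X_train[:, j]) @ c
                               for _, i, j, c in best])
    return best, outputs
```

    Stacking such layers, each selecting its own survivors on validation data, is what gives GMDH its inductive self-organising character.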

    Modelling commodity value at risk with Psi Sigma neural networks using open–high–low–close data

    The motivation for this paper is to investigate the use of a promising class of neural network models, Psi Sigma (PSI), applied to the task of forecasting the one-day-ahead value at risk (VaR) of the Brent oil and gold bullion series using open–high–low–close data. In order to benchmark our results, we also consider VaR forecasts from two different neural network designs, the multilayer perceptron and the recurrent neural network, a genetic programming algorithm, and an extreme value theory model, along with some traditional techniques such as an ARMA-Glosten, Jagannathan, and Runkle (1,1) model and the RiskMetrics volatility. The forecasting performance of all models for computing the VaR of Brent oil and gold bullion is examined over the period September 2001–August 2010, using the last year and a half of data for out-of-sample testing. Our models are evaluated using a series of backtesting procedures, namely the Christoffersen tests, the violation ratio, and our proposed loss function, which considers not only the number of violations but also their magnitude. Our results show that the PSI outperforms all other models in forecasting the VaR of gold and oil at both the 5% and 1% confidence levels, providing an accurate number of independent violations with small magnitude.
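    To illustrate the backtesting ideas mentioned above, a violation ratio and a simple magnitude-sensitive loss can be sketched as follows. This is a generic formulation for illustration only, not the paper's exact proposed loss function:

```python
# Generic VaR backtesting quantities: violation ratio and a simple
# magnitude-sensitive loss over the out-of-sample period.
import numpy as np

def var_violations(returns, var_forecasts):
    """A violation occurs when the realised return falls below the
    (negated) VaR forecast, i.e. the loss exceeds the predicted VaR."""
    return np.asarray(returns) < -np.asarray(var_forecasts)

def violation_ratio(returns, var_forecasts, alpha):
    """Observed violation frequency divided by the nominal level alpha.
    A well-calibrated model gives a ratio close to 1."""
    return var_violations(returns, var_forecasts).mean() / alpha

def magnitude_loss(returns, var_forecasts):
    """Mean squared exceedance beyond the VaR on violation days,
    so large breaches are penalised more than small ones."""
    r = np.asarray(returns)
    v = -np.asarray(var_forecasts)
    return np.where(r < v, (v - r) ** 2, 0.0).mean()
```

    The Christoffersen tests would additionally check that violations are independent over time, not merely correct in number.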

    A Dynamic Neural Network Architecture with immunology Inspired Optimization for Weather Data Forecasting

    Recurrent neural networks are dynamical systems that provide memory capabilities to recall past behaviour, which is necessary in the prediction of time series. In this paper, a novel neural network architecture inspired by the immune algorithm is presented and used in the forecasting of naturally occurring signals, including big weather data signals. Big data analysis is a major research frontier that attracts extensive attention from academia, industry, and government, particularly in the context of handling the complex dynamics arising from changing weather conditions. Recently, extensive deployment of IoT devices, sensors, and ambient intelligence systems has led to an exponential growth of data in the climate domain. In this study, we concentrate on the analysis of big weather data using the dynamic self-organised neural network inspired by the immune algorithm. The learning strategy of the network focuses on the local properties of the signal using a self-organised hidden layer inspired by the immune algorithm, while the recurrent links of the network aim at recalling previously observed signal patterns. The proposed network exhibits improved performance when compared to the feedforward multilayer neural network and state-of-the-art recurrent networks, e.g., the Elman and the Jordan networks. Three non-linear and non-stationary weather signals are used in our experiments. First, the signals are transformed into stationary form, followed by five-steps-ahead prediction. Improvements in the prediction results are observed with respect to the root mean square (RMS) error and the signal-to-noise ratio (SNR), albeit at the expense of additional computational complexity due to the presence of the recurrent links.
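    The evaluation pipeline described above — transform toward stationarity, predict several steps ahead, score by RMS error and SNR — can be sketched generically. The first-difference transform and the decibel-scale SNR below are common choices assumed for illustration, not necessarily the exact definitions used in the paper:

```python
# Generic pieces of the evaluation pipeline: stationarity transform
# and the two reported error measures.
import numpy as np

def difference(signal):
    """First difference, a common transform toward stationarity."""
    return np.diff(np.asarray(signal))

def rms_error(actual, predicted):
    """Root mean square error between target and prediction."""
    a, p = np.asarray(actual), np.asarray(predicted)
    return np.sqrt(np.mean((a - p) ** 2))

def snr_db(actual, predicted):
    """Signal-to-noise ratio of the prediction, in decibels, treating
    the residual as noise."""
    a, p = np.asarray(actual), np.asarray(predicted)
    noise = a - p
    return 10 * np.log10(np.sum(a ** 2) / np.sum(noise ** 2))
```

    Under these measures, a better predictor drives the RMS error down and the SNR up, which is the direction of improvement the paper reports for the immune-inspired network.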

    DYNAMIC SELF-ORGANISED NEURAL NETWORK INSPIRED BY THE IMMUNE ALGORITHM FOR FINANCIAL TIME SERIES PREDICTION AND MEDICAL DATA CLASSIFICATION

    Artificial neural networks have been proposed as useful tools for time series analysis in a variety of applications. They are capable of providing good solutions to a variety of problems, including classification and prediction. For time series analysis, however, it must be taken into account that the variables of the data are related to the time dimension and are highly correlated. The main aim of this research is to investigate and develop efficient dynamic neural networks for data analysis. This work proposes a novel dynamic self-organised multilayer neural network based on the immune algorithm for financial time series prediction and biomedical signal classification, combining the properties of both recurrent and self-organised neural networks. The first case study addressed in this thesis is the prediction of financial time series, where the signal consists of the historical prices of different companies. Predicting future prices enables businesses to make profits by anticipating these prices from historical data. However, financial time series exhibit highly random behaviour that is non-stationary and nonlinear in nature, which makes their prediction very challenging. In this thesis, a number of experiments have been simulated to evaluate the ability of the designed recurrent neural network to forecast future values of financial time series. The forecasts made by the proposed network yield substantial profits on historical financial signals when compared to the self-organised hidden layer inspired by the immune algorithm and to multilayer perceptron neural networks. These results suggest that the proposed dynamic neural network has a better ability to capture the chaotic movement in financial signals. 
The second case study addressed in this thesis is predicting preterm birth and diagnosing preterm labour. One of the most challenging tasks currently facing the healthcare community is the identification of preterm labour, which has important implications for both healthcare and the economy. Premature birth occurs when the baby is born before completion of the 37-week gestation period. Incomplete understanding of the physiology of the uterus and parturition makes premature labour prediction a difficult task. Early prediction of preterm birth could help to improve prevention through appropriate medical and lifestyle interventions. One promising method is electrohysterography, which records the uterine electrical activity during pregnancy. In this thesis, the proposed dynamic neural network is used to classify between term and preterm labour using uterine signals. The results indicate that the proposed network achieves improved classification accuracy in comparison to the benchmarked neural network architectures.

    Sparse Learning for Variable Selection with Structures and Nonlinearities

    In this thesis we discuss machine learning methods that perform automated variable selection for learning sparse predictive models. There are multiple reasons for promoting sparsity in predictive models. By relying on a limited set of input variables, the models naturally counteract the overfitting problem ubiquitous in learning from finite sets of training points. Sparse models are cheaper to use for prediction: they usually require lower computational resources, and by relying on smaller sets of inputs they can reduce the costs of data collection and storage. Sparse models can also contribute to a better understanding of the investigated phenomena, as they are easier to interpret than full models.
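    One standard way to obtain the sparsity described above is l1-regularised regression solved by proximal gradient descent (ISTA), where a soft-thresholding step drives irrelevant coefficients exactly to zero. The sketch below illustrates this general idea only; it is not one of the specific methods developed in the thesis:

```python
# Lasso-style sparse variable selection via ISTA (proximal gradient).
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of the l1 norm: shrinks coefficients toward
    zero and sets small ones exactly to zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimise (1/2n)||y - Xw||^2 + lam * ||w||_1 by iterative
    shrinkage-thresholding; nonzero entries of w are the selected
    variables."""
    n, d = X.shape
    # Step size = 1 / Lipschitz constant of the smooth part's gradient.
    step = n / (np.linalg.norm(X, ord=2) ** 2)
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

    Raising `lam` trades prediction accuracy for a smaller set of selected variables, which is exactly the interpretability/cost trade-off the abstract motivates.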