
    Modeling Financial Time Series with Artificial Neural Networks

    Full text link
    Financial time series convey the decisions and actions of a population of human actors over time. Econometric and regression models have been developed over the past decades for analyzing these time series. More recently, biologically inspired artificial neural network models have been shown to overcome some of the main challenges of traditional techniques by better exploiting the non-linear, non-stationary, and oscillatory nature of noisy, chaotic human interactions. This review paper explores the options, benefits, and weaknesses of the various forms of artificial neural networks as compared with regression techniques in the field of financial time series analysis. CELEST, a National Science Foundation Science of Learning Center (SBE-0354378); SyNAPSE program of the Defense Advanced Research Projects Agency (HR001109-03-0001)

    Incremental construction of LSTM recurrent neural network

    Get PDF
    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the net to remember significant events distant in the past input sequence, in order to solve long-time-lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTM have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behavior of five controllers of the Central Nervous System has to be modelled. We have compared growing LSTM results against other neural network approaches, and against our work applying conventional LSTM to the task at hand. Postprint (published version)
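    The memory-block mechanism this abstract refers to can be illustrated with a single forward step of a standard LSTM cell (a minimal NumPy sketch, not the GLSTM code from the paper; the gate layout and weight shapes are illustrative assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    """One forward step of a single LSTM memory block.

    W maps the concatenated [input, previous hidden] vector to the stacked
    pre-activations of the input, forget, and output gates and the cell
    candidate; b is the matching bias vector.
    """
    z = W @ np.concatenate([x, h_prev]) + b
    n = len(h_prev)
    i = sigmoid(z[0 * n:1 * n])     # input gate
    f = sigmoid(z[1 * n:2 * n])     # forget gate
    o = sigmoid(z[2 * n:3 * n])     # output gate
    g = np.tanh(z[3 * n:4 * n])     # cell candidate
    c = f * c_prev + i * g          # cell state carries long-range information
    h = o * np.tanh(c)              # hidden state exposed to the rest of the net
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(5):                  # unroll over a short input sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, b)
```

    The gating structure is what lets the cell state persist across long input gaps, which is the property the paper's growing variants build on.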

    Memetic cooperative coevolution of Elman recurrent neural networks

    Get PDF
    Cooperative coevolution decomposes an optimisation problem into subcomponents and collectively solves them using evolutionary algorithms. Memetic algorithms provide enhancement to evolutionary algorithms with local search. Recently, the incorporation of local search into a memetic cooperative coevolution method has been shown to be efficient for training feedforward networks on pattern classification problems. This paper applies the memetic cooperative coevolution method for training recurrent neural networks on grammatical inference problems. The results show that the proposed method achieves better performance in terms of optimisation time and robustness.
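    The decomposition idea can be illustrated with a toy cooperative-coevolution loop that splits a parameter vector into subcomponents and improves each one in the context of the current best values of the others (an illustrative sketch on a sphere function; simple stochastic hill-climbing stands in here for the per-subcomponent evolutionary algorithm, and the paper's memetic local search is omitted):

```python
import random

def sphere(x):
    # Toy objective: sum of squares, minimized at the origin.
    return sum(v * v for v in x)

def cooperative_coevolve(dim=6, n_sub=2, generations=200, pop=10, seed=1):
    rng = random.Random(seed)
    size = dim // n_sub
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(generations):
        for s in range(n_sub):                 # optimise one subcomponent at a time
            lo, hi = s * size, (s + 1) * size
            for _ in range(pop):
                cand = best[:]                 # evaluate in the context of the
                for i in range(lo, hi):        # current best of the other parts
                    cand[i] = best[i] + rng.gauss(0, 0.3)
                if sphere(cand) < sphere(best):
                    best = cand
    return best

best = cooperative_coevolve()
```

    Each subcomponent is always evaluated as part of a full candidate vector, which is the "cooperative" aspect the abstract describes.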

    A Minimal Architecture for General Cognition

    Full text link
    A minimalistic cognitive architecture called MANIC is presented. The MANIC architecture requires only three function-approximating models and one state machine. Even with so few major components, it is theoretically sufficient to achieve functional equivalence with all other cognitive architectures, and can be practically trained. Instead of seeking to transfer architectural inspiration from biology into artificial intelligence, MANIC seeks to minimize novelty and follow the most well-established constructs that have evolved within various sub-fields of data science. From this perspective, MANIC offers an alternate approach to a long-standing objective of artificial intelligence. This paper provides a theoretical analysis of the MANIC architecture. Comment: 8 pages, 8 figures, conference, Proceedings of the 2015 International Joint Conference on Neural Networks.

    A hybrid model-based and memory-based short-term traffic prediction system

    Get PDF
    Short-term traffic forecasting capabilities on freeways and major arterials have received special attention in the past decade, due primarily to their vital role in supporting various travelers' trip decisions and traffic management functions. This research presents a hybrid model-based and memory-based methodology to improve freeway traffic prediction performance. The proposed methodology integrates both approaches to strengthen predictions under both recurrent and non-recurrent conditions. The model-based approach relies on a combination of static and dynamic neural network architectures to achieve optimal prediction performance under various input and traffic condition settings. Concurrently, the memory-based component is derived from the data archival system that encodes commuters' past travel experience. The outcomes of the two approaches are two prediction values for each query case. The two values are subsequently processed by a prediction query manager, which ultimately produces one final prediction value using an error-based decision algorithm. It was found that the hybrid approach produces speed estimates with smaller errors than if the two approaches were employed separately. The proposed prediction approach could be used to derive travel times more reliably as Traffic Management Centers move towards implementing Advanced Traveler Information Systems (ATIS) applications.

    NASA JSC neural network survey results

    Get PDF
    A survey of Artificial Neural Systems in support of NASA's (Johnson Space Center) Automatic Perception for Mission Planning and Flight Control Research Program was conducted. Several of the world's leading researchers contributed papers containing their most recent results on artificial neural systems. These papers were broken into categories, and descriptive accounts of the results make up a large part of this report. Also included is material on sources of information on artificial neural systems, such as books, technical reports, and software tools.

    Comparison of Three Intelligent Techniques for Runoff Simulation

    Get PDF
    In this study, the performance of a feedback neural network, Elman, is evaluated for runoff simulation. The model's ability is compared with two other intelligent models, namely a standalone feedforward Multi-layer Perceptron (MLP) neural network model and a hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS) model. Daily runoff data during the monsoon period were collected in a catchment located in south India. Three statistical criteria, the correlation coefficient, the coefficient of efficiency, and the difference of the slope of a best-fit line through the observed-estimated scatter plots from the 1:1 line, were applied for comparing the performances of the models. The results showed that the ANFIS technique provided significant improvement compared to the Elman and MLP models. ANFIS could be an efficient alternative to artificial neural networks, a computationally intensive method, for runoff predictions, providing at least comparable accuracy. Comparing the two neural networks indicated that, unexpectedly, the Elman network outperformed the MLP, a powerful model in simulation of hydrological processes, in runoff modeling.

    Biogeography-Based Optimization for Weight Optimization in Elman Neural Network Compared with Meta-Heuristics Methods

    Get PDF
    In this paper, we present a learning algorithm for the Elman Recurrent Neural Network (ERNN) based on Biogeography-Based Optimization (BBO). The proposed algorithm computes the weights, the initial inputs of the context units, and the self-feedback coefficient of the Elman network. The method was applied to four benchmark problems: the Mackey-Glass and Lorenz equations, which produce chaotic time series, and two real-life classification problems, the Iris and Breast Cancer datasets. Numerical experimental results show that the proposed algorithm improves on many heuristic algorithms in terms of accuracy and MSE error.
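    The BBO search loop the abstract builds on can be sketched as follows (an illustrative minimization of a toy quadratic loss standing in for the Elman training error; the migration rates, mutation probability, and elitism choice are assumptions, not the paper's settings):

```python
import random

def bbo_minimize(loss, dim, pop_size=20, generations=100, seed=2):
    """Minimal Biogeography-Based Optimization sketch.

    Habitats with better fitness (lower loss) get high emigration and low
    immigration rates, so solution features flow from good habitats to poor
    ones; a small mutation keeps diversity.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)                        # rank habitats by suitability
        n = pop_size
        mu = [(n - k) / n for k in range(n)]      # emigration: best habitat highest
        lam = [1 - m for m in mu]                 # immigration: worst habitat highest
        new_pop = [pop[0][:]]                     # elitism: keep the best habitat
        for k in range(1, n):
            hab = pop[k][:]
            for d in range(dim):
                if rng.random() < lam[k]:         # immigrate a feature
                    # roulette-select the source habitat by emigration rate
                    src = rng.choices(range(n), weights=mu)[0]
                    hab[d] = pop[src][d]
                if rng.random() < 0.02:           # mutation
                    hab[d] = rng.uniform(-1, 1)
            new_pop.append(hab)
        pop = new_pop
    return min(pop, key=loss)

best = bbo_minimize(lambda x: sum(v * v for v in x), dim=4)
```

    In the paper's setting, the habitat vector would hold the Elman weights, context-unit initial inputs, and self-feedback coefficient, and the loss would be the network's training error.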

    PROPOSED METHODOLOGY FOR OPTIMIZING THE TRAINING PARAMETERS OF A MULTILAYER FEED-FORWARD ARTIFICIAL NEURAL NETWORK USING A GENETIC ALGORITHM

    Get PDF
    An artificial neural network (ANN), or simply "neural network" (NN), is a powerful mathematical or computational model that is inspired by the structure and/or functional characteristics of biological neural networks. Despite the fact that ANNs have been developing rapidly for many years, there are still some challenges concerning the development of an ANN model that performs effectively for the problem at hand. ANNs can be categorized into three main types: single layer, recurrent network, and multilayer feed-forward network. In a multilayer feed-forward ANN, the actual performance is highly dependent on the selection of architecture and training parameters. However, a systematic method for optimizing these parameters is still an active research area. This work focuses on multilayer feed-forward ANNs due to their generalization capability, simplicity from the viewpoint of structure, and ease of mathematical analysis. Although several rules for the optimization of multilayer feed-forward ANN parameters are available in the literature, most networks are still calibrated via a trial-and-error procedure, which depends mainly on the type of problem and on the past experience and intuition of the expert. To overcome these limitations, there have been attempts to use genetic algorithms (GA) to optimize some of these parameters. However, most, if not all, of the existing approaches focus on only part of the architecture and training parameters. In contrast, the GAANN approach presented here covers most aspects of multilayer feed-forward ANNs in a more comprehensive way. This research focuses on the use of a binary-encoded genetic algorithm (GA) to implement efficient search strategies for the optimal architecture and training parameters of a multilayer feed-forward ANN. In particular, GA is utilized to determine the optimal number of hidden layers, number of neurons in each hidden layer, type of training algorithm, type of activation function of hidden and output neurons, initial weight, learning rate, momentum term, and epoch size of a multilayer feed-forward ANN. In this thesis, the approach has been analyzed and algorithms that simulate the new approach have been mapped out.
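    The binary-encoding step can be sketched by decoding a chromosome into a dictionary of training parameters (the field widths and value ranges below are illustrative assumptions, not the thesis's actual encoding):

```python
def bits_to_int(bits):
    # Interpret a list of 0/1 values as an unsigned binary integer.
    return int("".join(map(str, bits)), 2)

def decode_chromosome(bits):
    """Decode a 16-bit chromosome into illustrative ANN training parameters.

    Field layout (all ranges are illustrative):
      bits 0-1   number of hidden layers (1-4)
      bits 2-5   neurons per hidden layer (1-16)
      bits 6-7   activation function
      bits 8-11  learning rate (0.01-0.16)
      bits 12-15 momentum (0.0-0.9375)
    """
    activations = ["logistic", "tanh", "relu", "identity"]
    return {
        "hidden_layers": bits_to_int(bits[0:2]) + 1,
        "neurons": bits_to_int(bits[2:6]) + 1,
        "activation": activations[bits_to_int(bits[6:8])],
        "learning_rate": 0.01 * (bits_to_int(bits[8:12]) + 1),
        "momentum": bits_to_int(bits[12:16]) / 16.0,
    }

params = decode_chromosome([0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0])
# → {'hidden_layers': 2, 'neurons': 12, 'activation': 'relu',
#    'learning_rate': 0.04, 'momentum': 0.25}
```

    The GA then evolves the bit strings with crossover and mutation, and each chromosome's fitness would be the validation error of the network built and trained with its decoded parameters.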

    Automated Feature Engineering for Deep Neural Networks with Genetic Programming

    Get PDF
    Feature engineering is a process that augments the feature vector of a machine learning model with calculated values that are designed to enhance the accuracy of a model's predictions. Research has shown that the accuracy of models such as deep neural networks, support vector machines, and tree/forest-based algorithms sometimes benefits from feature engineering. Expressions that combine one or more of the original features usually create these engineered features. The choice of the exact structure of an engineered feature depends on the type of machine learning model in use. Previous research demonstrated that various model families benefit from different types of engineered features. Random forests, gradient-boosting machines, or other tree-based models might not see the same accuracy gain that an engineered feature allowed neural networks, generalized linear models, or other dot-product-based models to achieve on the same data set. This dissertation presents a genetic programming-based algorithm that automatically engineers features that increase the accuracy of deep neural networks for some data sets. For a genetic programming algorithm to be effective, it must prioritize the search space and efficiently evaluate what it finds. This dissertation's algorithm faced a potential search space composed of all possible mathematical combinations of the original feature vector. Five experiments were designed to guide the search process to efficiently evolve good engineered features. The result of this dissertation is an automated feature engineering (AFE) algorithm that is computationally efficient, even though a neural network is used to evaluate each candidate feature. This approach gave the algorithm a greater opportunity to specifically target deep neural networks in its search for engineered features that improve accuracy. Finally, a sixth experiment empirically demonstrated the degree to which this algorithm improved the accuracy of neural networks on data sets augmented by the algorithm's engineered features.
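    The core idea of searching a space of engineered features as expression trees can be sketched as follows (a bare-bones random-search stand-in for GP over tuple-encoded trees; the dissertation's actual algorithm uses proper genetic operators and evaluates candidates with a neural network rather than direct MSE against a known target):

```python
import math
import random

# Expression trees are nested tuples: ("add", left, right) or ("x", i).
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def evaluate(tree, row):
    # Recursively evaluate an expression tree on one feature row.
    if tree[0] == "x":
        return row[tree[1]]
    op, left, right = tree
    return OPS[op](evaluate(left, row), evaluate(right, row))

def random_tree(rng, n_features, depth=2):
    # Grow a random tree: leaves are feature references, internal nodes are ops.
    if depth == 0 or rng.random() < 0.3:
        return ("x", rng.randrange(n_features))
    op = rng.choice(list(OPS))
    return (op,
            random_tree(rng, n_features, depth - 1),
            random_tree(rng, n_features, depth - 1))

def fitness(tree, rows, target):
    # Mean squared error between the engineered feature and the target.
    return sum((evaluate(tree, r) - t) ** 2
               for r, t in zip(rows, target)) / len(rows)

rng = random.Random(3)
rows = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(50)]
target = [r[0] * r[1] + r[2] for r in rows]   # hidden relationship to discover

best = random_tree(rng, 3)
for _ in range(2000):                          # random search over candidate trees
    cand = random_tree(rng, 3)
    if fitness(cand, rows, target) < fitness(best, rows, target):
        best = cand
best_mse = fitness(best, rows, target)
```

    In the dissertation's setting, each candidate tree's fitness would instead come from how much the engineered feature improves a deep neural network's accuracy, which is what makes efficient candidate evaluation the central design problem.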