6,533 research outputs found

    On the selection of hidden neurons with heuristic search strategies for approximation

    Feature selection techniques usually follow some search strategy to select a suitable subset from a set of features. Most neural network growing algorithms perform a Forward Selection search with the objective of finding a reasonably good subset of neurons. Exploiting this link between the two fields (feature selection and neuron selection), we propose and analyze different algorithms for the construction of neural networks based on heuristic search strategies from the feature selection field. The results of an experimental comparison to Forward Selection, using both synthetic and real data, show that a much better approximation can be achieved, though at the expense of a higher computational cost.
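    The baseline strategy being compared against can be sketched compactly. Below is a minimal illustration of Forward Selection over a pool of candidate hidden neurons, assuming (as an illustrative setup, not the paper's exact one) a fixed pool of random tanh units and a least-squares readout:

```python
# Minimal sketch of Forward Selection over a pool of candidate hidden
# neurons. The pool construction and error measure are assumptions
# made for illustration, not the paper's exact algorithm.
import numpy as np

def forward_select(H, y, max_neurons):
    """Greedily add the candidate neuron (column of H) that most
    reduces the least-squares approximation error of y."""
    selected = []
    remaining = list(range(H.shape[1]))
    for _ in range(max_neurons):
        best_err, best_j = None, None
        for j in remaining:
            cols = H[:, selected + [j]]
            w, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.linalg.norm(cols @ w - y)
            if best_err is None or err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy usage: a pool of 50 random tanh units approximating a sine wave.
rng = np.random.default_rng(0)
X = np.linspace(-3, 3, 200)[:, None]
pool_w, pool_b = rng.normal(size=(1, 50)), rng.normal(size=50)
H = np.tanh(X @ pool_w + pool_b)          # candidate neuron outputs
y = np.sin(X).ravel()
print(forward_select(H, y, max_neurons=5))
```

    The heuristic strategies the paper proposes would replace this greedy inner loop with other search schemes from the feature selection literature.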

    Learning backward induction: a neural network agent approach

    This paper addresses the question of whether neural networks (NNs), a realistic cognitive model of human information processing, can learn to backward induce in a two-stage game with a unique subgame-perfect Nash equilibrium. The NNs were found to predict the Nash equilibrium approximately 70% of the time in new games. Like humans, the neural network agents were also found to suffer from subgame and truncation inconsistency, supporting the contention that they are appropriate models of general learning in humans. The agents were found to behave in a boundedly rational manner as a result of the endogenous emergence of decision heuristics. In particular, a very simple heuristic, socialmax, which chooses the cell with the highest social payoff, explains their behavior approximately 60% of the time, whereas the ownmax heuristic, which simply chooses the cell with the maximum payoff for that agent, fares worse, explaining behavior roughly 38% of the time, albeit still significantly better than chance. These two heuristics were found to be ecologically valid for the backward induction problem, as they predicted the Nash equilibrium in 67% and 50% of the games, respectively. Compared to various standard classification algorithms, the NNs were found to be only slightly more accurate than standard discriminant analyses. However, the latter do not model the dynamic learning process and have an ad hoc postulated functional form. In contrast, an NN agent's behavior evolves with experience and is capable of taking on any functional form, according to the universal approximation theorem.
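    The two heuristics are simple enough to state in a few lines. The sketch below is an illustrative rendering, assuming each game cell is represented as an (own payoff, other payoff) pair; this representation is an assumption for illustration, not the paper's encoding:

```python
# Illustrative sketch of the two decision heuristics named above.
# Cells are assumed to be (own, other) payoff pairs for the deciding
# agent; this encoding is an assumption, not the paper's.
def socialmax(cells):
    """Pick the cell with the highest combined (social) payoff."""
    return max(range(len(cells)), key=lambda i: cells[i][0] + cells[i][1])

def ownmax(cells):
    """Pick the cell with the highest payoff for the deciding agent."""
    return max(range(len(cells)), key=lambda i: cells[i][0])

cells = [(3, 3), (5, 0), (0, 5), (1, 1)]   # (own, other) payoffs
print(socialmax(cells), ownmax(cells))     # -> 0 1
```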

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopt these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article summarizes a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches, and connects the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it presents research challenges for future work to cope with the present information-processing era.
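    To make the contrast with gradient descent concrete, the sketch below optimizes the weights of a small FNN with a gradient-free (1+1) evolution strategy, one of the simplest metaheuristics the review covers; the network size, task, and mutation scale are illustrative assumptions:

```python
# A minimal sketch of metaheuristic (gradient-free) FNN weight
# optimization via a (1+1) evolution strategy. Network size, task,
# and mutation scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-2, 2, 100)[:, None]
y = np.sin(2 * X).ravel()

def unpack(theta, n_in=1, n_hid=8):
    W1 = theta[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = theta[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = theta[-n_hid:]
    return W1, b1, W2

def mse(theta):
    W1, b1, W2 = unpack(theta)
    return np.mean((np.tanh(X @ W1 + b1) @ W2 - y) ** 2)

theta = rng.normal(size=1 * 8 + 8 + 8)     # all weights as one vector
best = mse(theta)
for _ in range(5000):                      # mutate; keep if better
    cand = theta + 0.1 * rng.normal(size=theta.size)
    if (c := mse(cand)) < best:
        theta, best = cand, c
print(f"final MSE: {best:.4f}")
```

    Population-based methods such as genetic algorithms or particle swarm optimization follow the same pattern, replacing the single mutate-and-keep step with a population of candidate weight vectors.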

    Can the human mind learn to backward induce? A neural network answer.

    This paper addresses the question of whether neural networks, a realistic cognitive model of human information processing, can learn to backward induce in a two-stage game with a unique subgame-perfect Nash equilibrium. The result that the neural networks only learn a heuristic that approximates the desired output, and do not backward induce, is in accordance with the documented difficulty humans have in applying backward induction and their dependence on heuristics.
    Keywords: behavioral game theory; neural networks; learning

    Predicting Phishing Websites using Neural Network trained with Back-Propagation

    Phishing is increasing dramatically with the development of modern technologies and global computer networks. It results in the loss of customers' confidence in e-commerce and online banking, financial damage, and identity theft. Phishing is a fraudulent effort that aims to acquire sensitive information from users, such as credit card credentials and social security numbers. In this article, we propose a model for predicting phishing attacks based on an Artificial Neural Network (ANN). A Feed Forward Neural Network trained by the Back-Propagation algorithm is developed to classify websites as phishing or legitimate. The suggested model shows high tolerance of noisy data, fault tolerance, and high prediction accuracy with respect to false positive and false negative rates.
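    The described setup, a feedforward network trained by backpropagation for binary classification, can be sketched as follows. The features and labels here are synthetic placeholders, not the paper's phishing dataset:

```python
# Minimal sketch of a feedforward network trained by backpropagation
# for binary (phishing/legitimate) classification. Features and labels
# are synthetic placeholders, not the paper's dataset.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))             # 10 website features (toy)
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels

n_hid, lr = 16, 0.1
W1 = rng.normal(scale=0.5, size=(10, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=n_hid);       b2 = 0.0

for _ in range(500):
    h = np.tanh(X @ W1 + b1)               # forward pass
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    g = (p - y) / len(y)                   # cross-entropy gradient
    W2 -= lr * h.T @ g; b2 -= lr * g.sum()
    gh = np.outer(g, W2) * (1 - h ** 2)    # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

print("train accuracy:", ((p > 0.5) == y).mean())
```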

    Forecasting foreign exchange rates with adaptive neural networks using radial basis functions and particle swarm optimization

    The motivation for this paper is to introduce a hybrid Neural Network architecture combining Particle Swarm Optimization and an Adaptive Radial Basis Function network (ARBF-PSO), a time-varying leverage trading strategy based on Glosten, Jagannathan and Runkle (GJR) volatility forecasts, and a Neural Network fitness function for financial forecasting purposes. This is done by benchmarking the ARBF-PSO results against those of three different Neural Network architectures, a Nearest Neighbors algorithm (k-NN), an autoregressive moving average model (ARMA), and a moving average convergence/divergence model (MACD), plus a naïve strategy. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the EUR/USD, EUR/GBP and EUR/JPY ECB exchange rate fixing time series over the period January 1999 to March 2011, using the last two years for out-of-sample testing.
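    The hybrid idea, a swarm optimizer tuning a radial basis function network, can be illustrated compactly. The sketch below uses a standard PSO update to place Gaussian RBF centers for one-step-ahead forecasting of a toy series; the swarm settings, lag structure, and least-squares readout are assumptions for illustration, not the ARBF-PSO specifics:

```python
# Illustrative sketch: PSO placing Gaussian RBF centers for one-step-
# ahead forecasting. Swarm settings, lags, and the least-squares
# readout are assumptions, not the paper's ARBF-PSO method.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 300)) + 0.1 * rng.normal(size=300)
X = np.column_stack([series[i:-5 + i] for i in range(5)])  # 5 lags
y = series[5:]

def rbf_mse(centers, width=1.0):
    """Fit a linear readout over Gaussian RBFs; return training MSE."""
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.mean((Phi @ w - y) ** 2)

n_particles, n_centers = 10, 6
pos = rng.normal(size=(n_particles, n_centers, 5))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([rbf_mse(p) for p in pos])
gbest = pbest[pcost.argmin()]

for _ in range(50):                        # standard PSO update
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    for i, p in enumerate(pos):
        c = rbf_mse(p)
        if c < pcost[i]:
            pbest[i], pcost[i] = p.copy(), c
    gbest = pbest[pcost.argmin()]

print(f"best training MSE: {pcost.min():.4f}")
```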