    Stock Market Prediction Using Evolutionary Support Vector Machines: An Application To The ASE20 Index

    The main motivation for this paper is to introduce a novel hybrid method for the prediction of the directional movement of financial assets, with an application to the ASE20 Greek stock index. Specifically, we use an alternative computational methodology, the evolutionary support vector machine (ESVM) stock predictor, for modeling and trading the ASE20 index, extending the universe of examined inputs to include autoregressive inputs and moving averages of the ASE20 index and of four other financial indices. The proposed hybrid method combines genetic algorithms with support vector machines, modified to uncover effective short-term trading models and to overcome the limitations of existing methods. For comparison purposes, the trading performance of the ESVM stock predictor is benchmarked against four traditional strategies (a naïve strategy, a buy-and-hold strategy, a moving average convergence/divergence model and an autoregressive moving average model) and a multilayer perceptron neural network model. The proposed methodology produces a higher trading performance, even during the financial crisis period, in terms of annualized return and information ratio, while providing information about the relationship between the ASE20 index and the DAX30, NIKKEI225, FTSE100 and S&P500 indices.
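
    As a rough illustration of the general recipe (a genetic algorithm searching over SVM hyperparameters and input subsets for directional prediction), the following minimal Python sketch uses synthetic returns in place of the ASE20 series; the lag and moving-average choices, parameter ranges, and the simple truncation-selection GA are illustrative assumptions, not the authors' implementation.

        # Minimal sketch of a GA-wrapped SVM ("evolutionary SVM") for directional
        # prediction, using synthetic returns as a stand-in for the ASE20 data.
        # The chromosome encodes the SVM hyperparameters (C, gamma) plus a mask
        # selecting among lagged-return and moving-average inputs.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        returns = rng.normal(0.0, 0.01, 1000)             # placeholder return series

        def build_features(r, lags=(1, 2, 3), ma_windows=(5, 10)):
            """Stack lagged returns and moving averages as candidate inputs."""
            cols = [np.roll(r, lag) for lag in lags]
            cols += [np.convolve(r, np.ones(w) / w, mode="same") for w in ma_windows]
            X = np.column_stack(cols)[max(ma_windows):]   # drop warm-up rows
            y = (r[max(ma_windows):] > 0).astype(int)     # directional target
            return X, y

        X, y = build_features(returns)

        def fitness(chrom):
            """Cross-validated directional accuracy of the SVM a chromosome encodes."""
            log_c, log_gamma, *mask = chrom
            mask = np.array(mask) > 0.5
            if not mask.any():
                return 0.0
            clf = SVC(C=10 ** log_c, gamma=10 ** log_gamma)
            return cross_val_score(clf, X[:, mask], y, cv=3).mean()

        # Plain generational GA: truncation selection plus Gaussian mutation.
        pop = rng.uniform(-1, 1, size=(20, 2 + X.shape[1]))
        for _ in range(10):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-10:]]                   # keep the best half
            pop = np.vstack([parents, parents + rng.normal(0, 0.1, parents.shape)])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("selected inputs:", np.where(best[2:] > 0.5)[0])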

    The Market Fraction Hypothesis under different GP algorithms

    In a previous work, inspired by observations made in many agent-based financial models, we formulated the Market Fraction Hypothesis, which predicts a short duration for any dominant type of agent, but a uniform distribution over all types in the long run. We proposed a two-step approach for testing this hypothesis: a rule-inference step and a rule-clustering step. We employed genetic programming (GP) as the rule-inference engine and applied self-organizing maps to cluster the inferred rules. We then ran tests for 10 international markets and provided a general examination of the plausibility of the hypothesis. However, because those tests took place under a single GP system, it could be argued that the results depend on the nature of that particular GP algorithm. This chapter therefore extends our previous work: we test the Market Fraction Hypothesis under two new, different GP algorithms on the same 10 empirical datasets used in our previous experiments, in order to show that the previous results are robust and not sensitive to the choice of GP. Our work shows that certain parts of the hypothesis are indeed sensitive to the algorithm. Nevertheless, this sensitivity does not apply to all aspects of our tests, which allows us to conclude that our previously derived results are robust and can thus be generalized.
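
    A minimal, self-contained sketch of the rule-clustering step is given below: a small hand-rolled self-organizing map groups rule "signature" vectors into agent types whose market fractions can then be tracked. The GP rule-inference step is stubbed out with synthetic vectors; the grid size, learning rates, and signature encoding are illustrative assumptions, not those used in the chapter.

        # Rule-clustering sketch: a tiny self-organizing map groups inferred trading
        # rules (represented here only by stand-in numeric "signature" vectors) into
        # a few agent types, so the fraction of the market held by each type can be
        # tracked over time. All data below is synthetic.
        import numpy as np

        rng = np.random.default_rng(1)
        rule_signatures = rng.normal(size=(200, 6))   # placeholder for GP-inferred rules

        def train_som(data, grid=(3, 3), iters=2000, lr0=0.5, sigma0=1.5):
            """Classic online SOM training on a small rectangular grid."""
            h, w = grid
            weights = rng.normal(size=(h, w, data.shape[1]))
            coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
            for t in range(iters):
                x = data[rng.integers(len(data))]
                dist = np.linalg.norm(weights - x, axis=2)
                bmu = np.unravel_index(np.argmin(dist), dist.shape)   # best-matching unit
                lr = lr0 * np.exp(-t / iters)
                sigma = sigma0 * np.exp(-t / iters)
                grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
                influence = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
                weights += lr * influence * (x - weights)             # pull nodes toward sample
            return weights

        def cluster(data, weights):
            """Assign each rule to the index of its best-matching SOM node."""
            flat = weights.reshape(-1, weights.shape[-1])
            return np.argmin(np.linalg.norm(data[:, None, :] - flat[None], axis=2), axis=1)

        weights = train_som(rule_signatures)
        types = cluster(rule_signatures, weights)
        fractions = np.bincount(types, minlength=9) / len(types)      # 3x3 grid -> 9 types
        print("market fraction per inferred agent type:", np.round(fractions, 2))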

    OneMax in Black-Box Models with Several Restrictions

    Black-box complexity studies lower bounds for the efficiency of general-purpose black-box optimization algorithms such as evolutionary algorithms and other search heuristics. Different models exist, each one designed to analyze a different aspect of typical heuristics, such as the memory size or the variation operators in use. While most previous works focus on one particular such aspect, we consider in this work how the combination of several algorithmic restrictions influences the black-box complexity. Our testbed is the class of so-called OneMax functions, a classical set of test functions that is intimately related to classic coin-weighing problems and to the board game Mastermind. We analyze in particular the combined memory-restricted ranking-based black-box complexity of OneMax for different memory sizes. While its isolated memory-restricted as well as its ranking-based black-box complexity for bit strings of length $n$ is only of order $n/\log n$, the combined model does not allow for algorithms being faster than linear in $n$, as can be seen by standard information-theoretic considerations. We show that this linear bound is indeed asymptotically tight. Similar results are obtained for other memory and offspring sizes. Our results also apply to the (Monte Carlo) complexity of OneMax in the recently introduced elitist model, in which only the best-so-far solution can be kept in memory. Finally, we also provide improved lower bounds for the complexity of OneMax in the regarded models. Our result enlivens the quest for natural evolutionary algorithms optimizing OneMax in $o(n \log n)$ iterations.
    Comment: This is the full version of a paper accepted to GECCO 201
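
    For readers unfamiliar with the benchmark, the sketch below shows the OneMax function together with the classical (1+1) EA, an elitist, memory-one black-box heuristic of the kind the stated bounds constrain; it needs $\Theta(n \log n)$ expected evaluations on OneMax. This is generic textbook material, not code from the paper.

        # OneMax and the (1+1) EA: an elitist black-box algorithm that stores only
        # the best-so-far bit string and queries the fitness of one offspring per step.
        import random

        def onemax(x):
            """OneMax fitness: the number of one-bits in the string."""
            return sum(x)

        def one_plus_one_ea(n, seed=0):
            """Flip each bit independently with probability 1/n and accept the
            offspring if it is at least as good as the parent (elitist rule)."""
            rng = random.Random(seed)
            parent = [rng.randint(0, 1) for _ in range(n)]
            best, evaluations = onemax(parent), 1
            while best < n:
                child = [bit ^ (rng.random() < 1.0 / n) for bit in parent]
                score = onemax(child)
                evaluations += 1
                if score >= best:          # keep only the best-so-far solution
                    parent, best = child, score
            return evaluations

        print("evaluations to optimize OneMax for n = 100:", one_plus_one_ea(100))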

    Agent-Based Models and Human Subject Experiments

    This paper considers the relationship between agent-based modeling and economic decision-making experiments with human subjects. Both approaches exploit controlled "laboratory" conditions as a means of isolating the sources of aggregate phenomena. Research findings from laboratory studies of human subject behavior have inspired studies using artificial agents in "computational laboratories", and vice versa. In certain cases, both methods have been used to examine the same phenomenon. The focus of this paper is on the empirical validity of agent-based modeling approaches in terms of explaining data from human subject experiments. We also point out synergies between the two methodologies that have been exploited, as well as promising new possibilities.
    Keywords: agent-based models, human subject experiments, zero-intelligence agents, learning, evolutionary algorithms

    Learning Algorithms in a Decentralized General Equilibrium Model

    A model is developed in which economic agents learn to make price-setting, price-response, and resource allocation decisions in decentralized markets where all information and interaction are local. Computer simulation shows that it is possible for agents to act almost as if they had the additional information necessary to define and solve a standard optimization problem. Their behaviour gives rise endogenously to phenomena resembling Adam Smith's invisible hand. The results also indicate that agents must engage in some form of price comparison for decentralized markets to clear; otherwise there is no incentive for firms to respond to excess supply by lowering prices. This suggests that agent-based models with decentralized interaction risk producing untenable results if price-response decisions are made without first being directed toward the most favourable local price.
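
    The price-comparison point can be made concrete with the toy simulation below: firms adjust posted prices from their own sales experience, while buyers either pick a seller at random or buy from the cheapest firm in a small local sample. The market size, adjustment rule, and reservation price are illustrative assumptions, not the paper's model.

        # Toy sketch of decentralized price setting with and without price comparison.
        # Firms post prices and adjust them locally: raise the price after a sale,
        # cut it after a period with no sale. Buyers either compare a small sample
        # of sellers or pick one at random without comparing prices.
        import random

        def simulate(compare, periods=200, n_firms=20, n_buyers=10,
                     reservation=1.0, sample_size=5, seed=0):
            rng = random.Random(seed)
            prices = [rng.uniform(0.5, 1.5) for _ in range(n_firms)]
            for _ in range(periods):
                stock = [1] * n_firms                   # one unit per firm per period
                sold = [False] * n_firms
                for _ in range(n_buyers):
                    available = [i for i in range(n_firms) if stock[i]]
                    if not available:
                        break
                    if compare:                         # cheapest of a local sample
                        sample = rng.sample(available, min(sample_size, len(available)))
                        choice = min(sample, key=lambda i: prices[i])
                    else:                               # no price comparison at all
                        choice = rng.choice(available)
                    if prices[choice] <= reservation:
                        stock[choice] = 0
                        sold[choice] = True
                for i in range(n_firms):                # local price-setting rule
                    prices[i] *= 1.05 if sold[i] else 0.95
            return prices

        for mode in (True, False):
            p = simulate(compare=mode)
            print(f"comparison={mode}: mean price {sum(p) / len(p):.2f}, "
                  f"spread {max(p) - min(p):.2f}")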