
    An Evolutionary Approach to Multistage Portfolio Optimization

    Portfolio optimization is an important problem in quantitative finance due to its applications in asset management and corporate financial decision making. It involves quantitatively selecting the optimal portfolio for an investor, given their asset return distribution assumptions, investment objectives and constraints. Analytical portfolio optimization methods suffer from limitations in the problem specifications and modelling assumptions they can accommodate. Therefore, a heuristic approach is taken, in which Monte Carlo simulations generate the investment scenarios and a problem-specific evolutionary algorithm is used to find the optimal portfolio asset allocations. Asset allocation is known to be the most important determinant of a portfolio's investment performance and also affects its risk/return characteristics. The inclusion of equity options in an equity portfolio should enable an investor to improve their efficient frontier, because options have a nonlinear payoff. Therefore, a research area of significant importance to equity investors, in which little research has been carried out, is the optimal asset allocation in equity options for an equity investor. One purpose of my thesis is to carry out an original analysis of the impact of allowing the purchase of put options and/or the sale of call options for an equity investor. An investigation is also carried out into the effect of changing the investor's risk measure on the optimal asset allocation. A dynamic investment strategy obtained through multistage portfolio optimization has the potential to be superior to one obtained from a single-period portfolio optimization. Therefore, a novel analysis of the degree of benefit of a dynamic investment strategy for an equity portfolio is performed. In particular, the ability of a dynamic investment strategy to mimic the effects of the inclusion of equity options in an equity portfolio is investigated. The portfolio optimization problem is solved using evolutionary algorithms, due to their ability to incorporate methods from a wide range of heuristic algorithms. Initially, it is shown how the problem-specific parts of my evolutionary algorithm have been designed to solve my original portfolio optimization problem. Given developments in evolutionary algorithms and the variety of possible design structures, a further purpose of my thesis is to investigate the suitability of alternative algorithm design structures. A comparison is made of the performance of two existing algorithms: firstly, the single-objective stepping-stone island model, where each island represents a different risk-aversion parameter, and secondly, the multi-objective Non-Dominated Sorting Genetic Algorithm II (NSGA-II). Innovative hybrids of these algorithms, which also incorporate features from multi-objective evolutionary algorithms, multiple-population models and local search heuristics, are then proposed. A novel way of solving the portfolio optimization is developed by dividing the problem solution into two parts and applying a multi-objective cooperative coevolution evolutionary algorithm. The first solution part consists of the asset allocation weights within the equity portfolio, while the second consists of the asset allocation weights within the equity options and the asset allocation weights between the different asset classes. An original portfolio optimization multi-objective evolutionary algorithm that uses an island model to represent different risk measures is also proposed.
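To make the evolutionary approach concrete, here is a minimal sketch of a (mu+lambda)-style evolutionary search for long-only portfolio weights over Monte Carlo return scenarios. Everything in it (the mean-risk fitness, the mutation scheme, all parameter values) is an illustrative assumption, not the thesis's algorithm:

```python
# Minimal sketch: evolutionary portfolio optimization over Monte Carlo
# scenarios. Fitness form, mutation scheme, and parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_assets, n_scenarios = 5, 2000
# Scenario generation: i.i.d. normal returns as a stand-in for a real model.
scenarios = rng.normal(0.05, 0.15, size=(n_scenarios, n_assets))

def fitness(w, risk_aversion=3.0):
    """Mean-risk fitness: expected portfolio return minus a risk penalty."""
    port = scenarios @ w
    return port.mean() - risk_aversion * port.std()

def mutate(w, sigma=0.05):
    """Gaussian perturbation projected back onto the simplex (long-only)."""
    w = np.clip(w + rng.normal(0, sigma, size=w.shape), 0, None)
    return w / w.sum()

# (mu + lambda) evolution: keep the best mu parents, refill with mutants.
mu, lam, generations = 20, 80, 200
pop = rng.dirichlet(np.ones(n_assets), size=mu + lam)
for _ in range(generations):
    pop = pop[np.argsort([-fitness(w) for w in pop])]  # best first
    parents = pop[:mu]
    children = np.array([mutate(parents[rng.integers(mu)]) for _ in range(lam)])
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(w) for w in pop])]
print("best weights found:", np.round(best, 3))
```

An island model, as compared in the thesis, would run several such populations in parallel (one per risk-aversion parameter or risk measure) and migrate good individuals between them.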

    The History of the Quantitative Methods in Finance Conference Series. 1992-2007

    This report charts the history of the Quantitative Methods in Finance (QMF) conference from its beginning in 1993 to the 15th conference in 2007. It lists alphabetically the 1037 speakers who presented at all 15 conferences and the titles of their papers.

    Multi-objective optimisation under deep uncertainty

    Most decisions in real-life problems must be made in the absence of complete knowledge about their consequences. Furthermore, in some of these problems the probabilities and/or the number of different outcomes are also unknown (a situation termed deep uncertainty). Therefore, all probability-based approaches (such as stochastic programming) are unable to address these problems. On the other hand, involving various stakeholders with different (possibly conflicting) criteria brings additional complexity. The main aim and primary motivation of this thesis has been to deal with deep uncertainty in Multi-Criteria Decision-Making (MCDM) problems, especially in long-term decision-making processes such as strategic planning. To achieve these aims, we first introduce a two-stage scenario-based structure for dealing with deep uncertainty in Multi-Objective Optimisation (MOO)/MCDM problems. The proposed method extends the concept of two-stage stochastic programming with recourse to handle deep uncertainty through the use of scenario planning rather than statistical expectation. In this research, scenarios are used as a dimension of preference (a component of what we term the meta-criteria) to avoid problems relating to the assessment and use of probabilities under deep uncertainty. Such scenario-based thinking involves a multi-objective representation of performance under different future conditions as an alternative to expectation, which fits naturally into the broader multi-objective problem context. To aggregate the objectives of the problem, the Generalised Goal Programming (GGP) approach is used. Because it can handle large numbers of objective functions/criteria, GGP is particularly useful in the proposed framework. Identifying a goal for each criterion is the only action the Decision Maker (DM) needs to take, without having to investigate the trade-offs between different criteria. Moreover, the proposed two-stage framework is expanded to a three-stage structure and a moving-horizon concept to handle deep uncertainty in more complex problems, such as strategic planning. As strategic planning problems deal with more than two stages and real processes are continuous, more scenarios will continuously unfold, which may or may not be periodic. "Stages", in this study, are artificial constructs to structure thinking about an indefinite future. Suitable lengths for the planning window and the stages in the proposed methodology are also investigated. Philosophically, the two-stage structure always plans one step ahead, while the three-stage structure considers the conditions and consequences of the two upcoming steps, which fits well with our primary objective: ignoring the long-term consequences of decisions, as well as likely conditions, would not be a robust strategic approach. Therefore, by utilising the three-stage structure, we may generally expect a more robust decision than with a two-stage representation. Modelling of time preferences in multi-stage problems has also been introduced, to solve the fundamental problem of comparability of the two proposed methodologies given their different time horizons, as the two-stage model is ignorant of the third stage. This concept is applied through differential weighting in the models.
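As a rough illustration of the scenario-based GGP aggregation and differential weighting just described, the sketch below minimises weighted under-achievement deviations across criteria replicated over scenarios (the meta-criteria). All goals, payoff coefficients, weights and the budget constraint are invented for the example, and scipy's `linprog` stands in for whatever solver the thesis actually used:

```python
# Toy scenario-based Generalised Goal Programming (GGP) aggregation.
# Numbers are illustrative; the thesis's models are far larger.
import numpy as np
from scipy.optimize import linprog

n_x = 2                        # first-stage decision variables
scenarios, criteria = 3, 2     # futures x criteria = meta-criteria
goals = np.array([14.0, 6.0])  # aspiration level per criterion

# coef[s, c] @ x = value of criterion c under scenario s (invented numbers).
coef = np.array([[[1.0, 0.5], [0.3, 0.6]],
                 [[0.8, 0.7], [0.4, 0.4]],
                 [[1.2, 0.3], [0.2, 0.8]]])

w = np.ones((scenarios, criteria))  # importance weight per meta-criterion

# Variables: [x, d], where d holds under-achievement deviations.
n_d = scenarios * criteria
c_obj = np.concatenate([np.zeros(n_x), w.ravel()])  # minimise weighted d

A_ub, b_ub = [], []
for s in range(scenarios):
    for c in range(criteria):
        # coef @ x + d >= goal, rewritten as -coef @ x - d <= -goal
        row = np.zeros(n_x + n_d)
        row[:n_x] = -coef[s, c]
        row[n_x + s * criteria + c] = -1.0
        A_ub.append(row)
        b_ub.append(-goals[c])
A_ub.append(np.r_[np.ones(n_x), np.zeros(n_d)])  # budget: sum(x) <= 10
b_ub.append(10.0)

res = linprog(c_obj, A_ub=np.array(A_ub), b_ub=b_ub,
              bounds=[(0, 10)] * n_x + [(0, None)] * n_d)
print("first-stage decision:", res.x[:n_x].round(3))
print("weighted goal shortfall:", round(res.fun, 3))
```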
Importance weights, then, are primarily used to make the two- and three-stage models more directly comparable, and only secondarily as a measure of risk preference. Differential weighting also lets us express further preferences in the model and steer it towards more preferred solutions. Expanding the proposed structure to problems with more than three stages, which usually have too many meta-scenarios, may lead to a computationally expensive model that cannot easily be solved, if at all. Moreover, extending the planning horizon too far will not yield an exact plan, as nothing in nature is predictable to this level of detail and we are always surprised by new events. Therefore, beyond the expensive computation of a multi-stage structure with more than three stages, defining plausible scenarios for distant stages is neither logical nor practical. For this reason, moving-horizon models in a T-stage planning window are introduced. To run and evaluate the proposed two- and three-stage moving-horizon frameworks over longer planning horizons, we would need to identify all plausible meta-scenarios; under the assumption of deep uncertainty, this identification is almost impossible. Even with a finite set of plausible meta-scenarios, comparing and computing the results across all of them is hardly possible, because the size of the model grows exponentially with the length of the planning horizon. Furthermore, analysis of the solutions requires hundreds or thousands of multi-objective comparisons that are not easily conceivable, if at all. These issues motivated a Simulation-Optimisation study to simulate a reasonable number of meta-scenarios and enable evaluation, comparison and analysis of the proposed methods on problems with a T-stage planning horizon. In this Simulation-Optimisation study, we start by setting the current scenario, the scenario we face at the beginning of the period. The optimisation model is then run to obtain the first-stage decisions, which can be implemented immediately. Thereafter, the next scenario is randomly generated using Monte Carlo simulation. Under deep uncertainty we do not have enough knowledge about the likelihoods of plausible scenarios, nor about the probability space; therefore, to simulate deep uncertainty, we make no use of scenario likelihoods in the decision models. Two- and three-stage Simulation-Optimisation algorithms are also proposed. A comparison of these algorithms shows that the solutions of the two-stage moving-horizon model are feasible in the three-stage model, and that the optimal solution of the three-stage moving-horizon model is not dominated by any solution of the two-stage model; it must therefore achieve goal attainment at least as good as that of the two-stage moving-horizon model. Accordingly, the three-stage moving-horizon model evaluates the optimal solution of the corresponding two-stage moving-horizon model against the other feasible solutions; if it selects anything else, that choice must be better in goal achievement, more robust in some future scenarios, or both. However, the cost of this superiority must be considered (it may lead to a computationally expensive problem), and the efficiency of applying this structure needs to be established.
Obviously, the three-stage structure brings more complexity and computation than the two-stage approach. It is also shown that the solutions of the three-stage model would be preferred to those of the two-stage model under most circumstances. By the "efficiency" of the three-stage framework, however, we mean whether utilising this approach and its solutions is worth the additional complexity and computation. The experiments in this study show that the three-stage model has advantages under most circumstances (meta-scenarios), but that the gains are quite modest. This is frequently observed when comparing the methods on problems with a short planning window (say, fewer than five stages). Nevertheless, analysis of the length of the planning horizon and its effect on the solutions indicates that the three-stage models are more efficient over longer periods, because the differences between the solutions of the two structures grow with each iteration of the moving-horizon algorithms. Moreover, in the long-term calculations we noticed that the two-stage algorithm failed to find the optimal solution in some iterations, while the three-stage algorithm found the optimal value in all cases. Thus, for planning horizons with more than ten stages, the efficiency of the three-stage model may be worth the expense of the added complexity and computation. Nevertheless, if the DM prefers not to use the three-stage structure because of its complexity and/or computational cost, the two-stage moving-horizon model can still provide reasonable solutions, although they might not be as good as those generated by a three-stage framework. Finally, to examine the power of the proposed methodology in real cases, the two-stage structure was applied in the sugarcane industry to analyse the whole infrastructure of the sugar and bioethanol Supply Chain (SC), such that economic (max profit), environmental (min CO₂) and social (max job creation) benefits were optimised under six key uncertainties, namely sugarcane yield, ethanol and refined sugar demands and prices, and the exchange rate. Moreover, a critical design question, namely the optimal number, technologies and location(s) of the ethanol plant(s), was also addressed in this study. A general model for the strategic planning of sugar-bioethanol supply chains under deep uncertainty was formulated and examined in a case study based on the South African sugar industry. The problem is formulated as a scenario-based mixed-integer two-stage multi-objective optimisation problem and solved using the Generalised Goal Programming approach. To sum up, the proposed methodology is, to the best of our knowledge, a novel approach that can successfully handle deep uncertainty in MCDM/MOO problems with both short- and long-term planning horizons. It is generic enough to be used in any MCDM problem under deep uncertainty. In this thesis, however, the proposed structure was only applied to linear problems (LP); non-linear problems are an important direction for future research, and different solution methods may need to be examined to solve them.
Moreover, many other real-world optimisation and decision-making applications can be considered for examining the proposed method in future work.
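A skeletal sketch of the moving-horizon Simulation-Optimisation loop described above: solve the scenario model, implement only the first-stage decision, then reveal the next scenario by a Monte Carlo draw. Here `solve_two_stage`, the scenario set, and the uniform draw are placeholder assumptions standing in for the thesis's actual models:

```python
# Skeleton of a moving-horizon Simulation-Optimisation loop under deep
# uncertainty. `solve_two_stage` is a hypothetical stand-in for a real
# two-stage GGP model; scenario names and the draw are illustrative.
import random

random.seed(1)
SCENARIOS = ["drought", "normal", "boom"]  # plausible futures, no probabilities

def solve_two_stage(current, futures):
    """Placeholder: return a first-stage decision hedged against `futures`.
    A real implementation would solve a scenario-based LP/MILP here."""
    return min(futures)  # toy rule for the sketch

def moving_horizon(T=10):
    history = []
    current = "normal"  # the scenario we face at the start
    for t in range(T):
        decision = solve_two_stage(current, SCENARIOS)
        history.append((t, current, decision))
        # Deep uncertainty: the next scenario is drawn without feeding any
        # likelihood information into the decision model itself.
        current = random.choice(SCENARIOS)
    return history

for step in moving_horizon():
    print(step)
```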

    Behavioral Finance and Agent-Based Artificial Markets

    Studying the behavior of market participants is important due to its potential impact on asset prices and the dynamics of financial markets. The idea that individual investors are prone to biases in judgment and use various heuristics, which might lead to anomalies at the market level, has been explored within the field of behavioral finance. In this dissertation, we analyze the market-wide implications of investor behavior and irrationality by means of agent-based simulations of financial markets. The usefulness of agent-based artificial markets for studying behavioral finance topics stems from their ability to relate the micro-level behavior of individual market participants (represented as agents) to the macro-level behavior of the market (artificial time series). This micro-macro mapping of the agent-based methodology is particularly useful for behavioral finance, because that link is often broken when using other methodological approaches. In this thesis, we study various biases documented in the behavioral finance literature and propose novel models for some of these behavioral phenomena. We provide mathematical definitions and computational implementations for overconfidence (miscalibration and the better-than-average effect), investor sentiment (optimism and pessimism), biased self-attribution, loss aversion, and recency and primacy effects. The levels of these behavioral biases are related to features of the market dynamics, such as bubbles and crashes and the excess volatility of the market price. The impact of behavioral biases on investor performance is also studied.
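The micro-to-macro mapping can be illustrated with a toy agent-based market in which biased agents generate an aggregate price series. The price-impact rule, the parameterisation of sentiment and overconfidence, and all constants below are assumptions made for the sketch, not the dissertation's models:

```python
# Toy agent-based market: agents forecast the next return with an optimism/
# pessimism bias, overconfident agents trade more aggressively, and price
# moves with aggregate demand. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

n_agents, n_steps = 100, 500
sentiment = rng.normal(0.0, 0.02, n_agents)       # optimism (+) / pessimism (-)
overconfidence = rng.uniform(0.5, 1.0, n_agents)  # shrinks perceived risk

prices = [100.0]
for t in range(n_steps):
    signal = rng.normal(0.0, 0.01, n_agents)   # per-agent unbiased signal
    expected = signal + sentiment              # biased expected return
    # Overconfident agents scale up positions for the same signal.
    demand = (expected / (0.01 * overconfidence)).sum()
    ret = 0.0001 * demand                      # linear price-impact rule
    prices.append(prices[-1] * (1.0 + ret))

prices = np.array(prices)
rets = np.diff(np.log(prices))
print("volatility:", rets.std().round(4), "| final price:", prices[-1].round(2))
```

Relating bias parameters (here `sentiment` and `overconfidence`) to emergent statistics of the artificial series is the kind of micro-macro experiment the abstract describes.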

    The evolution and dynamics of stocks on the Johannesburg Securities Exchange and their implications for equity investment management

    This thesis explores the dynamics of Johannesburg Stock Exchange returns to understand how they impact stock prices. The introductory chapter gives a brief overview of financial markets in general and the Johannesburg Securities Exchange (JSE) in particular. The second chapter employs the fractal analysis technique, a method for estimating the Hurst exponent, to examine the JSE indices. The results suggest that the JSE is fractal in nature, implying a long-term predictability property, and indicate a logical pattern of variation of the Hurst exponent by firm size, market characteristics and sector grouping. The third chapter investigates the economic and political events that affect different market sectors and how they are implicated in the structural dynamics of the JSE, providing insight into the degree of sensitivity of different market sectors to positive and negative news. The findings demonstrate transient episodes of nonlinearity that can be attributed to economic events and the state of the market. Chapter 4 looks at the evolution of risk measurement and the distribution of returns on the JSE. There is evidence of fat tails, and the Student t-distribution fits JSE returns better than the Normal distribution. The Gaussian-based Value-at-Risk model also proved to be an ineffective risk measurement tool under high market volatility. In Chapter 5, simulations are used to investigate how different agent interactions affect market dynamics. The results show that traders can switch between trading strategies, and that this evolutionary switching depends on the state of the market. Chapter 6 shows the extent to which endogeneity affects price formation. To explore this relationship, the Poisson Hawkes model, which combines exogenous influences with self-excited dynamics, is employed. Evidence suggests that the level of endogeneity has been increasing rapidly over the past decade, implying a growing influence of internal dynamics on price formation. The findings also suggest that market crashes are caused by endogenous dynamics, with exogenous shocks merely acting as catalysts. Chapter 7 presents a hybrid adaptive intelligent model for financial time series prediction. Given the evidence of nonlinearity, heterogeneous agents and the fractal nature of the JSE, neural networks, fuzzy logic and fractal theory are combined to obtain a hybrid adaptive intelligent model. The proposed system outperformed traditional models.
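As a concrete reference point for the fractal analysis mentioned above, here is a minimal rescaled-range (R/S) estimator of the Hurst exponent. This is one textbook variant among several estimation methods; the abstract does not specify which procedure the thesis used:

```python
# Minimal rescaled-range (R/S) estimate of the Hurst exponent. H near 0.5
# suggests a random walk; H > 0.5 suggests long-range persistence.
import numpy as np

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    x = np.asarray(x, dtype=float)
    rs = []
    for n in window_sizes:
        chunks = x[: len(x) // n * n].reshape(-1, n)
        # Cumulative deviations from each window's mean.
        dev = np.cumsum(chunks - chunks.mean(axis=1, keepdims=True), axis=1)
        R = dev.max(axis=1) - dev.min(axis=1)  # range of cumulative deviations
        S = chunks.std(axis=1)                 # per-window standard deviation
        rs.append((R / S).mean())
    # Slope of log(R/S) against log(n) estimates H.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
returns = rng.normal(0, 1, 4096)  # i.i.d. noise: expect H near 0.5
print("Hurst exponent:", round(hurst_rs(returns), 3))
```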

    Testing The Adaptive Efficiency Of U.S. Stock Markets: A Genetic Programming Approach

    Genetic programming is employed to develop trading rules, which are applied to test the efficient market hypothesis. Most previous tests of the efficient market hypothesis were limited to trading rules that returned simple buy-sell signals. The broader approach taken here, developed under a framework consistent with the standard portfolio model, allows trading rules that are defined as the proportion of an investor's total wealth invested in the risky asset (rather than a simple buy-sell signal). The methodology uses the average utility of terminal wealth as the fitness function, as a means of adjusting returns for risk. With data on daily stock prices from 1985 to 2005, the algorithm finds trading rules for 24 individual stocks. These rules are then applied to out-of-sample data to test the adaptive efficiency of these markets. Applying more stringent thresholds when choosing the trading rules applied out-of-sample (an extension of previous research) improves out-of-sample fitness; however, the rules still do not outperform a simple buy-and-hold strategy. These findings imply that the 24 stock markets studied were adaptively efficient during the period under study.
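To illustrate the fitness function described above, the sketch below scores a candidate trading rule (a wealth fraction in the risky asset rather than a buy-sell signal) by the average utility of terminal wealth across simulated price paths. The momentum rule, the CRRA utility, and the simulated data are stand-ins: in genetic programming the rule's expression tree would itself be evolved, not fixed as here:

```python
# Sketch of risk-adjusted fitness for trading rules: average utility of
# terminal wealth over price paths. Rule, utility form, and data are
# illustrative assumptions, not the paper's exact setup.
import numpy as np

rng = np.random.default_rng(7)

def candidate_rule(prices, t):
    """Toy rule: wealth fraction in the risky asset from a momentum signal."""
    if t < 20:
        return 0.5
    momentum = prices[t] / prices[t - 20] - 1.0
    return float(np.clip(0.5 + 5.0 * momentum, 0.0, 1.0))

def utility(wealth, gamma=3.0):
    """CRRA utility, one common way of adjusting returns for risk."""
    return wealth ** (1.0 - gamma) / (1.0 - gamma)

def fitness(rule, n_paths=200, n_days=250, rf=0.0002):
    utils = []
    for _ in range(n_paths):
        # Simulated geometric-random-walk prices stand in for real data.
        prices = 100.0 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, n_days)))
        wealth = 1.0
        for t in range(n_days - 1):
            frac = rule(prices, t)
            risky = prices[t + 1] / prices[t] - 1.0
            wealth *= 1.0 + frac * risky + (1.0 - frac) * rf
        utils.append(utility(wealth))
    return np.mean(utils)

print("average utility of terminal wealth:", round(fitness(candidate_rule), 6))
```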