258 research outputs found

    Maximally Machine-Learnable Portfolios

    When it comes to stock returns, any form of predictability can bolster risk-adjusted profitability. We develop a collaborative machine learning algorithm that optimizes portfolio weights so that the resulting synthetic security is maximally predictable. Precisely, we introduce MACE, a multivariate extension of Alternating Conditional Expectations that achieves this goal by wielding a Random Forest on one side of the equation and a constrained Ridge Regression on the other. There are two key improvements over Lo and MacKinlay's original maximally predictable portfolio approach. First, MACE accommodates any (nonlinear) forecasting algorithm and predictor set. Second, it handles large portfolios. We conduct exercises at daily and monthly frequencies and report significant increases in predictability and profitability using very little conditioning information. Interestingly, predictability is found in bad as well as good times, and MACE successfully navigates the debacle of 2022.
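    A minimal sketch of the alternating idea described in the abstract, assuming scikit-learn is available; the data shapes, the number of iterations and the crude weight normalisation are illustrative assumptions, not the authors' implementation:

```python
# Sketch only: alternate between a Random Forest forecast of the synthetic
# portfolio and a ridge step that updates the weights to track that forecast.
# `returns` is a (T, N) array of asset returns, `X` a (T, K) array of lagged
# predictors aligned so that X[t] is known before returns[t].
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

def mace_sketch(returns, X, n_iter=10, ridge_alpha=1.0, seed=0):
    T, N = returns.shape
    w = np.full(N, 1.0 / N)                      # start from equal weights
    for _ in range(n_iter):
        y = returns @ w                          # current synthetic portfolio
        forest = RandomForestRegressor(n_estimators=200, random_state=seed)
        forest.fit(X, y)                         # one side: nonlinear forecast of the portfolio
        y_hat = forest.predict(X)
        ridge = Ridge(alpha=ridge_alpha, fit_intercept=False)
        ridge.fit(returns, y_hat)                # other side: weights that make w'r track the forecast
        w = ridge.coef_
        w = w / np.sum(np.abs(w))                # crude normalisation standing in for the paper's constraint
    return w, forest
```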

    Canonical Portfolios: Optimal Asset and Signal Combination

    We present a novel framework for analyzing the optimal asset and signal combination problem, which builds upon the dynamic portfolio selection problem of Brandt and Santa-Clara (2006) in two phases. First, we reformulate their original investment problem into a tractable vehicle, which admits a closed-form solution that scales to large dimensions by imposing a joint Gaussian structure on the asset returns and signals. Second, we recast the optimal portfolio of correlated assets and signals into a set of uncorrelated managed portfolios through the lens of the Canonical Correlation Analysis of Hotelling (1936). The new investment environment of uncorrelated managed portfolios offers unique economic insights into the joint correlation structure of our optimal portfolio policy. We also operationalize our theoretical framework to bridge the gap between theory and practice and showcase the improved performance of our proposed method over natural competing benchmarks.
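    A hedged sketch of the canonical-rotation step using scikit-learn's CCA; the variable names (`R` for asset returns, `S` for lagged signals) and the number of canonical pairs are assumptions for illustration, not the paper's code:

```python
# Rotate correlated assets and signals into (approximately) uncorrelated
# canonical pairs; each pair is a managed portfolio timed by its own signal.
import numpy as np
from sklearn.cross_decomposition import CCA

def canonical_portfolios(R, S, n_pairs=5):
    # R: (T, N) asset returns; S: (T, N) corresponding lagged signals
    cca = CCA(n_components=n_pairs)
    cca.fit(S, R)                          # canonical directions for signals and assets
    S_c, R_c = cca.transform(S, R)         # canonical variates (scores)
    # Correlation within each pair measures how strongly that managed
    # portfolio's return is timed by its canonical signal.
    rho = np.array([np.corrcoef(S_c[:, k], R_c[:, k])[0, 1] for k in range(n_pairs)])
    return cca, rho
```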

    Dynamic Asset Allocation in a Conditional Value-at-risk Framework

    The thesis first extends the original Black-Litterman model to dynamic asset allocation by feeding expected conditional equilibrium returns and conditional covariances from three volatility models (the DCC, EWMA and RW models) into the reverse optimisation of the utility function (the implied BL portfolio) and into the maximised Sharpe ratio optimisation model (the SR-BL portfolio). Momentum portfolios serve as the view portfolios in the Black-Litterman model. The thesis compares the performance of the dynamic implied BL portfolio and the dynamic SR-BL portfolio over single and multiple periods, both in-sample and out-of-sample. It finds that dynamic BL portfolios beat the benchmark in-sample and out-of-sample, that the dynamic implied BL portfolio consistently outperforms the dynamic SR-BL portfolio, that the empirical VaR and CVaR of the dynamic SR-BL portfolios are much higher than those of the dynamic implied BL portfolio, and that dynamic BL portfolios based on the DCC volatility model perform best among the three volatility models.
    To improve the performance of SR-BL portfolios, the thesis then constructs dynamic BL portfolios from two new optimisation models, a maximised reward-to-VaR model (MVaR-BL portfolios) and a maximised reward-to-CVaR model (MCVaR-BL portfolios), under normal and t-distribution assumptions at the 99%, 95% and 90% confidence levels, and again compares single-period and multi-period performance in-sample and out-of-sample. There are three main findings. First, both the MVaR-BL and the MCVaR-BL portfolios can improve on the dynamic SR-BL portfolio at moderate confidence levels. Second, the MVaR-BL and MCVaR-BL portfolios perform similarly under the normal distribution, while the MCVaR-BL portfolio outperforms the MVaR-BL portfolio under the t-distribution at certain confidence levels in both single and multiple periods. Third, the DCC-BL portfolio under the t-distribution assumption is superior to the DCC-BL portfolio under the normal distribution assumption.
    Because the dynamic SR-BL portfolios exhibit higher empirical VaR and CVaR, the thesis finally imposes VaR and CVaR constraints in the construction of dynamic BL portfolios, again under normal and t-distribution assumptions at the 99%, 95% and 90% confidence levels, and studies how the distributional assumption, the confidence level and the tightness of the VaR and CVaR constraints affect the dynamic BL portfolios. Both in-sample and out-of-sample performance can be improved by imposing these constraints, and the results suggest that adding a moderate CVaR constraint to the maximal Sharpe ratio optimisation model under the t-distribution at a suitable confidence level yields the best dynamic DCC-BL portfolio performance over single and multiple periods. The performance evaluation criterion (Sharpe ratio, reward-to-VaR ratio or reward-to-CVaR ratio) affects the choice of optimisation model in dynamic asset allocation.
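    A compact sketch of the Black-Litterman update with view portfolios and a maximum-Sharpe step, for orientation only; the conditional covariance `Sigma` would come from one of the DCC/EWMA/RW models estimated in the thesis, and the default `delta`, `tau` and view-uncertainty choices below are common textbook conventions rather than the thesis' settings:

```python
# Sketch of the standard Black-Litterman posterior mean and a tangency step.
import numpy as np

def black_litterman(Sigma, w_mkt, P, Q, delta=2.5, tau=0.05, Omega=None):
    # Sigma: (N, N) conditional covariance; w_mkt: (N,) market weights;
    # P: (V, N) view (e.g. momentum) portfolios; Q: (V,) expected view returns.
    pi = delta * Sigma @ w_mkt                           # implied equilibrium returns
    if Omega is None:
        Omega = np.diag(np.diag(tau * P @ Sigma @ P.T))  # common default view uncertainty
    A = np.linalg.inv(tau * Sigma)
    B = P.T @ np.linalg.inv(Omega) @ P
    mu_bl = np.linalg.solve(A + B, A @ pi + P.T @ np.linalg.inv(Omega) @ Q)
    return mu_bl                                         # posterior expected returns

def max_sharpe_weights(mu, Sigma):
    w = np.linalg.solve(Sigma, mu)                       # unconstrained tangency direction
    return w / w.sum()                                   # normalise to fully invested
```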

    Essays on variance risk

    My PhD thesis consists of three papers which study the nature, structure, dynamics and price of variance risks. As a tool, I make use of multivariate affine jump-diffusion models with matrix-valued state spaces. The first chapter proposes a new three-factor model for index option pricing. Core features of the model are unspanned skewness and term structure effects, i.e., the structure of the volatility surface can change without a change in the volatility level. The model reduces pricing errors compared to benchmark two-factor models by up to 22%. Using a decomposition of the latent state, I show that this superior performance is directly linked to a third volatility factor which is unrelated to the volatility level. The second chapter studies the price of the smile, defined as the premia for individual option risk factors. These risk factors are directly linked to the variance risk premium (VRP). I find that option risk premia are spanned by mid-run and long-run volatility factors, while the large high-frequency factor does not enter the price of the smile. I find the VRP to be unambiguously negative and decompose it into three components: diffusive risk, jump risk and jump intensity risk. The distinct term structure patterns of these components explain why the term structure of the VRP is downward sloping in normal times and upward sloping during market distress. In predictive regressions, I find economically relevant predictive power for returns on volatility positions and for S&P 500 index returns. The last chapter introduces several numerical methods necessary for estimating matrix-valued affine option pricing models, including the Matrix Rotation Count algorithm and a fast evaluation scheme for the likelihood function.
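    The thesis prices variance risk inside matrix-valued affine models; as a rough empirical anchor only, a common model-free proxy for the variance risk premium is risk-neutral minus realized variance, sketched below under assumed inputs (an annualised implied volatility such as the VIX and a window of daily log returns):

```python
# Illustrative proxy only, not the chapter's model-based decomposition.
import numpy as np

def variance_risk_premium(implied_vol_pct, daily_returns, window=21):
    # implied_vol_pct: annualised implied vol in percent (e.g. 20.0)
    # daily_returns: 1-D array of daily log returns, most recent last
    rn_var = (implied_vol_pct / 100.0) ** 2                        # risk-neutral expected annual variance
    rv = np.sum(daily_returns[-window:] ** 2) * (252.0 / window)   # annualised realized variance
    return rn_var - rv                                             # typically positive gap paid as an insurance premium
```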

    Higher-moment stochastic discount factor specifications and the cross-section of asset returns

    The stochastic discount factor (SDF) model provides a general framework for pricing assets. A suitably specified discount factor encompasses most of the theories currently in use, including the CAPM, the consumption CAPM, the higher-moment CAPM and their conditional versions. In this thesis, we focus on the empirical admissibility of alternative SDFs under restrictions that ensure that investors' risk preferences are well behaved. More innovatively, we explore whether the SDF implied by the three- and four-moment CAPM is plausible under restrictions that are weaker than those considered by Dittmar (2002) yet sufficient to rule out implausible curvature of the representative investor's utility function. We find that, even under these weaker restrictions, the three- and four-moment CAPM cannot solve well-known puzzles that plague the empirical performance of extant rational asset pricing models, even though the higher-order terms do generate considerable additional explanatory power. Faced with this difficulty, we then explore whether the failure to fully account for cross-sectional differences in average returns can be explained by the presence of either transaction costs or a behavioural component of the SDF, reflecting investors' systematic mistakes in processing information. We find evidence of both problems, though our analysis is not conclusive in this respect. Finally, in a more applied exercise, we apply the SDF framework to test whether Chinese fund managers generate superior investment performance, and find that they have not achieved better performance than individual investors under either the unconditional or the conditional measure.
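    An illustrative sketch, not the thesis' estimator or restrictions, of how a cubic (three- and four-moment CAPM) SDF can be fitted by matching the pricing moment conditions E[m R^e] = 0; the normalisation of the SDF mean and the starting values are assumptions:

```python
# Fit m_t = a + b*rm_t + c*rm_t^2 + d*rm_t^3 so that average pricing errors
# E[m_t * Re_t] are close to zero for each test asset.
import numpy as np
from scipy.optimize import least_squares

def fit_cubic_sdf(rm, Re, target_sdf_mean=1.0):
    # rm: (T,) market excess return; Re: (T, N) test-asset excess returns
    def moments(theta):
        a, b, c, d = theta
        m = a + b * rm + c * rm**2 + d * rm**3
        pricing_errors = (m[:, None] * Re).mean(axis=0)   # E[m * Re] = 0 conditions
        scale = m.mean() - target_sdf_mean                 # pin down the level of the SDF
        return np.r_[pricing_errors, scale]
    res = least_squares(moments, x0=np.array([1.0, -1.0, 0.0, 0.0]))
    return res.x                                            # (a, b, c, d)
```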

    A discussion of data enhancement and optimization techniques for a fund of hedge funds portfolio

    The aim of this thesis is to present, discuss and illustrate with experiments the various data enhancement and optimization techniques for a fund of hedge funds portfolio. Special emphasis is placed on the interaction of the different data enhancement and optimization techniques. First building blocks for an integrated computer-based asset allocation tool are provided and documented, and ideas for future development and research are presented. Two main points distinguish this thesis from papers on similar themes: it operates mainly at the individual fund level, and it covers the whole process, from data enhancement and parameter estimation through optimization to a proper evaluation of the outcomes.
    In the first chapter the theme is placed in the broader context of finance, the term "hedge fund" is defined and the relevance of the problem is discussed. Besides the rapid growth of the hedge fund industry, the increasing interest of institutional investors is an important reason to provide decision-support methods based on quantitative models and scientific findings for the selection of hedge funds.
    The second chapter deals with data enhancement. In general, the proverb "garbage in, garbage out" holds for every optimization algorithm, but it is especially true for hedge funds, where the data situation is difficult: only monthly return data are provided and there is little information about risk exposures. After a short literature overview of hedge-fund-specific data problems and biases, descriptive statistics are provided for the two databases used in this thesis. In the data enhancement, special emphasis is put on the high autocorrelation in hedge fund returns and on filling up the short track records of young funds: the former because high autocorrelation contradicts fundamental principles of modern finance, the latter because it leads to a better understanding of a fund's risk profile. For the purpose of filling up track records, factor model approaches and the use of cluster analysis are proposed. After a short overview of the risk factors considered in the literature, the modelling of nonlinear dependencies, for example via option structures, is discussed in more detail, as this topic is central to the thesis. Important original contributions in this context are the economic interpretation and motivation of the favoured option structure model and first experiments on automatic model selection and on integrating qualitative data via cluster analysis.
    The third chapter is devoted to optimization. The main challenge is that hedge fund returns are usually not normally distributed, whereas traditional concepts of finance are built exactly on the assumption of normally distributed returns, so alternative concepts have to be used. After a short overview of classical mean-variance optimization and of ways to obtain more robust results, essentially two alternative approaches are introduced: parametric approaches, which take the higher moments (skewness and kurtosis) of the distribution into account, and nonparametric approaches, which work with historical or simulated scenarios and the discrete distributions resulting from them. In the latter approach, the investor's preferences can be captured via a dispersion measure, a quantile measure, or a combination of both. The thesis then considers how simple linear and more complex logical constraints can be used and how the presented concepts can be integrated, in particular which data enhancement techniques fit which optimization procedures. In the last part of chapter three, extensive optimization experiments are conducted and the results are interpreted. The central findings are that the choice of the risk measure has hardly any impact on the ultimate evaluation criterion, the risk-adjusted out-of-sample performance, while filling up short track records significantly improves out-of-sample risk. Finally, the findings are summarized and an outlook on future research is given.
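    A minimal sketch of one data enhancement idea discussed above, filling up a short track record with factor-implied returns; the factor set, the alignment convention and the residual resampling are illustrative assumptions rather than the thesis' exact procedure:

```python
# Backfill a fund's missing early history from a factor model estimated on its
# live track record, plus bootstrapped residuals to preserve idiosyncratic noise.
import numpy as np

def backfill_track_record(fund_returns, factors_full, seed=0):
    # fund_returns: (T_short,) observed monthly returns, aligned with the LAST
    # T_short rows of factors_full: (T_full, K) monthly factor returns.
    rng = np.random.default_rng(seed)
    T_short = len(fund_returns)
    F_live = factors_full[-T_short:]
    X = np.column_stack([np.ones(T_short), F_live])
    beta, *_ = np.linalg.lstsq(X, fund_returns, rcond=None)   # fund's factor exposures
    resid = fund_returns - X @ beta
    F_miss = factors_full[:-T_short]
    X_miss = np.column_stack([np.ones(len(F_miss)), F_miss])
    synthetic = X_miss @ beta + rng.choice(resid, size=len(F_miss), replace=True)
    return np.r_[synthetic, fund_returns]                      # extended track record
```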

    Automatic Algorithm Selection for Complex Simulation Problems

    To select the most suitable simulation algorithm for a given task is often difficult. This is due to intricate interactions between model features, implementation details, and the runtime environment, which may strongly affect the overall performance. The thesis consists of three parts. The first part surveys existing approaches to the algorithm selection problem and discusses techniques to analyze simulation algorithm performance. The second part introduces a software framework for automatic simulation algorithm selection, which is evaluated in the third part.
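    For orientation only, a naive selector that chooses among candidate simulation algorithms by benchmarking them on a reduced problem instance; the thesis' framework is considerably more general, and the interface below is an assumed toy, not its API:

```python
# Benchmark each candidate on a small instance and pick the fastest one.
import time

def select_algorithm(candidates, small_problem):
    # candidates: dict mapping name -> callable(problem); small_problem: reduced instance
    timings = {}
    for name, algo in candidates.items():
        start = time.perf_counter()
        algo(small_problem)
        timings[name] = time.perf_counter() - start
    best = min(timings, key=timings.get)
    return best, timings
```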

    Economic Engineering Modeling of Liberalized Electricity Markets: Approaches, Algorithms, and Applications in a European Context

    This dissertation focuses on selected issues in the mathematical modeling of electricity markets. In a first step, the interrelations of electric power market modeling are highlighted as a crossroads of operations research, applied economics, and engineering. In a second step, the development of a large-scale continental European economic engineering model named ELMOD is described and the model is applied to the issue of wind integration. It is concluded that enabling the integration of low-carbon technologies appears feasible in the case of wind energy. In a third step, algorithmic work is carried out on a game-theoretic model. Two approaches to solving a discretely-constrained mathematical program with equilibrium constraints using disjunctive constraints are presented: the first reformulates the problem as a mixed-integer linear program and the second applies the Benders decomposition technique. Selected numerical results are reported.
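    A toy illustration, under an assumed a-priori bound, of the disjunctive-constraints idea mentioned above: a single linear complementarity condition 0 <= x ⟂ (q + M x) >= 0 is encoded with big-M constraints and binary switches, giving a mixed-integer linear program (solved here with SciPy's MILP interface). The dissertation's actual game-theoretic model and its Benders variant are far richer:

```python
# Encode complementarity with binaries z: x <= B*z and (q + M x) <= B*(1 - z),
# so for each component either x_i = 0 or the slack w_i = (q + M x)_i = 0.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def solve_lcp_via_milp(M, q, B=10.0):
    n = len(q)
    Z, I = np.zeros((n, n)), np.eye(n)
    # decision vector: [x (continuous, n), z (binary, n)]
    cons = [
        LinearConstraint(np.hstack([M, Z]), -q, np.inf),          # w = q + M x >= 0
        LinearConstraint(np.hstack([I, -B * I]), -np.inf, 0),     # x <= B * z
        LinearConstraint(np.hstack([M, B * I]), -np.inf, B - q),  # w <= B * (1 - z)
    ]
    res = milp(
        c=np.zeros(2 * n),                                        # pure feasibility problem
        constraints=cons,
        integrality=np.r_[np.zeros(n), np.ones(n)],
        bounds=Bounds(np.zeros(2 * n), np.r_[np.full(n, np.inf), np.ones(n)]),
    )
    return res.x[:n] if res.success else None

# Example: q = [-1, 1], M = 2*I has the unique solution x = [0.5, 0].
print(solve_lcp_via_milp(np.eye(2) * 2, np.array([-1.0, 1.0])))
```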

    Computational methodology for modelling the dynamics of statistical arbitrage

    Recent years have seen the emergence of a multi-disciplinary research area known as "Computational Finance". In many cases the data generating processes of financial and other economic time-series are at best imperfectly understood. By allowing restrictive assumptions about price dynamics to be relaxed, recent advances in computational modelling techniques offer the possibility to discover new "patterns" in market activity. This thesis describes an integrated "statistical arbitrage" framework for identifying, modelling and exploiting small but consistent regularities in asset price dynamics. The methodology developed in the thesis combines the flexibility of emerging techniques such as neural networks and genetic algorithms with the rigour and diagnostic techniques provided by established modelling tools from the fields of statistics, econometrics and time-series forecasting. The modelling methodology consists of three main parts. The first part is concerned with constructing combinations of time-series which contain a significant predictable component, and is a generalisation of the econometric concept of cointegration. The second part is concerned with building predictive models of the mispricing dynamics and consists of low-bias estimation procedures which combine elements of neural and statistical modelling. The third part controls the risks posed by model selection and performance instability by actively encouraging diversification across a "portfolio of models". A novel population-based algorithm for the joint optimisation of a set of trading strategies is presented, inspired both by genetic and evolutionary algorithms and by modern portfolio theory. Throughout the thesis the performance and properties of the algorithms are validated by means of experimental evaluation on synthetic data sets with known characteristics. The effectiveness of the methodology is demonstrated by extensive empirical analysis of real data sets, in particular daily closing prices of FTSE 100 stocks and international equity indices.
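    A hedged sketch of the first stage described above: constructing a combination of two price series whose residual is a candidate mean-reverting "mispricing" (an Engle-Granger style cointegrating regression using statsmodels). Treating the relationship as a simple pairwise regression is an assumption for illustration; the thesis generalises this idea well beyond pairs:

```python
# Regress one price series on another, take the residual as the synthetic
# "mispricing" series, and test it for mean reversion with an ADF test.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def build_mispricing(price_y, price_x):
    X = sm.add_constant(price_x)
    hedge = sm.OLS(price_y, X).fit()             # cointegrating regression
    spread = price_y - hedge.predict(X)          # candidate stationary combination
    adf_stat, pvalue, *_ = adfuller(spread)      # small p-value suggests mean reversion
    return spread, hedge.params, pvalue
```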