18 research outputs found

    A Coupled Markov Chain Approach to Credit Risk Modeling

    Get PDF
    We propose a Markov chain model for credit rating changes. We do not use any distributional assumptions on the asset values of the rated companies but directly model the rating transition process. The parameters of the model are estimated by a maximum likelihood approach using historical rating transitions and heuristic global optimization techniques. We benchmark the model against a GLMM model in the context of bond portfolio risk management. The proposed model yields stronger dependencies and higher risks than the GLMM model. As a result, the risk-optimal portfolios are more conservative than the decisions resulting from the benchmark model.
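    The estimation step can be illustrated with a minimal sketch. The snippet below is not the paper's coupled model; it only shows the standard cohort-style maximum likelihood estimate of a plain (uncoupled) rating transition matrix from observed one-year transitions. The rating scale and sample data are invented purely for illustration.

```python
import numpy as np

# Hypothetical rating scale; the coupled dependence structure from the paper is NOT modeled here.
RATINGS = ["AAA", "AA", "A", "BBB", "BB", "B", "CCC", "D"]
IDX = {r: i for i, r in enumerate(RATINGS)}

def estimate_transition_matrix(transitions):
    """Cohort MLE: P[i, j] = N_ij / N_i, where N_ij counts observed moves from rating i to j."""
    counts = np.zeros((len(RATINGS), len(RATINGS)))
    for frm, to in transitions:
        counts[IDX[frm], IDX[to]] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # avoid division by zero for ratings never observed
    return counts / row_sums

# Toy data: a few observed one-year rating moves (assumed, not from the paper).
sample = [("AA", "AA"), ("AA", "A"), ("A", "A"), ("A", "BBB"), ("BBB", "BB"), ("BBB", "BBB")]
P = estimate_transition_matrix(sample)
print(np.round(P, 2))
```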

    Applications of non-convex optimization in portfolio selection

    Get PDF
    The present thesis deals with non-convex optimization in the field of portfolio selection. Thematically, the work can be structured into two subareas. (1) Solving mean-risk problems with Value-at-Risk as the risk measure: methods for finding efficient portfolios are presented for the case of discretely distributed asset returns. Because of the non-convexity of the Value-at-Risk, the problems treated are non-convex and can be represented as differences of convex functions. Both Branch-and-Bound and approximate solution methods are applied, and the global solutions of the Branch-and-Bound are compared with the solutions of the approximate methods. (2) Robustification of portfolio selection problems: in recent years the literature has shown increasing efforts to robustify optimization problems with respect to uncertainty in the parameters. Robustified solutions have the property that moderate variations of the parameters do not lead to dramatic deteriorations of the solutions. In robust portfolio optimization the main concern is to control solutions with respect to deviations in the distributions of the returns of the financial instruments involved. In the present work, so-called ambiguity sets are defined by means of probability metrics; these sets contain all distributions that, given the available data, come into question as the true distribution. The metric used, the Kantorovich (Wasserstein) metric, makes it possible, via results from nonparametric statistics, to interpret the ambiguity sets as confidence sets around the empirical distribution estimators. Using the described methods, mean-risk problems are robustified. These problems are initially infinite and are reformulated in a further step as non-convex semi-definite problems. Their solution relies on an algorithm for solving semi-definite problems with infinitely many constraints on the one hand, and on methods for approximately solving non-convex problems (the so-called Difference of Convex Algorithm) on the other.

    The thesis is concerned with the application of non-convex programming to problems of portfolio optimization in a single-stage stochastic optimization framework. In particular, two different classes of portfolio selection problems are investigated. In both problems a scenario-based approach to modeling uncertainty is pursued, i.e. the randomness in the models is always described by finitely many joint realizations of the asset returns. The thesis is structured into three chapters, briefly outlined below.

    (1) A D.C. formulation of Value-at-Risk constrained optimization: In this chapter the aim is to solve mean-risk models with the Value-at-Risk as the risk measure. In the case of finitely supported return distributions, it is shown that the Value-at-Risk can be written as a D.C. function, so the mean-risk problem corresponds to a D.C. problem. The non-convex problem of optimizing the Value-at-Risk is treated rather extensively in the literature, and there are various approximate solution techniques as well as some approaches to solve the problem globally. The reformulation as a D.C. problem provides insight into the structure of the problem, which can be exploited to devise a Branch-and-Bound algorithm for finding global solutions of small to medium sized instances. The possibility of refining epsilon-optimal solutions obtained from the Branch-and-Bound framework via local search heuristics is also discussed in this chapter.

    (2) Value-at-Risk constrained optimization using the DCA: In this part of the thesis the Value-at-Risk problem is investigated once more, with the aim of solving problems of realistic size in relatively short time. Since Value-at-Risk optimization can be shown to be an NP-hard problem, this can only be achieved by sacrificing the guaranteed globality of the solutions. Therefore a local solution technique for unconstrained D.C. problems, the Difference of Convex Algorithm (DCA), is employed. To solve the problem, a new variant of the DCA, the so-called 'hybrid DCA', is proposed, which preserves the favorable convergence properties of the computationally hard 'complete DCA' as well as the computational tractability of the so-called 'simple DCA'. The results are tested on small problems, and the solutions are shown to coincide with the global optima obtained with the Branch-and-Bound algorithm in most cases. For realistic problem sizes the proposed method is shown to consistently outperform known heuristic approximations implemented in commercial software.

    (3) A framework for optimization under ambiguity: The last part of the thesis is devoted to a topic which has received much attention in the recent stochastic programming literature: robust optimization. More specifically, the aim is to robustify single-stage stochastic optimization models with respect to uncertainty about the distributions of the random variables involved in the formulation of the stochastic program. The goal is to explicitly take ambiguity about the distributions into account when finding a decision, while imposing only very weak restrictions on the probability models taken into consideration. Ambiguity is defined as possible deviation from a discrete reference measure Q (in this work the empirical measure). To this end a so-called ambiguity set B, containing all measures that can reasonably be assumed to be the real measure P given the available data, is defined. Since the idea is to devise a general approach not restricted by assuming P to be an element of any specific parametric family, the ambiguity sets are defined by means of general probability metrics. Relative to these sets, a worst-case approach is adopted to robustify the problem with respect to B. The resulting optimization problems turn out to be infinite and are reduced to non-convex semi-definite problems. The last part shows how to solve these problems numerically for the example of a mean-risk portfolio selection problem with Expected Shortfall under a Threshold as the risk measure. The DCA, in combination with an iterative algorithm that approximates the infinite set of constraints by finitely many, is used to obtain numerical solutions to the problem.
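    To make the non-convexity concrete: with finitely many equally likely scenarios, the portfolio Value-at-Risk is an order statistic of the scenario losses, a piecewise linear but non-convex function of the portfolio weights, which is what motivates the D.C. and DCA machinery above. The sketch below only evaluates the empirical Value-at-Risk of a fixed portfolio on a synthetic scenario matrix; it is not the D.C. reformulation, the Branch-and-Bound, or the DCA from the thesis.

```python
import numpy as np

def value_at_risk(losses, alpha=0.95):
    """Empirical Value-at-Risk: the alpha-quantile of the loss distribution."""
    return np.quantile(losses, alpha)

def portfolio_var(weights, returns, alpha=0.95):
    """VaR of a portfolio given a matrix of discrete return scenarios (rows = scenarios)."""
    losses = -(returns @ weights)          # loss = negative portfolio return
    return value_at_risk(losses, alpha)

# Toy setup: 1000 equally likely scenarios for the returns of 4 assets (assumed data).
rng = np.random.default_rng(0)
returns = rng.normal(0.001, 0.02, size=(1000, 4))

w_equal = np.array([0.25, 0.25, 0.25, 0.25])
print(portfolio_var(w_equal, returns))     # VaR of the equally weighted portfolio
```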

    The effect of intermittent renewables on the electricity price variance

    Get PDF
    First online: 07 March 2015
    The dominating view in the literature is that renewable electricity production increases the price variance on spot markets for electricity. In this paper, we critically review this hypothesis. Using a static market model, we identify the variance of the infeed from intermittent electricity sources (IES) and the shape of the industry supply curve as two pivotal factors influencing the electricity price variance. The model predicts that the overall effect of IES infeed depends on the produced amount: while small to moderate quantities of IES tend to decrease the price variance, large quantities have the opposite effect. In the second part of the paper, we test these predictions using data from Germany, where investments in IES have been massive in recent years. The results of this econometric analysis largely conform to the predictions of the theoretical model. Our findings suggest that subsidy schemes for IES capacities should be complemented by policy measures that support variance-absorbing technologies such as smart grids, energy storage, or grid interconnections, to ensure the build-up of sufficient capacities in time.
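    The mechanism can be mimicked in a toy simulation: draw stochastic load and intermittent infeed, subtract to obtain residual demand, and read the price off a convex merit-order curve. The functional forms, parameters, and distributions below are invented for illustration and are not the model or data from the paper; the point is only that the price variance depends both on the curvature of the supply curve at the operating point and on the variance of the IES infeed.

```python
import numpy as np

rng = np.random.default_rng(1)

def supply_price(q):
    """Toy convex merit-order (industry supply) curve: marginal price at quantity q."""
    return np.exp(q / 30.0)

def price_variance(mean_ies, n=100_000):
    """Simulated spot-price variance for a given mean level of intermittent infeed."""
    demand = rng.normal(70.0, 5.0, n)                        # stochastic load
    sd_ies = 0.6 * mean_ies                                  # infeed volatility assumed to scale with capacity
    ies = np.clip(rng.normal(mean_ies, sd_ies, n), 0.0, None) if mean_ies > 0 else np.zeros(n)
    residual = np.clip(demand - ies, 0.0, None)              # residual demand met by conventional plants
    return supply_price(residual).var()

for mu in [0.0, 10.0, 20.0, 40.0]:
    print(f"mean IES infeed {mu:5.1f} -> simulated price variance {price_variance(mu):10.3f}")
```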

    The Value of Coordination in Multimarket Bidding of Grid Energy Storage

    No full text
    We consider the problem of a storage owner who trades in a multisettlement electricity market comprising an auction-based day-ahead market and a continuous intraday market. We show in a stylized model that a coordinated policy that reserves capacity for the intraday market is optimal and that the gap to a sequential policy increases with intraday price volatility and market liquidity. To assess the value of coordination in a realistic setting, we develop a multistage stochastic program for day-ahead bidding and hourly intraday trading along with a corresponding stochastic price model. We show how tight upper bounds can be obtained by calculating optimal bilinear penalties for a novel information relaxation scheme. To calculate lower bounds, we propose a scenario tree generation method that lends itself to deriving an implementable policy based on reoptimization. We use these methods to quantify the value of coordination by comparing our policy with a sequential policy that does not coordinate day-ahead and intraday bids. In a case study, we find that coordinated bidding is most valuable for flexible storage assets with high price impact, like pumped-hydro storage. For small assets with low price impact, like battery storage, participation in the day-ahead auction is less important and intraday trading appears to be sufficient. For less flexible assets, like large hydro reservoirs without pumps, intraday trading is hardly profitable as most profit is made in the day-ahead market. A comparison of lower and upper bounds demonstrates that our policy is near-optimal for all considered assets.
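    The intuition behind reserving capacity for the intraday market can be sketched with a deliberately stylized Monte Carlo toy, which is not the paper's multistage stochastic program or price model. A storage owner with 1 MWh compares selling everything day-ahead against reserving half for the intraday session and selling it with a simple threshold rule; all prices, the rule, and the parameters are assumptions for illustration, and the gap between the two policies grows with intraday volatility, in line with the stylized result described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed setting: 1 MWh to sell, known day-ahead price, 12 hourly intraday prices around it.
P_DA, H, N = 50.0, 12, 200_000

def revenue(reserve, sigma, threshold):
    """Expected revenue: sell (1 - reserve) MWh day-ahead; trade the reserved part intraday,
    selling at the first hourly price above `threshold`, or at the last hourly price otherwise."""
    prices = P_DA + sigma * rng.standard_normal((N, H))
    above = prices > threshold
    first_hit = np.where(above.any(axis=1),
                         prices[np.arange(N), above.argmax(axis=1)],
                         prices[:, -1])
    return (1 - reserve) * P_DA + reserve * first_hit.mean()

for sigma in [2.0, 5.0, 10.0]:
    sequential = revenue(0.0, sigma, threshold=P_DA)           # everything sold day-ahead
    coordinated = revenue(0.5, sigma, threshold=P_DA + sigma)  # half reserved for intraday
    print(f"sigma={sigma:4.1f}  sequential={sequential:6.2f}  coordinated={coordinated:6.2f}")
```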

    Ambiguity in portfolio selection

    No full text
    In this paper, we consider the problem of finding optimal portfolios in cases where the underlying probability model is not perfectly known. For the sake of robustness, a maximin approach is applied which uses a 'confidence set' for the probability distribution. The approach shows the tradeoff between return, risk, and robustness in view of the model ambiguity. As a consequence, a monetary value of information in the model can be determined.
    Portfolio optimization, Robustness, Minimax
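    A minimal version of the maximin idea can be written down for discretely supported returns. The sketch below assumes a simple total-variation-style ball around the empirical scenario probabilities rather than whatever confidence set the paper actually uses; it only evaluates the worst-case expected return of a fixed portfolio and shows how the robust value deteriorates as the ambiguity radius grows.

```python
import numpy as np

def worst_case_expected_return(scenario_returns, p_hat, eps):
    """Worst-case expected return over { p : p >= 0, sum(p) = 1, ||p - p_hat||_1 <= eps }.
    The minimizer shifts eps/2 of probability mass from the best to the single worst scenario."""
    order = np.argsort(scenario_returns)        # worst return first, best return last
    p = np.asarray(p_hat, dtype=float).copy()
    budget = min(eps / 2.0, 1.0 - p[order[0]])  # mass that can be moved onto the worst scenario
    removed = 0.0
    for i in order[::-1]:                       # take mass away from the best scenarios first
        if i == order[0]:
            break
        d = min(budget - removed, p[i])
        p[i] -= d
        removed += d
        if removed >= budget:
            break
    p[order[0]] += removed                      # pile the removed mass onto the worst scenario
    return float(p @ scenario_returns)

# Toy usage: 500 equally likely return scenarios for a fixed 3-asset portfolio (assumed data).
rng = np.random.default_rng(3)
R = rng.normal(0.01, 0.03, size=(500, 3))
w = np.array([0.5, 0.3, 0.2])
port = R @ w
p_hat = np.full(len(port), 1.0 / len(port))
for eps in [0.0, 0.05, 0.10]:
    print(f"eps={eps:4.2f}  worst-case expected return = {worst_case_expected_return(port, p_hat, eps):+.4f}")
```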
