10 research outputs found

    Energy only, capacity market and security of supply. A stochastic equilibrium analysis

    Former generation capacity expansion models were formulated as optimization problems. These included a reliability criterion and hence guaranteed security of supply. The situation is different in restructured markets, where investments need to be incentivised by the margin resulting from electricity sales after accounting for fuel costs. The situation is further complicated by the payments and charges on the carbon market. We formulate an equilibrium model of the electricity sector with both investments and operations. Electricity prices are set at the fuel cost of the last operating unit when there is no curtailment, and at some regulated price cap when there is curtailment. There is a CO2 market and different policies for allocating allowances. Today's situation is quite risky for investors: fuel prices are more volatile than ever, and the total amount of CO2 allowances and the allocation method will only be known after investments have been decided. The equilibrium model is thus one under uncertainty. Agents can be risk neutral or risk averse. We model risk aversion through a CVaR of the net margin of the industry. The CVaR induces a risk-neutral probability according to which investors value their plants. The model is formulated as a complementarity problem (including the CVaR valuation of investment). An illustration is provided on a small problem that captures the essence of today's electricity world: a choice restricted to coal and gas, a peaky load curve because of wind penetration, uncertain fuel prices and an evolving carbon market (EU-ETS). We show that we might have a problem of security of supply if we do not implement a capacity market. Keywords: capacity adequacy, risk functions, stochastic equilibrium models
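
    For reference, the risk measure used for the risk-averse agents admits the standard dual representation below (the confidence level \(\alpha\) and the sign convention, with the loss taken as the negative of the industry's net margin \(X\), are not given in the abstract and are assumptions here):

        \[
          \mathrm{CVaR}_{\alpha}(-X) \;=\; \sup\Big\{\, \mathbb{E}_{Q}[-X] \;:\; Q \ll P,\ \tfrac{dQ}{dP} \le \tfrac{1}{1-\alpha} \,\Big\},
        \]

    and the maximizing density in this representation is the risk-neutral probability according to which, as stated above, risk-averse investors value their plants.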

    Tuned risk aversion as interpretation of non-expected utility preferences

    We introduce the notion of Tuned Risk Aversion as a possible interpretation of non-expected utility preferences. It refers to tuning patterns of risk (and ambiguity) aversion to the composition of the lottery (or act) at hand, assuming only an overall ‘budget’ for accumulated risk aversion over its sub-lotteries. This makes the risk aversion level applied to a part intrinsically dependent on the whole, in a way that turns out to be in line with frequently observed deviations from the Sure-Thing Principle. This is illustrated by applying the concept to the Allais paradox and to the 50:51 example, related to ambiguity aversion. We give a general justification for applying the method in contexts where the law of one price does not hold, and derive unique updating from a substitution axiom induced by a non-recursive form of consistency. In a third example, we propose a solution to a well-known puzzle on consistency of decision making in the Ellsberg paradox.
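
    For context, the Allais paradox referred to above, in its classic form (the payoffs below are the standard textbook figures, not necessarily the ones used in the paper): subjects choose between lottery A, paying 1M for sure, and lottery B, paying 1M with probability 0.89, 5M with probability 0.10 and 0 with probability 0.01; and then between lottery C, paying 1M with probability 0.11 and 0 otherwise, and lottery D, paying 5M with probability 0.10 and 0 otherwise. Under expected utility,

        \[
          A \succ B \;\iff\; 0.11\,u(1\mathrm{M}) > 0.10\,u(5\mathrm{M}) + 0.01\,u(0) \;\iff\; C \succ D,
        \]

    yet the typical pattern of choices is A together with D, which is exactly the kind of deviation from the Sure-Thing Principle that Tuned Risk Aversion is meant to accommodate.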

    Robust, risk-sensitive, and data-driven control of Markov Decision Processes

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Sloan School of Management, Operations Research Center, 2007. Includes bibliographical references (p. 201-211). Markov Decision Processes (MDPs) model problems of sequential decision-making under uncertainty. They have been studied and applied extensively. Nonetheless, two major barriers still hinder the applicability of MDPs to many more practical decision-making problems: (1) the decision maker often lacks a reliable MDP model, and since the results obtained by dynamic programming are sensitive to the assumed MDP model, their relevance is challenged by model uncertainty; (2) the structural and computational results of dynamic programming (which deals with expected performance) have been extended with only limited success to accommodate risk-sensitive decision makers. In this thesis, we investigate two ways of dealing with uncertain MDPs and we develop a new connection between robust control of uncertain MDPs and risk-sensitive control of dynamical systems. The first approach assumes a model of model uncertainty and formulates the control of uncertain MDPs as a problem of decision-making under (model) uncertainty. We establish that most formulations are at least NP-hard and thus suffer from the "curse of uncertainty." The worst-case control of MDPs with rectangular uncertainty sets is equivalent to a zero-sum game between the controller and nature, and the structural and computational results for such games make this formulation appealing. By adding a penalty for unlikely parameters, we extend the formulation of worst-case control of uncertain MDPs and mitigate its conservativeness. We show a duality between the penalized worst-case control of uncertain MDPs with rectangular uncertainty and the minimization of a Markovian, dynamically consistent convex risk measure of the sample cost. This notion of risk has desirable properties for multi-period decision making, including a new Markovian property that we introduce and motivate. This Markovian property is critical in establishing the equivalence between minimizing some risk measure of the sample cost and solving a certain zero-sum Markov game between the decision maker and nature, and in tackling infinite-horizon problems. An alternative approach to dealing with uncertain MDPs, which avoids the curse of uncertainty, is to exploit observational data directly. Specifically, we estimate the expected performance of any given policy (and its gradient with respect to certain policy parameters) from a training set comprising observed trajectories sampled under a known policy. We propose new value (and value-gradient) estimators that are unbiased and have low training-set-to-training-set variance. We expect our approach to outperform competing approaches when there are few system observations compared to the underlying MDP size, as indicated by numerical experiments. By Yann Le Tallec. Ph.D.
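
    As a rough illustration of the worst-case control with rectangular uncertainty sets mentioned above, the zero-sum structure reduces to a Bellman recursion in which nature solves an inner maximization for each state-action pair. The sketch below is a generic textbook-style version for interval ((s, a)-rectangular) uncertainty sets, not code from the thesis; the names cost, lo, hi and the discount factor gamma are assumptions.

        import numpy as np

        def worst_case_distribution(v, lo, hi):
            # Nature's inner problem for one (s, a) pair:
            # maximize p @ v  subject to  lo <= p <= hi, sum(p) == 1
            # (assumes sum(lo) <= 1 <= sum(hi)). Solved greedily: start at the
            # lower bounds and push the remaining mass onto successor states
            # with the largest cost-to-go.
            p = lo.copy()
            slack = 1.0 - lo.sum()
            for i in np.argsort(-v):
                add = min(hi[i] - lo[i], slack)
                p[i] += add
                slack -= add
                if slack <= 1e-12:
                    break
            return p

        def robust_value_iteration(cost, lo, hi, gamma=0.95, iters=500):
            # Worst-case (robust) value iteration for a finite MDP with
            # (s, a)-rectangular interval uncertainty on the transitions.
            # cost[s, a]: immediate cost; lo[s, a, :], hi[s, a, :]: probability bounds.
            n_s, n_a = cost.shape
            v = np.zeros(n_s)
            policy = np.zeros(n_s, dtype=int)
            for _ in range(iters):
                q = np.empty((n_s, n_a))
                for s in range(n_s):
                    for a in range(n_a):
                        p = worst_case_distribution(v, lo[s, a], hi[s, a])
                        q[s, a] = cost[s, a] + gamma * (p @ v)
                v_new = q.min(axis=1)      # controller minimizes, nature maximizes
                policy = q.argmin(axis=1)
                if np.max(np.abs(v_new - v)) < 1e-8:
                    v = v_new
                    break
                v = v_new
            return v, policy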

    Conditional acceptability mappings

    Conditional acceptability mappings quantify the degree of desirability of random variables modeling financial returns, conditional on the available non-trivial information. They are modeled as mappings between probability spaces that take the information available for the assessment into account. In addition, such mappings are required to be concave, translation-equivariant and monotonically increasing. Based on the order properties of Lp-spaces, in particular their order completeness as Banach lattices, the superdifferential and the Fenchel-Moreau conjugate of concave conditional mappings are defined and their properties analyzed. The novelty of this work is the consistent use of the almost-sure partial order for this purpose, which simplifies the arguments and proofs while accounting for all requirements concerning continuity, integrability and measurability of the resulting supergradients and conjugates. Finally, the results on conditional mappings are used to derive statements about those multiperiod acceptability functionals in the literature whose construction rests on conditional acceptability mappings, namely SEC-functionals and compositions of acceptability mappings; in particular, for the latter a chain rule for the superdifferential and a simple representation of the conjugate mapping are derived.
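
    Schematically, and using notation that is an assumption here (a sub-sigma-algebra \(\mathcal{F}_t \subseteq \mathcal{F}\) carrying the available information and a mapping \(\mathcal{A}: L^p(\mathcal{F}) \to L^p(\mathcal{F}_t)\)), the three requirements named in the abstract read, with inequalities in the almost-sure partial order:

        \[
        \begin{aligned}
          &\text{monotonicity:}             && X \le Y \ \Rightarrow\ \mathcal{A}(X) \le \mathcal{A}(Y),\\
          &\text{concavity:}                && \mathcal{A}\big(\lambda X + (1-\lambda)Y\big) \ \ge\ \lambda\,\mathcal{A}(X) + (1-\lambda)\,\mathcal{A}(Y), \qquad \lambda \in [0,1],\\
          &\text{translation equivariance:} && \mathcal{A}(X + C) \;=\; \mathcal{A}(X) + C \qquad \text{for } \mathcal{F}_t\text{-measurable } C.
        \end{aligned}
        \]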

    Risk and robust optimization

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 203-213). This thesis develops and explores the connections between risk theory and robust optimization. Specifically, we show that there is a one-to-one correspondence between a class of risk measures known as coherent risk measures and uncertainty sets in robust optimization. An important consequence of this is that one may construct uncertainty sets, which are the critical primitives of robust optimization, using decision-maker risk preferences. In addition, we show some results on the geometry of such uncertainty sets. We also consider a more general class of risk measures known as convex risk measures, and show that these risk measures lead to a more flexible approach to robust optimization. In particular, these models allow one to specify not only the values of the uncertain parameters for which feasibility should be ensured, but also the degree of feasibility. We show that traditional robust optimization models are a special case of this framework. As a result, this framework implies a family of probability guarantees on infeasibility at different levels, as opposed to standard robust approaches, which generally imply a single guarantee. Furthermore, we illustrate the performance of these risk measures on a real-world portfolio optimization application and show promising results: our methodology can, in some cases, yield significant improvements in downside risk protection at little or no expense in expected performance over traditional methods. While we develop this framework for the case of linear optimization under uncertainty, we show how to extend the results to optimization over more general cones. Moreover, our methodology is scenario-based, and we prove a new rate-of-convergence result for a specific class of convex risk measures. Finally, we consider a multi-stage problem under uncertainty, specifically optimization of quadratic functions over uncertain linear systems. Although the theory of risk measures is still undeveloped with respect to dynamic optimization problems, we show that a set-based model of uncertainty yields a tractable approach to this problem in the presence of constraints. Moreover, we are able to derive a near-closed-form solution for this approach and prove new probability guarantees on its resulting performance. By David Benjamin Brown. Ph.D.
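
    For orientation, the representation theorem behind this correspondence (due to Artzner et al.; here \(X\) denotes a gain, one common sign convention) states that every coherent risk measure can be written as

        \[
          \rho(X) \;=\; \sup_{Q \in \mathcal{Q}} \mathbb{E}_{Q}[-X]
        \]

    for some convex family \(\mathcal{Q}\) of probability measures, and it is this generating family, evaluated on the scenario data, that plays the role of the uncertainty set in the corresponding robust counterpart.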

    Multi-level, Multi-stage and Stochastic Optimization Models for Energy Conservation in Buildings for Federal, State and Local Agencies

    Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection when leveraging energy services companies or utilities; the two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2). A further model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk-aversion method imposes a minimum on the number of projects completed in each stage, and a comparative method using Conditional Value at Risk is analyzed; time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses the inequalities of McCormick (1976) to re-express constraints that involve products of binary variables with an exact linearization (related to the convex hull of those constraints), as illustrated below. This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
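
    For reference, the exact linearization referred to above takes, for a product \(z = x\,y\) of two binary variables (generic variable names, not those of the model in Chapter 4), the standard McCormick form

        \[
          z \le x, \qquad z \le y, \qquad z \ge x + y - 1, \qquad z \ge 0,
        \]

    which for \(x, y \in \{0,1\}\) forces \(z = x y\) and describes the convex hull of the feasible points.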

    Conflicting Objectives in Decisions

    This book deals with quantitative approaches in making decisions when conflicting objectives are present. This problem is central to many applications of decision analysis, policy analysis, operational research, etc. in a wide range of fields, for example, business, economics, engineering, psychology, and planning. The book surveys different approaches to the same problem area and each approach is discussed in considerable detail so that the coverage of the book is both broad and deep. The problem of conflicting objectives is of paramount importance, both in planned and market economies, and this book represents a cross-cultural mixture of approaches from many countries to the same class of problem

    A note on the Swiss Solvency Test risk measure

    In this paper we examine whether the Swiss Solvency Test risk measure is a coherent measure of risk as introduced in Artzner et al. [Artzner, P., Delbaen, F., Eber, J.M., Heath, D., 1999. Coherent measures of risk. Math. Finance 9, 203-228; Artzner, P., Delbaen, F., Eber, J.M., Heath, D., Ku, H., 2004. Coherent multiperiod risk adjusted values and Bellman's principle. Working Paper. ETH Zurich]. We provide a simple example which shows that it does not satisfy the axiom of monotonicity. We then find, as a monotonic alternative, the greatest coherent risk measure which is majorized by the Swiss Solvency Test risk measure.
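
    For context, the four coherence axioms of Artzner et al. cited above, in one common convention (a risk measure \(\rho\) acting on future values \(X\), discounting ignored); the note shows that the Swiss Solvency Test risk measure violates the first of them:

        \[
        \begin{aligned}
          &\text{monotonicity:}           && X \le Y \ \Rightarrow\ \rho(X) \ge \rho(Y),\\
          &\text{translation invariance:} && \rho(X + c) = \rho(X) - c, \qquad c \in \mathbb{R},\\
          &\text{positive homogeneity:}   && \rho(\lambda X) = \lambda\,\rho(X), \qquad \lambda \ge 0,\\
          &\text{subadditivity:}          && \rho(X + Y) \le \rho(X) + \rho(Y).
        \end{aligned}
        \]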