
    On the Potential Use of Adaptive Control Methods for Improving Adaptive Natural Resource Management

    The paradigm of adaptive natural resource management (AM), in which experiments are used to learn about uncertain aspects of natural systems, is gaining prominence as the preferred technique for administration of large-scale environmental projects. To date, however, tools consistent with economic theory have yet to be used to either evaluate AM strategies or improve decision-making in this framework. Adaptive control (AC) techniques provide such an opportunity. This paper demonstrates the conceptual link between AC methods, the alternative treatment of realized information during a planning horizon, and AM practices; shows how the different assumptions about the treatment of observational information can be represented through alternative dynamic programming model structures; and provides a means of valuing alternative treatments of information and augmenting traditional benefit-cost analysis through a decomposition of the value function. The AC approach has considerable potential to help managers prioritize experiments, plan AM programs, simulate potential AM paths, and justify decisions based on an objective valuation framework.

    Keywords: adaptive control, adaptive management, dynamic programming, value of experimentation, value of information, Resource/Energy Economics and Policy
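The value-function decomposition the abstract describes can be illustrated with a toy two-period dynamic program: an adaptive policy updates its belief about an uncertain parameter from realized observations, while a passive policy ignores them, and the value of information is the difference between the two value functions. All numbers below (the two candidate success probabilities, the safe payoff, the prior) are illustrative assumptions, not values from the paper.

```python
# Toy two-period model: value of experimentation under an adaptive
# (learning) policy vs. a passive (no-learning) policy.
# All parameters are hypothetical, chosen only for illustration.

THETA = {"good": 0.8, "bad": 0.2}   # candidate success probabilities
SAFE = 0.5                          # certain payoff of the safe action

def p_success(belief):
    """Expected success probability of the experimental action."""
    return belief * THETA["good"] + (1 - belief) * THETA["bad"]

def posterior(belief, success):
    """Bayes update of P(theta = good) after one observation."""
    like_g = THETA["good"] if success else 1 - THETA["good"]
    like_b = THETA["bad"] if success else 1 - THETA["bad"]
    return belief * like_g / (belief * like_g + (1 - belief) * like_b)

def stage_value(belief):
    """One-period value: pick the better of safe vs. experiment."""
    return max(SAFE, p_success(belief))

def adaptive_value(belief):
    """Two-period DP in which the period-1 action may generate
    information that improves the period-2 decision."""
    q = p_success(belief)
    # Experiment first, then act on the updated belief.
    v_exp = q * (1 + stage_value(posterior(belief, True))) \
          + (1 - q) * (0 + stage_value(posterior(belief, False)))
    # Play safe first: nothing is learned, so the belief is unchanged.
    v_safe = SAFE + stage_value(belief)
    return max(v_exp, v_safe)

def passive_value(belief):
    """Same horizon, but realized observations are ignored."""
    return 2 * stage_value(belief)

belief = 0.5
evoi = adaptive_value(belief) - passive_value(belief)  # value of experimentation
```

Starting from an uninformative prior, the adaptive policy is worth strictly more than the passive one; that gap is exactly the "value of experimentation" term that the paper's decomposition isolates.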

    Taxation under Uncertainty – Problems of Dynamic Programming and Contingent Claims Analysis in Real Option Theory

    This article deals with the integration of taxes into real option-based investment models under risk neutrality and risk aversion. It compares the two candidate approaches, dynamic programming and contingent claims analysis, and analyzes their effects on the optimal investment rules before and after taxes. It can be shown that, despite their different assumptions, dynamic programming and contingent claims analysis yield identical investment thresholds under risk neutrality. Under risk aversion, in contrast, there are severe problems in determining an adequate risk-adjusted discount rate: the application of contingent claims analysis is restricted to cases with a dividend rate unaffected by risk, so only dynamic programming permits an explicit investment threshold before taxation. After taxes, both approaches fail to reach general solutions. Nevertheless, using a sufficient condition, it is possible to derive neutral tax systems under risk aversion, as is demonstrated using dynamic programming.
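As a sketch of the dynamic-programming route under risk neutrality (before taxes), the following toy binomial lattice solves the "invest now or wait" stopping problem by backward induction. The project dynamics and all parameters are purely hypothetical and not taken from the article.

```python
# Sketch: optimal investment timing on a binomial lattice, solved by
# backward dynamic programming under risk neutrality.
# Parameters are illustrative assumptions only.

def investment_option(V0, I, u, d, r, steps):
    """Value of the option to invest cost I in a project worth V,
    where V moves up by factor u or down by factor d each period."""
    q = ((1 + r) - d) / (u - d)          # risk-neutral up-probability
    # Terminal payoff: invest iff the project is worth more than it costs.
    values = [max(V0 * u**j * d**(steps - j) - I, 0.0)
              for j in range(steps + 1)]
    # Roll back: at each node, compare investing now with waiting.
    for n in range(steps - 1, -1, -1):
        for j in range(n + 1):
            cont = (q * values[j + 1] + (1 - q) * values[j]) / (1 + r)
            exercise = V0 * u**j * d**(n - j) - I
            values[j] = max(exercise, cont)
    return values[0]

opt = investment_option(V0=100.0, I=100.0, u=1.2, d=0.8, r=0.05, steps=3)
```

With these numbers, investing immediately has zero net present value, yet the option is strictly positive: the gap is the value of waiting that pushes the optimal investment threshold above the naive break-even rule.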

    The Economics of Strategic Opportunity

    Get PDF
    As emphasized by Barney (1986), any explanation of superior profitability must account for why the resources supporting such profitability could have been acquired for a price below their rent generating capacity. Building upon the literature in economics on coordination failures and incomplete markets, we suggest a framework for analyzing such strategic factor market inefficiencies. Our point of departure is that a strategic opportunity exists whenever prices fail to reflect the value of a resource's best use. This paper examines the challenges of imputing a resource's value in the absence of explicit price guidance and suggests the likely characteristics of strategic opportunities. Our framework also suggests that the discovery of strategic opportunity is often a matter of serendipity and access to relevant idiosyncratic resources. This latter observation provides prescriptive advice, although the analysis also explains why more detailed guidance has to be firm specific.

    Magnifying Lens Abstraction for Stochastic Games with Discounted and Long-run Average Objectives

    Turn-based stochastic games and their important subclass, Markov decision processes (MDPs), provide models for systems with both probabilistic and nondeterministic behavior. We consider turn-based stochastic games with two classical quantitative objectives: discounted-sum and long-run average objectives. These game models and quantitative objectives are widely used in probabilistic verification, planning, optimal inventory control, network protocols, and performance analysis. Games and MDPs that model realistic systems often have very large state spaces, and probabilistic abstraction techniques are necessary to handle the state-space explosion. The commonly used full-abstraction techniques do not yield space savings for systems that have many states with similar values but not necessarily similar transition structure. A semi-abstraction technique, magnifying-lens abstraction (MLA), which clusters states based on value only, disregarding differences in their transition relations, was proposed for qualitative objectives (reachability and safety). In this paper we extend the MLA technique to solve stochastic games with discounted-sum and long-run average objectives. We present an MLA-based abstraction-refinement algorithm for stochastic games and MDPs with discounted-sum objectives. For long-run average objectives, our solution works for all MDPs and for a subclass of stochastic games in which every state has the same value.
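For concreteness, here is the standard value-iteration baseline for an MDP with a discounted-sum objective, the concrete-state computation that abstraction techniques such as MLA aim to make tractable on large state spaces. The three-state MDP is a made-up example; the paper's algorithm operates on abstract clusters of states rather than on this explicit state space.

```python
# Minimal value iteration for a discounted-sum MDP.
# The tiny MDP below is an illustrative example, not from the paper.

GAMMA = 0.9

# states 0..2; mdp[s][a] = list of (probability, next_state, reward)
mdp = {
    0: {"a": [(1.0, 1, 0.0)],
        "b": [(0.5, 0, 1.0), (0.5, 2, 0.0)]},
    1: {"a": [(1.0, 2, 2.0)]},
    2: {"a": [(1.0, 2, 0.0)]},   # absorbing state, reward 0
}

def value_iteration(mdp, gamma=GAMMA, eps=1e-8):
    """Iterate the Bellman optimality operator until the largest
    per-state change falls below eps."""
    V = {s: 0.0 for s in mdp}
    while True:
        delta = 0.0
        for s, actions in mdp.items():
            best = max(sum(p * (r + gamma * V[t]) for p, t, r in outs)
                       for outs in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration(mdp)
```

Every state holds its current value estimate; MLA's observation is that states whose values end up close together can be clustered and "magnified" on demand, saving space relative to tabulating `V` over the full state space.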