147,951 research outputs found

    Time Blocks Decomposition of Multistage Stochastic Optimization Problems

    Multistage stochastic optimization problems are, by essence, complex because their solutions are indexed both by stages (time) and by uncertainties (scenarios). Their large-scale nature makes decomposition methods appealing. The most common approaches are time decomposition --- and state-based resolution methods, like stochastic dynamic programming, in stochastic optimal control --- and scenario decomposition --- like progressive hedging in stochastic programming. We present a method to decompose multistage stochastic optimization problems by time blocks, which covers both stochastic programming and stochastic dynamic programming. Once a dynamic programming equation with value functions defined on the history space is established (a history is a sequence of uncertainties and controls), we provide conditions to reduce the history using a compressed "state" variable. This reduction is done by time blocks, that is, at stages that are not necessarily all the original unit stages, and we prove a reduced dynamic programming equation. Then, we apply the reduction method by time blocks to \emph{two time-scales} stochastic optimization problems and to a novel class of so-called \emph{decision-hazard-decision} problems, arising in many practical situations, like stock management. The \emph{time blocks decomposition} scheme is as follows: we use dynamic programming at the slow time scale, where the slow time scale noises are supposed to be stagewise independent, and we produce slow time scale Bellman functions; then, we use stochastic programming at the short time scale, within two consecutive slow time steps, with the final short time scale cost given by the slow time scale Bellman functions, and without assuming stagewise independence for the short time scale noises.
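    The slow-time-scale Bellman recursion described in this abstract can be sketched as a standard backward dynamic programming pass under stagewise-independent noise. Everything below (the state grid, dynamics, costs, and noise samples) is a hypothetical placeholder, not data from the paper:

```python
# Hypothetical problem data (assumptions for illustration only):
# discrete states, controls, and stagewise-independent noise samples.
STATES = range(5)
CONTROLS = range(3)
NOISES = [-1, 0, 1]
T = 4  # number of slow time steps

def dynamics(x, u, w):
    # Placeholder state transition, clipped to the state grid.
    return max(0, min(4, x + u - 1 + w))

def cost(t, x, u, w):
    # Placeholder nonnegative stage cost.
    return (x - 2) ** 2 + u + abs(w)

def bellman_functions():
    """Backward dynamic programming at the slow time scale:
    V_T = 0 and V_t(x) = min_u E_w[ cost(t,x,u,w) + V_{t+1}(f(x,u,w)) ]."""
    V = {T: {x: 0.0 for x in STATES}}
    for t in reversed(range(T)):
        V[t] = {}
        for x in STATES:
            V[t][x] = min(
                sum(cost(t, x, u, w) + V[t + 1][dynamics(x, u, w)]
                    for w in NOISES) / len(NOISES)
                for u in CONTROLS
            )
    return V

V = bellman_functions()
```

    In the paper's scheme these slow-scale value functions would then serve as terminal costs for stochastic programs solved between consecutive slow time steps.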

    The international stock pollutant control: a stochastic formulation

    In this paper we provide a stochastic dynamic game formulation of the economics of international environmental agreements on transnational pollution control, where the environmental damage arises from a stock pollutant that accumulates, such as CO2 in the atmosphere. To improve the cooperative and the noncooperative equilibria among countries, we propose the criterion of minimizing the expected discounted total cost. Moreover, we consider stochastic dynamic games formulated through stochastic dynamic programming, contrasting cooperative and noncooperative settings. The performance of the proposed schemes is illustrated by an example based on real data.
    Keywords: Stochastic optimal control, Markov decision processes, Stochastic dynamic programming, Stochastic dynamic games, International pollutant control, Environmental economics, Sustainability
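    The criterion of minimizing expected discounted total cost in a Markov decision process can be illustrated with a standard value-iteration sketch. The two-state pollution-stock model below is purely illustrative (states, actions, probabilities, and costs are assumptions, not the paper's data):

```python
# Hypothetical two-state emissions MDP: states are pollution-stock levels,
# actions are "abate" or "emit"; all numbers are illustrative only.
STATES = ["low", "high"]
ACTIONS = ["abate", "emit"]
BETA = 0.95  # discount factor

# Transition probabilities P[s][a] -> {s': prob} and stage costs C[s][a].
P = {
    "low":  {"abate": {"low": 0.9, "high": 0.1},
             "emit":  {"low": 0.4, "high": 0.6}},
    "high": {"abate": {"low": 0.5, "high": 0.5},
             "emit":  {"low": 0.1, "high": 0.9}},
}
C = {"low": {"abate": 2.0, "emit": 1.0},
     "high": {"abate": 4.0, "emit": 6.0}}

def value_iteration(tol=1e-8):
    """Iterate V(s) <- min_a [ C(s,a) + beta * sum_s' P(s'|s,a) V(s') ]
    until the sup-norm change falls below tol."""
    V = {s: 0.0 for s in STATES}
    while True:
        V_new = {
            s: min(C[s][a] + BETA * sum(p * V[s2] for s2, p in P[s][a].items())
                   for a in ACTIONS)
            for s in STATES
        }
        if max(abs(V_new[s] - V[s]) for s in STATES) < tol:
            return V_new
        V = V_new

V = value_iteration()
```

    In this sketch the high-pollution state carries a larger expected discounted cost, which is the kind of quantity the cooperative and noncooperative game formulations compare.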

    The international stock pollutant control: a stochastic formulation with transfers

    This paper provides a formulation of a stochastic dynamic game that arises in the real-world setting of international environmental agreements on transnational pollution control. More specifically, these agreements try to reduce the environmental damage caused by a stock pollutant that accumulates in the atmosphere, such as CO2. To improve the non-cooperative equilibrium among countries, we propose the criterion of minimizing the expected discounted total cost, with monetary transfers between the countries involved as an incentive to cooperate. Moreover, we formulate stochastic dynamic games as Markov decision processes, using tools of stochastic optimal control and stochastic dynamic programming. The performance of the proposed schemes is illustrated by its application to this environmental problem.
    Keywords: Environmental pollutant control, Markov decision processes, Stochastic dynamic programming, Stochastic dynamic games, Optimal abatement policies

    Partially Observed Non-linear Risk-sensitive Optimal Stopping Control for Non-linear Discrete-time Systems

    In this paper we introduce and solve the partially observed optimal stopping non-linear risk-sensitive stochastic control problem for discrete-time non-linear systems. The presented results are closely related to previous results for the finite-horizon partially observed risk-sensitive stochastic control problem. An information-state approach is used, and a new (three-way) separation principle is established that leads to a forward dynamic programming equation and a backward dynamic programming inequality (both infinite-dimensional). A verification theorem is given that establishes the optimal control and optimal stopping time. The risk-neutral optimal stopping stochastic control problem is also discussed.
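    For intuition on the risk-neutral optimal stopping case mentioned at the end, the backward recursion V_t(x) = min( g(x), c(x) + E[ V_{t+1}(x') ] ) can be sketched on a small fully observed grid. The random-walk dynamics and the cost functions below are illustrative assumptions, not the paper's model:

```python
# Hypothetical risk-neutral optimal stopping on a small grid: at each stage
# either stop (paying g) or continue (paying c plus expected future cost).
GRID = range(7)
T = 5  # horizon length

def g(x):
    return (x - 3) ** 2  # stopping cost (illustrative)

def c(x):
    return 0.5  # running cost (illustrative)

def step(x, w):
    return max(0, min(6, x + w))  # random walk clipped to the grid

def stopping_values():
    """Backward recursion: V_T = g, and
    V_t(x) = min( g(x), c(x) + E_w[ V_{t+1}(step(x, w)) ] )
    with w uniform on {-1, +1}."""
    V = {x: float(g(x)) for x in GRID}
    for _ in range(T):
        V = {x: min(g(x),
                    c(x) + sum(V[step(x, w)] for w in (-1, 1)) / 2)
             for x in GRID}
    return V

V = stopping_values()
```

    The partially observed, risk-sensitive version in the paper replaces the state x by an (infinite-dimensional) information state, but the stop-versus-continue comparison has the same shape.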

    Evaluating Farmland Investments Considering Dynamic Stochastic Returns and Farmland Prices

    This paper examines farmland investment decisions using a stochastic dynamic programming framework. Consideration is given to the dynamic, stochastic nature of farmland returns, linkages between farmland returns and farmland prices, and the effects of these dynamic factors on a farm's financial structure. Optimal decisions to purchase or sell farmland are found for a central Illinois farm with high-quality farmland. Farm sizes and debt distributions are then determined, given that the optimal decision rule is followed. Decisions from the dynamic programming model are also compared to a capital budgeting model.
    Keywords: Land Economics/Use