35 research outputs found

    The Deterministic Impulse Control Maximum Principle in Operations Research: Necessary and Sufficient Optimality Conditions (replaces CentER DP 2011-052)

    This paper considers a class of optimal control problems that allows jumps in the state variable. We present the necessary optimality conditions of the Impulse Control Maximum Principle based on the current value formulation. By reviewing the existing impulse control models in the literature, we point out that meaningful problems do not satisfy the sufficiency conditions. In particular, such problems either have a concave cost function, contain a fixed cost, or have a control-state interaction, each of which violates the concavity hypotheses used in the sufficiency theorem. The implication is that the corresponding problem in principle has multiple solutions that satisfy the necessary optimality conditions. Moreover, we argue that problems with a fixed cost do not satisfy the conditions under which the necessary optimality conditions can be applied. However, we design a transformation that ensures that the application of the Impulse Control Maximum Principle still provides the optimal solution. Finally, we show for the first time that for some existing models in the literature no optimal solution exists.
    Keywords: Impulse Control Maximum Principle; Optimal Control; discrete continuous system; state-jumps; present value formulation.
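    The fixed-cost issue mentioned in the abstract can be made concrete with a small check. Under the common maximization reading, concavity of the objective amounts to convexity of the impulse cost, and a fixed cost breaks convexity at zero. The cost function `g` and all numbers below are hypothetical, purely for illustration:

```python
# Hypothetical impulse cost with a fixed component K > 0:
#   g(v) = K + k*v  for v > 0,   g(0) = 0.
# Convexity would require
#   g(lam*0 + (1-lam)*v) <= lam*g(0) + (1-lam)*g(v)
# for all lam in (0, 1); the fixed cost makes this fail at v = 0.
K, k = 1.0, 0.5

def g(v):
    return 0.0 if v == 0 else K + k * v

lam, v = 0.5, 2.0
lhs = g(lam * 0.0 + (1 - lam) * v)      # g(1.0) = K + k = 1.5
rhs = lam * g(0.0) + (1 - lam) * g(v)   # 0.5 * (K + 2*k) = 1.0
print(lhs > rhs)                        # True: midpoint convexity is violated
```

    Since `-g` is then not concave, such a cost cannot satisfy the concavity hypotheses of a sufficiency theorem, consistent with the abstract's observation.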

    Numerical Algorithms for Deterministic Impulse Control Models with Applications

    Abstract: In this paper we describe three different algorithms, of which two are (as far as we know) new in the literature. We take both the sizes of the jumps and the jump times as decision variables. The first (new) algorithm considers an Impulse Control problem as a (multipoint) Boundary Value Problem and uses a continuation technique to solve it. The second (new) approach is a continuation algorithm that requires the canonical system to be solved explicitly. This reduces the infinite-dimensional problem to a finite-dimensional system of, in general, nonlinear equations, without discretizing the problem. Finally, we present a gradient algorithm, in which we reformulate the problem as a finite-dimensional problem that can be solved using standard optimization techniques. As applications we solve a forest management problem and a dike heightening problem. We numerically compare the efficiency of our methods to other approaches, such as dynamic programming, the backward algorithm and the value function approach.
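    The finite-dimensional reformulation behind the gradient-type approach can be sketched on a toy dike-heightening-style problem. Everything below (model, cost function, parameter values) is a hypothetical illustration, not the paper's own formulation: with a fixed number of impulses, the jump times and jump sizes together form a finite decision vector that a standard optimizer can handle directly.

```python
import numpy as np
from scipy.optimize import minimize

# Toy impulse-control problem (hypothetical numbers). The state h(t) is
# constant between jumps and is raised by v_i at jump time t_i.
# Total cost = discounted flood risk c*exp(-a*h(t)) integrated over [0, T],
# plus a discounted cost (fixed + k*v_i) for each heightening.
T, r = 100.0, 0.04      # horizon and discount rate
c, a = 1.0, 0.5         # flood-risk scale and height sensitivity
fixed, k = 0.8, 0.2     # fixed and proportional heightening cost
h0, N = 0.0, 2          # initial height and (fixed) number of impulses

def total_cost(z):
    t = np.clip(np.sort(z[:N]), 0.0, T)   # jump times, kept in [0, T]
    v = np.abs(z[N:])                     # jump sizes, kept nonnegative
    knots = np.concatenate(([0.0], t, [T]))
    heights = h0 + np.concatenate(([0.0], np.cumsum(v)))
    # exact integral of c*exp(-a*h)*exp(-r*s) on each constant-height piece
    risk = sum(c * np.exp(-a * h) * (np.exp(-r * s0) - np.exp(-r * s1)) / r
               for h, s0, s1 in zip(heights, knots[:-1], knots[1:]))
    jumps = sum(np.exp(-r * ti) * (fixed + k * vi) for ti, vi in zip(t, v))
    return risk + jumps

# Optimize jump times and sizes jointly with a derivative-free method.
res = minimize(total_cost, x0=[20.0, 60.0, 1.0, 1.0], method="Nelder-Mead")
times, sizes = np.clip(np.sort(res.x[:N]), 0.0, T), np.abs(res.x[N:])
```

    This captures only the reformulation idea: once the number of impulses is fixed, the infinite-dimensional problem collapses to optimization over a few real variables. The paper's other two algorithms (the BVP continuation and the explicit canonical-system continuation) are not reproduced here.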

    An Impulse Control Approach to Dike Height Optimization (Revised version of CentER DP 2011-097)

    Abstract: This paper determines the optimal timing of dike heightenings as well as the corresponding optimal dike heightenings to protect against floods. To derive the optimal policy we design an algorithm based on the Impulse Control Maximum Principle. In this way the paper presents one of the first real-life applications of the Impulse Control Maximum Principle developed by Blaquiere. We show that the proposed Impulse Control approach outperforms Dynamic Programming with respect to computational time, because Impulse Control does not require discretization in time.

    Impulse control maximum principle: Theory and applications

    The contribution of this thesis is threefold. First, it extends the existing theory on Impulse Control by deriving the necessary optimality conditions in current value formulation, and it provides a transformation such that the Impulse Control Maximum Principle can be applied to problems with a fixed cost. Moreover, it points out that meaningful problems do not satisfy the sufficiency conditions. Second, the Impulse Control Maximum Principle is applied to dike height optimization, forest management and product innovation. Third, the thesis describes several algorithms that can be used to solve Impulse Control problems.

    A tutorial on the deterministic impulse control maximum principle: Necessary and sufficient optimality conditions

    This paper considers a class of optimal control problems that allows jumps in the state variable. We present the necessary optimality conditions of the Impulse Control Maximum Principle based on the current value formulation. By reviewing the existing impulse control models in the literature, we point out that meaningful problems do not satisfy the sufficiency conditions. In particular, such problems either have a concave cost function, contain a fixed cost, or have a control-state interaction, each of which violates the concavity hypotheses used in the sufficiency theorem. The implication is that the corresponding problem in principle has multiple solutions that satisfy the necessary optimality conditions. Moreover, we argue that problems with a fixed cost do not satisfy the conditions under which the necessary optimality conditions can be applied. However, we design a transformation that ensures that the application of the Impulse Control Maximum Principle still provides the optimal solution. Finally, we show for the first time that for some existing models in the literature no optimal solution exists.