    Measure Driven Differential Inclusions

    Measure driven differential inclusions arise when we attempt to derive necessary conditions of optimality for optimal impulsive control problems with nonsmooth data. We introduce the concept of a robust solution to a measure driven inclusion, which extends, to a multifunction setting, the interpretations of solutions to measure driven differential equations provided by Dal Maso and Rampazzo, among others. Closure properties of sets of robust solutions are established, and notions of relaxation are investigated. Implications for optimality conditions for impulsive control problems are pursued in a companion paper.
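
    As a rough sketch (the notation here is assumed, not taken from the paper), a measure driven differential inclusion couples an absolutely continuous drift with a jump part driven by a Borel measure:

    \[ dx(t) \in F(t, x(t))\,dt + G(t, x(t))\,\mu(dt), \qquad x(0) = x_0, \]

    where F and G are multifunctions and \mu is a nonnegative Borel measure; atoms of \mu produce jumps in the state, and any notion of solution, such as the robust solutions above, must prescribe how the trajectory traverses each jump.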

    Relaxation of optimal control problems to equivalent convex programs

    Relaxed controls induce convexity of the velocity sets in optimal control problems, permitting a general existence theory. Here we obtain complete convexity of the set of control-trajectory pairs by relaxing the problem constraints to admit certain measures on the product of the control and trajectory spaces. It is proved that these measures are just unit mixtures of control-trajectory pairs and that admitting them does not alter the minimum value of the control problems. This can be used to derive necessary and sufficient conditions for optimality of dynamic programming type.
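
    A minimal sketch of the relaxation, under assumed notation: rather than minimizing a cost J over admissible control-trajectory pairs (u, x) directly, one minimizes over probability measures \Lambda on the product of the control and trajectory spaces,

    \[ \min_{(u,x)\ \text{admissible}} J(u, x) \quad \rightsquigarrow \quad \min_{\Lambda} \int J(u, x)\, \Lambda(du\, dx), \]

    which is a linear, hence convex, program in \Lambda. The abstract's result is that optimal measures are unit mixtures of admissible pairs, so the relaxation does not lower the minimum value.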

    Pontryagin type conditions for differential inclusions with free time

    Optimality conditions for differential inclusion problems, due to Kaskosz and Lojasiewicz, involve a costate equation and a pointwise maximizing property of the optimal velocity, expressed in terms of a Carathéodory selection of the differential inclusion. Such conditions have been extended in various directions, notably to permit unilateral state constraints. Here we add to earlier extensions, principally by allowing free end-times. This is accomplished even though the data are required to be merely measurable in the time variable. The results are obtained by applying recent optimality conditions for free-time problems, involving a Hamiltonian inclusion, to an auxiliary problem, followed by a simple limiting argument.
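
    For orientation, conditions of Kaskosz-Lojasiewicz type assert (in assumed notation, not the paper's exact statement) the existence of an adjoint arc p(·) satisfying a costate equation together with the pointwise maximization property

    \[ \langle p(t), \dot{x}^*(t) \rangle = \max_{v \in F(t, x^*(t))} \langle p(t), v \rangle \quad \text{a.e.}, \]

    with free end-times contributing an additional condition pinning down the value of the Hamiltonian at the endpoints.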

    Value Functions and Transversality Conditions for Infinite-Horizon Optimal Control Problems

    This paper investigates the relationship between the maximum principle with an infinite horizon and dynamic programming, and sheds new light on the role of the transversality condition at infinity as a necessary and sufficient condition for optimality, with or without convexity assumptions. We first derive the nonsmooth maximum principle and the adjoint inclusion for the value function as necessary conditions for optimality that exhibit the relationship between the maximum principle and dynamic programming. We then present sufficiency theorems, consistent with the strengthened maximum principle, employing the adjoint inequalities for the Hamiltonian and the value function. Synthesizing these results, necessary and sufficient conditions for optimality are provided for the convex case. In particular, the role of the transversality conditions at infinity is clarified.
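
    For context, two commonly studied forms of the transversality condition at infinity (given here for illustration; the paper's exact conditions may differ) are

    \[ \lim_{T \to \infty} p(T) = 0 \qquad \text{and} \qquad \lim_{T \to \infty} \langle p(T), x^*(T) \rangle = 0, \]

    where p(·) is the adjoint arc; the sufficiency theorems above turn on when such limit conditions can be imposed or deduced.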

    Necessary conditions for optimal control problems with state constraints

    Necessary conditions of optimality are derived for optimal control problems with pathwise state constraints, in which the dynamic constraint is modelled as a differential inclusion. The novel feature of the conditions is the unrestrictive nature of the hypotheses under which they are shown to be valid. An Euler-Lagrange type condition is obtained for problems where the multifunction associated with the dynamic constraint takes values which are possibly unbounded, nonconvex sets, and satisfies a mild 'one-sided' Lipschitz continuity hypothesis. We recover as a special case the sharpest available necessary conditions for state-constraint-free problems, proved in a recent paper by Ioffe. For problems where the multifunction is convex valued, it is shown that the necessary conditions remain valid when the one-sided Lipschitz hypothesis is replaced by a milder, local hypothesis. A recent 'dualization' theorem permits us to infer a strengthened form of the Hamiltonian inclusion from the Euler-Lagrange condition. The necessary conditions for state-constrained problems with convex-valued multifunctions are derived under hypotheses on the dynamics which are significantly weaker than those invoked by Loewen and Rockafellar to achieve related necessary conditions for state-constrained problems, and improve on available results in certain respects even when specialized to the state-constraint-free case. The proofs make use of recent 'decoupling' ideas of the authors, which reduce the optimization problem to one to which Pontryagin's maximum principle is applicable, and of a refined penalization technique to deal with the dynamic constraint.
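
    As a hedged illustration of the shape of such conditions (notation assumed, not the paper's statement), for a problem with dynamic constraint ẋ(t) ∈ F(t, x(t)) and state constraint h(x(t)) ≤ 0, an Euler-Lagrange type condition asserts the existence of an arc p(·) and a nonnegative measure μ supported on the active set {t : h(x∗(t)) = 0} such that, writing q(t) for p(t) augmented by the integral of μ up to time t,

    \[ \dot{p}(t) \in \mathrm{co}\,\{\, \eta : (\eta, q(t)) \in N_{\mathrm{Gr}\, F(t,\,\cdot)}\big(x^*(t), \dot{x}^*(t)\big) \,\} \quad \text{a.e.}, \]

    where N denotes a suitable normal cone to the graph of the multifunction; the measure μ is the price attached to the pathwise state constraint.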

    A finite-dimensional approximation method in optimal control theory


    A simple 'finite approximations' proof of the Pontryagin maximum principle under reduced differentiability hypotheses

    Traditional proofs of the Pontryagin maximum principle (PMP) require the continuous differentiability of the dynamics with respect to the state variable on a neighbourhood of the minimizing state trajectory, when arbitrary values of the control variable are inserted into the dynamic equations. Recently, Sussmann has drawn attention to the fact that the PMP remains valid when the dynamics are differentiable with respect to the state variable merely when the minimizing control is inserted into the dynamic equations. This weakening of earlier hypotheses has been referred to as the Lojasiewicz refinement. Moreover, it suffices to hypothesize that the dynamics are differentiable with respect to the state variable merely along the minimizing state trajectory. We show that these extensions of earlier versions of the PMP can be simply proved by finite approximations, application of a Lagrange multiplier rule in finite dimensions, and passage to the limit. Furthermore, our analysis requires only that the minimizer in question is a Pontryagin local minimizer, a weaker notion of 'local minimizer' than has previously been considered in connection with these extensions.
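
    One standard way such a finite approximations argument is organized (a sketch under assumed notation, not the paper's construction): discretize the dynamics on a mesh of width h, apply a finite-dimensional Lagrange multiplier rule, and pass to the limit,

    \[ x_{k+1} = x_k + h\, f(t_k, x_k, u_k), \qquad p_k = p_{k+1} + h\, \nabla_x f(t_k, x^*_k, u^*_k)^{\top} p_{k+1}, \]

    together with the discrete maximum condition \langle p_{k+1}, f(t_k, x^*_k, u) \rangle \le \langle p_{k+1}, f(t_k, x^*_k, u^*_k) \rangle for admissible u. As h → 0 these recover the costate equation and the pointwise maximum principle; note that the discrete adjoint recursion uses differentiability only along the minimizing pair, consistent with the reduced hypotheses.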

    Coextremals and the value function for control problems with data measurable in time

    Suppose x∗(·) is a solution to an optimal control problem formulated in terms of a differential inclusion. Known first-order necessary conditions of optimality assert the existence of a coextremal, or adjoint function, p(·), which together with x∗(·) satisfies the Hamiltonian inclusion and an associated transversality condition. In this paper we interpret coextremals in terms of generalized gradients of the value function V by demonstrating that p(·) can in addition be chosen to satisfy (p(t)·ẋ∗(t), −p(t)) ∈ ∂V(t, x∗(t)) a.e. The hypotheses imposed are more or less the weakest under which the Hamiltonian inclusion condition is known to apply, and permit, in particular, measurable time dependence of the data. The proof of the results relies on recent developments in Hamilton-Jacobi theory applicable in such circumstances. An analogous result is proved for problems where the dynamics are modelled by a differential equation with a control term.
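
    To unpack the displayed relation (using the usual Hamiltonian for differential inclusions; notation assumed): setting

    \[ H(t, x, p) = \max_{v \in F(t, x)} \langle p, v \rangle, \]

    the maximization property gives p(t)·ẋ∗(t) = H(t, x∗(t), p(t)) a.e., so the condition reads (H(t, x∗(t), p(t)), −p(t)) ∈ ∂V(t, x∗(t)) a.e.: up to sign, the adjoint arc is a generalized gradient of the value function in the state variable, while the Hamiltonian supplies the time component.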

    Necessary Conditions for Free End-Time, Measurably Time Dependent Optimal Control Problems with State Constraints

    Recently, necessary conditions have been derived for fixed-time optimal control problems with state constraints, formulated in terms of a differential inclusion, under very weak hypotheses on the data. These allow the multifunction describing admissible velocities to be unbounded and possibly nonconvex valued. This paper extends the earlier necessary conditions to allow for free end-times. A notable feature of the new free end-time necessary conditions is that they cover problems with measurably time dependent data. For such problems, standard analytical techniques for deriving free-time necessary conditions, which depend on a transformation of the time variable, no longer work. Instead, we use variational methods based on the calculus of "essential values".
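
    For context, one common definition of the essential values of a measurable function h at a time τ (an assumption about the notion invoked here, not a quotation from the paper) is

    \[ c \in \operatorname*{ess}_{t \to \tau} h(t) \iff \mathrm{meas}\,\{\, t \in (\tau - \varepsilon, \tau + \varepsilon) : |h(t) - c| < \varepsilon \,\} > 0 \ \text{ for every } \varepsilon > 0. \]

    Essential values are insensitive to modification of h on null sets, which is what makes them suitable for stating endpoint conditions when the data are merely measurable in time.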
