
    Value function for regional control problems via dynamic programming and Pontryagin maximum principle

    In this paper we focus on regional deterministic optimal control problems, i.e., problems where the dynamics and the cost functional may differ in several regions of the state space and present discontinuities at their interface. Under the assumption that optimal trajectories have a locally finite number of switchings (no Zeno phenomenon), we use the duplication technique to show that the value function of the regional optimal control problem is the minimum, over all possible structures of trajectories, of value functions associated with classical optimal control problems posed over fixed structures, each of them being the restriction to some submanifold of the value function of a classical optimal control problem in higher dimension. The lifting duplication technique is thus seen as a kind of desingularization of the value function of the regional optimal control problem. In turn, we extend to regional optimal control problems the classical sensitivity relations and prove that the regularity of this value function is the same as (i.e., no more degenerate than) that of the higher-dimensional classical optimal control problem that lifts the problem.
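    As a rough sketch of the decomposition described above (the notation is illustrative, not taken from the paper): writing S for an admissible switching structure, V_S for the value function of the classical problem with structure S fixed, and tilde-V_S for its higher-dimensional lift restricted to a submanifold M_S, one would have, in LaTeX,

        % Illustrative notation only, not the paper's.
        V(t,x) \;=\; \min_{S \in \mathcal{S}(t,x)} V_S(t,x),
        \qquad
        V_S \;=\; \widetilde{V}_S \big|_{M_S},

    so that the regularity of V is governed by that of the lifted classical value functions \widetilde{V}_S.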

    A Survey on Continuous Time Computations

    We provide an overview of theories of continuous time computation. These theories allow us to understand both the hardness of questions related to continuous time dynamical systems and the computational power of continuous time analog models. We survey the existing models, summarize their results, and point to relevant references in the literature.

    Improved dynamical particle swarm optimization method for structural dynamics

    A methodology for the multiobjective structural design of buildings based on an improved particle swarm optimization algorithm is presented, which has proved to be very efficient and robust in nonlinear problems and when the optimization objectives are in conflict. In particular, the behaviour of the classical particle swarm optimization (PSO) algorithm is improved by dynamically adding self-adaptive mechanisms that enhance the exploration/exploitation trade-off and the diversity of the proposed algorithm, avoiding getting trapped in local minima. A novel integrated optimization system, called DI-PSO, was developed to solve this problem, and it is able to control and even improve the structural behaviour under seismic excitations. In order to demonstrate the effectiveness of the proposed approach, the methodology is tested against some benchmark problems. Then a 3-story building model is optimized under different objective cases, concluding that the improved multiobjective optimization methodology using DI-PSO is more efficient than designs obtained using single-objective optimization.
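    A minimal sketch of the exploration/exploitation trade-off mentioned above: a generic PSO loop whose inertia weight decays over the iterations, so early iterations explore and later ones exploit. This is not the authors' DI-PSO system; the function names and parameters are illustrative.

        # Generic PSO sketch with a dynamically adapted inertia weight
        # (illustrative only, not the DI-PSO system from the paper).
        import numpy as np

        def pso(objective, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
                w_start=0.9, w_end=0.4, c1=2.0, c2=2.0, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))      # positions
            v = np.zeros((n_particles, dim))                 # velocities
            pbest = x.copy()                                 # personal bests
            pbest_val = np.array([objective(p) for p in x])
            gbest = pbest[pbest_val.argmin()].copy()         # global best
            for t in range(iters):
                # Inertia decays linearly: exploration early, exploitation late.
                w = w_start - (w_start - w_end) * t / iters
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([objective(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[pbest_val.argmin()].copy()
            return gbest, pbest_val.min()

        # Usage: minimize a simple quadratic (a stand-in for a structural cost).
        best_x, best_f = pso(lambda z: float(np.sum(z**2)), dim=4)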

    An hybrid system approach to nonlinear optimal control problems

    We consider a nonlinear ordinary differential equation and want to control its behavior so that it reaches a target while minimizing a cost function. Our approach is to use hybrid systems to solve this problem: the complex dynamic is replaced by piecewise affine approximations, which allow an analytical resolution. The sequence of affine models then forms a sequence of states of a hybrid automaton. Given a sequence of states, we introduce a hybrid approximation of the nonlinear controllable domain and propose a new algorithm computing a controllable, piecewise convex approximation. In the same way, the nonlinear optimal control problem is replaced by a hybrid piecewise affine one. Stating a hybrid maximum principle suitable to our hybrid model, we deduce the global structure of the hybrid optimal control steering the system to the target.
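    A hedged sketch of the basic building block described above: replacing a scalar nonlinear vector field by affine pieces on a uniform partition, each piece playing the role of one discrete state of the hybrid automaton. The partitioning and the control synthesis in the paper are more elaborate; names below are illustrative.

        # Sketch: approximate a scalar nonlinear dynamic f(x) by affine pieces
        # a_i*x + b_i on each cell of a uniform partition (illustrative only).
        import numpy as np

        def piecewise_affine(f, x_min, x_max, n_cells):
            """Return cell edges and (a_i, b_i) obtained by interpolating f at the edges."""
            edges = np.linspace(x_min, x_max, n_cells + 1)
            pieces = []
            for xl, xr in zip(edges[:-1], edges[1:]):
                a = (f(xr) - f(xl)) / (xr - xl)   # slope of the affine model on [xl, xr]
                b = f(xl) - a * xl                # intercept
                pieces.append((a, b))
            return edges, pieces

        def eval_pwa(x, edges, pieces):
            i = min(np.searchsorted(edges, x, side="right") - 1, len(pieces) - 1)
            a, b = pieces[i]
            return a * x + b

        # Usage: approximate f(x) = x - x**3 with 8 affine pieces on [-2, 2].
        edges, pieces = piecewise_affine(lambda x: x - x**3, -2.0, 2.0, 8)
        print(eval_pwa(0.5, edges, pieces))   # 0.375, matching f(0.5) at a breakpoint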

    An Efficient Policy Iteration Algorithm for Dynamic Programming Equations

    We present an accelerated algorithm for the solution of static Hamilton-Jacobi-Bellman equations related to optimal control problems. Our scheme is based on a classic policy iteration procedure, which is known to have superlinear convergence in many relevant cases provided the initial guess is sufficiently close to the solution. When this is not the case, the method degenerates into a behavior similar to that of a value iteration method, with an increased computation time. The new scheme circumvents this problem by combining the advantages of both algorithms through an efficient coupling. The method starts with a value iteration phase and then switches to a policy iteration procedure when a certain error threshold is reached. A delicate point is to determine this threshold so as to avoid cumbersome computations with the value iteration and, at the same time, to be reasonably sure that the policy iteration method will finally converge to the optimal solution. We analyze the method and its efficient coupling in a number of examples in dimensions two, three and four, illustrating its properties.
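    A minimal sketch of the coupling idea on a small discounted tabular dynamic programming problem (a generic model, not the paper's HJB discretization; all names and the switch_tol parameter are illustrative): run value iteration until the residual drops below a threshold, then hand the resulting guess to policy iteration.

        # Value-iteration / policy-iteration coupling on a generic tabular problem.
        # P[a] is the transition matrix under action a, c[:, a] the running cost,
        # gamma the discount factor. Illustrative sketch, not the paper's scheme.
        import numpy as np

        def coupled_solver(P, c, gamma=0.95, switch_tol=1e-2, max_iter=10_000):
            n_states, n_actions = c.shape
            V = np.zeros(n_states)
            # Phase 1: value iteration until the residual falls below switch_tol.
            for _ in range(max_iter):
                Q = c + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
                V_new = Q.min(axis=1)
                done = np.max(np.abs(V_new - V)) < switch_tol
                V = V_new
                if done:
                    break
            # Phase 2: policy iteration starting from the current guess.
            policy = Q.argmin(axis=1)
            for _ in range(max_iter):
                # Policy evaluation: solve (I - gamma * P_pi) V = c_pi exactly.
                P_pi = np.stack([P[policy[s]][s] for s in range(n_states)])
                c_pi = c[np.arange(n_states), policy]
                V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, c_pi)
                # Policy improvement.
                Q = c + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
                new_policy = Q.argmin(axis=1)
                if np.array_equal(new_policy, policy):
                    break
                policy = new_policy
            return V, policy

    The point of the split is the one stated in the abstract: the cheap value-iteration phase produces an initial guess close enough for the policy-iteration phase to converge in only a few exact policy evaluations.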