    On the Convergence Time of a Natural Dynamics for Linear Programming

    We consider a system of nonlinear ordinary differential equations for the solution of linear programming (LP) problems that was first proposed in the mathematical biology literature as a model for the foraging behavior of the acellular slime mold Physarum polycephalum, and more recently considered as a method to solve LP instances. We study the convergence time of the continuous Physarum dynamics in the context of the linear programming problem, and derive a new time bound to approximate optimality that depends on the relative entropy between projected versions of the optimal point and of the initial point. The bound scales logarithmically with the LP cost coefficients and linearly with the inverse of the relative accuracy, establishing the efficiency of the dynamics for arbitrary LP instances with positive costs.
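
    As a concrete illustration, below is a minimal forward-Euler sketch of the continuous Physarum dynamics for an LP in standard form (min c^T x subject to Ax = b, x > 0), using the commonly stated form x' = q(x) - x with q(x) = W A^T (A W A^T)^{-1} b and W = diag(x_i / c_i). The tiny instance, step size, and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def physarum_lp(A, b, c, x0, dt=0.01, steps=20000):
    # Euler integration of x' = q(x) - x, where q(x) is the weighted
    # least-squares point W A^T (A W A^T)^{-1} b with W = diag(x_i / c_i)
    x = x0.astype(float).copy()
    for _ in range(steps):
        W = np.diag(x / c)                   # "conductances"
        p = np.linalg.solve(A @ W @ A.T, b)  # "potentials"
        q = W @ A.T @ p                      # induced "flow", satisfies A q = b
        x += dt * (q - x)                    # Physarum update
        x = np.maximum(x, 1e-12)             # guard against Euler undershoot
    return x

# toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum (1, 0))
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
print(physarum_lp(A, b, c, x0=np.array([0.5, 0.5])))
```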

    Successive Convexification of Non-Convex Optimal Control Problems and Its Convergence Properties

    This paper presents an algorithm to solve non-convex optimal control problems, where the non-convexity can arise from nonlinear dynamics and from non-convex state and control constraints. Assuming that the state and control constraints are already convex or have been convexified, the proposed algorithm convexifies the nonlinear dynamics via linearization, in a successive manner. Thus, at each succession, a convex optimal control subproblem is solved. Since the dynamics are linearized and the other constraints are convex, after discretization the subproblem can be expressed as a finite-dimensional convex programming problem. Because convex optimization problems can be solved very efficiently, especially with custom solvers, the subproblem can be solved in time-critical applications, such as real-time path planning for autonomous vehicles. Several safeguarding techniques, namely virtual control and trust regions, are incorporated into the algorithm and add another layer of algorithmic robustness. A convergence analysis is presented in the continuous-time setting, so the convergence results are independent of any numerical scheme used for discretization. Numerical simulations are performed for an illustrative trajectory optimization example.
    Comment: corrected wording for LICQ. This is the full version; a brief version of this paper was published in the 2016 IEEE 55th Conference on Decision and Control (CDC). http://ieeexplore.ieee.org/document/7798816
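
    To make the successive-convexification loop concrete, the sketch below linearizes a toy nonlinear system (a 1-D point mass with quadratic drag, an assumption for illustration) about the previous trajectory, adds virtual control and a trust region as safeguards, and solves each convex subproblem with cvxpy. The weights, trust-region radius, and iteration count are illustrative, not the paper's setup.

```python
import numpy as np
import cvxpy as cp

N, dt, drag = 31, 0.1, 0.5          # nodes, step, drag coefficient (assumed)
x_ref = np.zeros((N, 2))            # reference trajectory (position, velocity)
u_ref = np.zeros(N - 1)             # reference control

for it in range(8):                 # successive convexification iterations
    x = cp.Variable((N, 2))
    u = cp.Variable(N - 1)
    v = cp.Variable((N - 1, 2))     # virtual control: keeps subproblems feasible
    cons = [x[0] == np.array([0.0, 0.0]),        # start at rest at the origin
            x[N - 1] == np.array([1.0, 0.0])]    # end at rest at position 1
    for k in range(N - 1):
        # nonlinear dynamics f(x, u) = (vel, u - drag * vel^2),
        # linearized about the previous (reference) trajectory
        f = np.array([x_ref[k, 1], u_ref[k] - drag * x_ref[k, 1] ** 2])
        Ak = np.array([[0.0, 1.0], [0.0, -2.0 * drag * x_ref[k, 1]]])
        Bk = np.array([0.0, 1.0])
        xdot = f + Ak @ (x[k] - x_ref[k]) + Bk * (u[k] - u_ref[k])
        cons.append(x[k + 1] == x[k] + dt * xdot + v[k])
    cons.append(cp.sum_squares(x - x_ref) <= 10.0 ** 2)   # fixed trust region
    cost = cp.sum_squares(u) + 1e3 * cp.sum(cp.abs(v))    # drive virtual control to 0
    cp.Problem(cp.Minimize(cost), cons).solve()
    x_ref, u_ref = x.value, u.value   # re-linearize about the new trajectory
```

    In the full method the trust-region radius is updated between iterations based on how well the linearization predicted the nonlinear cost; a fixed radius is used here only to keep the sketch short.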

    Robust distributed linear programming

    This paper presents a robust, distributed algorithm to solve general linear programs. The algorithm design builds on the characterization of the solutions of the linear program as saddle points of a modified Lagrangian function. We show that the resulting continuous-time saddle-point algorithm is provably correct but, in general, not distributed, because of a global parameter associated with the nonsmooth exact penalty function employed to encode the inequality constraints of the linear program. This motivates the design of a discontinuous saddle-point dynamics that, while enjoying the same convergence guarantees, is fully distributed and scalable with the dimension of the solution vector. We also characterize the robustness of the proposed dynamics against disturbances and link failures. Specifically, we show that it is integral-input-to-state stable but not input-to-state stable. The latter fact is a consequence of a more general result, which we also establish, stating that no algorithmic solution for linear programming is input-to-state stable when uncertainty in the problem data affects the dynamics as a disturbance. Our results allow us to establish the resilience of the proposed distributed dynamics to disturbances of finite variation and to recurrently disconnected communication among the agents. Simulations in an optimal control application illustrate the results.
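
    A centralized toy sketch of the saddle-point idea, assuming an LP in standard form (min c^T x subject to Ax = b, x >= 0) with the nonnegativity constraints encoded by a nonsmooth exact penalty of weight K. The quadratic augmentation term, step size, and instance are assumptions made here for the sketch; the paper's fully distributed, discontinuous dynamics is not reproduced.

```python
import numpy as np

def saddle_point_lp(A, b, c, K=10.0, dt=1e-3, steps=100000):
    # Euler integration of the saddle-point flow of
    # L(x, z) = c^T x + z^T (A x - b) + 0.5 * ||A x - b||^2 + K * sum(max(0, -x));
    # the quadratic augmentation is an assumption added here for stability
    m, n = A.shape
    x, z = np.zeros(n), np.zeros(m)
    for _ in range(steps):
        g_pen = -K * (x < 0).astype(float)  # subgradient of the exact penalty
        dx = -(c + A.T @ z + A.T @ (A @ x - b) + g_pen)  # primal descent on L
        dz = A @ x - b                                   # dual ascent on L
        x, z = x + dt * dx, z + dt * dz
    return x, z

# toy instance: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum (1, 0))
A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 2.0])
x, z = saddle_point_lp(A, b, c)
print(x)  # approaches (1, 0) up to chattering from the discontinuous penalty
```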

    Singularly perturbed forward-backward stochastic differential equations: application to the optimal control of bilinear systems

    We study linear-quadratic stochastic optimal control problems with bilinear state dependence, in which the underlying stochastic differential equation (SDE) consists of slow and fast degrees of freedom. We show that, in the same way in which the underlying dynamics can be well approximated by a reduced-order effective dynamics in the time-scale limit (using classical homogenization results), the associated optimal expected cost converges in the time-scale limit to an effective optimal cost. This entails that we can well approximate the stochastic optimal control for the whole system by the reduced-order stochastic optimal control, which is clearly easier to solve because of its lower dimensionality. The approach uses an equivalent formulation of the Hamilton-Jacobi-Bellman (HJB) equation in terms of forward-backward SDEs (FBSDEs). We exploit the efficient solvability of FBSDEs via a least-squares Monte Carlo algorithm and demonstrate its applicability with a suitable numerical example.
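
    The sketch below illustrates the least-squares Monte Carlo idea on a simple decoupled scalar FBSDE: simulate forward Euler-Maruyama paths of X, then run a backward recursion in which the conditional expectations defining Y are fitted by polynomial regression. The drift, driver f, and terminal condition g are illustrative stand-ins, not the FBSDE derived from the HJB equation of the bilinear control problem.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, T = 20000, 50, 1.0             # sample paths, time steps, horizon
dt = T / N
mu, sigma = -0.5, 1.0                # forward SDE coefficients (assumed)

# forward Euler-Maruyama simulation of dX = mu * X dt + sigma dW
X = np.empty((N + 1, M))
X[0] = 1.0
for k in range(N):
    X[k + 1] = X[k] + mu * X[k] * dt + sigma * np.sqrt(dt) * rng.standard_normal(M)

g = lambda x: x ** 2                 # terminal condition Y_T = g(X_T) (assumed)
f = lambda x, y: -0.5 * y            # driver of the backward equation (assumed)

# backward recursion: Y_k ~ E[Y_{k+1} + f(X_k, Y_{k+1}) dt | X_k], with the
# conditional expectation fitted by least squares on a cubic polynomial basis
Y = g(X[N])
for k in range(N - 1, -1, -1):
    basis = np.vander(X[k], 4)
    target = Y + f(X[k], Y) * dt
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    Y = basis @ coef
print("estimated Y_0:", Y.mean())
```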

    Design optimization applied in structural dynamics

    This paper introduces design optimization strategies, especially for structures subject to dynamic constraints. Design optimization involves first modeling and then optimizing the problem. Using the Finite Element (FE) model of a structure directly in an optimization process requires a long computation time; therefore, Backpropagation Neural Networks (NNs) are introduced as a so-called surrogate model for the FE model. The optimization techniques covered in this study are the Genetic Algorithm (GA) and the Sequential Quadratic Programming (SQP) method. As an application of the introduced techniques, a multi-segment cantilever beam problem under constraints on its first and second natural frequencies is selected and solved using four different approaches.
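
    As a minimal sketch of the surrogate-plus-SQP step, the example below minimizes beam mass subject to a lower bound on the first natural frequency, with scipy's SLSQP playing the role of the SQP solver. An analytic uniform-cantilever frequency formula stands in for the trained backpropagation NN surrogate, and all material and dimension values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

E, rho, L = 2.1e11, 7850.0, 1.0      # steel properties and beam length (assumed)

def freq_surrogate(bh):
    # first natural frequency (Hz) of a uniform cantilever with a b x h
    # cross-section; stands in for the NN trained on FE samples
    b, h = bh
    I, A = b * h ** 3 / 12.0, b * h
    return (1.875 ** 2 / (2 * np.pi)) * np.sqrt(E * I / (rho * A * L ** 4))

mass = lambda bh: rho * L * bh[0] * bh[1]      # objective: beam mass (kg)

# SQP (SLSQP) on the cheap surrogate: min mass s.t. first frequency >= 50 Hz
res = minimize(mass, x0=[0.05, 0.05], method="SLSQP",
               bounds=[(0.01, 0.1), (0.01, 0.1)],
               constraints=[{"type": "ineq",
                             "fun": lambda bh: freq_surrogate(bh) - 50.0}])
print(res.x, mass(res.x), freq_surrogate(res.x))
```

    In the paper's workflow the surrogate would be an NN trained on FE samples, and the same constrained problem could be handed to a GA for comparison with the gradient-based SQP result.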