
    Regularized Jacobi iteration for decentralized convex optimization with separable constraints

    We consider multi-agent, convex optimization programs subject to separable constraints, where the constraint function of each agent involves only its local decision vector, while the decision vectors of all agents are coupled via a common objective function. We focus on a regularized variant of the so-called Jacobi algorithm for decentralized computation in such problems. We first consider the case where the objective function is quadratic, and provide a fixed-point theoretic analysis showing that the algorithm converges to a minimizer of the centralized problem. Moreover, we quantify the potential benefits of such an iterative scheme by comparing it against a scaled projected gradient algorithm. We then consider the general case and show that all limit points of the proposed iteration are optimal solutions of the centralized problem. The efficacy of the proposed algorithm is illustrated by applying it to the problem of optimal charging of electric vehicles, where, as opposed to earlier approaches, we show convergence to an optimal charging scheme for a finite, possibly large, number of vehicles.
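    The regularized Jacobi update for the quadratic case can be sketched as follows: in parallel, each agent minimizes the common quadratic objective over its own variable, holding the others at the previous iterate and adding a proximal regularization term. This is a minimal illustrative sketch (one scalar variable per agent, box constraints, names chosen here), not the paper's implementation.

```python
import numpy as np

def regularized_jacobi(Q, c, lows, highs, alpha=1.0, iters=200):
    """Regularized Jacobi iteration for min 0.5 x'Qx + c'x subject to
    per-agent box constraints lows[i] <= x[i] <= highs[i].
    Each agent i owns one coordinate; alpha > 0 weights the proximal
    term alpha*(x_i - x_i^k)^2 added to its local subproblem."""
    n = len(c)
    x = np.zeros(n)
    for _ in range(iters):
        x_new = np.empty(n)
        for i in range(n):
            # contribution of the other agents, frozen at iterate x^k
            others = Q[i] @ x - Q[i, i] * x[i]
            # closed-form minimizer of the regularized local quadratic
            xi = (2.0 * alpha * x[i] - c[i] - others) / (Q[i, i] + 2.0 * alpha)
            x_new[i] = np.clip(xi, lows[i], highs[i])
        x = x_new  # Jacobi (parallel) update: all agents used the same x^k
    return x
```

The proximal term damps the simultaneous updates; without it, the plain Jacobi scheme can oscillate when the coupling between agents is strong.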

    Sampling-based optimal kinodynamic planning with motion primitives

    This paper proposes a novel sampling-based motion planner, which integrates into RRT* (Rapidly exploring Random Tree star) a database of pre-computed motion primitives to alleviate its computational load and allow for motion planning in a dynamic or partially known environment. The database is built by considering a set of initial and final state pairs in some grid space, and determining for each pair an optimal trajectory that is compatible with the system dynamics and constraints, while minimizing a cost. Nodes are progressively added to the tree of feasible trajectories in the RRT* by extracting at random a sample in the gridded state space and selecting the best obstacle-free motion primitive in the database that joins it to an existing node. The tree is rewired if some nodes can be reached from the new sampled state through an obstacle-free motion primitive with lower cost. The computationally more intensive part of motion planning is thus moved to the preliminary offline phase of the database construction at the price of some performance degradation due to gridding. Grid resolution can be tuned so as to compromise between (sub)optimality and size of the database. The planner is shown to be asymptotically optimal as the grid resolution goes to zero and the number of sampled states grows to infinity.
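    The offline/online split can be sketched with a primitive database keyed by gridded state pairs: states are snapped to a grid, and the online tree extension becomes a table lookup instead of an optimal-control solve. This is an illustrative sketch (class and function names are invented here; trajectories and costs would come from an offline solver, as the abstract describes).

```python
def snap(state, resolution):
    """Snap a continuous state to its nearest grid cell."""
    return tuple(round(s / resolution) for s in state)

class PrimitiveDB:
    """Database of pre-computed motion primitives keyed by
    (start_cell, goal_cell) pairs on a grid of the state space."""
    def __init__(self, resolution):
        self.resolution = resolution
        self.table = {}

    def add(self, start, goal, trajectory, cost):
        # offline phase: store the optimal trajectory and its cost
        key = (snap(start, self.resolution), snap(goal, self.resolution))
        self.table[key] = (trajectory, cost)

    def best_primitive(self, start, goal):
        # online phase: constant-time lookup in place of solving a
        # two-point boundary value problem at every tree extension
        key = (snap(start, self.resolution), snap(goal, self.resolution))
        return self.table.get(key)
```

A coarser `resolution` shrinks the table at the cost of larger snapping error, mirroring the (sub)optimality/size trade-off discussed in the abstract.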

    Tracking-ADMM for Distributed Constraint-Coupled Optimization

    We consider constraint-coupled optimization problems in which agents of a network aim to cooperatively minimize the sum of local objective functions subject to individual constraints and a common linear coupling constraint. We propose a novel optimization algorithm that embeds a dynamic average consensus protocol in the parallel Alternating Direction Method of Multipliers (ADMM) to design a fully distributed scheme for the considered set-up. The dynamic average mechanism allows agents to track the time-varying coupling constraint violation (at the current solution estimates). The tracked version of the constraint violation is then used to update local dual variables in a consensus-based scheme mimicking a parallel ADMM step. Under convexity, we prove that all limit points of the agents' primal solution estimates form an optimal solution of the constraint-coupled (primal) problem. The result is proved by means of a Lyapunov-based analysis simultaneously showing consensus of the dual estimates to a dual optimal solution, convergence of the tracking scheme and asymptotic optimality of primal iterates. A numerical study on optimal charging schedule of plug-in electric vehicles corroborates the theoretical results. (Comment: 14 pages, 2 figures, submitted to Automatica.)
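    The dynamic average consensus mechanism can be sketched in one line of linear algebra: each agent mixes its neighbors' estimates through a doubly stochastic matrix and adds the innovation in its local constraint value, so that the estimates track the network-wide average violation. This is a generic sketch of dynamic average consensus (names are chosen here, not the paper's notation), not the full Tracking-ADMM scheme.

```python
import numpy as np

def tracking_step(W, D, G_new, G_old):
    """One dynamic average consensus step.

    W     : doubly stochastic mixing matrix (n_agents x n_agents)
    D     : current estimates, row i = agent i's estimate of the
            average coupling-constraint violation
    G_new : rows g_i(x_i^{k+1}), local constraint values after the update
    G_old : rows g_i(x_i^k), local constraint values before the update

    Because W is doubly stochastic, the column-wise mean of D is
    preserved up to the innovation, so D tracks the average of G.
    """
    return W @ D + G_new - G_old
```

In Tracking-ADMM this tracked violation then drives the local dual updates, replacing the global constraint information that a centralized ADMM step would need.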

    Model reduction of discrete time hybrid systems: A structural approach based on observability

    This paper addresses model reduction for discrete time hybrid systems that are described by a Mixed Logical Dynamical (MLD) model. The goal is to simplify the MLD model while preserving its input/output behavior. This is useful when considering a reachability property that depends on the output and should be enforced by appropriately setting the input. The proposed procedure for model reduction rests on the analysis of the structure of the MLD system and on its observability properties. It is also applicable to PieceWise Affine (PWA) systems that can be equivalently represented as MLD systems. In the case of PWA systems, mode merging can be adopted to further simplify the model.
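    The observability analysis underlying such a reduction can be illustrated, for the linear dynamics of a single mode, by the standard observability-matrix rank test: states outside the observable subspace do not affect the output and are candidates for removal. This is a textbook building block shown for illustration, not the paper's structural procedure for MLD systems.

```python
import numpy as np

def observable_subspace_dim(A, C):
    """Dimension of the observable subspace of x_{k+1} = A x_k,
    y_k = C x_k, i.e. the rank of [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O)
```

A rank deficit signals state components whose removal leaves the input/output behavior of that mode unchanged.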

    Minimum resource commitment for reachability specifications in a discrete time linear setting

    This paper addresses control input design for a discrete time linear system. The goal is to satisfy a reachability specification and, at the same time, minimize the number of inputs that need to be set (influential inputs). To this end, we introduce an appropriate input parametrization so that, depending on the parameter values, some of the inputs act as control variables, while the others are treated as disturbances and can take an arbitrary value in their range. We then enforce the specification while maximizing the number of disturbance inputs. Two approaches are developed: one based on an open loop scheme and one based on a compensation scheme. In the former, we end up solving a linear program. In the latter, the parametrization is extended so as to allow the influential inputs to depend on the non-influential ones, and the problem is reduced to a mixed integer linear program. A comparison between the two approaches is carried out, showing the superiority of the latter. Possible applications to system design and security of networked control systems are briefly discussed in the introduction.
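    The reachability specification is imposed on the state propagated through the linear dynamics; a minimal sketch of that propagation, assuming dynamics of the form x_{k+1} = A x_k + B u_k (illustrative names, not the paper's formulation):

```python
import numpy as np

def final_state(A, B, x0, inputs):
    """Propagate x_{k+1} = A x_k + B u_k over a finite horizon and
    return the final state x_N that the reachability specification
    constrains."""
    x = np.array(x0, dtype=float)
    for u in inputs:
        x = A @ x + B @ np.array(u, dtype=float)
    return x
```

Since x_N is affine in the input sequence, declaring some input entries decision variables and letting the rest range over their bounds turns the specification into the linear (or, with the compensation scheme, mixed integer linear) feasibility problems the abstract describes.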

    An Iterative Scheme for the Approximate Linear Programming Solution to the Optimal Control of a Markov Decision Process

    This paper addresses the computational issues involved in the solution to an infinite-horizon optimal control problem for a Markov Decision Process (MDP) with a continuous state component and a discrete control input. The optimal Markov policy for the MDP can be determined based on the fixed point solution to the Bellman equation, which can be rephrased as a constrained Linear Program (LP) with an infinite number of constraints and an infinite dimensional optimization variable (the optimal value function). To compute an (approximate) solution to the LP, an iterative randomized scheme is proposed where the optimization variable is expressed as a linear combination of basis functions in a given class: at each iteration, the resulting semi-infinite LP is solved via constraint sampling, whereas the number of basis functions is progressively increased through the iterations so as to meet some performance goal. The effectiveness of the proposed scheme is shown on a multi-room heating system example.
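    The constraint-sampling step can be sketched as follows: with the value function approximated as V(x) ≈ Σ_j w_j φ_j(x), each sampled state-input pair (x, u) contributes one linear inequality V(x) ≤ cost(x, u) + γ V(x'). The helpers `phi`, `cost`, and `next_state` below are hypothetical placeholders (a deterministic transition is assumed for simplicity; the MDP case would average over next states).

```python
import numpy as np

def sampled_lp_data(samples, phi, cost, next_state, gamma):
    """Assemble the constraint-sampled LP data (A, b) with A w <= b.

    samples    : list of sampled (state, input) pairs
    phi        : x -> feature vector (phi_1(x), ..., phi_m(x))
    cost       : (x, u) -> stage cost
    next_state : (x, u) -> successor state (deterministic sketch)
    gamma      : discount factor in (0, 1)
    """
    rows, rhs = [], []
    for x, u in samples:
        xp = next_state(x, u)
        # V(x) - gamma * V(x') <= cost(x, u), written in the weights w
        rows.append(phi(x) - gamma * phi(xp))
        rhs.append(cost(x, u))
    return np.array(rows), np.array(rhs)
```

Maximizing a weighted sum of Σ_j w_j φ_j(x) over these finitely many inequalities yields the approximate value function; adding basis functions across iterations enlarges the feasible approximation class, as in the proposed scheme.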