29 research outputs found

    Distributed Control of Spatially Reversible Interconnected Systems with Boundary Conditions

    Get PDF
    We present a class of spatially interconnected systems with boundary conditions that have close links with their spatially invariant extensions. In particular, well-posedness, stability, and performance of the extension imply the same characteristics for the actual, finite-extent system. In turn, existing synthesis methods for control of spatially invariant systems can be extended to this class. The relation between the two kinds of systems is proved using ideas based on the "method of images" from the theory of partial differential equations, with symmetry properties of the interconnection as a key tool.
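
    The following toy sketch (not the paper's construction) illustrates the method-of-images idea in its simplest form: a finite chain of coupled first-order subsystems with zero boundary conditions is embedded, by reflection, into a periodic and hence spatially invariant chain, and the eigenvalues of the finite system turn out to be a subset of those of the extension, so stability of the extension carries over. The chain length and coefficients below are made up for the demo.

        # Toy method-of-images illustration: finite chain vs. its periodic extension.
        import numpy as np

        N, a, b = 6, 0.2, 0.3    # hypothetical chain length, coupling gain, local dynamics

        # Finite-extent system: tridiagonal Toeplitz state matrix with zero boundaries.
        A_finite = b * np.eye(N) + a * (np.eye(N, k=1) + np.eye(N, k=-1))

        # Spatially invariant extension: circulant matrix on the reflected, periodic chain.
        M = 2 * (N + 1)
        row = np.zeros(M)
        row[0], row[1], row[-1] = b, a, a
        A_ext = np.array([np.roll(row, k) for k in range(M)])

        # The finite system's eigenvalues are contained in the extension's, so a
        # spectral radius below one for the extension implies it for the finite system.
        print("spectral radius, finite system:", max(abs(np.linalg.eigvals(A_finite))))
        print("spectral radius, extension    :", max(abs(np.linalg.eigvals(A_ext))))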

    A Faithful Distributed Implementation of Dual Decomposition and Average Consensus Algorithms

    Full text link
    We consider large-scale cost allocation problems and consensus-seeking problems for multiple agents, in which the agents are asked to collaborate in a distributed algorithm to find a solution. If the agents are strategic and minimize their own individual cost rather than the global social cost, they have an incentive not to follow the intended algorithm unless the tax/subsidy mechanism is carefully designed. Inspired by the classical Vickrey-Clarke-Groves mechanism and more recent algorithmic mechanism design theory, we propose a tax mechanism that incentivizes agents to faithfully implement the intended algorithm. In particular, a new notion of asymptotic incentive compatibility is introduced to characterize a desirable property of this class of mechanisms. The proposed class of tax mechanisms provides a sequence of mechanisms that gives agents a diminishing incentive to deviate from the suggested algorithm.
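
    As a point of reference for the algorithms being taxed, the sketch below shows a bare average consensus iteration: each agent repeatedly averages its value with its neighbours' using a doubly stochastic weight matrix, so all values converge to the global mean. The four-agent graph and weights are made up, and the paper's tax/subsidy mechanism is layered on top of iterations of this kind rather than shown here.

        # Minimal average consensus sketch on a hypothetical 4-agent cycle graph.
        import numpy as np

        # Doubly stochastic weights (each agent mixes with its two neighbours).
        W = np.array([
            [0.50, 0.25, 0.00, 0.25],
            [0.25, 0.50, 0.25, 0.00],
            [0.00, 0.25, 0.50, 0.25],
            [0.25, 0.00, 0.25, 0.50],
        ])

        x = np.array([4.0, 0.0, 2.0, 10.0])   # agents' initial private values
        for _ in range(100):                  # consensus iteration x_{k+1} = W x_k
            x = W @ x

        print(x)    # every entry approaches the average of the initial values, 4.0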

    Task Release Control for Decision Making Queues

    Full text link
    We consider optimal duration allocation in a decision-making queue. Decision-making tasks arrive at a given rate to a human operator. The correctness of the operator's decision evolves as a sigmoidal function of the duration allocated to the task, while each task waiting in the queue loses its value continuously. We elucidate this trade-off between decision accuracy and task value and determine optimal policies for the human operator. We show that the optimal policy requires the operator to drop some tasks. We present a receding horizon optimization strategy and compare it with the greedy policy.
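
    A single-task version of this trade-off can be written in a few lines: accuracy grows sigmoidally with the allocated duration while the task's value decays, so the expected reward has an interior maximum, and if that maximum is too small the task is better dropped. The reward shape and rates below are hypothetical, not the paper's model.

        # Toy duration/value trade-off for a single task (hypothetical parameters).
        import numpy as np

        def expected_reward(d, decay=0.3, slope=2.0, midpoint=3.0):
            accuracy = 1.0 / (1.0 + np.exp(-slope * (d - midpoint)))  # sigmoidal correctness
            value = np.exp(-decay * d)                                # continuous value loss
            return accuracy * value

        durations = np.linspace(0.0, 10.0, 1001)
        rewards = expected_reward(durations)
        print("best duration:", durations[np.argmax(rewards)], "reward:", rewards.max())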

    Learn and Control while Switching: with Guaranteed Stability and Sublinear Regret

    Full text link
    Over-actuated systems often make it possible to achieve specific performance objectives by switching between different subsets of actuators. However, when the system parameters are unknown, transferring authority to different subsets of actuators is challenging due to stability and performance concerns. This paper presents an efficient algorithm to tackle the so-called "learn and control while switching between different actuating modes" problem in the Linear Quadratic (LQ) setting. Our proposed strategy builds on an Optimism in the Face of Uncertainty (OFU) based algorithm, equipped with a projection toolbox to keep the algorithm efficient in terms of regret. Along the way, we derive an optimal duration for the warm-up phase, thanks to the existence of a stabilizing neighborhood. The stability of the switched system is also guaranteed by designing a minimum average dwell time. The proposed strategy is proved to have a regret bound of $\bar{\mathcal{O}}\big(\sqrt{T}\big) + \mathcal{O}\big(n_s \sqrt{T}\big)$ over a horizon $T$ with $n_s$ switches, provably outperforming a naive application of the basic OFU algorithm.
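
    The sketch below shows only the skeleton that such methods build on, not the paper's OFU-with-switching algorithm: excite the unknown system during a warm-up phase, estimate (A, B) by least squares, and apply the certainty-equivalent LQR gain computed from the estimate. The plant, noise level, and warm-up length are made up.

        # Bare learn-then-control loop for an LQ system with unknown (A, B);
        # OFU-style methods refine this with confidence sets and switching logic.
        import numpy as np
        from scipy.linalg import solve_discrete_are

        rng = np.random.default_rng(0)
        A_true = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical unknown plant
        B_true = np.array([[0.0], [0.1]])
        Q, R = np.eye(2), np.eye(1)

        # Warm-up: excite the system with random inputs and record transitions.
        x = np.zeros((2, 1)); X, U, Xn = [], [], []
        for _ in range(200):
            u = rng.normal(size=(1, 1))
            xn = A_true @ x + B_true @ u + 0.01 * rng.normal(size=(2, 1))
            X.append(x); U.append(u); Xn.append(xn)
            x = xn

        # Least-squares estimate of [A B] from x_{k+1} = [A B] [x_k; u_k] + noise.
        Z = np.hstack([np.vstack([xk, uk]) for xk, uk in zip(X, U)])
        Y = np.hstack(Xn)
        AB_hat = Y @ Z.T @ np.linalg.inv(Z @ Z.T)
        A_hat, B_hat = AB_hat[:, :2], AB_hat[:, 2:]

        # Certainty-equivalent LQR gain from the estimated model.
        P = solve_discrete_are(A_hat, B_hat, Q, R)
        K = np.linalg.solve(R + B_hat.T @ P @ B_hat, B_hat.T @ P @ A_hat)
        print("estimated feedback gain:", K)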

    The Price of Distributed Design in Optimal Control

    Get PDF
    We study control design strategies that, when presented with a plant made of interconnected subsystems, construct a sub-controller for each of them using only a model of that particular subsystem. We prove that, for a class of linear time-invariant, discrete-time systems, any such distributed control strategy must have a worst-case performance at least twice the optimal. The best distributed design strategy is one that results in a deadbeat controller for every plant.
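
    For reference, a deadbeat controller is a state-feedback gain that places every closed-loop eigenvalue at the origin, so the state of an n-dimensional plant is driven to zero in at most n steps. The sketch below computes such a gain via Ackermann's formula for a made-up 2-state, single-input plant; it illustrates the term only and is not the paper's design procedure.

        # Deadbeat state feedback via Ackermann's formula (hypothetical plant).
        import numpy as np

        A = np.array([[1.0, 1.0], [0.0, 1.0]])
        B = np.array([[0.0], [1.0]])
        n = A.shape[0]

        # Controllability matrix [B, AB, ..., A^(n-1) B] and Ackermann's formula
        # with desired characteristic polynomial z^n (all poles at the origin).
        C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
        e_n = np.zeros((1, n)); e_n[0, -1] = 1.0
        K = e_n @ np.linalg.inv(C) @ np.linalg.matrix_power(A, n)

        print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))  # numerically zero

        x = np.array([[3.0], [-2.0]])
        for _ in range(n):                 # the state reaches the origin in n steps
            x = (A - B @ K) @ x
        print("state after", n, "steps:", x.ravel())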

    Stability of digitally interconnected linear systems

    Get PDF
    A sufficient condition for stability of linear subsystems interconnected by digitized signals is presented. There is a digitizer for each linear subsystem that periodically samples an input signal and produces an output that is quantized and saturated. The output of the digitizer is then fed as an input (in the usual sense) to the linear subsystem. Due to digitization, each subsystem behaves as a switched affine system, where state-dependent switches are induced by the digitizer. For each quantization region, a storage function is computed for each subsystem by solving appropriate linear matrix inequalities (LMIs), and the sum of these storage functions is a Lyapunov function for the interconnected system. Finally, using a condition on the sampling period, we specify a subset of the unsaturated state space from which all executions of the interconnected system reach a neighborhood of the quantization region containing the origin. The sampling period proves to be pivotal: if it is too small, a dwell-time argument cannot be used to establish convergence, while if it is too large, an unstable subsystem may not receive sufficiently timely inputs to avoid diverging.
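
    A minimal simulation of the digitizer described above, with made-up quantization step, saturation level, sampling period, and plant: the held, quantized, saturated feedback turns a simple linear loop into a switched affine system whose state ends up hovering near the quantization region containing the origin.

        # Digitized feedback on a hypothetical scalar plant x' = a*x + b*u.
        import numpy as np

        def digitize(y, step=0.25, sat=2.0):
            """Quantize y to a uniform grid and saturate at +/- sat."""
            return np.clip(np.round(y / step) * step, -sat, sat)

        a, b, k = 0.5, 1.0, 1.5            # unstable plant, stabilizing nominal gain
        Ts, dt = 0.1, 0.001                # digitizer sampling period, integration step
        n_hold = round(Ts / dt)
        x, u = 1.8, 0.0
        for i in range(5000):              # 5 seconds of simulation
            if i % n_hold == 0:            # digitizer fires: sample, quantize, saturate, hold
                u = -k * digitize(x)
            x += dt * (a * x + b * u)      # plant evolves under the held (affine) input

        print("state after 5 s:", x)       # hovers near the quantization region around 0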