Decentralized Implementation of Centralized Controllers for Interconnected Systems
Given a centralized controller associated with a linear time-invariant interconnected system, this paper is concerned with designing a parameterized decentralized controller such that the state and input of the system under the obtained decentralized controller can be made arbitrarily close to those of the system under the given centralized controller by tuning the controller's parameters. To this end, a two-level decentralized controller is designed, where the upper level captures the dynamics of the centralized closed-loop system and the lower level is an observer-based sub-controller designed using the new notion of structural initial value observability. The proposed method can decentralize every generic centralized controller, provided the interconnected system satisfies very mild conditions. The efficacy of this work is elucidated by some numerical examples.
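The two-level idea, where each local controller internally replicates the centralized closed-loop dynamics (the upper level) from an initial-state estimate and applies only its own slice of the resulting input, can be sketched in a toy discrete-time setting. The matrices, the exact initial-state estimate (idealizing structural initial value observability), and the discrete-time formulation are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Toy discrete-time interconnected system x+ = A x + B u with two
# subsystems, and a given centralized gain u = -K x (all values arbitrary).
A = np.array([[0.9, 0.2], [0.1, 0.8]])
B = np.eye(2)
K = np.array([[0.5, 0.1], [0.0, 0.4]])

x0 = np.array([1.0, -2.0])
T = 20

# Centralized closed loop: x+ = (A - B K) x.
x = x0.copy()
xc_traj = [x.copy()]
for _ in range(T):
    x = (A - B @ K) @ x
    xc_traj.append(x.copy())

# Decentralized: each node runs its own copy of the centralized
# closed-loop model (the "upper level") from a local estimate of x0 and
# applies only its own input component. With an exact initial-state
# estimate, the two trajectories coincide.
x = x0.copy()
xhat = [x0.copy(), x0.copy()]           # local model states at nodes 1, 2
xd_traj = [x.copy()]
for _ in range(T):
    u = np.array([(-K @ xhat[0])[0],    # node 1 applies its input slice
                  (-K @ xhat[1])[1]])   # node 2 applies its input slice
    x = A @ x + B @ u
    xhat = [(A - B @ K) @ xh for xh in xhat]
    xd_traj.append(x.copy())

err = max(np.linalg.norm(a - b) for a, b in zip(xc_traj, xd_traj))
print(err)  # ~0: the decentralized loop reproduces the centralized one
```

In this idealized sketch the mismatch is zero; the paper's contribution is making the discrepancy arbitrarily small when the initial state must instead be reconstructed locally.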
System-level, Input-output and New Parameterizations of Stabilizing Controllers, and Their Numerical Computation
It is known that the set of internally stabilizing controllers is non-convex, but it admits convex
characterizations using certain closed-loop maps: a classical result is the
{Youla parameterization}, and two recent notions are the {system-level
parameterization} (SLP) and the {input-output parameterization} (IOP). In this
paper, we address the existence of new convex parameterizations and discuss
potential tradeoffs of each parametrization in different scenarios. Our main
contributions are: 1) We first reveal that only four groups of stable
closed-loop transfer matrices are equivalent to internal stability: one of them
is used in the SLP, another one is used in the IOP, and the other two are new,
leading to two new convex parameterizations of stabilizing controllers. 2)
We then investigate the properties of these parameterizations after imposing
the finite impulse response (FIR) approximation, revealing that the IOP offers
the best approximation ability under given FIR
constraints. 3) These four parameterizations require no \emph{a priori}
doubly-coprime factorization of the plant, but impose a set of equality
constraints. However, these equality constraints will never be satisfied
exactly in numerical computation. We prove that the IOP is numerically robust
for open-loop stable plants, in the sense that small mismatches in the equality
constraints do not compromise the closed-loop stability. The SLP is known to
enjoy numerical robustness in the state feedback case; here, we show that
numerical robustness of the four-block SLP controller requires case-by-case
analysis in the general output feedback case.

Comment: 20 pages, 5 figures. Added extensions on numerical computation and robustness of closed-loop parameterization.
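The classical Youla result mentioned above can be checked with plain polynomial arithmetic for an open-loop stable plant: every stable parameter Q yields an internally stable loop under K = Q(1 - GQ)^{-1}. This is a toy scalar illustration of the classical parameterization, not of the SLP/IOP machinery; the plant g(z) = 1/(z - 0.5) and the FIR choice of Q are arbitrary assumptions.

```python
import numpy as np

# Youla parameterization for an open-loop stable SISO plant G = b/a:
# all stabilizing controllers are K = Q / (1 - G Q) for stable Q.
# Polynomials in z, descending coefficients.
a = np.array([1.0, -0.5])        # plant denominator z - 0.5 (stable)
b = np.array([1.0])              # plant numerator
q = np.array([0.3, 0.2])         # an arbitrary stable (FIR) parameter Q

# K = q*a / (a - b*q); the closed-loop characteristic polynomial of the
# unity-feedback loop is a*(a - b*q) + b*(q*a), which collapses to a^2.
k_num = np.polymul(q, a)
k_den = np.polysub(a, np.polymul(b, q))
char_poly = np.polyadd(np.polymul(a, k_den), np.polymul(b, k_num))

roots = np.roots(char_poly)
print(np.round(roots, 6))        # both closed-loop poles at the plant pole 0.5
assert np.all(np.abs(roots) < 1) # internally stable for any stable Q
```

The cancellation to a^2 is the scalar shadow of the paper's point: the closed-loop maps are affine in the parameter, which is what makes the parameterizations convex.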
Privacy-Preserving Distributed Optimization via Subspace Perturbation: A General Framework
As the modern world becomes increasingly digitized and interconnected,
distributed signal processing has proven to be effective in processing its
large volume of data. However, a main challenge limiting the broad use of
distributed signal processing techniques is the issue of privacy in handling
sensitive data. To address this privacy issue, we propose a novel yet general
subspace perturbation method for privacy-preserving distributed optimization,
which allows each node to obtain the desired solution while protecting its
private data. In particular, we show that the dual variables introduced in each
distributed optimizer will not converge in a certain subspace determined by the
graph topology. Additionally, the optimization variable is ensured to converge
to the desired solution, because it is orthogonal to this non-convergent
subspace. We therefore propose to insert noise in the non-convergent subspace
through the dual variable such that the private data are protected, and the
accuracy of the desired solution is completely unaffected. Moreover, the
proposed method is shown to be secure under two widely-used adversary models:
passive and eavesdropping. Furthermore, we consider several distributed
optimizers such as ADMM and PDMM to demonstrate the general applicability of
the proposed method. Finally, we test the performance through a set of
applications. Numerical tests indicate that the proposed method outperforms existing methods in terms of estimation accuracy, privacy level, communication cost, and convergence rate.
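The geometry behind subspace perturbation can be illustrated with a much simpler solver than the ADMM/PDMM optimizers the paper treats: under plain dual ascent for consensus averaging, every dual update lies in range(A), so noise placed in null(A^T), a subspace fixed by the graph topology, never reaches the primal update. The triangle graph, data, and step size below are illustrative assumptions.

```python
import numpy as np

# Simplified dual-ascent sketch (not the paper's ADMM/PDMM): minimize
# sum_i (x_i - s_i)^2 / 2 subject to A x = 0 (consensus on a triangle).
s = np.array([1.0, 4.0, 7.0])            # private node data; average = 4
A = np.array([[1., -1., 0.],             # edge-incidence matrix;
              [0., 1., -1.],             # A x = 0 forces x_1 = x_2 = x_3
              [-1., 0., 1.]])

n = np.array([1., 1., 1.])               # spans null(A^T): A.T @ n = 0

def solve(lam, steps=300, alpha=0.3):
    for _ in range(steps):
        x = s - A.T @ lam                # primal update sees only A^T lam
        lam = lam + alpha * (A @ x)      # dual step lives in range(A)
    return x

x_clean = solve(np.zeros(3))
x_noisy = solve(1e3 * n)                 # huge "privacy" noise in null(A^T)

print(np.round(x_clean, 6), np.round(x_noisy, 6))
assert np.allclose(x_clean, np.full(3, s.mean()), atol=1e-6)
assert np.allclose(x_clean, x_noisy)     # dual noise never leaks into x
```

Since range(A) is orthogonal to null(A^T), the noisy component of the dual variable is invariant across iterations and is annihilated by A^T in the primal update: the solution is exact while an observer of the dual variables sees it masked by noise.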