Model Reduction of Multi-Dimensional and Uncertain Systems
We present model reduction methods with guaranteed error bounds for systems represented by a Linear Fractional Transformation (LFT) on a repeated scalar uncertainty structure. These reduction methods can be interpreted either as state-order reduction for multi-dimensional systems, or as uncertainty simplification in the case of uncertain systems, and are based on finding solutions to a pair of Linear Matrix Inequalities (LMIs). A related necessary and sufficient condition for the exact reducibility of stable uncertain systems is also presented.
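The LFT/LMI machinery generalizes classical balanced truncation. As a point of reference only, here is a minimal sketch of the 1-D case: Gramians and Hankel singular values for an illustrative stable system (the matrices A, B, C are made up, and this is the standard 1-D method, not the paper's uncertain-system procedure).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable 1-D system (A, B, C), chosen for the example
A = np.array([[-1.0, 0.5],
              [0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.2]])

# Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values; in classical balanced truncation the H-infinity
# error of discarding states is bounded by twice the sum of the discarded
# singular values, the 1-D analogue of the guaranteed bounds above
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]
```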
Distributed Lagrangian Methods for Network Resource Allocation
Motivated by a variety of applications in control engineering and information
sciences, we study network resource allocation problems where the goal is to
optimally allocate a fixed amount of resource over a network of nodes. In these
problems, due to the large scale of the network and complicated
inter-connections between nodes, any solution must be implemented in parallel
and based only on local data resulting in a need for distributed algorithms. In
this paper, we propose a novel distributed Lagrangian method, which requires
only local computation and communication. Our focus is to understand the
performance of this algorithm on the underlying network topology. Specifically,
we obtain an upper bound on the rate of convergence of the algorithm as a
function of the size and the topology of the underlying network. The
effectiveness and applicability of the proposed method are demonstrated by its
use in solving the important economic dispatch problem in power systems,
specifically on the benchmark IEEE-14 and IEEE-118 bus systems.
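As a rough illustration of the dual-decomposition idea behind such methods (not the paper's algorithm), the sketch below runs a distributed Lagrangian iteration for a toy quadratic allocation problem, min Σ (x_i - a_i)²/2 subject to Σ x_i = b, on a 4-node ring. Each node keeps a local multiplier, averages it with its neighbors, and takes a dual step using only its own data; all numbers (a, b, weights, step size) are invented for the example.

```python
import numpy as np

n = 4
a = np.array([1.0, 2.0, 3.0, 4.0])   # local cost minimizers (illustrative)
b = 14.0                              # total resource to allocate
b_loc = np.full(n, b / n)             # each node only knows its own share

# doubly stochastic ring weights: average with the two ring neighbors
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

lam = np.zeros(n)                     # one local Lagrange multiplier per node
step = 0.05
for _ in range(5000):
    x = a - lam                       # local primal step: argmin (x-a_i)^2/2 + lam_i*x
    lam = W @ lam + step * (x - b_loc)  # consensus + dual step on local feasibility
x = a - lam
```

At the fixed point the allocation sums to b exactly, and with a small constant step the local multipliers sit close to the centralized optimum λ* = (Σa - b)/n.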
Estimator Selection: End-Performance Metric Aspects
Recently, a framework for application-oriented optimal experiment design has
been introduced. In this context, the distance of the estimated system from the
true one is measured in terms of a particular end-performance metric. This
treatment leads to superior unknown system estimates to classical experiment
designs based on usual pointwise functional distances of the estimated system
from the true one. The separation of the system estimator from the experiment
design is done within this new framework by choosing and fixing the estimation
method to either a maximum likelihood (ML) approach or a Bayesian estimator
such as the minimum mean square error (MMSE). Since the MMSE estimator delivers
a system estimate with lower mean square error (MSE) than the ML estimator for
finite-length experiments, it is usually considered the best choice in practice
in signal processing and control applications. Within the application-oriented
framework a related meaningful question is: Are there end-performance metrics
for which the ML estimator outperforms the MMSE when the experiment is
finite-length? In this paper, we affirmatively answer this question based on a
simple linear Gaussian regression example.
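A minimal Monte Carlo sketch of the baseline fact the abstract relies on: in a scalar linear Gaussian model y = θu + e with a Gaussian prior on θ, the MMSE (posterior-mean) estimator beats the ML (least-squares) estimator in plain mean square error for finite data. The setup below is illustrative and not the paper's exact example; the paper's point is that other end-performance metrics can reverse this ordering.

```python
import numpy as np

rng = np.random.default_rng(0)
m0, s0, se, N = 0.0, 1.0, 1.0, 5      # prior mean/std, noise std, data length
u = np.ones(N)                        # fixed input sequence (illustrative)

mse_ml, mse_mmse = 0.0, 0.0
trials = 20000
for _ in range(trials):
    theta = rng.normal(m0, s0)        # draw the true parameter from the prior
    y = theta * u + rng.normal(0.0, se, N)
    # ML: least squares; MMSE: posterior mean under the Gaussian prior
    th_ml = (u @ y) / (u @ u)
    th_mmse = ((u @ y) / se**2 + m0 / s0**2) / ((u @ u) / se**2 + 1 / s0**2)
    mse_ml += (th_ml - theta) ** 2
    mse_mmse += (th_mmse - theta) ** 2
mse_ml /= trials                      # theory: se^2 / (u @ u) = 0.2
mse_mmse /= trials                    # theory: 1 / (u @ u / se^2 + 1 / s0^2)
```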
On the convergence rate of distributed gradient methods for finite-sum optimization under communication delays
Motivated by applications in machine learning and statistics, we study
distributed optimization problems over a network of processors, where the goal
is to optimize a global objective composed of a sum of local functions. In
these problems, due to the large scale of the data sets, the data and
computation must be distributed over processors resulting in the need for
distributed algorithms. In this paper, we consider a popular distributed
gradient-based consensus algorithm, which only requires local computation and
communication. An important problem in this area is to analyze the convergence
rate of such algorithms in the presence of communication delays that are
inevitable in distributed systems. We prove the convergence of the
gradient-based consensus algorithm in the presence of uniform, but possibly
arbitrarily large, communication delays between the processors. Moreover, we
obtain an upper bound on the rate of convergence of the algorithm as a function
of the network size, topology, and the inter-processor communication delays.
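A hedged sketch of the setting (not the paper's algorithm or its delay model): distributed gradient consensus where every update uses state that is a uniform tau steps stale. Each node i minimizes f_i(x) = (x - c_i)²/2, so the network optimum is the average of c; the ring weights, tau, and step size are invented for the example.

```python
import numpy as np
from collections import deque

n, tau, step = 4, 3, 0.05
c = np.array([1.0, 2.0, 3.0, 4.0])    # local targets; optimum is mean(c) = 2.5

W = np.zeros((n, n))                  # doubly stochastic ring weights
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

x = np.zeros(n)
# buffer of past states; buf[0] is the state from tau steps ago
buf = deque([x.copy() for _ in range(tau + 1)], maxlen=tau + 1)
for _ in range(3000):
    x_old = buf[0]                    # information delayed by tau steps
    x = W @ x_old - step * (x_old - c)  # consensus + gradient on stale state
    buf.append(x.copy())
```

With the fully delayed synchronous update each mode contracts by |μ - step|^(1/(tau+1)) per step, so larger tau slows the rate but does not destroy convergence, in the spirit of the delay-dependent bounds above.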
Model reduction of behavioural systems
We consider model reduction of uncertain behavioural models. Machinery for gap-metric model reduction and multidimensional model reduction using linear matrix inequalities is extended to these behavioural models. The goal is a systematic method for reducing the complexity of uncertain components in hierarchically developed models which approximates the behaviour of the full-order system. This paper focuses on component model reduction that preserves stability under interconnection.
Reducing uncertain systems and behaviors
This paper considers the problem of reducing the dimension of a model for an uncertain system whilst bounding the resulting error. Model reduction methods with guaranteed upper error bounds have previously been established for uncertain systems described by a state-space type realization; specifically, by a linear fractional transformation (LFT) of a constant realization matrix over a structured uncertainty operator. In contrast to traditional 1-D model reduction, where upper bounds on reduction are matched with comparable lower bounds, in the uncertain system problem no lower bounds have previously been established. The computation of both upper and lower bounds is discussed in this paper, including a discussion of the use of Hankel-like matrices. These model reduction methods and error bound computations are then discussed in the context of kernel representations of behavioural uncertain systems.
Realizations of uncertain systems and formal power series
Rational functions of several noncommuting indeterminates arise naturally in robust control when studying systems with structured uncertainty. Linear fractional transformations (LFTs) provide a convenient way of obtaining realizations of such systems, and a complete realization theory of LFTs is emerging. This paper establishes connections between a minimal LFT realization and minimal realizations of a formal power series, which have been studied extensively in a variety of disciplines. The result is a fairly complete generalization of standard minimal realization theory for linear systems to the formal power series and LFT setting.
Mixed µ upper bound computation
Computation of the mixed real and complex µ upper bound expressed in terms of linear matrix inequalities (LMIs) is considered. Two existing methods are used: the Osborne (1960) method for balancing matrices, and the method of centers as proposed by Boyd and El Ghaoui (1991). These methods are compared, and a hybrid algorithm that combines the best features of each is proposed. Numerical experiments suggest that this hybrid algorithm provides an efficient method to compute the upper bound for mixed µ.
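Osborne's balancing iteration itself is simple: a diagonal similarity D A D⁻¹ that equalizes each row's and column's off-diagonal norms, which reduces the Frobenius norm without moving the eigenvalues. A minimal sketch on a made-up badly scaled matrix (the matrix and sweep count are illustrative; the paper combines this with LMI-based methods for µ, which is not shown here):

```python
import numpy as np

def osborne_balance(A, sweeps=200):
    """One-coordinate-at-a-time Osborne balancing of a square matrix."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for _ in range(sweeps):
        for i in range(n):
            mask = np.arange(n) != i
            r = np.linalg.norm(A[i, mask])   # off-diagonal row norm
            c = np.linalg.norm(A[mask, i])   # off-diagonal column norm
            if r > 0 and c > 0:
                d = np.sqrt(c / r)           # makes new row and column norms equal
                A[i, mask] *= d              # similarity: row i scaled by d,
                A[mask, i] /= d              # column i scaled by 1/d
    return A

A = np.array([[1.0, 1e3, 0.0],
              [1e-3, 2.0, 1e2],
              [1.0, 1e-2, 3.0]])             # illustrative badly scaled matrix
B = osborne_balance(A)
```

Each coordinate update replaces r² + c² by 2rc ≤ r² + c² in the off-diagonal Frobenius norm, which is why sweeping converges; the diagonal, trace, and spectrum are untouched.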