Model Reduction of Multi-Dimensional and Uncertain Systems
We present model reduction methods with guaranteed error bounds for systems represented by a Linear Fractional Transformation (LFT) on a repeated scalar uncertainty structure. These reduction methods can be interpreted either as doing state order reduction for multi-dimensional systems, or as uncertainty simplification in the case of uncertain systems, and are based on finding solutions to a pair of Linear Matrix Inequalities (LMIs). A related necessary and sufficient condition for the exact reducibility of stable uncertain systems is also presented.
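The repeated-scalar LFT interconnection the abstract refers to is easy to state concretely. Below is a minimal numpy sketch of evaluating an upper LFT; the matrix values are illustrative, and the reduction algorithm itself (which solves the pair of LMIs) is not shown:

```python
import numpy as np

def upper_lft(M, Delta, n):
    """Evaluate the upper LFT F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12,
    where M is partitioned with an n x n leading block M11."""
    M11, M12 = M[:n, :n], M[:n, n:]
    M21, M22 = M[n:, :n], M[n:, n:]
    inner = np.linalg.inv(np.eye(n) - M11 @ Delta)
    return M22 + M21 @ Delta @ inner @ M12

# Illustrative 2x2 coefficient matrix with a 1x1 uncertainty channel.
M = np.array([[0.5, 1.0],
              [2.0, 3.0]])

# With Delta = 0 the LFT collapses to the nominal block M22.
nominal = upper_lft(M, np.zeros((1, 1)), 1)
# A repeated scalar uncertainty is Delta = delta * I; here delta = 0.2.
val = upper_lft(M, np.array([[0.2]]), 1)
```

Reducing the size of the Delta block while bounding the change in `upper_lft` over all admissible Delta is exactly the uncertainty-simplification reading of the problem.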
A Concentration-Based Approach for Optimizing the Estimation Performance in Stochastic Sensor Selection
In this work, we consider a sensor selection drawn at random by a sampling
with replacement policy for a linear time-invariant dynamical system subject to
process and measurement noise. We employ the Kalman filter to estimate the
state of the system. However, the statistical properties of the filter are not
deterministic due to the stochastic selection of sensors. As a consequence, we
derive concentration inequalities to bound the estimation error covariance of
the Kalman filter in the semi-definite sense. Concentration inequalities
provide a framework for deriving semi-definite bounds that hold in a
probabilistic sense. Our main contributions are three-fold. First, we develop
algorithmic tools to aid in the implementation of a matrix concentration
inequality. Second, we derive concentration-based bounds for three types of
stochastic selections. Third, we propose a polynomial-time procedure for
finding a sampling distribution that indirectly minimizes the maximum
eigenvalue of the estimation error covariance. Our proposed sampling policy is
also shown to empirically outperform three other sampling policies: uniform,
deterministic greedy, and randomized greedy.
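The sampling-with-replacement setup can be simulated directly. The sketch below (system matrices, sensor set, and the uniform sampling distribution are hypothetical placeholders, not the paper's optimized policy) tracks the maximum eigenvalue of the Kalman error covariance under random sensor selection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical LTI system x_{k+1} = A x_k + w_k; at each step one sensor is
# drawn i.i.d. (with replacement) from distribution p over the candidate set.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
Q = 0.01 * np.eye(2)                      # process noise covariance
C = [np.array([[1.0, 0.0]]),              # candidate sensor rows
     np.array([[0.0, 1.0]])]
R = 0.1                                   # scalar measurement noise variance
p = np.array([0.5, 0.5])                  # sampling distribution over sensors

P = np.eye(2)                             # initial error covariance
max_eigs = []
for _ in range(200):
    i = rng.choice(len(C), p=p)           # stochastic sensor selection
    Ci = C[i]
    Pp = A @ P @ A.T + Q                  # prediction step
    K = Pp @ Ci.T / (Ci @ Pp @ Ci.T + R)  # Kalman gain for the drawn sensor
    P = (np.eye(2) - K @ Ci) @ Pp         # measurement update
    max_eigs.append(np.linalg.eigvalsh(P).max())

# Empirical proxy for the quantity the concentration bounds control:
worst = max(max_eigs)
```

Optimizing `p` so that a high-probability bound on `worst` is minimized is the role of the polynomial-time procedure described in the abstract.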
Oracle Inequalities and Optimal Inference under Group Sparsity
We consider the problem of estimating a sparse linear regression vector β* under a Gaussian noise model, for the purpose of both prediction and model selection. We assume that prior knowledge is available on the sparsity pattern, namely the set of variables is partitioned into prescribed groups, only few of which are relevant in the estimation process. This group sparsity assumption motivates us to consider the Group Lasso method as a means to estimate β*. We establish oracle inequalities for the prediction and ℓ2 estimation errors of this estimator. These bounds hold under a restricted eigenvalue condition on the design matrix. Under a stronger coherence condition, we derive bounds for the estimation error for mixed ℓ2,p-norms with 1 ≤ p ≤ ∞. When p = ∞, this result implies that a thresholded version of the Group Lasso estimator selects the sparsity pattern of β* with high probability. Next, we prove that the rate of convergence of our upper bounds is optimal in a minimax sense, up to a logarithmic factor, for all estimators over a class of group sparse vectors. Furthermore, we establish lower bounds for the prediction and ℓ2 estimation errors of the usual Lasso estimator. Using this result, we demonstrate that the Group Lasso can achieve an improvement in the prediction and estimation properties as compared to the Lasso.
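The group-sparsity mechanism behind the Group Lasso is visible in its proximal operator, which shrinks or zeroes whole groups at once. A minimal numpy sketch; the penalty level and grouping below are illustrative, not the paper's tuned choices:

```python
import numpy as np

def group_soft_threshold(beta, groups, tau):
    """Block-wise soft-thresholding: the proximal operator of the Group Lasso
    penalty tau * sum_g ||beta_g||_2. `groups` is a list of index arrays."""
    out = np.zeros_like(beta)
    for g in groups:
        norm = np.linalg.norm(beta[g])
        if norm > tau:
            # Shrink the whole group toward zero by a common factor.
            out[g] = (1.0 - tau / norm) * beta[g]
        # Groups with norm <= tau are set to zero entirely.
    return out

beta = np.array([3.0, 4.0, 0.1, 0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
shr = group_soft_threshold(beta, groups, tau=1.0)
# Group (0,1) has norm 5 and shrinks by factor 0.8; group (2,3) has norm
# ~0.14 < tau and is zeroed -- the group-level sparsity effect.
```

Iterating this operator inside a proximal-gradient loop yields one standard way to compute the Group Lasso estimator whose oracle inequalities the abstract establishes.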
Extended balancing of continuous LTI systems: A structure-preserving approach
In this paper, we treat extended balancing for continuous-time linear time-invariant systems. We take a dissipativity perspective, thus resulting in a characterization in terms of linear matrix inequalities. This perspective is useful for determining a priori error bounds. In addition, we address the problem of structure-preserving model reduction of the subclass of port-Hamiltonian systems. We establish sufficient conditions to ensure that the reduced-order model preserves a port-Hamiltonian structure. Moreover, we show how extended Gramians can be exploited to obtain a small error bound and, possibly, to preserve a physical interpretation for the reduced-order model. We illustrate the results with a large-scale mechanical system example. Furthermore, we show how to interpret a reduced-order model of an electrical circuit again as a lower-dimensional electrical circuit.
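Classical balanced truncation, which extended balancing generalizes, already illustrates the kind of a priori error bound mentioned above. A minimal scipy sketch on a hypothetical stable system; this uses the ordinary Gramians, not the extended Gramians of the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable SISO system with one fast, weakly coupled state.
A = np.diag([-1.0, -2.0, -50.0])
B = np.array([[1.0], [1.0], [0.1]])
C = np.array([[1.0, 1.0, 0.1]])

# Controllability and observability Gramians from Lyapunov equations:
#   A Wc + Wc A^T + B B^T = 0,   A^T Wo + Wo A + C^T C = 0
Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# Hankel singular values; the classical a priori bound for truncation to
# order r is ||G - G_r||_Hinf <= 2 * (sum of discarded HSVs).
hsv = np.sort(np.sqrt(np.linalg.eigvals(Wc @ Wo).real))[::-1]
r = 2
error_bound = 2.0 * hsv[r:].sum()
```

Extended balancing replaces the Lyapunov equations with LMIs in extended Gramians, giving extra freedom that can both tighten this bound and preserve structure such as the port-Hamiltonian form.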
Lecture 14: Randomized Algorithms for Least Squares Problems
The emergence of massive data sets, over the past twenty or so years, has led to the development of Randomized Numerical Linear Algebra. Randomized matrix algorithms perform random sketching and sampling of rows or columns, in order to reduce the problem dimension or compute low-rank approximations. We review randomized algorithms for the solution of least squares/regression problems, based on row sketching from the left, or column sketching from the right. These algorithms tend to be efficient and accurate on matrices that have many more rows than columns. We present probabilistic bounds for the amount of sampling required to achieve a user-specified error tolerance. Along the way we illustrate important concepts from numerical analysis (conditioning and pre-conditioning), probability (coherence, concentration inequalities), and statistics (sampling and leverage scores). Numerical experiments illustrate that the bounds are informative even for small problem dimensions and stringent success probabilities. To stress-test the bounds, we present algorithms that generate 'adversarial' matrices for user-specified coherence and leverage scores. If time permits, we discuss the additional effect of uncertainties from the underlying Gaussian linear model in a regression problem.
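Row sketching from the left with leverage-score sampling can be sketched in a few lines of numpy. The problem sizes, sample count, and random data below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tall least-squares problem min_x ||Ax - b||_2 (many more rows than columns).
m, n = 2000, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Leverage scores are the squared row norms of an orthonormal basis for
# range(A); they sum to n, so normalizing gives a sampling distribution.
Q, _ = np.linalg.qr(A)
lev = np.sum(Q**2, axis=1)
probs = lev / n

# Sample c rows with replacement and rescale to keep the sketch unbiased.
c = 200
idx = rng.choice(m, size=c, p=probs)
scale = 1.0 / np.sqrt(c * probs[idx])
SA = scale[:, None] * A[idx]
Sb = scale * b[idx]

x_sketch, *_ = np.linalg.lstsq(SA, Sb, rcond=None)
x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
# Residual of the sketched solution relative to the optimal residual;
# it is at least 1 and typically close to 1 for sufficiently large c.
rel_err = np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_full - b)
```

The probabilistic bounds discussed in the lecture quantify how large `c` must be, in terms of coherence and the desired failure probability, for `rel_err` to stay near 1.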
Optimal object configurations to minimize the positioning error in visual servoing
Image noise unavoidably affects the available image points that are used in visual-servoing schemes to steer a robot end-effector toward a desired location. As a consequence, letting the image points in the current view converge to those in the desired view does not ensure that the camera converges accurately to the desired location. This paper investigates the selection of object configurations to minimize the worst-case positioning error due to the presence of image noise. In particular, a strategy based on linear matrix inequalities (LMIs) and barrier functions is proposed to compute upper and lower bounds of this error for a given maximum error of the image points. This strategy can be applied to problems such as selecting an optimal subset of object points or determining an optimal position of an object in the scene. Some examples illustrate the use of the proposed strategy in such problems. © 2010 IEEE.
Reliable H∞ filtering for discrete time-delay systems with randomly occurred nonlinearities via delay-partitioning method
In this paper, the reliable H∞ filtering problem is investigated for a class of uncertain discrete time-delay systems with randomly occurred nonlinearities (RONs) and sensor failures. RONs are introduced to model a class of sector-like nonlinearities that occur in a probabilistic way according to a Bernoulli distributed white sequence with a known conditional probability. The failures of sensors are quantified by a variable varying in a given interval. The time-varying delay is unknown with given lower and upper bounds. The aim of the addressed reliable H∞ filtering problem is to design a filter such that, for all possible sensor failures, RONs, time-delays as well as admissible parameter uncertainties, the filtering error dynamics is asymptotically mean-square stable and also achieves a prescribed H∞ performance level. Sufficient conditions for the existence of such a filter are obtained by using a new Lyapunov–Krasovskii functional and delay-partitioning technique. The filter gains are characterized in terms of the solution to a set of linear matrix inequalities (LMIs). A numerical example is given to demonstrate the effectiveness of the proposed design approach.
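The RON model, a Bernoulli-gated sector-bounded nonlinearity, is straightforward to simulate. A minimal numpy sketch with a hypothetical stable system; delays, sensor failures, and the H∞ filter design itself are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

def sector_nonlinearity(x):
    # Lies componentwise in the sector [0, 1]: f(x)/x is between 0.5 and 1.
    return 0.5 * (x + np.tanh(x))

# Hypothetical stable discrete-time system with a randomly occurring
# nonlinearity gated by a Bernoulli white sequence alpha_k with P(alpha_k=1)=p.
A = np.array([[0.8, 0.1],
              [0.0, 0.7]])
p = 0.3

x = np.array([1.0, -1.0])
traj = [x]
for _ in range(100):
    alpha = rng.random() < p                       # Bernoulli white sequence
    x = A @ x + (alpha * 0.1) * sector_nonlinearity(x)
    traj.append(x)
final_norm = np.linalg.norm(traj[-1])
```

With a stable `A` and a small sector-bounded perturbation the trajectory decays; the LMI conditions in the paper certify mean-square stability of the filtering error dynamics in exactly this stochastic-gating setting.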