    Model Reduction of Multi-Dimensional and Uncertain Systems

    We present model reduction methods with guaranteed error bounds for systems represented by a Linear Fractional Transformation (LFT) on a repeated scalar uncertainty structure. These reduction methods can be interpreted either as state-order reduction for multi-dimensional systems or as uncertainty simplification in the case of uncertain systems, and are based on finding solutions to a pair of Linear Matrix Inequalities (LMIs). A related necessary and sufficient condition for the exact reducibility of stable uncertain systems is also presented.
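
    For a flavor of the LMI machinery, the sketch below specializes to the simplest instance of the framework, an ordinary one-dimensional LTI system, where the pair of LMIs reduces to Lyapunov-type inequalities characterizing generalized Gramians. This is a minimal sketch, assuming cvxpy with an SDP-capable solver such as SCS; the system matrices are illustrative placeholders, not taken from the paper.

```python
# Minimal sketch: generalized Gramians from a pair of Lyapunov-type LMIs,
# then Hankel-like singular values to decide which states to truncate.
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 0.5], [0.0, -2.0]])   # placeholder stable system
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
eps = 1e-6

# Generalized controllability Gramian P and observability Gramian Q,
# each characterized by a Lyapunov-type LMI.
P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((n, n), symmetric=True)
constraints = [
    P >> eps * np.eye(n),
    Q >> eps * np.eye(n),
    A @ P + P @ A.T + B @ B.T << 0,
    A.T @ Q + Q @ A + C.T @ C << 0,
]
# Minimizing trace(P) + trace(Q) tightens the Gramians and hence the
# a priori error bound of the truncated model.
prob = cp.Problem(cp.Minimize(cp.trace(P) + cp.trace(Q)), constraints)
prob.solve(solver=cp.SCS)

# Small singular values indicate states that can be truncated cheaply.
sigma = np.sqrt(np.linalg.eigvals(P.value @ Q.value).real)
print(np.sort(sigma)[::-1])
```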

    A Concentration-Based Approach for Optimizing the Estimation Performance in Stochastic Sensor Selection

    In this work, we consider sensor selections drawn at random by a sampling-with-replacement policy for a linear time-invariant dynamical system subject to process and measurement noise. We employ the Kalman filter to estimate the state of the system; however, the statistical properties of the filter are not deterministic due to the stochastic selection of sensors. We therefore derive concentration inequalities to bound the estimation error covariance of the Kalman filter in the semi-definite sense; concentration inequalities provide a framework for deriving such semi-definite bounds that hold in a probabilistic sense. Our main contributions are three-fold. First, we develop algorithmic tools to aid in the implementation of a matrix concentration inequality. Second, we derive concentration-based bounds for three types of stochastic selections. Third, we propose a polynomial-time procedure for finding a sampling distribution that indirectly minimizes the maximum eigenvalue of the estimation error covariance. Our proposed sampling policy is also shown to empirically outperform three other sampling policies: uniform, deterministic greedy, and randomized greedy.
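
    As a rough illustration of the setting (not the paper's algorithm), the sketch below samples sensors with replacement from a distribution p at every step, runs the corresponding Kalman covariance recursion, and records the top eigenvalue of the resulting error covariance across trials. The system matrices, sampling distribution, and trial counts are illustrative assumptions.

```python
# Minimal sketch: Kalman covariance recursion under random sensor selection.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])               # state transition
W = 0.01 * np.eye(2)                                 # process noise covariance
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 candidate sensors
V = 0.1                                              # per-sensor noise variance
p = np.array([0.5, 0.3, 0.2])                        # sampling distribution
k = 2                                                # sensors drawn per step

def riccati_step(P, rows):
    """One Kalman predict/update with the randomly selected sensor rows."""
    Ck = C[rows]
    Rk = V * np.eye(len(rows))
    P_pred = A @ P @ A.T + W
    K = P_pred @ Ck.T @ np.linalg.inv(Ck @ P_pred @ Ck.T + Rk)
    return (np.eye(2) - K @ Ck) @ P_pred

# Repeat over many random selection sequences to see how the top eigenvalue
# of the error covariance concentrates.
tops = []
for _ in range(200):
    P = np.eye(2)
    for _ in range(100):
        rows = rng.choice(3, size=k, replace=True, p=p)
        P = riccati_step(P, rows)
    tops.append(np.linalg.eigvalsh(P).max())
print("mean/max of top eigenvalue:", np.mean(tops), np.max(tops))
```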

    Oracle Inequalities and Optimal Inference under Group Sparsity

    We consider the problem of estimating a sparse linear regression vector $\beta^*$ under a Gaussian noise model, for the purpose of both prediction and model selection. We assume that prior knowledge is available on the sparsity pattern, namely the set of variables is partitioned into prescribed groups, only few of which are relevant in the estimation process. This group sparsity assumption motivates us to consider the Group Lasso method as a means to estimate $\beta^*$. We establish oracle inequalities for the prediction and $\ell_2$ estimation errors of this estimator. These bounds hold under a restricted eigenvalue condition on the design matrix. Under a stronger coherence condition, we derive bounds for the estimation error for mixed $(2,p)$-norms with $1 \le p \le \infty$. When $p = \infty$, this result implies that a thresholded version of the Group Lasso estimator selects the sparsity pattern of $\beta^*$ with high probability. Next, we prove that the rate of convergence of our upper bounds is optimal in a minimax sense, up to a logarithmic factor, for all estimators over a class of group-sparse vectors. Furthermore, we establish lower bounds for the prediction and $\ell_2$ estimation errors of the usual Lasso estimator. Using this result, we demonstrate that the Group Lasso can achieve an improvement in the prediction and estimation properties as compared to the Lasso.
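
    The Group Lasso estimator itself is easy to sketch via proximal gradient descent with block soft-thresholding; the implementation below is a minimal illustration with an assumed design matrix, grouping, and regularization weight, not the paper's setup.

```python
# Minimal sketch: Group Lasso via proximal gradient (block soft-thresholding).
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=500):
    """Minimize 0.5*||y - X b||^2 + lam * sum_g ||b_g||_2."""
    n, d = X.shape
    b = np.zeros(d)
    step = 1.0 / np.linalg.norm(X, 2) ** 2    # 1/L for the smooth part
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)
        z = b - step * grad
        # Prox of the group penalty: shrink each group's norm by lam*step.
        for g in groups:
            norm_g = np.linalg.norm(z[g])
            z[g] = max(0.0, 1.0 - lam * step / norm_g) * z[g] if norm_g > 0 else 0.0
        b = z
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 12))
beta_true = np.zeros(12)
beta_true[0:3] = 2.0                          # only the first group is active
y = X @ beta_true + 0.1 * rng.standard_normal(100)
groups = [range(0, 3), range(3, 6), range(6, 9), range(9, 12)]
print(np.round(group_lasso(X, y, groups, lam=5.0), 2))
```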

    Extended balancing of continuous LTI systems: A structure-preserving approach

    In this paper, we treat extended balancing for continuous-time linear time-invariant systems. We take a dissipativity perspective, resulting in a characterization in terms of linear matrix inequalities. This perspective is useful for determining a priori error bounds. In addition, we address the problem of structure-preserving model reduction for the subclass of port-Hamiltonian systems. We establish sufficient conditions to ensure that the reduced-order model preserves a port-Hamiltonian structure. Moreover, we show that the use of extended Gramians can be exploited to obtain a small error bound and, possibly, to preserve a physical interpretation for the reduced-order model. We illustrate the results with a large-scale mechanical system example. Furthermore, we show how to interpret a reduced-order model of an electrical circuit again as a lower-dimensional electrical circuit.
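
    For context, the sketch below implements standard balanced truncation with its classical a priori H-infinity error bound (twice the sum of the discarded Hankel singular values), the baseline that extended Gramians refine. It uses SciPy's Lyapunov solver on an illustrative placeholder system.

```python
# Minimal sketch: balanced truncation and its a priori error bound.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

A = np.diag([-1.0, -2.0, -5.0, -10.0])        # placeholder stable system
B = np.ones((4, 1))
C = np.ones((1, 4))

# Controllability and observability Gramians from Lyapunov equations:
#   A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

R = cholesky(P, lower=True)                   # P = R R^T
_, s, _ = svd(R.T @ Q @ R)
sigma = np.sqrt(s)                            # Hankel singular values

r = 2                                         # reduced order
# Twice the sum of the discarded Hankel singular values bounds the
# H-infinity error of the truncated model.
print("Hankel singular values:", np.round(sigma, 4))
print("a priori error bound:", 2 * sigma[r:].sum())
```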

    Lecture 14: Randomized Algorithms for Least Squares Problems

    The emergence of massive data sets over the past twenty or so years has led to the development of Randomized Numerical Linear Algebra. Randomized matrix algorithms perform random sketching and sampling of rows or columns in order to reduce the problem dimension or compute low-rank approximations. We review randomized algorithms for the solution of least squares/regression problems, based on row sketching from the left or column sketching from the right. These algorithms tend to be efficient and accurate on matrices that have many more rows than columns. We present probabilistic bounds for the amount of sampling required to achieve a user-specified error tolerance. Along the way we illustrate important concepts from numerical analysis (conditioning and pre-conditioning), probability (coherence, concentration inequalities), and statistics (sampling and leverage scores). Numerical experiments illustrate that the bounds are informative even for small problem dimensions and stringent success probabilities. To stress-test the bounds, we present algorithms that generate 'adversarial' matrices for user-specified coherence and leverage scores. If time permits, we discuss the additional effect of uncertainties from the underlying Gaussian linear model in a regression problem.
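
    The sketch below illustrates one of the surveyed schemes, leverage-score row sampling for an overdetermined least-squares problem; the matrix dimensions and sample size are illustrative assumptions.

```python
# Minimal sketch: leverage-score row sampling for least squares.
import numpy as np

rng = np.random.default_rng(0)
m, n, c = 5000, 20, 400                       # tall matrix, c sampled rows

A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Leverage scores: squared row norms of an orthonormal basis Q of range(A).
Q, _ = np.linalg.qr(A)
lev = np.sum(Q ** 2, axis=1)
probs = lev / lev.sum()

# Sample rows with replacement, rescaling by 1/sqrt(c * p_i) so the
# sketched problem is an unbiased estimate of the full one.
idx = rng.choice(m, size=c, replace=True, p=probs)
scale = 1.0 / np.sqrt(c * probs[idx])
As, bs = A[idx] * scale[:, None], b[idx] * scale

x_sketch = np.linalg.lstsq(As, bs, rcond=None)[0]
x_full = np.linalg.lstsq(A, b, rcond=None)[0]
print("relative error:", np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```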

    Optimal object configurations to minimize the positioning error in visual servoing

    Image noise unavoidably affects the available image points that are used in visual-servoing schemes to steer a robot end-effector toward a desired location. As a consequence, letting the image points in the current view converge to those in the desired view does not ensure that the camera converges accurately to the desired location. This paper investigates the selection of object configurations to minimize the worst-case positioning error due to the presence of image noise. In particular, a strategy based on linear matrix inequalities (LMIs) and barrier functions is proposed to compute upper and lower bounds of this error for a given maximum error of the image points. This strategy can be applied to problems such as selecting an optimal subset of object points or determining an optimal position of an object in the scene. Some examples illustrate the use of the proposed strategy in such problems. © 2010 IEEE.
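
    The paper's LMI-and-barrier-function computation is beyond a short sketch, but the flavor of worst-case analysis can be shown on a toy problem: for a linear pose estimator, the worst-case error over a box of bounded image noise is attained at a vertex of the box, so enumerating vertices gives the exact bound. Everything in the sketch below (object points, toy estimator) is a hypothetical assumption, not the paper's method.

```python
# Toy sketch: exact worst-case positioning error of a linear estimator
# over a box of bounded image noise, by vertex enumeration.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n_pts = 4
pts = rng.uniform(-1, 1, size=(n_pts, 2))   # planar object points
eps = 0.01                                  # max per-coordinate image error

def estimate_translation(img_pts):
    """Toy linear estimator: translation = mean displacement of the points."""
    return (img_pts - pts).mean(axis=0)

# Enumerate all 2^(2*n_pts) vertices of the image-noise box.
worst = 0.0
for signs in product([-eps, eps], repeat=2 * n_pts):
    noise = np.array(signs).reshape(n_pts, 2)
    err = np.linalg.norm(estimate_translation(pts + noise))
    worst = max(worst, err)
print("worst-case positioning error:", worst)   # equals eps * sqrt(2) here
```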