A Bayesian approach to robust identification: application to fault detection
In the Control Engineering field, the so-called Robust Identification techniques deal with the problem of obtaining not only a nominal model of the plant, but also an estimate of the uncertainty associated with the nominal model. This uncertainty is typically characterized as a region in the parameter space or as an uncertainty band around the frequency response of the nominal model.
Uncertainty models have been widely used in the design of robust controllers and, more recently, their use in model-based fault detection procedures has been increasing. In the latter case, consistency between new measurements and the uncertainty region is checked; when an inconsistency is found, the existence of a fault is declared.
There are two main approaches to the modeling of model uncertainty: deterministic/worst-case methods and stochastic/probabilistic methods. At present, a number of different methods exist, e.g., model error modeling, set-membership identification and non-stationary stochastic embedding. In this dissertation we summarize the main procedures and illustrate their results by means of several examples from the literature.
As a contribution, we propose a Bayesian methodology to solve the robust identification problem. The approach is highly unifying, since many robust identification techniques can be interpreted as particular cases of the Bayesian framework. Moreover, the methodology can handle non-linear structures such as those derived from the use of observers. The obtained Bayesian uncertainty models are used to detect faults in a quadruple-tank process and in a three-bladed wind turbine.
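The consistency test described above can be sketched in a few lines. This is a minimal illustration with a hypothetical scalar gain model and made-up numbers, not the dissertation's actual plant or posterior: a measurement is declared faulty when it falls outside a posterior predictive interval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar model y = theta * u + noise; the posterior over the
# gain theta is represented by samples (e.g., obtained via MCMC).
theta_samples = rng.normal(loc=2.0, scale=0.1, size=5000)
noise_std = 0.05

def is_consistent(u, y, credibility=0.99):
    """Fault test: is the measurement (u, y) inside the posterior
    predictive interval? An inconsistency signals a fault."""
    y_pred = theta_samples * u + rng.normal(0.0, noise_std, theta_samples.size)
    alpha = (1.0 - credibility) / 2.0
    lo, hi = np.quantile(y_pred, [alpha, 1.0 - alpha])
    return bool(lo <= y <= hi)

print(is_consistent(1.0, 2.02))  # healthy measurement -> True
print(is_consistent(1.0, 1.40))  # drifted gain -> False
```

The Bayesian framing makes the uncertainty region a by-product of identification: the same posterior serves both for robust control design and for the fault-detection consistency check.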
Information and Decision Theoretic Approaches to Problems in Active Diagnosis.
In applications such as active learning or disease/fault diagnosis, one often encounters the problem of identifying an unknown object while minimizing the number of "yes" or "no" questions (queries) posed about that object. This problem is commonly referred to as object/entity identification or active diagnosis in the literature. In this thesis, we consider several extensions of this fundamental problem that are motivated by practical considerations in real-world, time-critical identification tasks such as emergency response.
First, we consider the problem where the objects are partitioned into groups, and the goal is to identify only the group to which the object belongs. We then consider the case where the cost of identifying an object grows exponentially in the number of queries. To address these problems we show that a standard algorithm for object identification, known as the splitting algorithm or generalized binary search (GBS), may be viewed as a generalization of Shannon-Fano coding. We then extend this result to the group-based and the exponential cost settings, leading to new, improved algorithms.
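The splitting algorithm / GBS referred to above can be sketched as follows; the bit-query toy instance is a hypothetical illustration, not one of the thesis's datasets. At each step the algorithm poses the query that splits the remaining candidate set most evenly, mirroring how Shannon-Fano coding halves probability mass.

```python
def gbs_identify(objects, queries, answer):
    """Splitting algorithm / generalized binary search (GBS): repeatedly
    pose the query whose yes/no split of the remaining candidates is most
    balanced, cutting the version space roughly in half each time."""
    candidates = set(objects)
    n_queries = 0
    while len(candidates) > 1:
        def imbalance(q):
            yes = sum(1 for o in candidates if q(o))
            return abs(2 * yes - len(candidates))
        query = min(queries, key=imbalance)   # most balanced split
        truth = answer(query)                 # pose query about unknown object
        candidates = {o for o in candidates if query(o) == truth}
        n_queries += 1
    return candidates.pop(), n_queries

# Toy instance: 8 objects, queries test individual bits of the object's index.
objects = range(8)
queries = [lambda o, b=b: (o >> b) & 1 == 1 for b in range(3)]
hidden = 5
obj, n = gbs_identify(objects, queries, answer=lambda q: q(hidden))
print(obj, n)  # identifies object 5 in 3 queries (= log2(8))
```

With perfectly balanced queries, as here, GBS matches the information-theoretic lower bound of log2(|objects|) queries; the thesis's group-based and exponential-cost extensions modify the splitting criterion.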
We then study the problem of active diagnosis under persistent query noise. Previous work in this area assumed either that the noise is independent or that the underlying query noise distribution is completely known. We make no such assumptions, and introduce an algorithm that returns a ranked list of objects such that the expected rank of the true object is optimized. Finally, we study the problem of active diagnosis where multiple objects are present, as in disease/fault diagnosis. Current algorithms in this area have exponential time complexity, making them slow and intractable. We address this issue by proposing an extension of our rank-based approach to the multiple-object scenario, where we optimize the area under the ROC curve (AUC) of the rank-based output. The AUC criterion allows us to make a simplifying assumption that significantly reduces the complexity of active diagnosis (from exponential to near quadratic), with little or no compromise in performance quality. Further, we demonstrate the performance of the proposed algorithms through extensive experiments on both synthetic and real-world datasets.
Ph.D., Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/91606/1/gowtham_1.pd
Simple Approximations of Semialgebraic Sets and their Applications to Control
Many uncertainty sets encountered in control systems analysis and design can be expressed in terms of semialgebraic sets, that is, as the intersection of sets described by means of polynomial inequalities. Important examples are, for instance, the solution set of linear matrix inequalities or the Schur/Hurwitz stability domains. These sets often have very complicated shapes (non-convex, and even non-connected), which makes them very difficult to manipulate. It is therefore of considerable importance to find simple-enough approximations of these sets, able to capture their main characteristics while maintaining a low level of complexity. For these reasons, several convex approximations, based for instance on hyperrectangles, polytopes, or ellipsoids, have been proposed in past years. In this work, we move a step further and propose possibly non-convex approximations, based on a small-volume polynomial superlevel set of a single positive polynomial of given degree. We show how these sets can be easily approximated by minimizing the L1 norm of the polynomial over the semialgebraic set, subject to positivity constraints. Intuitively, this corresponds to the trace minimization heuristic commonly encountered in minimum-volume ellipsoid problems. From a computational viewpoint, we design a hierarchy of linear matrix inequality problems to generate these approximations, and we provide theoretically rigorous convergence results, in the sense that the hierarchy of outer approximations converges in volume (or, equivalently, almost everywhere and almost uniformly) to the original set. Two main applications of the proposed approach are considered. The first aims at the reconstruction/approximation of sets from a finite number of samples. In the second, we show how the concept of a polynomial superlevel set can be used to generate samples uniformly distributed on a given semialgebraic set. The efficiency of the proposed approach is demonstrated by several numerical examples.
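The L1-norm formulation can be illustrated with a deliberately simplified sketch. Here the integral and the positivity constraints are discretized on a grid and solved as a plain linear program; the abstract's actual method uses an LMI hierarchy, which this sketch does not implement, and the 1-D set, box, and degree are hypothetical choices.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 1-D instance: approximate the semialgebraic set
#   K = {x : (x^2 - 0.04)(0.64 - x^2) >= 0} = [-0.8, -0.2] U [0.2, 0.8]
# inside the box B = [-1, 1] by the superlevel set {x : p(x) >= 1} of a
# degree-4 polynomial p.  The L1 norm of p over B is replaced by a grid
# sum, and positivity by per-grid-point constraints, giving an LP.
grid = np.linspace(-1.0, 1.0, 401)
in_K = (grid**2 - 0.04) * (0.64 - grid**2) >= 0

degree = 4
Phi = np.vander(grid, degree + 1)     # monomials x^4 ... x^0 at grid points

# minimize sum_i p(x_i)  s.t.  p >= 1 on K,  p >= 0 on the whole box
cost = Phi.sum(axis=0)
res = linprog(cost,
              A_ub=-Phi,
              b_ub=np.where(in_K, -1.0, 0.0),
              bounds=[(None, None)] * (degree + 1))

p_on_K = np.polyval(res.x, grid[in_K])
print(res.success, p_on_K.min() >= 1 - 1e-6)
```

Minimizing the grid sum of a nonnegative p shrinks the superlevel set {p >= 1} around K, in the same spirit as the L1-norm objective shrinks the approximation's volume in the paper's continuous formulation.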
Linear Estimation in Interconnected Sensor Systems with Information Constraints
A ubiquitous challenge in many technical applications is to estimate an unknown state by means of data that stems from several, often heterogeneous sensor sources. In this book, information is interpreted stochastically, and techniques for the distributed processing of data are derived that minimize the error of estimates about the unknown state. Methods for the reconstruction of dependencies are proposed and novel approaches for the distributed processing of noisy data are developed
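A minimal example of linear estimate fusion in this spirit is inverse-covariance (information-form) weighting of two unbiased estimates. The numbers are hypothetical, and this sketch assumes the two sensors' errors are independent; the book's methods for reconstructing unknown dependencies are not implemented here.

```python
import numpy as np

def fuse_independent(x1, P1, x2, P2):
    """Minimum-variance linear fusion of two independent, unbiased
    estimates: weight each by its inverse covariance (information form)."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)       # fused covariance
    x = P @ (I1 @ x1 + I2 @ x2)      # fused mean
    return x, P

# Two sensors estimate the same 2-D state; each is accurate in a
# different component.
x1, P1 = np.array([1.0, 0.0]), np.diag([0.5, 2.0])
x2, P2 = np.array([1.2, 0.4]), np.diag([2.0, 0.5])
x, P = fuse_independent(x1, P1, x2, P2)
print(x, np.diag(P))  # fused variances are below either input's
```

When the independence assumption fails, for example because sensors share process noise, this fusion rule is overconfident; conservative alternatives such as covariance intersection, or explicit reconstruction of the dependencies, are then needed.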
Reassessing the Paradigms of Statistical Model-Building
Statistical model-building is the science of constructing models from data and from information about the data-generation process, with the aim of analysing those data and drawing inference from that analysis. Many statistical tasks are undertaken during this analysis; they include classification, forecasting, prediction and testing. Model-building has assumed substantial importance, as new technologies enable data on highly complex phenomena to be gathered in very large quantities. This creates a demand for more complex models, and requires the model-building process itself to be adaptive. The word “paradigm” refers to philosophies, frameworks and methodologies for developing and interpreting statistical models, in the context of data, and applying them for inference. In order to solve contemporary statistical problems it is often necessary to combine techniques from previously separate paradigms. The workshop addressed model-building paradigms that are at the frontiers of modern statistical research. It tried to create synergies, by delineating the connections and collisions among different paradigms. It also endeavoured to shape the future evolution of paradigms