Robust fault detection using consistency techniques for uncertainty handling
In practice, the performance of analytical redundancy for fault detection and diagnosis is often degraded by uncertainties not only in the system model but also in the measurements. In this paper, the problem of fault detection is stated as a constraint satisfaction problem over continuous domains with a large number of variables and constraints. This problem can be solved using modal interval analysis and consistency techniques. Consistency techniques are then shown to be particularly efficient for checking the consistency of the analytical redundancy relations (ARRs) while dealing with uncertain measurements and parameters. Through the work presented in this paper, it can be observed that consistency techniques can increase the performance of a robust fault detection tool based on interval arithmetic. The proposed method is illustrated using a nonlinear dynamic model of a hydraulic system.
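The consistency test at the heart of this approach can be sketched with ordinary (non-modal) interval arithmetic: an ARR is consistent with the fault-free model if its interval residual contains zero. The model, gain bounds, and measurements below are illustrative and are not taken from the paper's hydraulic system.

```python
# Illustrative sketch (not the paper's tool): checking an analytical
# redundancy relation (ARR) with plain interval arithmetic.  The ARR
# r = y - k*u is consistent with a fault-free system if the interval
# residual contains zero; the gain bounds and measurements are made up.

def interval_mul(a, b):
    """Product of two intervals given as (lo, hi) tuples."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def interval_sub(a, b):
    """Difference of two intervals: [a] - [b] = [a_lo - b_hi, a_hi - b_lo]."""
    return (a[0] - b[1], a[1] - b[0])

def arr_consistent(y, u, k):
    """True if 0 lies in the residual interval r = y - k*u."""
    r = interval_sub(y, interval_mul(k, u))
    return r[0] <= 0.0 <= r[1]

# Uncertain gain k in [1.9, 2.1], measured input u in [0.95, 1.05].
k = (1.9, 2.1)
u = (0.95, 1.05)

print(arr_consistent((1.9, 2.2), u, k))   # residual contains 0 -> no fault detected
print(arr_consistent((3.0, 3.1), u, k))   # residual excludes 0 -> fault
```

The conservative interval residual is exactly why consistency techniques matter: they tighten the domains so that genuine faults are not masked by the overestimation of plain interval arithmetic.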
Some topics in the analysis of spherical data.
This thesis is concerned with the statistical analysis of directions in 3 dimensions. An important reference is the book by Mardia (1972). At the time of publication of this book, the repertoire of spherical distributions used for modelling purposes was rather limited, and there was clearly a need to investigate other possibilities. In the last few years there has been some interest in the 8-parameter family of distributions mentioned by Mardia (1975), which is known as the Fisher-Bingham family.
In Chapter 1 an outline of the thesis is given. The Fisher-Bingham family is discussed in Chapter 2, and an effective method for calculating the normalising constant is presented. Attention is then focussed on an interesting 6-parameter subfamily, and a simple rule is given for classifying the distributions in this subfamily according to type (unimodal, bimodal, 'closed curve'). Estimation and inference are then discussed, and the chapter is concluded with a numerical example.
In Chapter 3, the family of bimodal distributions presented in Wood (1982) is described. Other bimodal models are also mentioned briefly.
The problem of simulating Fisher-Bingham distributions is considered in Chapter 4. Some inequalities are derived and then used to construct suitable envelopes so that an acceptance-rejection procedure can be used.
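The acceptance-rejection idea described for Chapter 4 can be illustrated on a textbook target (the Fisher-Bingham envelopes themselves are not reproduced here): drawing half-normal variates by rejection from an Exp(1) envelope, with envelope constant M = sqrt(2e/pi) bounding f(x)/g(x) on [0, inf).

```python
# A generic acceptance-rejection sampler: propose from the envelope g,
# accept with probability f(x) / (M * g(x)).  The target here is the
# half-normal density f(x) = sqrt(2/pi) * exp(-x^2 / 2), a standard
# textbook example rather than a Fisher-Bingham distribution.

import math
import random

def half_normal(rng=random):
    """Sample |Z|, Z ~ N(0, 1), by rejection from an Exp(1) envelope."""
    M = math.sqrt(2.0 * math.e / math.pi)   # sup of f/g, attained at x = 1
    while True:
        x = rng.expovariate(1.0)            # proposal from the envelope g
        ratio = math.sqrt(2.0 / math.pi) * math.exp(x - 0.5 * x * x) / M
        if rng.random() <= ratio:           # accept with probability f/(M*g)
            return x

random.seed(0)
samples = [half_normal() for _ in range(20000)]
print(sum(samples) / len(samples))  # theoretical mean of |Z| is sqrt(2/pi) ~ 0.798
```

The tighter the envelope (the smaller M), the fewer proposals are wasted; the inequalities derived in Chapter 4 serve exactly this purpose for the Fisher-Bingham family.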
In Chapter 5, the robust estimation of concentration for a Fisher distribution is considered, and L-estimators of the type suggested by Fisher (1982) are investigated. It is shown that the best of these estimators have desirable all-round properties. Indications are also given as to how these ideas can be adapted to other contexts.
Possibilities for further research are mentioned in Chapter 6.
Optimal Uncertainty Quantification
We propose a rigorous framework for Uncertainty Quantification (UQ) in which
the UQ objectives and the assumptions/information set are brought to the
forefront. This framework, which we call \emph{Optimal Uncertainty
Quantification} (OUQ), is based on the observation that, given a set of
assumptions and information about the problem, there exist optimal bounds on
uncertainties: these are obtained as values of well-defined optimization
problems corresponding to extremizing probabilities of failure, or of
deviations, subject to the constraints imposed by the scenarios compatible with
the assumptions and information. In particular, this framework does not
implicitly impose inappropriate assumptions, nor does it repudiate relevant
information. Although OUQ optimization problems are extremely large, we show
that under general conditions they have finite-dimensional reductions. As an
application, we develop \emph{Optimal Concentration Inequalities} (OCI) of
Hoeffding and McDiarmid type. Surprisingly, these results show that
uncertainties in input parameters, which propagate to output uncertainties in
the classical sensitivity analysis paradigm, may fail to do so if the transfer
functions (or probability distributions) are imperfectly known. We show how,
for hierarchical structures, this phenomenon may lead to the non-propagation of
uncertainties or information across scales. In addition, a general algorithmic
framework is developed for OUQ and is tested on the Caltech surrogate model for
hypervelocity impact and on the seismic safety assessment of truss structures,
suggesting the feasibility of the framework for important complex systems. The
introduction of this paper provides both an overview of the paper and a
self-contained mini-tutorial about basic concepts and issues of UQ.
Comment: 90 pages. Accepted for publication in SIAM Review (Expository Research Papers). See SIAM Review for higher-quality figures.
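The classical Hoeffding inequality that the OCI results sharpen can be checked numerically. The sketch below compares the standard bound exp(-2nt^2) for the mean of n independent variables in [0, 1] against a Monte Carlo tail estimate; it shows the classical bound only, not the optimal OUQ version.

```python
# Classical Hoeffding inequality, sketched numerically: for n i.i.d.
# variables in [0, 1], P(mean - E[mean] >= t) <= exp(-2 * n * t^2).
# The Monte Carlo frequency below illustrates how loose the classical
# bound can be, which is the gap that optimal bounds aim to close.

import math
import random

def hoeffding_bound(n, t):
    """Upper bound on P(mean - E[mean] >= t) for n samples in [0, 1]."""
    return math.exp(-2.0 * n * t * t)

def empirical_tail(n, t, trials=20000, rng=random):
    """Monte Carlo estimate of the same tail for Uniform(0, 1) samples."""
    hits = 0
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        if mean - 0.5 >= t:
            hits += 1
    return hits / trials

random.seed(1)
n, t = 50, 0.1
print(hoeffding_bound(n, t))    # analytic bound: exp(-1) ~ 0.368
print(empirical_tail(n, t))     # observed frequency, typically far below the bound
```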
Simulation of dynamic systems with uncertain parameters
This dissertation describes numerical methods for representation and
simulation of dynamic systems with time invariant uncertain parameters. Simulation is defined as computing a boundary of the system response that contains all the possible behaviors of an uncertain system. This problem features
many challenges, especially those associated with minimizing the computational cost due to global optimization. To reduce computational cost, an
approximation or surrogate of the original system model is constructed by employing Moving Least Square (MLS) Response Surface Method for non-convex
global optimization. For more complicated systems, a gradient enhanced moving least square (GEMLS) response surface is used to construct the surrogate
model more accurately and efficiently. This method takes advantage of the
fact that parametric sensitivity of an ODE system can be calculated as a by-product with less computational cost when solving the original system. Furthermore, global sensitivity analysis for monotonic testing can be introduced
in some cases to further reduce the number of samples. The proposed method
has been applied to two engineering applications. The first is hybrid system
verification via reachable set computation and approximation. First, the computational burden of using polyhedra for reachable set approximation is reviewed.
It is then proven that the boundary of a reachable set is formed only by the
trajectories from the boundary of an initial state region. This result reduces
the search space from R^n to R^(n-1). Finally, the proposed GEMLS method is
integrated with oriented rectangular hull for reachable set representation and
an approximation with improved accuracy and efficiency can be achieved. Another engineering application is model-based fault detection. In this case, a
fault free system is modeled as a parametric uncertain system whose parameters belong to a given bounded set. The performance boundary of a fault free
system can be acquired by using the proposed approach and then employed
as an adaptive threshold. A fault is defined when system parameters do not
belong to the set due to malfunction or degradation. Once such a fault occurs, the monitored system performance will extend beyond the normal system
boundary predicted.
Mechanical Engineering
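The moving least squares idea behind the surrogate can be sketched in one dimension (the dissertation's gradient-enhanced GEMLS variant is omitted here): at each query point, a local linear model is fitted with Gaussian weights centred on the query. The bandwidth and sample data below are illustrative.

```python
# A minimal moving least squares (MLS) surrogate in one dimension:
# a locally weighted linear fit, re-solved at every query point.
# Weighted least squares for y ~ a + b*x is solved via the 2x2
# normal equations.

import math

def mls_predict(x_query, xs, ys, h=0.3):
    """Locally weighted linear fit evaluated at x_query (bandwidth h)."""
    w = [math.exp(-((x - x_query) / h) ** 2) for x in xs]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    det = sw * swxx - swx * swx
    a = (swy * swxx - swx * swxy) / det     # local intercept
    b = (sw * swxy - swx * swy) / det       # local slope
    return a + b * x_query

# Samples of a smooth response; the surrogate interpolates between them.
xs = [0.25 * i for i in range(9)]
ys = [math.sin(x) for x in xs]
print(mls_predict(0.75, xs, ys))   # close to sin(0.75) ~ 0.68
```

Because each prediction is cheap compared with simulating the original ODE system, a global optimiser can afford many more evaluations on the surrogate than on the true model.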
Development and application of an optimisation architecture with adaptive swarm algorithm for airfoil aerodynamic design
The research focuses on the aerodynamic design of airfoils for a Multi-Mission Unmanned Aerial Vehicle (MM-UAV). Novel shape design processes using evolutionary algorithms (EA) and a surrogate-based management system are developed to address the identified issues and challenges of solution feasibility and computational efficiency associated with present methods. Feasibility refers to the optimality of the converged solution as a function of the defined objectives and constraints. Computational efficiency is a measure of the number of design iterations needed to achieve convergence to the theoretical optimum. Airfoil design problems are characterised by a multi-modal solution topology. Present gradient-based optimisation methods do not converge to an optimal profile; hence solution feasibility is compromised. Population-based optimisation methods including the Genetic Algorithm (GA) have been used in the literature to address this issue. The GA can achieve solution feasibility, yet it is computationally time-intensive, hence efficiency is compromised. Novel EAs are developed to address the identified shortcomings of present methods. A variant of the original Particle Swarm Optimisation algorithm (PSO) is presented. Novel mutation operators are implemented which facilitate the transition of the search particles toward a global solution. The methodology addresses the limited search performance of the original PSO algorithm for multi-modal problems, while maintaining acceptable computational efficiency for aerodynamic design applications. Demonstration of the developed principles confirmed the merits of the proposed design approach. Airfoil optimisation for a low-speed flight profile achieved drag performance lower than that of an off-the-shelf shape designed for the intended role.
Acceptable computational efficiency is achieved by restricting the optimisation phase to promising solution regions through the development of a novel, design variable search space mapping structure. The merit of the optimisation framework is further confirmed by transonic airfoil design for high-speed missions. The wave drag of the established optima is lower than that of the identified off-the-shelf benchmark. Concurrently, significant computational time-savings are achieved relative to the design methodologies present in the literature. A novel surrogate-assisted optimisation framework, defined by an Artificial Neural Network with a pattern recognition model, is developed to further improve the computational efficiency. This has the potential of enhancing the aerodynamic shape design process. The measure of computational efficiency is critical in the development of an optimisation algorithm. The airfoil design simulations presented required 80% fewer design iterations to achieve convergence than the GA. Computational time-savings spanning days were achieved by the innovative algorithms developed relative to the GA. Hence, computational efficiency of the developed processes is confirmed. Aircraft shape design simulations involve three-dimensional configurations which require excessive computational effort due to the use of high-fidelity solvers for flow analysis in the optimisation process. It is anticipated that the confirmed computational efficiency of the design structure presented on two-dimensional cases will be transferable to three-dimensional shape design problems. It is further expected that the novel principles will be applicable for analysis within a multidisciplinary design structure for the development of a MM-UAV.
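The flavour of a mutation-augmented particle swarm can be sketched on a one-dimensional multimodal test function; the thesis's actual operators, search-space mapping, and airfoil parameterisation are not reproduced here.

```python
# A compact particle swarm with a simple mutation operator (a random
# restart kick), minimising the multimodal test function
# f(x) = x^2 + 10*(1 - cos(2*pi*x)) on [-5, 5].  The mutation helps
# particles escape the many local minima near the integers.

import math
import random

def f(x):
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def pso(n=20, iters=200, w=0.7, c1=1.5, c2=1.5, pm=0.1, rng=random):
    xs = [rng.uniform(-5, 5) for _ in range(n)]   # particle positions
    vs = [0.0] * n                                # particle velocities
    pbest = xs[:]                                 # personal best positions
    gbest = min(xs, key=f)                        # global best position
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * rng.random() * (pbest[i] - xs[i])
                     + c2 * rng.random() * (gbest - xs[i]))
            xs[i] += vs[i]
            if rng.random() < pm:                 # mutation: restart kick
                xs[i] = rng.uniform(-5, 5)
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

random.seed(2)
best = pso()
print(f(best))   # should be near the global minimum f(0) = 0
```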
Methods for generating variates from probability distributions
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Diverse probabilistic results are used in the design of random univariate generators. General methods based on these are classified and relevant theoretical properties derived. This is followed by a comparative review of specific algorithms currently available for continuous and discrete univariate distributions. A need for a Zeta generator is established, and two new methods, based on inversion and rejection with a truncated Pareto envelope respectively, are developed and compared. The paucity of algorithms for multivariate generation motivates a classification of general methods, and in particular, a new method involving envelope rejection with a novel target distribution is proposed. A new method for generating first passage times in a Wiener Process is constructed. This is based on the ratio of two random numbers, and its performance is compared to an existing method for generating inverse Gaussian variates. New "hybrid" algorithms for Poisson and Negative Binomial distributions are constructed, using an Alias implementation, together with a Geometric tail procedure. These are shown to be robust, exact and fast for a wide range of parameter values. Significant modifications are made to Atkinson's Poisson generator (PA), and the resulting algorithm is shown to be complementary to the hybrid method. A new method for Von Mises generation via a comparison of random numbers follows, and its performance compared to
that of Best and Fisher's Wrapped Cauchy rejection method. Finally, new methods are proposed for sampling from distribution tails, using optimally designed Exponential envelopes. Timings are given for Gamma and Normal tails, and in the latter case the performance is shown to be significantly better than that of Marsaglia's tail generation procedure.
Governors of Dundee College of Technology
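Walker's alias method, the implementation backbone mentioned for the hybrid Poisson and Negative Binomial generators, can be sketched as follows: after an O(k) table setup, each variate needs one random index, one uniform comparison, and a table lookup.

```python
# Walker's alias method for sampling from a finite discrete
# distribution.  Setup splits each scaled probability cell into its
# own mass plus an "alias" donor, so drawing is constant time.

import random

def build_alias(probs):
    """Build probability and alias tables for a finite distribution."""
    k = len(probs)
    scaled = [p * k for p in probs]
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    prob, alias = [0.0] * k, [0] * k
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s], alias[s] = scaled[s], l      # cell s: own mass + alias l
        scaled[l] -= 1.0 - scaled[s]          # donor l gives up that mass
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftovers keep full cells
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias, rng=random):
    """One variate: pick a cell, then keep it or take its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

random.seed(3)
prob, alias = build_alias([0.1, 0.2, 0.3, 0.4])
counts = [0] * 4
for _ in range(40000):
    counts[alias_draw(prob, alias)] += 1
print([c / 40000 for c in counts])   # close to [0.1, 0.2, 0.3, 0.4]
```

The hybrid generators in the thesis pair a table like this for the body of the distribution with a Geometric procedure for the tail, keeping the method exact for unbounded supports.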
Modal Intervals Revisited Part 1: A Generalized Interval Natural Extension
The modal intervals theory is an extension of the classical intervals theory which provides richer interpretations (including in particular inner and outer approximations of the ranges of real functions). In spite of its promising potential, the modal intervals theory is not widely used today because of its original and complicated construction. The present paper proposes a new formulation of the modal intervals theory. New extensions of continuous real functions to generalized intervals (intervals whose bounds are not constrained to be ordered) are defined. They are called AE-extensions. These AE-extensions provide the same interpretations as the ones provided by the modal intervals theory, thus enhancing the interpretation of the classical interval extensions. The construction of AE-extensions follows the model of the classical intervals theory: starting from a generalization of the definition of the extensions to classical intervals, the minimal AE-extensions of the elementary operations are first built, leading to a generalized interval arithmetic. This arithmetic is proved to coincide with the well-known Kaucher arithmetic. Then the natural AE-extensions are constructed similarly to the classical natural extensions. The natural AE-extensions represent an important simplification of the formulation of the four "theorems of and interpretation of a modal rational extension" and "theorems of coercion to and interpretability" of the modal intervals theory. With a construction similar to the classical intervals theory, the new formulation of the modal intervals theory proposed in this paper should facilitate the understanding of the underlying mechanisms, the addition of new items to the theory (e.g. new extensions) and its utilization. In particular, a new mean-value extension to generalized intervals will be introduced in the second part of this paper.
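Generalized intervals (bounds not constrained to be ordered) and the Kaucher operations they carry can be sketched minimally; this illustrates the arithmetic only, not the AE-extension machinery of the paper.

```python
# Generalized intervals as (lo, hi) pairs whose bounds need not be
# ordered.  Kaucher addition is componentwise, "dual" swaps the bounds
# (turning a proper interval into an improper one), and -dual(x) is
# the exact additive inverse, which classical intervals lack.

def gadd(x, y):
    """Kaucher addition: componentwise, valid for improper intervals too."""
    return (x[0] + y[0], x[1] + y[1])

def dual(x):
    """Swap bounds: proper [a, b] <-> improper [b, a]."""
    return (x[1], x[0])

def gneg(x):
    """Interval negation -[a, b] = [-b, -a]."""
    return (-x[1], -x[0])

proper = (1.0, 3.0)          # ordinary interval [1, 3]
improper = dual(proper)      # improper interval [3, 1]

print(gadd(proper, improper))            # (4.0, 4.0): a degenerate point
print(gadd(proper, gneg(dual(proper))))  # (0.0, 0.0): exact additive inverse
```

The group structure shown in the last line is one reason generalized intervals simplify the theory: equations over Kaucher intervals can be solved by moving terms across the equality, which is impossible in classical interval arithmetic.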
Glottal-synchronous speech processing
Glottal-synchronous speech processing is a field of speech science where the pseudoperiodicity
of voiced speech is exploited. Traditionally, speech processing involves segmenting
and processing short speech frames of predefined length; this may fail to exploit the inherent
periodic structure of voiced speech which glottal-synchronous speech frames have
the potential to harness. Glottal-synchronous frames are often derived from the glottal
closure instants (GCIs) and glottal opening instants (GOIs).
The SIGMA algorithm was developed for the detection of GCIs and GOIs from
the Electroglottograph signal with a measured accuracy of up to 99.59%. For GCI and
GOI detection from speech signals, the YAGA algorithm provides a measured accuracy
of up to 99.84%. Multichannel speech-based approaches are shown to be more robust to
reverberation than single-channel algorithms.
The GCIs are applied to real-world applications including speech dereverberation,
where SNR is improved by up to 5 dB, and to prosodic manipulation where the importance
of voicing detection in glottal-synchronous algorithms is demonstrated by subjective
testing. The GCIs are further exploited in a new area of data-driven speech modelling,
providing new insights into speech production and a set of tools to aid deployment into
real-world applications. The technique is shown to be applicable in areas of speech coding,
identification, and artificial bandwidth extension of telephone speech.
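The difference between fixed-length and glottal-synchronous framing can be sketched as follows: given detected GCI sample indices, each analysis frame spans two consecutive glottal cycles, so its length follows the local pitch period instead of being predefined. The signal and GCI positions below are hypothetical.

```python
# Glottal-synchronous framing, sketched: each analysis frame runs from
# one glottal closure instant (GCI) to the GCI two cycles later, so
# frame lengths track the local pitch period.

def gci_frames(signal, gcis):
    """Two-period frames: one frame per GCI that has a successor two ahead."""
    frames = []
    for i in range(len(gcis) - 2):
        start, end = gcis[i], gcis[i + 2]
        frames.append(signal[start:end])
    return frames

# A toy signal and some hypothetical GCI positions (in samples).
signal = list(range(100))
gcis = [10, 30, 52, 75, 98]

for frame in gci_frames(signal, gcis):
    print(len(frame))   # frame lengths follow the local period: 42, 45, 46
```

Fixed-length framing would cut these cycles arbitrarily; anchoring frames at GCIs is what lets the dereverberation and prosody-manipulation applications above exploit the periodic structure of voiced speech.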