Variance-constrained multiobjective control and filtering for nonlinear stochastic systems: A survey
The multi-objective control and filtering problems for nonlinear stochastic systems with variance constraints are surveyed. First, the concepts of nonlinear stochastic systems are recalled along with the introduction of some recent advances. Then, the covariance control theory, which serves as a practical method for multi-objective control design as well as a foundation for linear system theory, is reviewed comprehensively. The multiple design requirements frequently applied in engineering practice to evaluate system performance are introduced, including robustness, reliability, and dissipativity. Several design techniques suitable for the multi-objective variance-constrained control and filtering problems for nonlinear stochastic systems are discussed. In particular, as a special case of the multi-objective design problems, the mixed H₂/H∞ control and filtering problems are reviewed in great detail. Subsequently, some of the latest results on the variance-constrained multi-objective control and filtering problems for nonlinear stochastic systems are summarized. Finally, conclusions are drawn, and several possible future research directions are pointed out.
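As a point of reference for the mixed H₂/H∞ special case mentioned above, one common linear time-invariant formulation reads as follows. This is a textbook simplification, not the survey's nonlinear stochastic setting, and the symbols are chosen here for illustration:

```latex
\[
\min_{K}\; \|T_{zw}(K)\|_{2}^{2}
\quad \text{subject to} \quad
\|T_{z_{\infty}w}(K)\|_{\infty} < \gamma,
\qquad
\big[\bar{X}\big]_{ii} \le \sigma_i^{2}, \quad i = 1,\dots,n,
\]
```

where $T_{zw}$ and $T_{z_{\infty}w}$ are closed-loop transfer functions from the disturbance $w$ to the two performance outputs, $\gamma$ is the prescribed disturbance-attenuation level, and $\bar{X}$ is the steady-state state covariance, whose diagonal entries are bounded by the variance constraints $\sigma_i^2$.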
Fuzzy-logic-based control, filtering, and fault detection for networked systems: A Survey
This paper presents an overview of recent progress in fuzzy-logic-based filtering, control, and fault detection problems. First, the network technologies are introduced, the networked control systems are categorized from the aspects of fieldbuses and industrial Ethernets, the necessity of utilizing fuzzy logic is justified, and the network-induced phenomena are discussed. Then, the fuzzy logic control strategies are reviewed in great detail. Special attention is given to a thorough examination of the latest results for fuzzy PID control, fuzzy adaptive control, and fuzzy tracking control problems. Furthermore, recent advances on the fuzzy-logic-based filtering and fault detection problems are reviewed. Finally, conclusions are given and some possible future research directions are pointed out, for example, topics on two-dimensional networked systems, wireless networked control systems, Quality-of-Service (QoS) of networked systems, and fuzzy access control in open networked systems. This work was supported in part by the National Natural Science Foundation of China under Grants 61329301, 61374039, 61473163, and 61374127, the Hujiang Foundation of China under Grants C14002 and D15009, the Engineering and Physical Sciences Research Council (EPSRC) of the UK, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
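The fuzzy PID flavor of control examined in such surveys can be sketched compactly: a rule base over the tracking error drives a Takagi-Sugeno-style weighted blend of gains. The following minimal, hypothetical sketch (proportional term only; the rule base, gains, and first-order plant are invented for illustration and are not taken from the paper) shows the mechanism:

```python
def tri(x, a, b, c):
    # Triangular membership function rising on [a, b] and falling on [b, c].
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Rule base: small / medium / large normalized error -> proportional gain.
RULES = [((-0.5, 0.0, 0.5), 1.0),   # |e| small  -> gentle gain
         (( 0.0, 0.5, 1.0), 2.5),   # |e| medium -> moderate gain
         (( 0.5, 1.0, 1.5), 5.0)]   # |e| large  -> aggressive gain

def fuzzy_kp(e_norm):
    # Takagi-Sugeno style weighted average of the rule consequents.
    mus = [tri(e_norm, *abc) for abc, _ in RULES]
    return sum(m * kp for m, (_, kp) in zip(mus, RULES)) / (sum(mus) + 1e-12)

# Fuzzy gain-scheduled control of a toy first-order plant x' = -x + u.
x, r, dt = 0.0, 1.0, 0.01
for step in range(500):
    e = r - x
    u = fuzzy_kp(min(abs(e), 1.0)) * e   # gain scheduled by fuzzy rules
    x += dt * (-x + u)                    # explicit Euler integration
print(f"state after 5 s: {x:.3f} (setpoint {r})")
```

The design choice this illustrates is the one the survey highlights: the fuzzy rules interpolate smoothly between operating regimes, so the controller behaves aggressively far from the setpoint and gently near it without any hard switching.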
Techniques for the Fast Simulation of Models of Highly Dependable Systems
With the ever-increasing complexity and requirements of highly dependable systems, their evaluation during design and operation is becoming more crucial. Realistic models of such systems are often not amenable to analysis using conventional analytic or numerical methods. Therefore, analysts and designers turn to simulation to evaluate these models. However, accurate estimation of dependability measures of these models requires that the simulation frequently observes system failures, which are rare events in highly dependable systems. This renders ordinary simulation impractical for evaluating such systems. To overcome this problem, simulation techniques based on importance sampling have been developed, and they are very effective in certain settings. When importance sampling works well, simulation run lengths can be reduced by several orders of magnitude when estimating transient as well as steady-state dependability measures. This paper reviews some of the importance-sampling techniques that have been developed in recent years to estimate dependability measures efficiently in Markov and non-Markov models of highly dependable systems.
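The core idea behind these techniques can be sketched in a few lines: simulate from a biased distribution under which failures are common, then reweight each run by the likelihood ratio so the estimator remains unbiased. Below is a minimal, hypothetical sketch using exponential tilting on a single exponential lifetime; the threshold GAMMA, biased rate LAM, and sample size are invented for illustration and are far simpler than the Markov-model failure-biasing schemes the paper reviews:

```python
import numpy as np

rng = np.random.default_rng(0)

GAMMA = 20.0          # failure threshold: the event {X > GAMMA} is rare
N = 100_000           # number of simulation runs

# Nominal model f: lifetime X ~ Exp(rate=1). True rare-event probability
# P(X > GAMMA) = exp(-GAMMA) ~ 2e-9, hopeless for ordinary Monte Carlo
# with N = 1e5 runs.

# Importance-sampling density g: Exp(rate=LAM), with LAM chosen so the
# mean 1/LAM is comparable to GAMMA and the rare event is hit frequently.
LAM = 1.0 / GAMMA

x = rng.exponential(scale=1.0 / LAM, size=N)   # draws from g

# Likelihood ratio w(x) = f(x) / g(x) for the two exponential densities.
w = np.exp(-x * (1.0 - LAM)) / LAM

# Unbiased IS estimator: average of indicator * likelihood ratio.
hits = x > GAMMA
p_hat = np.mean(hits * w)
stderr = np.std(hits * w, ddof=1) / np.sqrt(N)

print(f"IS estimate : {p_hat:.3e} +/- {stderr:.1e}")
print(f"exact value : {np.exp(-GAMMA):.3e}")
print(f"hit rate under g: {hits.mean():.2%}  (would be ~0% under f)")
```

Under the nominal density the event would essentially never be observed in 10^5 runs; under the biased density roughly a third of the runs hit it, and the likelihood ratio corrects the average back to the true probability, which is the run-length reduction the abstract refers to.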
Evaluation of software dependability
It has been said that the term software engineering is an aspiration, not a description. We would like to be able to claim that we engineer software in the same sense that we engineer an aero-engine, but most of us would agree that this is not currently an accurate description of our activities. My suspicion is that it never will be.
From the point of view of this essay – i.e. dependability evaluation – a major difference between software and other engineering artefacts is that the former is pure design. Its unreliability is always the result of design faults, which in turn arise as a result of human intellectual failures. The unreliability of hardware systems, on the other hand, has tended until recently to be dominated by random physical failures of components – the consequences of the ‘perversity of nature’. Reliability theories have been developed over the years which have successfully allowed systems to be built to high reliability requirements, and the final system reliability to be evaluated accurately. Even for pure hardware systems, without software, however, the very success of these theories has more recently highlighted the importance of design faults in determining the overall reliability of the final product. The conventional hardware reliability theory does not address this problem at all.
In the case of software, there is no physical source of failures, and so none of the reliability theory developed for hardware is relevant. We need new theories that will allow us to achieve required dependability levels, and to evaluate the actual dependability that has been achieved, when the sources of the faults that ultimately result in failure are human intellectual failures.
Design of engineering systems in Polish mines in the third quarter of the 20th century
Participation of mathematicians in the implementation of economic projects in Poland, in which mathematics-based methods played an important role, was sporadic in the past; usually, methods known from publications and already verified were adapted to related problems. The subject of this paper is the cooperation between mathematicians and engineers in Wrocław in the second half of the twentieth century, presented through an analysis of the effectiveness of engineering systems used in mining. The results of this cooperation showed that, at the design stage of technical systems, it is necessary to take into account factors that could not have been rationally controlled before. The need to explain various aspects of future exploitation was a strong motivation for the development of mathematical modeling methods. These methods also opened research topics in the theory of stochastic processes and graph theory. The social aspects of this cooperation are also interesting.
Polynomial-Chaos-based Kriging
Computer simulation has become the standard tool in many engineering fields for designing and optimizing systems, as well as for assessing their reliability. To cope with demanding analyses such as optimization and reliability assessment, surrogate models (a.k.a. meta-models) have been increasingly investigated in the last decade. Polynomial Chaos Expansions (PCE) and Kriging are two popular non-intrusive meta-modeling techniques. PCE surrogates the computational model with a series of orthonormal polynomials in the input variables, where the polynomials are chosen consistently with the probability distributions of those input variables. Kriging, on the other hand, assumes that the computer model behaves as a realization of a Gaussian random process whose parameters are estimated from the available computer runs, i.e. input vectors and response values. These two techniques have so far been developed more or less in parallel, with little interaction between the researchers in the two fields. In this paper, PC-Kriging is derived as a new non-intrusive meta-modeling approach combining PCE and Kriging: a sparse set of orthonormal polynomials (PCE) approximates the global behavior of the computational model, whereas Kriging manages the local variability of the model output. An adaptive algorithm similar to the least angle regression algorithm determines the optimal sparse set of polynomials. PC-Kriging is validated on various benchmark analytical functions which are easy to sample for reference results. From the numerical investigations it is concluded that PC-Kriging performs better than, or at least as well as, the two distinct meta-modeling techniques. A larger gain in accuracy is obtained when the experimental design has a limited size, which is an asset when dealing with demanding computational models.
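To make the combination concrete, here is a minimal, hypothetical one-dimensional sketch of the PC-Kriging structure: a Legendre polynomial trend (the PCE part, matching a uniform input distribution) embedded in a universal-Kriging predictor (the Gaussian-process part). For brevity it uses a full polynomial basis and a fixed kernel length scale `theta` rather than the LAR-based sparse selection and maximum-likelihood hyperparameter estimation described in the paper; the toy `model`, the design size, and all constants are invented for illustration:

```python
import numpy as np
from numpy.polynomial.legendre import legvander

# Toy computational model (stand-in for an expensive simulator).
def model(x):
    return x * np.sin(3.0 * x) + 0.5 * x**2

rng = np.random.default_rng(1)

# Small experimental design on [-1, 1] (uniform input -> Legendre PCE basis).
n, degree, theta = 12, 4, 0.3   # theta: fixed Gaussian kernel length scale
X = rng.uniform(-1.0, 1.0, n)
y = model(X)

def corr(a, b):
    # Gaussian (squared-exponential) correlation matrix between point sets.
    d = a[:, None] - b[None, :]
    return np.exp(-(d / theta) ** 2)

F = legvander(X, degree)                 # PCE trend basis at design points
R = corr(X, X) + 1e-10 * np.eye(n)       # nugget for numerical stability
Ri = np.linalg.inv(R)

# Generalized least squares for the PCE coefficients (universal-Kriging trend).
beta = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ y)
resid = y - F @ beta                     # local fluctuation left to the GP

def predict(x_new):
    Fs = legvander(x_new, degree)        # global trend: PCE basis
    rs = corr(x_new, X)                  # cross-correlations with the design
    return Fs @ beta + rs @ (Ri @ resid) # PCE trend + Kriging correction

x_test = np.linspace(-1.0, 1.0, 5)
for xt, yp in zip(x_test, predict(x_test)):
    print(f"x = {xt:+.2f}   model = {model(xt):+.4f}   PC-Kriging = {yp:+.4f}")
```

The prediction splits exactly along the lines the abstract describes: `Fs @ beta` carries the global polynomial behavior of the model, while the Kriging correction `rs @ (Ri @ resid)` interpolates the local residual variability at the design points.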