10,542 research outputs found
The use of fuzzy control methods for evaluation of complex systems on the example of maritime fleet equipment
At present, interest in the application of synchronous machines in various electric drive systems and energy sources is still growing. Synchronous motors and their modifications make it possible to develop low-noise, reliable and economically efficient electric drive systems. They provide high manoeuvrability when used in the propeller power plants of submersible vehicles and world-fleet vessels. Synchronous generators are the major energy sources in the electric power systems of a variety of autonomous plants: vessels, offshore and coastal oil rigs, etc.
Genetic algorithms
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool intended to enable the widespread use of genetic algorithm technology.
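The selection-crossover-mutation loop that the abstract alludes to can be sketched in a few lines. This is a minimal illustrative example on the standard "one-max" toy problem (maximise the number of 1-bits); the parameter values and problem are assumptions for illustration, not taken from the abstract:

```python
import random

random.seed(0)

def fitness(bits):
    """One-max: the fitness is simply the number of 1-bits."""
    return sum(bits)

def evolve(pop_size=20, length=16, generations=40, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():
            # Tournament selection of size 2: survival of the fittest.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = select(), select()
            cut = random.randrange(1, length)          # single-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]                   # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # close to the optimum of 16
```

The three operators (selection, crossover, mutation) are the "highly parallel, adaptive" ingredients: the whole population is improved at once, with no gradient information needed.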
The connection between distortion risk measures and ordered weighted averaging operators
Distortion risk measures summarize the risk of a loss distribution by means of a single value. In fuzzy systems, the Ordered Weighted Averaging (OWA) and Weighted Ordered Weighted Averaging (WOWA) operators are used to aggregate a large number of fuzzy rules into a single value. We show that these concepts can be derived from the Choquet integral, and then present the mathematical relationship between distortion risk measures and the OWA and WOWA operators for discrete and finite random variables. This connection offers a new interpretation of distortion risk measures; in particular, Value-at-Risk and Tail Value-at-Risk can be understood from an aggregation operator perspective. The theoretical results are illustrated in an example, and the degree-of-orness concept is discussed.
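The OWA aggregation the abstract refers to is easy to state concretely: sort the inputs in descending order, then take a weighted sum. A minimal sketch with made-up loss figures (the numbers are assumptions for illustration); note how concentrating all weight on one position recovers an order statistic, which is how quantile-style measures such as Value-at-Risk arise:

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weight the k-th largest value by weights[k-1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

losses = [10.0, 50.0, 20.0, 80.0, 40.0]   # hypothetical loss sample

# Uniform weights recover the plain mean.
print(round(owa(losses, [0.2] * 5), 6))   # 40.0

# All weight on the 2nd position picks out the second-largest loss,
# i.e. an order statistic of the sample.
print(owa(losses, [0, 1, 0, 0, 0]))       # 50.0
```

The degree of orness measures how far a weight vector leans toward the maximum (pure "or") versus the minimum (pure "and") of its arguments.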
To split or not to split: Capital allocation with convex risk measures
Convex risk measures were introduced by Deprez and Gerber (1985). Here the problem of allocating risk capital to subportfolios is addressed when aggregate capital is calculated by a convex risk measure. The Aumann-Shapley value is proposed as an appropriate allocation mechanism. Distortion-exponential measures are discussed extensively, and explicit capital allocation formulas are obtained for the case where the risk measure belongs to this family. Finally, the implications of capital allocation with a convex risk measure for the stability of portfolios are discussed.
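For positively homogeneous, differentiable risk measures the Aumann-Shapley value coincides with gradient (Euler) allocation, which is straightforward to sketch numerically. The mean-plus-one-standard-deviation risk measure and the loss samples below are assumptions for illustration, not the paper's distortion-exponential family:

```python
import statistics

# Hypothetical loss samples for two subportfolios.
sub1 = [10.0, 12.0, 8.0, 30.0]
sub2 = [5.0, 20.0, 5.0, 10.0]

def rho(w1, w2):
    """Risk of the aggregate w1*sub1 + w2*sub2: mean loss plus one std dev.
    This measure is positively homogeneous of degree 1 in (w1, w2)."""
    agg = [w1 * a + w2 * b for a, b in zip(sub1, sub2)]
    return statistics.mean(agg) + statistics.pstdev(agg)

# Gradient (Euler) allocation via central finite differences at full exposure.
h = 1e-6
alloc1 = (rho(1 + h, 1) - rho(1 - h, 1)) / (2 * h)
alloc2 = (rho(1, 1 + h) - rho(1, 1 - h)) / (2 * h)

# Euler's theorem for homogeneous functions: allocations sum to total capital.
print(round(alloc1 + alloc2, 4), round(rho(1, 1), 4))  # the two numbers agree
```

The "full allocation" property shown in the last line is exactly what makes gradient allocation attractive: no capital is left unassigned and none is double-counted.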
Fuzzy measures and integrals in re-identification problems
In this paper we give an overview of our approach of using aggregation operators, and more specifically fuzzy integrals, for solving re-identification problems. We show that the use of Choquet integrals is suitable for some kinds of problems.
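The discrete Choquet integral used in this line of work has a compact form: sort the inputs, then weight each decrement by the capacity of the coalition of top-ranked elements. A minimal sketch with a toy capacity and scores (not taken from the paper); with an additive capacity it reduces to an ordinary weighted mean:

```python
def choquet(values, mu):
    """Discrete Choquet integral of non-negative `values` (dict: element -> score)
    with respect to a capacity `mu` (dict: frozenset -> weight), assumed
    monotone with mu(full set) = 1."""
    elems = sorted(values, key=values.get, reverse=True)  # largest score first
    total = 0.0
    for i, e in enumerate(elems):
        x_i = values[e]
        x_next = values[elems[i + 1]] if i + 1 < len(elems) else 0.0
        # Weight the decrement by the capacity of the top-(i+1) coalition.
        total += (x_i - x_next) * mu[frozenset(elems[: i + 1])]
    return total

vals = {"a": 3.0, "b": 1.0}

# Additive capacity: the integral collapses to the weighted mean 0.5*3 + 0.5*1.
additive = {frozenset("a"): 0.5, frozenset("b"): 0.5, frozenset("ab"): 1.0}
print(choquet(vals, additive))                 # 2.0

# Non-additive capacity: singletons carry little weight on their own,
# so the integral differs from any weighted mean.
interaction = {frozenset("a"): 0.2, frozenset("b"): 0.2, frozenset("ab"): 1.0}
print(round(choquet(vals, interaction), 6))    # 1.4
```

The non-additive case is the whole point: the capacity can model interaction between criteria, which a plain weighted average cannot.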
A Manifesto for the Equifinality Thesis.
This essay discusses some of the issues involved in the identification and prediction of hydrological models given some calibration data. The reasons for the incompleteness of traditional calibration methods are discussed. The argument is made that the potential for multiple acceptable models as representations of hydrological and other environmental systems (the equifinality thesis) should be given more serious consideration than hitherto. The essay proposes some techniques for an extended GLUE methodology to make it more rigorous, and outlines some of the research issues still to be resolved.
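The GLUE workflow behind the equifinality thesis can be sketched in miniature: sample parameter sets at random, score each with an informal likelihood measure, retain every set that clears a behavioural threshold as an equally acceptable model, and weight predictions across the retained ensemble. The toy recession model, noise level and threshold below are all assumptions for illustration, not the essay's case studies:

```python
import math
import random

random.seed(1)

def model(rate, t):
    """Toy hydrological model: storage drains exponentially (hypothetical)."""
    return 100.0 * math.exp(-rate * t)

# Synthetic "observations": a known rate plus measurement noise.
true_rate = 0.3
obs = [model(true_rate, t) + random.gauss(0.0, 2.0) for t in range(10)]

def likelihood(rate):
    """Inverse-error likelihood measure (one of many admissible GLUE choices)."""
    sse = sum((model(rate, t) - o) ** 2 for t, o in enumerate(obs))
    return 1.0 / (1.0 + sse)

# Monte Carlo sampling: every parameter set whose likelihood clears the
# behavioural threshold is retained as an acceptable (equifinal) model.
samples = [random.uniform(0.0, 1.0) for _ in range(2000)]
scored = [(r, likelihood(r)) for r in samples]
behavioural = [(r, w) for r, w in scored if w > 0.005]

# Likelihood-weighted ensemble estimate across the behavioural set.
total = sum(w for _, w in behavioural)
weighted_rate = sum(r * w for r, w in behavioural) / total
print(round(weighted_rate, 2))  # recovers a value near 0.3
```

The key GLUE design choice is that the behavioural set, not a single "optimal" parameter vector, carries the prediction uncertainty.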
Data Editing for Neuro-Fuzzy Classifiers
In this paper we investigate the potential benefits and limitations of various data editing procedures when constructing neuro-fuzzy classifiers based on hyperbox fuzzy sets. There are two major aspects of data editing which we attempt to exploit: a) removal of outliers and noisy data; and b) reduction of training data size. We show that successful training data editing can result in simpler classifiers (i.e. classifiers with fewer, larger hyperboxes) with better generalisation performance. However, we also indicate the potential dangers of over-editing, which can lead to dropping whole regions of a class and constructing overly simple classifiers unable to capture the class boundaries with sufficient accuracy. A more flexible approach than the existing data editing techniques, based on estimating the probabilities used to decide whether a point should be removed from the training set, is proposed. An analysis and graphical interpretations are given for synthetic, non-trivial, two-dimensional classification problems.
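One classical data-editing step of the kind discussed above is Wilson's edited nearest-neighbour rule, used here as a simple stand-in for the probability-based editing the paper proposes (the two-dimensional toy data are made up): a training point is dropped when most of its k nearest neighbours carry a different label, which removes label noise while keeping the bulk of each class region.

```python
def edit_training_set(points, labels, k=3):
    """Wilson editing: keep a point only if its label matches the
    majority label among its k nearest neighbours (Manhattan distance)."""
    kept_points, kept_labels = [], []
    for i, (p, lab) in enumerate(zip(points, labels)):
        # Distances from p to every other training point.
        dists = sorted(
            (abs(p[0] - q[0]) + abs(p[1] - q[1]), labels[j])
            for j, q in enumerate(points) if j != i
        )
        neighbour_labels = [l for _, l in dists[:k]]
        if neighbour_labels.count(lab) * 2 > k:   # strict majority agrees
            kept_points.append(p)
            kept_labels.append(lab)
    return kept_points, kept_labels

# Two well-separated clusters plus one mislabelled "noisy" point.
pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5), (0.5, 0.5)]
labs = ["A", "A", "A", "B", "B", "B", "B"]   # the last point is label noise
clean_pts, clean_labs = edit_training_set(pts, labs, k=3)
print(len(clean_pts))  # 6: only the mislabelled point is removed
```

The paper's warning about over-editing applies directly: with an aggressive rule (large k, or a strict agreement criterion), legitimate points near the class boundary start failing the majority test, and whole border regions of a class can be edited away.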