84 research outputs found

    A Similarity Measure Based on Bidirectional Subsethood for Intervals

    With a growing number of areas leveraging interval-valued data, including the modelling of human uncertainty (e.g., in Cyber Security), the capacity to accurately and systematically compare intervals for reasoning and computation is increasingly important. In practice, well-established set-theoretic similarity measures such as the Jaccard and Sørensen-Dice measures are commonly used, while a wide range of possible measures has been explored axiomatically. This paper identifies, articulates, and addresses an inherent and so far undiscussed limitation of popular measures: their susceptibility to aliasing, where they return the same similarity value for very different sets of intervals. Aliasing risks counter-intuitive results and poor automated reasoning in real-world applications that depend on systematically comparing interval-valued system variables or states. Given this, we introduce new axioms establishing desirable properties for robust similarity measures, followed by a novel set-theoretic similarity measure based on the concept of bidirectional subsethood which satisfies both the traditional and the new axioms. The proposed measure is designed to be sensitive to variation in the size of intervals, thus avoiding aliasing. The paper provides a detailed theoretical exploration of the proposed measure and systematically demonstrates its behaviour using an extensive set of synthetic and real-world data. Specifically, the measure is shown to return robust outputs that follow intuition, which is essential for real-world applications. For example, we show that it is bounded above and below by the Jaccard and Sørensen-Dice similarity measures (when the minimum t-norm is used). Finally, we show that a dissimilarity or distance measure, which satisfies the properties of a metric, can easily be derived from the proposed similarity measure.
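    To make the quantities in the abstract concrete, the sketch below computes the Jaccard similarity of two crisp intervals and a bidirectional-subsethood-style value obtained by combining the two directional subsethood degrees with the minimum t-norm. Combining them via the minimum is an assumption made for this illustration and is not necessarily the paper's exact definition.

```python
# Illustration only, not the authors' reference implementation.
# Combining the two directional subsethood degrees with the minimum
# t-norm is an assumption made for this sketch.

def overlap(a, b):
    """Length of the intersection of intervals a = (a1, a2) and b = (b1, b2)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def length(a):
    return a[1] - a[0]

def jaccard(a, b):
    """|A n B| / |A u B| for intervals, using lengths as set sizes."""
    inter = overlap(a, b)
    union = length(a) + length(b) - inter
    return inter / union if union > 0 else 1.0

def subsethood(a, b):
    """Degree to which interval a is contained in interval b: |A n B| / |A|."""
    return overlap(a, b) / length(a) if length(a) > 0 else 1.0

def bidirectional_subsethood(a, b):
    """Combine both directional subsethood degrees (here: minimum t-norm)."""
    return min(subsethood(a, b), subsethood(b, a))

if __name__ == "__main__":
    A, B = (1.0, 4.0), (2.0, 6.0)
    print(jaccard(A, B))                   # 2 / 5 = 0.4
    print(bidirectional_subsethood(A, B))  # min(2/3, 2/4) = 0.5
```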

    On the Choice of Similarity Measures for Type-2 Fuzzy Sets

    Similarity measures are among the most common methods of comparing type-2 fuzzy sets and have been used in numerous applications. However, deciding how to measure similarity and choosing which existing measure to use can be difficult. Whilst some measures give results that correlate highly with each other, others give considerably different results. We evaluate all of the current similarity measures on type-2 fuzzy sets to discover which measures share common properties of similarity and, for those that do not, we discuss why the properties differ, demonstrate whether and what effect this has in applications, and discuss how a measure can avoid missing a required property. We analyse existing measures in the context of computing with words using a comprehensive collection of data-driven fuzzy sets. Specifically, we highlight and demonstrate how each method performs at clustering words of similar meaning.
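    As an illustration of the kind of measure being compared, the sketch below computes one widely used Jaccard-style similarity for interval type-2 fuzzy sets from their lower and upper membership functions over a discretised domain. The example values are made up, and this is offered as background rather than as a measure proposed by the paper.

```python
# Illustration only: a Jaccard-style similarity for interval type-2 fuzzy
# sets, computed from lower and upper membership vectors on a shared,
# discretised domain.
import numpy as np

def it2_jaccard(lower_a, upper_a, lower_b, upper_b):
    """Ratio of summed element-wise minima to summed element-wise maxima,
    taken over both the lower and the upper membership functions."""
    num = np.minimum(upper_a, upper_b).sum() + np.minimum(lower_a, lower_b).sum()
    den = np.maximum(upper_a, upper_b).sum() + np.maximum(lower_a, lower_b).sum()
    return num / den

if __name__ == "__main__":
    # Two word models over a 5-point domain (made-up numbers).
    la, ua = np.array([0.1, 0.4, 0.7, 0.4, 0.1]), np.array([0.3, 0.7, 1.0, 0.7, 0.3])
    lb, ub = np.array([0.0, 0.3, 0.6, 0.5, 0.2]), np.array([0.2, 0.6, 0.9, 0.8, 0.4])
    print(it2_jaccard(la, ua, lb, ub))
```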

    Workshop on Fuzzy Control Systems and Space Station Applications

    The Workshop on Fuzzy Control Systems and Space Station Applications was held on 14-15 Nov. 1990. The workshop was co-sponsored by McDonnell Douglas Space Systems Company and NASA Ames Research Center. Proceedings of the workshop are presented.

    Type-2 Fuzzy Alpha-cuts

    Systems that utilise type-2 fuzzy sets to handle uncertainty have not been implemented in real-world applications, unlike the astonishing number of applications involving standard fuzzy sets. The main reason behind this is the complex mathematical nature of type-2 fuzzy sets, which is the source of two major problems. On the one hand, it is difficult to mathematically manipulate type-2 fuzzy sets; on the other, the computational cost of processing and performing operations using these sets is very high. Most current research on type-2 fuzzy logic concentrates on finding mathematical means to overcome these obstacles. One way of accomplishing the first task is to develop a meaningful mathematical representation of type-2 fuzzy sets that allows functions and operations to be extended from well-known mathematical forms to type-2 fuzzy sets. To this end, this thesis presents a novel alpha-cut representation theorem as this meaningful mathematical representation: the decomposition of a type-2 fuzzy set into a number of classical sets. The alpha-cut representation theorem is the main contribution of this thesis. The dissertation also presents a methodology that allows functions and operations to be extended directly from classical sets to type-2 fuzzy sets. A novel alpha-cut extension principle is presented and used to define uncertainty measures and arithmetic operations for type-2 fuzzy sets. Throughout this investigation, a plethora of concepts and definitions have been developed for the first time in order to make the manipulation of type-2 fuzzy sets a simple and straightforward task. Worked examples are used to demonstrate the usefulness of these theorems and methods. Finally, the crisp alpha-cuts of this fundamental decomposition theorem are by definition independent of each other. This dissertation shows that operations on type-2 fuzzy sets using the alpha-cut extension principle can be processed in parallel. This feature is found to be extremely powerful, especially when performing computation on massively parallel graphics processing units. The thesis explores this capability and shows, through different experiments, the achievement of significant reductions in processing time. The National Training Directorate, Republic of Sudan.
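    For readers unfamiliar with alpha-cuts, the sketch below shows the classical type-1 alpha-cut decomposition and the reconstruction it supports; the type-2 construction contributed by the thesis is more involved and is not reproduced here. Each cut is independent of the others, which is the property the thesis exploits for parallel (GPU) processing.

```python
# Minimal sketch of the classical (type-1) alpha-cut decomposition that the
# thesis generalises to type-2 fuzzy sets.
import numpy as np

def alpha_cut(membership, alpha):
    """Crisp set (boolean mask) of elements whose membership is >= alpha."""
    return membership >= alpha

def decompose(membership, levels):
    """One crisp set per alpha level; cuts are mutually independent and can
    therefore be processed in parallel."""
    return {a: alpha_cut(membership, a) for a in levels}

def reconstruct(cuts):
    """Representation theorem: mu(x) = sup{alpha : x in A_alpha}."""
    levels = np.array(sorted(cuts))
    masks = np.stack([cuts[a] for a in levels])      # shape: (levels, n)
    return (masks * levels[:, None]).max(axis=0)

if __name__ == "__main__":
    mu = np.array([0.1, 0.4, 0.8, 1.0, 0.6, 0.2])
    levels = np.arange(1, 11) / 10                   # 0.1, 0.2, ..., 1.0
    cuts = decompose(mu, levels)
    print(np.allclose(reconstruct(cuts), mu))        # True on this grid
```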

    Neuro-Fuzzy Classifiers/Quantifiers for E-Nose Applications


    An effective similarity measurement under epistemic uncertainty

    Epistemic uncertainty stems from a lack of knowledge and can be reduced as knowledge increases. This interpretation works well with data represented as a set of possible states and, therefore, with multivalued similarity measures. Unfortunately, set-valued extensions of similarity measures are not computationally feasible even when the data is finite. Measures with properties that allow efficient calculation of their extensions need to be found. Analysis of various similarity measures indicated logic-based (additive) measures as an excellent candidate. Their unique properties are discussed and efficient algorithms for computing set-valued extensions are given. The work presents results related to various classes of fuzzy set families: general ones, intervals of fuzzy sets, and their finite sums. The first case is related to the concept of the Fuzzy Membership Function Family, the second corresponds to Interval-Valued Fuzzy Sets, while the third class is equivalent to the concept of Typical Interval-Valued Hesitant Fuzzy Sets.
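    To illustrate what a set-valued extension of a similarity measure is, and why direct enumeration is costly, the sketch below brute-forces the range of Jaccard values attainable over two finite families of fuzzy sets. The efficient algorithms the paper contributes for logic-based (additive) measures are not reproduced here; everything below is an assumption-laden illustration.

```python
# Illustration only: a brute-force set-valued (here: interval-valued)
# extension of a similarity measure over two finite families of fuzzy sets,
# i.e. the range of similarity values attainable when each argument may be
# any member of its family. Direct enumeration like this quickly becomes
# infeasible, which motivates the paper's efficient algorithms.
import numpy as np

def jaccard(a, b):
    """Jaccard similarity of two discretised fuzzy sets (membership vectors)."""
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def extension_bounds(family_a, family_b, sim=jaccard):
    """Interval of similarity values over all pairs from the two families."""
    values = [sim(a, b) for a in family_a for b in family_b]
    return min(values), max(values)

if __name__ == "__main__":
    fam_a = [np.array([0.2, 0.7, 1.0]), np.array([0.4, 0.9, 0.8])]
    fam_b = [np.array([0.1, 0.6, 0.9]), np.array([0.5, 1.0, 0.7])]
    print(extension_bounds(fam_a, fam_b))
```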

    Fuzzy-Rough Sets Assisted Attribute Selection

    Attribute selection (AS) refers to the problem of selecting those input attributes or features that are most predictive of a given outcome, a problem encountered in many areas such as machine learning, pattern recognition and signal processing. Unlike other dimensionality reduction methods, attribute selectors preserve the original meaning of the attributes after reduction. This has found application in tasks that involve datasets containing huge numbers of attributes (in the order of tens of thousands) which, for some learning algorithms, might be impossible to process further. Recent examples include text processing and web content classification. AS techniques have also been applied to small and medium-sized datasets in order to locate the most informative attributes for later use. One of the many successful applications of rough set theory has been to this area. The rough set ideology of using only the supplied data and no other information has many benefits in AS, where most other methods require supplementary knowledge. However, the main limitation of rough set-based attribute selection in the literature is the restrictive requirement that all data is discrete. In classical rough set theory, it is not possible to consider real-valued or noisy data. This paper investigates a novel approach based on fuzzy-rough sets, fuzzy-rough feature selection (FRFS), that addresses these problems and retains dataset semantics. FRFS is applied to two challenging domains where a feature-reducing step is important, namely web content classification and complex systems monitoring. The utility of this approach is demonstrated and is compared empirically with several dimensionality reducers. In the experimental studies, FRFS is shown to equal or improve classification accuracy when compared to the results from unreduced data. Classifiers that use a lower-dimensional set of attributes which are retained by fuzzy-rough reduction outperform those that employ more attributes returned by the existing crisp rough reduction method. In addition, it is shown that FRFS is more powerful than the other AS techniques in the comparative study.
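    As a sketch of the greedy, dependency-driven selection loop underlying rough-set based attribute selection, the code below uses the crisp rough-set dependency degree; FRFS replaces this with a fuzzy-rough dependency computed from fuzzy similarity relations, which is what lets it handle real-valued data. The function names and toy data are illustrative, not taken from the paper.

```python
# Illustration only: QuickReduct-style greedy attribute selection driven by
# the crisp rough-set dependency degree (fraction of objects whose
# equivalence class with respect to the chosen attributes is pure).
from collections import defaultdict

def dependency(data, labels, attrs):
    """Fraction of objects whose equivalence class (w.r.t. `attrs`) is pure."""
    if not attrs:
        return 0.0
    classes = defaultdict(list)
    for row, y in zip(data, labels):
        classes[tuple(row[a] for a in attrs)].append(y)
    pos = sum(len(ys) for ys in classes.values() if len(set(ys)) == 1)
    return pos / len(data)

def quickreduct(data, labels, n_attrs):
    """Greedily add the attribute that most increases the dependency degree."""
    selected, remaining, best = [], set(range(n_attrs)), 0.0
    while remaining:
        gains = {a: dependency(data, labels, selected + [a]) for a in remaining}
        a_best = max(gains, key=gains.get)
        if gains[a_best] <= best:
            break                       # no attribute improves the dependency
        selected.append(a_best)
        remaining.remove(a_best)
        best = gains[a_best]
    return selected

if __name__ == "__main__":
    X = [[0, 1, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1]]
    y = [0, 0, 1, 1]
    print(quickreduct(X, y, 3))         # [0]: attribute 0 alone is sufficient
```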

    The f-index of inclusion as optimal adjoint pair for fuzzy modus ponens

    We continue studying the properties of the f-index of inclusion and show that, given a fixed pair of fuzzy sets, their f-index of inclusion can be linked to a fuzzy conjunction which is part of an adjoint pair. We also show that, when this pair is used as the underlying structure to provide a fuzzy interpretation of the modus ponens inference rule, it provides the maximum possible truth-value in the conclusion among all those values obtained by fuzzy modus ponens using any other possible adjoint pair. Partially supported by the Spanish Ministry of Science, Innovation and Universities (MCIU), State Agency of Research (AEI), Junta de Andalucía (JA), Universidad de Málaga (UMA) and European Regional Development Fund (FEDER) through the projects PGC2018-095869-B-I00 (MCIU/AEI/FEDER) and UMA2018-FEDERJA-001 (JA/UMA/FEDER). Funding for open access charge: Universidad de Málaga / CBU
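    For context, the sketch below shows a standard adjoint pair on [0, 1] (the Gödel t-norm and its residuated implication) and the fuzzy modus ponens step it induces. The paper's contribution is a different, input-dependent conjunction derived from the f-index of inclusion, which is not reproduced here.

```python
# Illustration only (not the paper's f-index construction): the Goedel
# t-norm and its residuated implication form an adjoint pair, i.e.
# conj(x, y) <= z  iff  x <= impl(y, z), and fuzzy modus ponens infers the
# truth value of B from the truth value of A and of the rule A -> B via
# the conjunction of the pair.

def goedel_conjunction(x, y):
    """Goedel t-norm: the minimum."""
    return min(x, y)

def goedel_implication(y, z):
    """Residuum of the minimum: sup{x : min(x, y) <= z}."""
    return 1.0 if y <= z else z

def modus_ponens(a, rule):
    """Truth value inferred for B from truth value a of A and the truth
    value `rule` of the implication A -> B."""
    return goedel_conjunction(a, rule)

if __name__ == "__main__":
    # Spot-check the adjunction condition on a grid of truth values.
    grid = [i / 10 for i in range(11)]
    assert all((goedel_conjunction(x, y) <= z) == (x <= goedel_implication(y, z))
               for x in grid for y in grid for z in grid)
    print(modus_ponens(0.8, 0.6))   # 0.6
```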

    Towards Better Performance in the Face of Input Uncertainty while Maintaining Interpretability in AI

    Uncertainty is a pervasive element of many real-world applications, and existing sources of uncertainty (e.g. atmospheric conditions, economic parameters or the precision of measurement devices) very often have a detrimental impact on the inputs, and ultimately the results, of decision-support systems. Thus, the ability to handle input uncertainty is a valuable component of real-world decision-support systems. There is a vast amount of literature on handling uncertainty in decision-support systems. While such systems handle uncertainty and deliver good performance, providing insight into the decision process (e.g. why or how results are produced) is another important asset in terms of trusting, or being able to 'debug', the decisions made. Fuzzy set theory provides the basis for fuzzy logic systems, which are often associated with the ability to handle uncertainty and with mechanisms that provide a degree of interpretability. Specifically, non-singleton fuzzy logic systems are essential in dealing with uncertainty that affects the input, one of the main sources of uncertainty in real-world systems. Therefore, in this thesis, we comprehensively explore enhancing the capabilities of non-singleton fuzzy logic systems with respect to both capturing and handling uncertainty and maintaining interpretability. To that end, the following three key aspects are investigated: (i) faithfully mapping input uncertainty to system outputs; (ii) proposing a new framework that allows the system to adapt dynamically, on the fly, in changing real-world environments; and (iii) maintaining the level of interpretability while improving system performance.
    The first aspect concerns mapping uncertainty from the inputs to the outputs of a system through the interaction between the input and antecedent fuzzy sets, i.e. the firing strengths. In the context of non-singleton fuzzy logic systems, recent studies have shown that the standard technique for determining firing strengths risks losing information about the interaction between the input uncertainty and the antecedent fuzzy sets. This thesis explores and puts forward novel approaches to generating firing strengths which faithfully map the uncertainty affecting system inputs to the outputs. Time-series forecasting experiments are used to evaluate the proposed alternative firing-strength generation technique under different levels of input uncertainty. The analysis of the results shows that the proposed approach is a suitable method for generating firing levels that map the different input uncertainty levels likely to occur in real-world circumstances through to the output of the FLS.
    The second aspect is to provide systems with dynamic, adaptive behaviour at run-time under the changing conditions that are common in real-world environments. Traditionally, approaches to the fuzzification step of non-singleton fuzzy logic systems are limited to the selection of a single type of input fuzzy set to capture the input uncertainty, whereas real-world input uncertainty levels tend to vary over time at run-time. Thus, in this thesis, input uncertainty is modelled where it specifically arises, in an online manner, providing adaptive behaviour that captures varying input uncertainty levels. A framework, the ADaptive Online Non-singleton fuzzy logic System (ADONiS), is presented to generate type-1 or interval type-2 input fuzzy sets. In the proposed framework, an uncertainty estimation technique is applied to a sequence of observations to continuously update the input fuzzy sets of a non-singleton fuzzy logic system. Both the type-1 and interval type-2 versions of the ADONiS framework remove the limitation of selecting a specific type of input fuzzy set. The framework also enables input fuzzy sets to be adapted to unknown uncertainty levels which are not foreseen at the design stage of the model. Time-series forecasting experiments show that the proposed framework provides performance advantages over traditional counterpart approaches, particularly in environments with high variation in noise levels, which are common in real-world applications. In addition, a real-world medical application study is designed to test the deployability of the ADONiS framework and to provide initial insight into its viability as a replacement for traditional approaches.
    The third aspect is to maintain levels of interpretability while increasing system performance. When a decision-support model delivers good performance, providing insight into the decision process is also an important asset in terms of trustworthiness, safety, and ethics. Fuzzy logic systems are considered to possess mechanisms which can provide a degree of interpretability. Traditionally, while optimisation procedures provide performance benefits in fuzzy logic systems, they often alter components (e.g. the rule set, parameters, or fuzzy partitioning structures), which can lead to higher accuracy but commonly disregards the interpretability of the resulting model. In this thesis, the state of the art in fuzzy logic system interpretability is advanced by capturing input uncertainty in the fuzzification step, where it arises, and by handling it in the inference engine step. In doing so, while a performance increase is achieved, the proposed methods limit any optimisation impact to the fuzzification and inference engine steps, which protects key components of FLSs (e.g. fuzzy sets and rule parameters) and provides the ability to maintain a given level of interpretability.
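    As background to the first aspect, the sketch below contrasts the standard singleton and non-singleton firing-strength computations that the thesis builds on (the non-singleton case takes the supremum of the t-norm between the input fuzzy set and the antecedent). The Gaussian shapes, parameter values and the minimum t-norm are assumptions for illustration; the thesis's proposed alternative firing-strength techniques and the ADONiS adaptation loop are not reproduced here.

```python
# Illustration only: standard singleton vs. non-singleton firing strengths
# for a single Gaussian antecedent, with the input fuzzy set modelled as a
# Gaussian centred on the measured value (wider sigma = more input noise).
import numpy as np

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def singleton_firing(x_crisp, antecedent_mean, antecedent_sigma):
    """Singleton fuzzification: membership of the crisp input in the antecedent."""
    return gaussian(x_crisp, antecedent_mean, antecedent_sigma)

def nonsingleton_firing(x_crisp, input_sigma, antecedent_mean, antecedent_sigma,
                        domain=np.linspace(-10, 10, 2001)):
    """Standard non-singleton firing: sup_x min(input_fs(x), antecedent(x))."""
    input_fs = gaussian(domain, x_crisp, input_sigma)
    antecedent = gaussian(domain, antecedent_mean, antecedent_sigma)
    return float(np.max(np.minimum(input_fs, antecedent)))

if __name__ == "__main__":
    print(singleton_firing(1.0, 3.0, 1.0))          # ~0.135
    print(nonsingleton_firing(1.0, 1.0, 3.0, 1.0))  # ~0.607: input noise widens the overlap
```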