
    A Similarity Measure Based on Bidirectional Subsethood for Intervals

    With a growing number of areas leveraging interval-valued data, including the modelling of human uncertainty (e.g., in Cyber Security), the capacity to accurately and systematically compare intervals for reasoning and computation is increasingly important. In practice, well-established set-theoretic similarity measures such as the Jaccard and Sørensen-Dice measures are commonly used, while axiomatically a wide breadth of possible measures has been explored theoretically. This paper identifies, articulates, and addresses an inherent and so far undiscussed limitation of popular measures: their tendency to be subject to aliasing, where they return the same similarity value for very different pairs of intervals. The latter risks counter-intuitive results and poor automated reasoning in real-world applications that depend on systematically comparing interval-valued system variables or states. Given this, we introduce new axioms establishing desirable properties for robust similarity measures, and then put forward a novel set-theoretic similarity measure based on the concept of bidirectional subsethood which satisfies both the traditional and the new axioms. The proposed measure is designed to be sensitive to variation in the size of intervals, thus avoiding aliasing. The paper provides a detailed theoretical exploration of the proposed measure and systematically demonstrates its behaviour using an extensive set of synthetic and real-world data. Specifically, the measure is shown to return robust outputs that follow intuition, which is essential for real-world applications. For example, we show that it is bounded below by the Jaccard and above by the Sørensen-Dice similarity measure (when the minimum t-norm is used). Finally, we show that a dissimilarity or distance measure which satisfies the properties of a metric can easily be derived from the proposed similarity measure.
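
    As a rough illustration of the kind of comparison discussed above, the following sketch (our own, not the authors' code) computes the Jaccard and Sørensen-Dice measures for intervals alongside a bidirectional-subsethood-style measure combined with the minimum t-norm, using interval length as the set size.

```python
# Illustrative sketch (not the paper's code): set-theoretic similarity
# measures for closed intervals, using interval length as the set size.

def _length(iv):
    a, b = iv
    return max(0.0, b - a)

def _overlap(x, y):
    return max(0.0, min(x[1], y[1]) - max(x[0], y[0]))

def jaccard(x, y):
    union = _length(x) + _length(y) - _overlap(x, y)
    return _overlap(x, y) / union if union > 0 else float(x == y)

def sorensen_dice(x, y):
    denom = _length(x) + _length(y)
    return 2.0 * _overlap(x, y) / denom if denom > 0 else float(x == y)

def bidirectional_subsethood(x, y):
    # Subsethood in both directions combined with the minimum t-norm:
    # min(|X∩Y|/|X|, |X∩Y|/|Y|) = |X∩Y| / max(|X|, |Y|).
    denom = max(_length(x), _length(y))
    return _overlap(x, y) / denom if denom > 0 else float(x == y)

# Two interval pairs that alias under Jaccard (both 1/3) but are
# distinguished by the subsethood-based measure (0.5 vs. 1/3):
print(jaccard((0, 2), (1, 3)), bidirectional_subsethood((0, 2), (1, 3)))
print(jaccard((0, 1), (0, 3)), bidirectional_subsethood((0, 1), (0, 3)))
```

    On both pairs the subsethood-based value sits between the Jaccard and Sørensen-Dice values, in line with the bounds mentioned in the abstract.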

    Subsethood Measures of Spatial Granules

    Subsethood, which measures the degree of the set-inclusion relation, is predominant in fuzzy set theory. This paper introduces some basic concepts of spatial granules, the coarse-fine relation, and operations such as meet, join, quotient meet and quotient join. All atomic granules can be hierarchized by the set-inclusion relation, and all granules can be hierarchized by the coarse-fine relation. Viewing an information system from the micro and the macro perspectives, we obtain a micro knowledge space and a macro knowledge space, from which a rough set model and a spatial rough granule model are respectively derived. The classical rough set model is a special case of the rough set model induced from the micro knowledge space, while the spatial rough granule model will play a pivotal role in the problem-solving of structures. We discuss twelve axioms of monotone increasing subsethood and twelve corresponding axioms of monotone decreasing supsethood, and generalize subsethood and supsethood to conditional granularity and conditional fineness respectively. We develop five conditional granularity measures and five conditional fineness measures and prove that each conditional granularity or fineness measure satisfies its corresponding twelve axioms, although its subsethood or supsethood measure holds only one of the two boundary conditions. We further define five conditional granularity entropies and five conditional fineness entropies, each of which satisfies only part of the boundary conditions but all ten monotone conditions.
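
    The conditional granularity and fineness measures above build on the basic notion of a subsethood degree. Purely as background (a minimal sketch of our own, not the paper's spatial-granule measures), a common Kosko-style subsethood degree for discrete fuzzy sets can be written as:

```python
# Minimal background sketch: a Kosko-style subsethood degree for discrete
# fuzzy sets (how far A is included in B). The paper's conditional
# granularity/fineness measures for spatial granules are more elaborate.

def subsethood(mu_a, mu_b):
    """Degree to which fuzzy set A is included in fuzzy set B.

    mu_a, mu_b: dicts mapping elements of a common universe to
    membership degrees in [0, 1].
    """
    universe = set(mu_a) | set(mu_b)
    size_a = sum(mu_a.get(x, 0.0) for x in universe)
    if size_a == 0.0:
        return 1.0  # the empty set is included in everything
    overlap = sum(min(mu_a.get(x, 0.0), mu_b.get(x, 0.0)) for x in universe)
    return overlap / size_a

A = {"x1": 0.2, "x2": 0.8}
B = {"x1": 0.5, "x2": 0.6, "x3": 1.0}
print(subsethood(A, B))  # 0.8: A is mostly, but not fully, contained in B
```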

    Exploring subsethood to determine firing strength in non-singleton fuzzy logic systems

    Real-world environments involve a wide range of sources of noise and uncertainty. Thus, the ability to handle various uncertainties, including noise, becomes an indispensable element of automated decision making. Non-Singleton Fuzzy Logic Systems (NSFLSs) have the potential to tackle uncertainty within the design of fuzzy systems. The firing strength plays a significant role in the accuracy of FLSs, as it is based on the interaction of the input and antecedent fuzzy sets. Recent studies have shown that the standard technique for determining firing strengths risks substantial information loss in terms of the interaction of the input and antecedents. Recently, this issue has been addressed through the exploration of alternative approaches which employ the centroid of the intersection (cen-NS) and the similarity (sim-NS) between input and antecedent fuzzy sets. This paper identifies potential shortcomings of the previously introduced similarity-based NSFLSs, in which the firing strength is defined as the similarity between an input FS and an antecedent. To address these shortcomings, this paper explores the potential of the subsethood measure to generate a more suitable firing level (sub-NS) in NSFLSs under various noise levels. In the experiments, the basic waiter-tipping fuzzy logic system is used to examine the behaviour of sub-NS in comparison with the current approaches. Analysis of the results shows that the sub-NS approach can lead to more stable behaviour in real-world applications.
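
    The following rough sketch (illustrative naming and discretisation of our own, not the paper's implementation) contrasts the standard sup-min firing strength of a non-singleton FLS with a subsethood-based firing level of the kind sub-NS uses:

```python
import numpy as np

# Rough sketch: two ways of computing a rule's firing strength in a
# non-singleton FLS, on a discretised universe of discourse.

x = np.linspace(0.0, 10.0, 1001)

def gaussian(x, mean, sigma):
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

mu_input = gaussian(x, 4.0, 1.0)       # non-singleton (noisy) input fuzzy set
mu_antecedent = gaussian(x, 6.0, 1.5)  # antecedent fuzzy set of a rule

# Standard non-singleton firing strength: sup of the minimum t-norm of
# input and antecedent memberships.
standard_firing = np.max(np.minimum(mu_input, mu_antecedent))

# Subsethood-based firing level (sub-NS style): degree to which the
# input fuzzy set is contained in the antecedent fuzzy set.
subsethood_firing = (np.sum(np.minimum(mu_input, mu_antecedent))
                     / np.sum(mu_input))

print(standard_firing, subsethood_firing)
```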

    Type-2 Fuzzy Alpha-cuts

    Systems that utilise type-2 fuzzy sets to handle uncertainty have not been implemented in real-world applications, unlike the astonishing number of applications involving standard fuzzy sets. The main reason behind this is the complex mathematical nature of type-2 fuzzy sets, which is the source of two major problems. On one hand, it is difficult to mathematically manipulate type-2 fuzzy sets; on the other, the computational cost of processing and performing operations using these sets is very high. Most current research on type-2 fuzzy logic concentrates on finding mathematical means to overcome these obstacles. One way of accomplishing the first task is to develop a meaningful mathematical representation of type-2 fuzzy sets that allows functions and operations to be extended from well-known mathematical forms to type-2 fuzzy sets. To this end, this thesis presents a novel alpha-cut representation theorem to serve as this meaningful mathematical representation: the decomposition of a type-2 fuzzy set into a number of classical sets. The alpha-cut representation theorem is the main contribution of this thesis. This dissertation also presents a methodology that allows functions and operations to be extended directly from classical sets to type-2 fuzzy sets. A novel alpha-cut extension principle is presented and used to define uncertainty measures and arithmetic operations for type-2 fuzzy sets. Throughout this investigation, a plethora of concepts and definitions have been developed for the first time in order to make the manipulation of type-2 fuzzy sets a simple and straightforward task. Worked examples are used to demonstrate the usefulness of these theorems and methods. Finally, the crisp alpha-cuts of this fundamental decomposition theorem are by definition independent of each other. This dissertation shows that operations on type-2 fuzzy sets using the alpha-cut extension principle can be processed in parallel. This feature is found to be extremely powerful, especially when performing computation on massively parallel graphics processing units. This thesis explores this capability and shows, through different experiments, the achievement of a significant reduction in processing time. (Funding: The National Training Directorate, Republic of Sudan.)
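
    As background for the representation the thesis generalises, the classical type-1 alpha-cut decomposition can be sketched as follows (our own illustration on a discretised universe; the thesis extends this idea to type-2 sets):

```python
import numpy as np

# Background sketch: the classical type-1 alpha-cut decomposition. The
# alpha-cut of a fuzzy set A at level alpha is the crisp set
# {x : mu_A(x) >= alpha}; A is recovered as mu_A(x) = sup {alpha : x in A_alpha}.

x = np.linspace(0.0, 10.0, 1001)
mu = np.maximum(0.0, 1.0 - np.abs(x - 5.0) / 2.0)  # triangular fuzzy set

alphas = np.linspace(0.01, 1.0, 100)
cuts = {a: x[mu >= a] for a in alphas}  # crisp alpha-cuts

# Reconstruct the membership function from its alpha-cuts.
reconstructed = np.zeros_like(mu)
for a, cut in cuts.items():
    mask = np.isin(x, cut)
    reconstructed[mask] = np.maximum(reconstructed[mask], a)

# Maximum reconstruction error is about the alpha grid spacing (0.01).
print(np.max(np.abs(mu - reconstructed)))
```

    Because each alpha-cut is processed independently of the others in the loop above, the per-alpha computations can be dispatched in parallel, which is the property exploited for GPU processing mentioned in the abstract.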

    Data granulation by the principles of uncertainty

    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, which are all suitable for characterizing so-called information granules. Modeling of the input data uncertainty is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as those of probability theory, fuzzy set theory, and possibility theory. This fact suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property, to be exploited in practical situations of information granulation. In this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, such principles can be applied regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. To provide a suitable case study, we introduce a new data granulation technique based on the minimum sum of distances, which is designed to generate type-2 fuzzy sets. We analyze the procedure by performing different experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models.
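
    As a toy illustration of the monotonicity principle described above (our own simplification, not the paper's minimum-sum-of-distances technique), one can granulate 1-D data into an interval granule and check that a Klir-style nonspecificity measure of the granule grows with the spread of the input data:

```python
import numpy as np

# Toy illustration of the monotonicity principle: the uncertainty of the
# synthesized granule should increase with the uncertainty of the input data.
# The percentile-interval granulation rule below is an illustrative choice.

def interval_granule(data, coverage=0.9):
    """Build an interval granule covering the central `coverage` fraction."""
    lo, hi = np.percentile(data, [(1 - coverage) * 50, 100 - (1 - coverage) * 50])
    return lo, hi

def nonspecificity(granule):
    """Hartley-like nonspecificity of an interval: log(1 + width)."""
    lo, hi = granule
    return np.log1p(hi - lo)

rng = np.random.default_rng(0)
for spread in (0.5, 1.0, 2.0, 4.0):
    data = rng.normal(loc=0.0, scale=spread, size=1000)
    g = interval_granule(data)
    print(f"input std {spread:>4}: granule width {g[1] - g[0]:6.2f}, "
          f"nonspecificity {nonspecificity(g):5.2f}")
```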

    Workshop on Fuzzy Control Systems and Space Station Applications

    The Workshop on Fuzzy Control Systems and Space Station Applications was held on 14-15 November 1990. The workshop was co-sponsored by McDonnell Douglas Space Systems Company and NASA Ames Research Center. Proceedings of the workshop are presented.

    Implication functions in interval-valued fuzzy set theory

    Interval-valued fuzzy set theory is an extension of fuzzy set theory in which the real, but unknown, membership degree is approximated by a closed interval of possible membership degrees. Since implications on the unit interval play an important role in fuzzy set theory, several authors have extended this notion to interval-valued fuzzy set theory. This chapter gives an overview of the results pertaining to implications in interval-valued fuzzy set theory. In particular, we describe several possibilities for representing such implications using implications on the unit interval; we give a characterization of the implications in interval-valued fuzzy set theory which satisfy the Smets-Magrez axioms; we discuss the solutions of a particular distributivity equation involving strict t-norms; we extend monoidal logic to the interval-valued fuzzy case and give a soundness and completeness theorem similar to the one existing for monoidal logic; and finally we discuss some other constructions of implications in interval-valued fuzzy set theory.
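
    One commonly discussed construction, sketched below with illustrative naming, lifts an implication on the unit interval to interval-valued membership degrees by evaluating it at the interval endpoints; the chapter surveys several such representations and their properties, of which this is only one.

```python
# Sketch of one construction for interval-valued implications. Because an
# implication on [0, 1] is decreasing in its first argument and increasing
# in its second, evaluating it at the endpoints below yields the tightest
# interval containing all pointwise results.

def lukasiewicz(x, y):
    """Lukasiewicz implication on [0, 1]."""
    return min(1.0, 1.0 - x + y)

def interval_implication(imp, x_iv, y_iv):
    x_lo, x_hi = x_iv
    y_lo, y_hi = y_iv
    return (imp(x_hi, y_lo), imp(x_lo, y_hi))

# Antecedent membership known only to lie in [0.4, 0.7], consequent in [0.2, 0.5]:
print(interval_implication(lukasiewicz, (0.4, 0.7), (0.2, 0.5)))  # (0.5, 1.0)
```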

    Combining rough and fuzzy sets for feature selection


    Towards Better Performance in the Face of Input Uncertainty while Maintaining Interpretability in AI

    Uncertainty is a pervasive element of many real-world applications, and very often existing sources of uncertainty (e.g. atmospheric conditions, economic parameters or the precision of measurement devices) have a detrimental impact on the inputs and ultimately the results of decision-support systems. Thus, the ability to handle input uncertainty is a valuable component of real-world decision-support systems. There is a vast amount of literature on the handling of uncertainty in decision-support systems. While such systems handle uncertainty and deliver good performance, providing insight into the decision process (e.g. why or how results are produced) is another important asset, both for establishing trust in and for 'debugging' given decisions. Fuzzy set theory provides the basis for Fuzzy Logic Systems (FLSs), which are often associated with the ability to handle uncertainty and with mechanisms that provide a degree of interpretability. Specifically, Non-Singleton Fuzzy Logic Systems are essential in dealing with uncertainty that affects the inputs, which is one of the main sources of uncertainty in real-world systems. Therefore, in this thesis, we comprehensively explore enhancing the capabilities of non-singleton fuzzy logic systems, considering both the capturing and handling of uncertainty and the maintenance of interpretability. To that end, the following three key aspects are investigated: (i) faithfully mapping input uncertainty to the outputs of systems; (ii) proposing a new framework that provides the ability to dynamically adapt systems on the fly in changing real-world environments; and (iii) maintaining the level of interpretability while improving the performance of systems.

    The first aspect is to improve the mapping of uncertainty from the inputs to the outputs of systems through the interaction between input and antecedent fuzzy sets, i.e. the firing strengths. In the context of Non-Singleton Fuzzy Logic Systems, recent studies have shown that the standard technique for determining firing strengths risks information loss in terms of the interaction of the input uncertainty and the antecedent fuzzy sets. This thesis explores and puts forward novel approaches to generating firing strengths which faithfully map the uncertainty affecting system inputs to the outputs. Time-series forecasting experiments are used to evaluate the proposed alternative firing-strength generating technique under different levels of input uncertainty. The analysis of the results shows that the proposed approach can be a suitable method for generating appropriate firing levels, providing the ability to map the different uncertainty levels that are likely to occur in real-world circumstances from the input to the output of the FLS.

    The second aspect is to provide dynamic adaptive behaviour to systems at run-time under changing conditions, which are common in real-world environments. Traditionally, in the fuzzification step of Non-Singleton Fuzzy Logic Systems, approaches are generally limited to the selection of a single type of input fuzzy set to capture the input uncertainty, whereas input uncertainty levels tend to vary inherently over time in the real world. Thus, in this thesis, input uncertainty is modelled, where it specifically arises, in an online manner, providing adaptive behaviour that captures varying input uncertainty levels. A framework, the ADaptive Online Non-singleton fuzzy logic System (ADONiS), is presented to generate Type-1 or Interval Type-2 input fuzzy sets. In the proposed framework, an uncertainty estimation technique is applied to a sequence of observations to continuously update the input fuzzy sets of non-singleton fuzzy logic systems. Both the type-1 and interval type-2 versions of the ADONiS framework remove the limitation of selecting a specific type of input fuzzy set. The framework also enables input fuzzy sets to be adapted to unknown uncertainty levels which are not perceived at the design stage of the model. Time-series forecasting experiments are carried out, and the results show that our proposed framework provides performance advantages over traditional counterpart approaches, particularly in environments that include high variation in noise levels, which are common in real-world applications. In addition, a real-world medical application study is designed to test the deployability of the ADONiS framework and to provide initial insight into its viability in replacing traditional approaches.

    The third aspect is to maintain the level of interpretability while increasing the performance of systems. When a decision-support model delivers good performance, providing insight into the decision process is also an important asset in terms of trustworthiness, safety and ethical considerations. Fuzzy logic systems are considered to possess mechanisms which can provide a degree of interpretability. Traditionally, while optimisation procedures provide performance benefits in fuzzy logic systems, they often cause alterations in components (e.g. the rule set, parameters, or fuzzy partitioning structures) which can lead to higher accuracy but commonly do not consider the interpretability of the resulting model. In this thesis, the state of the art in fuzzy logic system interpretability is advanced by capturing input uncertainty in the fuzzification step, where it arises, and by handling it in the inference engine step. In doing so, while a performance increase is achieved, the proposed methods limit any optimisation impact to the fuzzification and inference engine steps, which protects key components of FLSs (e.g. fuzzy sets, rule parameters) and provides the ability to maintain the given level of interpretability.
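
    A highly simplified sketch of the kind of online adaptation described above is given below; the sliding-window length and the standard-deviation-based uncertainty estimate are our own illustrative choices rather than the ADONiS design.

```python
import math
import random
import statistics
from collections import deque

# Highly simplified sketch (illustrative choices throughout): the width of a
# Gaussian input fuzzy set is continuously re-estimated from a sliding window
# of recent observations, so the fuzzifier tracks changing input noise levels.

class AdaptiveNonSingletonInput:
    def __init__(self, window_size=20, min_sigma=1e-3):
        self.window = deque(maxlen=window_size)
        self.min_sigma = min_sigma

    def observe(self, value):
        """Record a raw observation and return the current input fuzzy set
        as (centre, sigma) of a Gaussian membership function."""
        self.window.append(value)
        sigma = self.min_sigma
        if len(self.window) >= 2:
            # Recent sample standard deviation as the uncertainty estimate;
            # an ADONiS-style framework would plug its own estimator in here.
            sigma = max(self.min_sigma, statistics.stdev(self.window))
        return value, sigma

    @staticmethod
    def membership(x, centre, sigma):
        return math.exp(-0.5 * ((x - centre) / sigma) ** 2)

# A constant true value observed under noise whose level increases halfway
# through the stream; the estimated sigma adapts accordingly.
random.seed(0)
fuzzifier = AdaptiveNonSingletonInput()
for t in range(200):
    noise = random.gauss(0.0, 0.05 if t < 100 else 0.3)
    centre, sigma = fuzzifier.observe(5.0 + noise)
print(round(sigma, 3))  # reflects the higher noise level in the second half
```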