
    An improvised similarity measure for generalized fuzzy numbers

    A similarity measure between two fuzzy sets is an important tool for comparing their characteristics. It is often preferred over distance-based methods, since the defuzzification involved in computing a distance between fuzzy sets incurs a loss of information. Many similarity measures have been introduced, but most of them cannot discriminate between certain types of fuzzy numbers. In this paper, an improvised similarity measure for generalized fuzzy numbers that incorporates several essential features is proposed. The features under consideration are geometric mean averaging, the Hausdorff distance, the distance between elements, the distance between centers of gravity, and the Jaccard index. The new similarity measure is validated on benchmark sample sets. The proposed measure is found to be consistent with existing methods, with the advantage of solving some discrimination problems that other methods cannot. An analysis of the advantages of the improvised similarity measure is presented and discussed. The proposed similarity measure can be incorporated into decision-making procedures in fuzzy environments for ranking purposes.
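
    The abstract names the components but not how they are combined, so the following is only a minimal sketch, assuming generalized trapezoidal fuzzy numbers (a1, a2, a3, a4; w) and a geometric mean over four component similarities; the paper's exact formula may differ, and all function names are illustrative.

```python
import numpy as np

def trap_mu(x, a1, a2, a3, a4, w=1.0):
    """Membership of a generalized trapezoidal fuzzy number (a1,a2,a3,a4; w)."""
    y = np.zeros_like(x, dtype=float)
    up = (x >= a1) & (x < a2)
    if a2 > a1:
        y[up] = w * (x[up] - a1) / (a2 - a1)
    y[(x >= a2) & (x <= a3)] = w
    down = (x > a3) & (x <= a4)
    if a4 > a3:
        y[down] = w * (a4 - x[down]) / (a4 - a3)
    return y

def similarity(A, B, n=1001):
    """Geometric mean of several component similarities (illustrative mix)."""
    lo, hi = min(A[0], B[0]), max(A[3], B[3])
    span = max(hi - lo, 1e-12)
    x = np.linspace(lo, hi, n)
    muA, muB = trap_mu(x, *A), trap_mu(x, *B)
    # Jaccard index on the discretized membership functions
    jac = np.minimum(muA, muB).sum() / max(np.maximum(muA, muB).sum(), 1e-12)
    # distance between corresponding endpoints (normalized)
    d_el = 1 - sum(abs(a - b) for a, b in zip(A[:4], B[:4])) / (4 * span)
    # Hausdorff-style distance on the endpoints
    d_h = 1 - max(abs(a - b) for a, b in zip(A[:4], B[:4])) / span
    # distance between numerically computed centers of gravity
    cogA = (x * muA).sum() / max(muA.sum(), 1e-12)
    cogB = (x * muB).sum() / max(muB.sum(), 1e-12)
    d_c = 1 - abs(cogA - cogB) / span
    return (jac * d_el * d_h * d_c) ** 0.25   # geometric mean averaging

A = (0.1, 0.2, 0.3, 0.4, 1.0)   # (a1, a2, a3, a4; w)
B = (0.2, 0.3, 0.4, 0.5, 1.0)
print(similarity(A, B))
```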

    A New Method for Defuzzification and Ranking of Fuzzy Numbers Based on the Statistical Beta Distribution

    Granular computing is an emerging computing theory and paradigm that deals with the processing of information granules, which are defined as a number of information entities grouped together due to their similarity, physical adjacency, or indistinguishability. In most aspects of human reasoning, these granules have an uncertain formation, so the concept of granularity of fuzzy information could be of special interest for applications where fuzzy sets must be converted to crisp sets to avoid uncertainty. This paper proposes a novel method of defuzzification based on the mean value of the statistical Beta distribution, and an algorithm for ranking fuzzy numbers based on the crisp number ranking system on ℝ. The proposed method is quite easy to use, but the main reason for following this approach is the equality of the left spread, right spread, and mode of the Beta distribution with their corresponding values in fuzzy numbers within the (0,1) interval, in addition to the fact that the resulting method satisfies all reasonable properties of fuzzy quantity ordering defined by Wang et al. The algorithm is illustrated through several numerical examples and is then compared with other methods from the literature.
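
    The abstract leaves the fitting details to the paper; below is one plausible reading as a sketch, assuming a triangular fuzzy number (a, m, b), a Beta density fitted so its mode matches the rescaled mode, and defuzzification by the Beta mean α/(α+β). The concentration parameter c is an illustrative free choice, not from the paper.

```python
def beta_defuzzify(a, m, b, c=2.0):
    """Defuzzify a triangular fuzzy number (a, m, b) via the mean of a
    Beta distribution fitted on [a, b] with its mode at m.

    With alpha = 1 + c*m' and beta = 1 + c*(1 - m'), where
    m' = (m - a)/(b - a), the Beta mode (alpha-1)/(alpha+beta-2) equals m'.
    The crisp value is the Beta mean alpha/(alpha+beta) mapped back to [a, b].
    """
    m_prime = (m - a) / (b - a)
    alpha = 1.0 + c * m_prime
    beta = 1.0 + c * (1.0 - m_prime)
    mean = alpha / (alpha + beta)          # mean of Beta(alpha, beta) on (0, 1)
    return a + (b - a) * mean

# Ranking fuzzy numbers by their crisp images on the real line
fns = [(1, 2, 4), (1, 3, 4), (0, 2, 5)]
print(sorted(fns, key=lambda t: beta_defuzzify(*t)))
```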

    How to Treat Expert Judgment? With certainty it contains uncertainty!

    To be acceptably safe, one must identify the risks one is exposed to. It is uncertain whether a threat will really materialize, but determining the size and probability of the risk is also full of uncertainty. When performing an analysis and preparing for decision making under uncertainty, one quite frequently lacks failure rate data, information on consequence severity, or a probability value, or even knowledge of whether an event can occur at all. In those cases, the only way to proceed is to revert to expert judgment. Even when historical data are available but one would like to know whether these data still hold in the current situation, an expert can be asked about their reliability. In any case, expert elicitation comes with an uncertainty that depends on the expert's reliability, which becomes very visible when two or more experts give different or even conflicting answers. This is not a new problem, and very bright minds have thought about how to tackle it. So far, however, the topic has not been given much attention in process safety and risk assessment. The paper has a review character and will present various approaches with detailed explanation and examples.

    Evolving Clustering Algorithms And Their Application For Condition Monitoring, Diagnostics, & Prognostics

    Applications of Condition-Based Maintenance (CBM) technology require effective yet generic data-driven methods capable of carrying out diagnostics and prognostics tasks without detailed domain knowledge or human intervention. With the widespread deployment of CBM, improved system availability, operational safety, and enhanced logistics and supply-chain performance could be achieved at a lower cost. This dissertation focuses on the development of a Mutual Information based Recursive Gustafson-Kessel-Like (MIRGKL) clustering algorithm, which operates recursively to identify the underlying model structure and parameters from streaming data. Inspired by the Evolving Gustafson-Kessel-Like (eGKL) clustering algorithm, we apply the notion of mutual information to the well-known Mahalanobis distance as the governing similarity measure throughout. This is also a special case of the Kullback-Leibler (KL) divergence in which between-cluster shape information (governed by the determinant and trace of the covariance matrix) is omitted, and it is applicable only to normally distributed data. In the cluster assignment and consolidation process, we propose the use of the Chi-square statistic with the provision of different probability thresholds. Due to the symmetry and boundedness brought in by the mutual information formulation, we show with real-world data that the algorithm's performance is less sensitive across the same range of probability thresholds, which makes system tuning a simpler task in practice. The improvements demonstrated by the proposed algorithm therefore have implications for generic data-driven methods for diagnostics, prognostics, function approximation, and knowledge extraction from streaming data. The work in this dissertation demonstrates MIRGKL's effectiveness in clustering and knowledge representation and shows promising results in diagnostics and prognostics applications.
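
    MIRGKL itself is specified in the dissertation; the sketch below shows only the generic mechanism the abstract describes: assign a streamed sample to a cluster when its squared Mahalanobis distance passes a Chi-square quantile gate, otherwise spawn a new cluster, with recursive mean/covariance updates. The identity covariance initialization and update rule are simplifications, and all names are illustrative.

```python
import numpy as np
from scipy.stats import chi2

class StreamClusterer:
    """Evolving clustering sketch: Chi-square gate on Mahalanobis distance."""

    def __init__(self, dim, p=0.99):
        self.dim = dim
        self.threshold = chi2.ppf(p, df=dim)   # squared-distance gate
        self.clusters = []                     # each: {'mu', 'cov', 'n'}

    def update(self, x):
        # find the nearest cluster in squared Mahalanobis distance
        best_idx, best_d2 = -1, np.inf
        for i, c in enumerate(self.clusters):
            diff = x - c['mu']
            d2 = diff @ np.linalg.inv(c['cov']) @ diff
            if d2 < best_d2:
                best_idx, best_d2 = i, d2
        if best_idx < 0 or best_d2 > self.threshold:
            # no existing cluster explains the sample: spawn a new one
            self.clusters.append({'mu': x.astype(float).copy(),
                                  'cov': np.eye(self.dim), 'n': 1})
            return len(self.clusters) - 1
        # recursive (incremental) mean and covariance update
        c = self.clusters[best_idx]
        c['n'] += 1
        eta = 1.0 / c['n']
        diff = x - c['mu']
        c['mu'] = c['mu'] + eta * diff
        c['cov'] = (1 - eta) * c['cov'] + eta * np.outer(diff, diff)
        return best_idx

rng = np.random.default_rng(0)
stream = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(6, 1, (200, 2))])
sc = StreamClusterer(dim=2)
labels = [sc.update(x) for x in stream]
print(len(sc.clusters), "clusters found")
```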

    Data granulation by the principles of uncertainty

    Research in granular modeling has produced a variety of mathematical models, such as intervals, (higher-order) fuzzy sets, rough sets, and shadowed sets, all of which are suitable for characterizing so-called information granules. Modeling the uncertainty of the input data is recognized as a crucial aspect of information granulation. Moreover, uncertainty is a well-studied concept in many mathematical settings, such as probability theory, fuzzy set theory, and possibility theory. This suggests that an appropriate quantification of the uncertainty expressed by the information granule model could be used to define an invariant property, to be exploited in practical situations of information granulation. In this perspective, a procedure of information granulation is effective if the uncertainty conveyed by the synthesized information granule is in a monotonically increasing relation with the uncertainty of the input data. In this paper, we present a data granulation framework that elaborates on the principles of uncertainty introduced by Klir. Since uncertainty is a mesoscopic descriptor of systems and data, these principles can be applied regardless of the input data type and the specific mathematical setting adopted for the information granules. The proposed framework is conceived (i) to offer a guideline for the synthesis of information granules and (ii) to build a groundwork for comparing and quantitatively judging different data granulation procedures. As a case study, we introduce a new data granulation technique based on the minimum sum of distances, designed to generate type-2 fuzzy sets. We analyze the procedure through experiments on two distinct data types: feature vectors and labeled graphs. Results show that the uncertainty of the input data is suitably conveyed by the generated type-2 fuzzy set models.
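
    The paper's minimum-sum-of-distances procedure is only named in the abstract, so the following is a loose illustration under stated assumptions, not the authors' algorithm: the granule prototype is taken as the medoid (the sample minimizing the sum of distances), and an interval-valued membership, in the spirit of a type-2 footprint of uncertainty, is obtained from the spread of distance-based memberships across bootstrap subsamples. All names are hypothetical.

```python
import numpy as np

def medoid(X):
    """Prototype by the minimum-sum-of-distances rule: the sample whose
    total distance to all others is smallest."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return X[D.sum(axis=1).argmin()]

def interval_membership(X, n_boot=50, seed=0):
    """Illustrative interval-valued membership per sample: the spread of a
    distance-based membership across bootstrap subsamples acts as a crude
    footprint of uncertainty (not the paper's construction)."""
    rng = np.random.default_rng(seed)
    grades = np.empty((n_boot, len(X)))
    for b in range(n_boot):
        idx = rng.choice(len(X), size=len(X), replace=True)
        m = medoid(X[idx])
        d = np.linalg.norm(X - m, axis=1)
        grades[b] = 1.0 / (1.0 + d / (d.mean() + 1e-12))
    return grades.min(axis=0), grades.max(axis=0)   # lower/upper membership

X = np.random.default_rng(1).normal(size=(100, 3))
lo, hi = interval_membership(X)
print(f"mean interval width: {(hi - lo).mean():.3f}")
```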

    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that can hinder model interpretation. For example, in steel manufacturing it is vital to understand the complex mechanisms by which the heat treatment process generates the mechanical properties. This knowledge is not available via numerical models, so an experienced metallurgist estimates the model parameters needed to obtain the required properties. This human knowledge and perception can be imprecise, leading to cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may translate into a system deficiency: for example, small changes in system attributes may result in a sudden and inappropriate change of class assignment. To address this issue, practitioners and researchers have developed systems that are functionally equivalent to fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability to reason via the qualitative aspects of fuzzy information rather than by its quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, like neural networks, a neural fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the adaptive approaches applied for its parameter identification. Since the RBF-NN can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty source in relation to the RBF-NN are studied, namely entropy, fuzziness, and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets, and the RBF-NN is presented. The objective of this methodology is to quantify, via neutrosophic sets, the hesitation produced during granular compression at the low level of interpretability of the RBF-NN. This study also aims to enhance the distinguishability, and hence the transparency, of the initial fuzzy partition. The effectiveness of the proposed methodology is tested on a real case study: the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a new modelling framework. The IT2-RBF-NN exploits the functional equivalence between type-1 FLSs and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gives rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study of uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify (a) the fuzziness and (b) the ambiguity at each RU during the formation of the rule base, via neutrosophic set theory. The aim of this methodology is to calculate the fuzziness associated with each rule, and then the ambiguity related to each normalised consequence of the fuzzy rules that results from overlapping and from choices with one-to-many decisions, respectively. On the other hand, the second study proposes a new methodology to quantify the entropy and fuzziness arising from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to two well-known benchmark data sets and to the prediction of mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference.
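
    The thesis develops its own neutrosophic quantities; as a concrete anchor for the notion of per-rule fuzziness mentioned above, the classical De Luca-Termini fuzziness index can be sketched as follows. Applying it to the firing strengths of an RBF unit is an assumption for illustration, not the thesis's exact construction.

```python
import numpy as np

def fuzziness(mu, eps=1e-12):
    """Normalized De Luca-Termini fuzziness of membership grades in [0, 1]:
    0 for crisp grades (all 0 or 1), 1 when every grade equals 0.5."""
    mu = np.clip(mu, eps, 1 - eps)
    h = -(mu * np.log(mu) + (1 - mu) * np.log(1 - mu))
    return h.mean() / np.log(2)

# Firing strengths of one RBF unit (Gaussian receptive field) over a batch;
# the centre and width values are illustrative.
x = np.linspace(-3, 3, 200)
mu = np.exp(-0.5 * ((x - 0.0) / 1.0) ** 2)
print(f"rule fuzziness: {fuzziness(mu):.3f}")
```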

    BENCHMARKING CLASSIFIERS - HOW WELL DOES A GOWA-VARIANT OF THE SIMILARITY CLASSIFIER DO IN COMPARISON WITH SELECTED CLASSIFIERS?

    Digital data is ubiquitous in nearly all modern businesses, and organizations have more data available, in more formats, than ever before. Machine learning algorithms and predictive analytics exploit the knowledge contained in that data to support business-related decision making. This study explores predictive analytics by comparing different classification methods, the main interest being the Generalized Ordered Weighted Average (GOWA) variant of the similarity classifier. The target of this research is to find out what the GOWA-variant of the similarity classifier is and how well it performs compared to other selected classifiers. The study also investigates whether the GOWA-variant of the similarity classifier is a sufficient method for business-related decision making. Four classical classifiers were selected as reference classifiers on the basis of their common usage in machine learning research and their availability in the Statistics and Machine Learning Toolbox in MATLAB. Three data sets from the UCI Machine Learning Repository were used to benchmark the classifiers. The benchmarking process uses a fitness function, rather than pure classification accuracy, to determine the performance of the classifiers; the fitness function combines several measurement criteria into one common value. With one data set, the GOWA-variant of the similarity classifier performed the best. One of the data sets contains credit card client data; it was more complex than the other two and clearly contains business-related data. The GOWA-variant also performed well with this data set. It can therefore be claimed that the GOWA-variant of the similarity classifier is a viable option for solving business-related problems.
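
    The GOWA operator itself has a standard closed form: sort the arguments in descending order into b, then take a power mean with OWA weights, GOWA(a) = (Σ_j w_j b_j^λ)^(1/λ). A minimal sketch follows; the weight vector and λ are caller-supplied, and how the classifier embeds this aggregation into its similarity computation is detailed in the thesis, not here.

```python
import numpy as np

def gowa(a, w, lam=1.0):
    """Generalized Ordered Weighted Averaging:
    GOWA(a) = (sum_j w_j * b_j**lam) ** (1/lam), with b = a sorted descending.
    lam=1 gives plain OWA; larger lam emphasizes the larger arguments."""
    a, w = np.asarray(a, float), np.asarray(w, float)
    assert np.isclose(w.sum(), 1.0) and len(a) == len(w)
    b = np.sort(a)[::-1]                    # descending order statistics
    return (w @ b**lam) ** (1.0 / lam)

sims = [0.9, 0.7, 0.4]     # e.g. per-feature similarities to a class ideal
print(gowa(sims, w=[0.5, 0.3, 0.2], lam=2.0))
```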

    A multi-attribute decision making procedure using fuzzy numbers and hybrid aggregators

    The classical Analytical Hierarchy Process (AHP) has two limitations. Firstly, it disregards the uncertainty that is usually embedded in data or information expressed by humans. Secondly, it ignores interdependencies among attributes during aggregation. The application of fuzzy numbers helps confront the former issue, whereas the Choquet integral operator helps deal with the latter. However, applying fuzzy numbers to multi-attribute decision making (MADM) demands additional steps and inputs from the decision maker(s). Similarly, identifying monotone measure weights prior to employing the Choquet integral requires a huge number of computational steps and a large amount of input from decision makers, especially as the number of attributes grows. Therefore, this research proposes a MADM procedure that is able to reduce the number of computational steps and the amount of information required from decision makers when dealing with these two aspects simultaneously. To attain the primary goal of this research, five phases were executed. First, the concept of fuzzy set theory and its application in AHP were investigated. Second, an analysis of aggregation operators was conducted. Third, the investigation was narrowed to the Choquet integral and its associated monotone measure. Subsequently, the proposed procedure was developed as a convergence of five major components, namely Factor Analysis, the Fuzzy-Linguistic Estimator, the Choquet Integral, Mikhailov's Fuzzy AHP, and the Simple Weighted Average. Finally, the feasibility of the proposed procedure was verified by solving a real MADM problem in which the image of three stores located in Sabak Bernam, Selangor, Malaysia was analysed from the homemakers' perspective. This research has the potential to motivate more decision makers to simultaneously account for uncertainty in human data and interdependencies among attributes when solving MADM problems.
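
    The discrete Choquet integral with respect to a monotone (fuzzy) measure is standard and can be sketched directly. The measure dictionary below is a toy assumption; identifying such a measure is exactly the costly step the procedure above aims to reduce.

```python
def choquet(scores, measure):
    """Discrete Choquet integral of attribute scores w.r.t. a monotone
    measure given as {frozenset_of_attributes: weight}, where the full
    attribute set maps to 1 and the empty set to 0."""
    items = sorted(scores, key=scores.get)            # ascending by score
    total, prev = 0.0, 0.0
    for i, it in enumerate(items):
        coalition = frozenset(items[i:])              # attrs scoring >= current
        total += (scores[it] - prev) * measure[coalition]
        prev = scores[it]
    return total

# Toy example: two interacting attributes (price, quality) and a third (service)
scores = {'price': 0.6, 'quality': 0.8, 'service': 0.3}
measure = {
    frozenset(): 0.0,
    frozenset({'price'}): 0.3, frozenset({'quality'}): 0.4,
    frozenset({'service'}): 0.2,
    frozenset({'price', 'quality'}): 0.6,   # sub-additive: redundancy
    frozenset({'price', 'service'}): 0.6,
    frozenset({'quality', 'service'}): 0.7,
    frozenset({'price', 'quality', 'service'}): 1.0,
}
print(choquet(scores, measure))
```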