10 research outputs found

    Hierarchical Fuzzy Systems: Interpretability and Complexity

    Hierarchical fuzzy systems (HFSs) have been regarded as a useful solution to the major issue in fuzzy logic systems (FLSs): rule explosion as the number of input variables grows. In an HFS, the standard FLS is restructured into a network of low-dimensional FLS subsystems. Moreover, the rules in an HFS usually have antecedents with fewer variables than the rules in a functionally equivalent standard FLS, because each subsystem has fewer input variables. Consequently, HFSs curb rule explosion, which reduces complexity and improves model interpretability. Nevertheless, the question "Does the complexity reduction of HFSs that have multiple subsystems, layers and different topologies really improve their interpretability?" remains open. In this paper, a comparison focusing on interpretability and complexity is made between two HFS topologies: parallel and serial. A detailed measurement of interpretability and complexity under different configurations of both topologies is provided. This comparative study aims to examine the correlation between interpretability and complexity in HFSs.
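    The rule-explosion arithmetic behind this abstract can be sketched with a quick count. The sketch below assumes a complete rule base, m membership functions per variable, and a serial (incremental) hierarchy of two-input subsystems; the function names are illustrative, not code from the paper.

```python
def rules_standard(n_inputs: int, m: int) -> int:
    """Complete rule base of a flat FLS: one rule per combination of
    membership functions across all inputs, i.e. m ** n_inputs."""
    return m ** n_inputs

def rules_serial_hfs(n_inputs: int, m: int) -> int:
    """Serial (incremental) hierarchy of two-input subsystems: the first
    subsystem combines two inputs, and each later layer combines one new
    input with the previous intermediate output, giving (n - 1)
    subsystems with m ** 2 rules each."""
    return (n_inputs - 1) * m ** 2

# With 6 inputs and 3 membership functions per variable:
print(rules_standard(6, 3))    # 729 rules in the flat system
print(rules_serial_hfs(6, 3))  # 45 rules in the serial hierarchy
```

    The exponential-versus-linear growth in the number of rules is the complexity reduction that the paper then weighs against interpretability.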

    Fine-tuning the fuzziness of strong fuzzy partitions through PSO

    We study the influence of the fuzziness of trapezoidal fuzzy sets in the strong fuzzy partitions (SFPs) that constitute the database of a fuzzy rule-based classifier. To this end, we develop a particular representation of trapezoidal fuzzy sets based on the concept of cuts, which are the cross-points of fuzzy sets in an SFP and fix the positions of the fuzzy sets in the universe of discourse. In this way, it is possible to isolate the parameters that characterise the fuzziness of the fuzzy sets, which are subject to fine-tuning through particle swarm optimisation (PSO). In this paper, we propose a formulation of the parameter space that enables the exploration of all possible levels of fuzziness in an SFP. The experimental results show that the impact of fuzziness is strongly dependent on the defuzzification procedure used in fuzzy rule-based classifiers. Fuzziness has little influence in the case of winner-takes-all defuzzification, while it is more influential in weighted-sum defuzzification, which, however, may pose some interpretation problems.
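    A minimal sketch of the cut-based representation, under stated assumptions (each trapezoid is pinned by its cut points, with one fuzziness parameter per cut controlling the slope width); the function names are invented for illustration and this is not the authors' implementation. The defining property of an SFP, memberships summing to one everywhere, holds for any fuzziness level as long as the sloped regions of neighbouring cuts do not overlap.

```python
def sfp_memberships(x, cuts, fuzz, lo=0.0, hi=1.0):
    """Membership degrees of x in a strong fuzzy partition of [lo, hi]
    built from trapezoids.  cuts[i] is the cross-point between sets i
    and i+1; fuzz[i] is half the width of the sloped region around that
    cut (fuzz[i] = 0 gives a crisp boundary, larger values mean more
    fuzziness).  Assumes sloped regions of adjacent cuts do not overlap."""
    def rise(x, c, f):       # 0 -> 1 ramp centred on cut c
        if f == 0:
            return 1.0 if x >= c else 0.0
        return min(1.0, max(0.0, (x - (c - f)) / (2 * f)))
    k = len(cuts) + 1                      # number of fuzzy sets
    mu = []
    for i in range(k):
        left = rise(x, cuts[i - 1], fuzz[i - 1]) if i > 0 else 1.0
        right = 1.0 - rise(x, cuts[i], fuzz[i]) if i < k - 1 else 1.0
        mu.append(min(left, right))
    return mu

# Three sets on [0, 1] with cuts at 0.3 and 0.7 and different fuzziness:
cuts, fuzz = [0.3, 0.7], [0.1, 0.05]
for x in [0.0, 0.25, 0.3, 0.5, 0.68, 1.0]:
    assert abs(sum(sfp_memberships(x, cuts, fuzz)) - 1.0) < 1e-9
```

    In a PSO-based tuning of the kind the paper describes, the `fuzz` vector would be the particle position being optimised, while `cuts` stays fixed so the partition keeps its positions in the universe of discourse.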

    The Real-World-Semantics Interpretability of Linguistic Rule Bases and the Approximate Reasoning Method of Fuzzy Systems

    The real-world-semantics interpretability concept of fuzzy systems introduced in [1] is new for both methodology and application. It responds to the need for a mathematical basis on which to construct the computational semantics of linguistic words, so that a method that handles the computational semantics of linguistic terms, simulating a human method that handles words directly, can produce outputs similar to those produced by the human method. Since the real world of each application problem has its own structure, described by certain linguistic expressions, this requirement can be ensured by imposing constraints on the interpretation that assigns computational objects in an appropriate computational structure to the words, so that the relationships between the computational semantics in the computational structure are the image of the relationships between the real-world objects described by the word expressions. This study discusses the concept of real-world-semantics interpretability in more detail and points out that this requirement is a challenge for the study of the interpretability of fuzzy systems, especially for approaches within the fuzzy-set framework. A methodological challenge is that both the computational expression representing a given linguistic fuzzy rule base and the approximate reasoning method working on that expression must preserve the real-world semantics of the application problem. Fortunately, the hedge algebra (HA) based approach meets this expectation: the graphical representation of the rules of fuzzy systems and the interpolative reasoning method on them are able to preserve the real-world semantics of the real-world counterpart of the given application problem.

    Towards a framework for capturing interpretability of hierarchical fuzzy systems - a participatory design approach

    Hierarchical fuzzy systems (HFSs) have been shown to have the potential to improve the interpretability of fuzzy logic systems (FLSs). However, challenges remain, such as: "How can we measure their interpretability?" and "How can we make an informed assessment of how HFSs should be designed to enhance interpretability?". The challenges of measuring the interpretability of HFSs include issues such as their topological structure, the number of layers, the meaning of intermediate variables, and so on. In this paper, an initial framework to measure the interpretability of HFSs is proposed, combined with a participatory user-design process that creates a specific instance of the framework for an application context. This approach enables the subjective views of a range of practitioners, experts in the design and creation of FLSs, to be taken into account in shaping the design of a generic framework for measuring interpretability in HFSs. The design process and framework are demonstrated through two classification application examples, showing the ability of the resulting index to appropriately capture interpretability as perceived by system design experts.

    Design of a fuzzy logic software estimation process

    This thesis describes the design of a fuzzy logic software estimation process. Studies show that most projects finish over budget or later than the planned end date (Standish Group, 2009), even though software organizations have attempted to increase the success rate of software projects by making the process more manageable and, consequently, more predictable. Project estimation is an important issue because it is the basis for the allocation and management of the resources associated with a project. When the estimation process is not performed properly, this leads to higher risks in software projects, and organizations may end up with losses instead of the expected profits from their funded projects. The most important estimates need to be made in the very early phases of a project, when information is only available at a very high level of abstraction and is often based on a number of assumptions. The typical approach to estimating software projects in industry is based on the experience of the employees in the organization. There are a number of problems with using experience for estimation purposes: for instance, the way the estimate is obtained is only implicit, i.e. there is no consistent way to derive the estimated value, and the experience is tied to the experts, not to the organization. The research goal of this thesis is to design a software estimation process able to manage the lack of detailed and quantitative information in the early phases of the software development life cycle. The research approach aims to leverage the advantages of the experience-based approach, which can be used in the early phases of software estimation, while addressing some of the major problems this approach generates. The specific research objectives to be met by this improved software estimation process are:
A. The proposed estimation process must use relevant techniques to handle uncertainty and ambiguity in order to reflect the way practitioners make their estimates: it must use the variables that practitioners use.
B. The proposed estimation process must be useful in the early stages of the software development process.
C. The proposed estimation process needs to preserve the experience or knowledge base for the organization: this implies an easy way to define and capture the experience of the experts.
D. The proposed model must be usable by people with skills distinct from those of the people who configure its original context.
In this thesis, an estimation process based on fuzzy logic is proposed, referred to as 'Estimation of Projects in a Context of Uncertainty' (EPCU). The fuzzy logic approach was adopted because it is a formal way to manage the uncertainty and the linguistic variables observed in the early phases of a project, when the estimates need to be obtained: a fuzzy system makes it possible to capture the experience of the organization's experts via inference rules and to keep this experience within the organization. The experimentation phase typically presents a big challenge, in software engineering in particular, and more so since software project estimates must be made 'a priori': for verification purposes there is typically a large elapsed time between the initial estimate and the completion of the projects, at which point the 'true' values of effort, duration and cost can be known with certainty and the estimates can be checked. This thesis includes a number of experiments with data from the software industry in Mexico.
These experiments are organized into several scenarios, including one with the re-estimation of real projects completed in industry, using, for estimation purposes, only the information that was available at the beginning of these projects. The experimental results reported in this thesis show that, with the proposed fuzzy-logic-based estimation process, the estimates for these projects are better than those based on the expert-opinion approach. Finally, to handle the large number of calculations required by the EPCU estimation model, as well as the recording and management of the information it generates, a research prototype tool was designed and developed to perform the necessary calculations.
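    As a hedged illustration of how a fuzzy system can encode expert estimation rules as the abstract describes, the sketch below is a generic zero-order Sugeno inference, not the EPCU model itself; the variable name, term shapes and effort values are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Linguistic terms for one hypothetical input, "complexity", rated 0..10.
complexity_terms = {
    "low":    lambda x: tri(x, -1, 0, 5),
    "medium": lambda x: tri(x, 0, 5, 10),
    "high":   lambda x: tri(x, 5, 10, 11),
}

# Expert rules as (term, effort in person-months) pairs; the consequent
# values are invented, not taken from any EPCU calibration.
rules = [("low", 2.0), ("medium", 6.0), ("high", 14.0)]

def estimate_effort(x: float) -> float:
    """Zero-order Sugeno inference: weighted average of rule consequents,
    weighted by the firing strength of each rule's antecedent."""
    weights = [(complexity_terms[t](x), y) for t, y in rules]
    num = sum(w * y for w, y in weights)
    den = sum(w for w, _ in weights)
    return num / den if den else 0.0

print(estimate_effort(2.5))   # -> 4.0, halfway between "low" and "medium"
```

    The point of such a design is the one the thesis makes: the rules externalise expert judgement in a form the organization keeps, instead of leaving the estimation procedure implicit in individual experts' heads.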

    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that might hinder model interpretation. For example, in steel manufacturing it is vital to understand the complex mechanisms by which the heat treatment process produces the mechanical properties. This knowledge is not available via numerical models, so an experienced metallurgist estimates the model parameters needed to obtain the required properties. This human knowledge and perception can be imprecise, leading to cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may translate into a system deficiency: for example, small changes in the input attributes may result in a sudden and inappropriate change of class assignment. To address this issue, practitioners and researchers have developed systems that are functionally equivalent to fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability to reason via the qualitative aspects of fuzzy information rather than by quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, like neural networks, a neuro-fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the adaptive approaches applied for its parameter identification.
Since the RBF-NN can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty source in relation to the RBF-NN are studied, namely entropy, fuzziness and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets and the RBF-NN is presented. Its objective is to quantify the hesitation produced during granular compression at the low level of interpretability of the RBF-NN via the use of neutrosophic sets. This study also aims to enhance the distinguishability, and hence the transparency, of the initial fuzzy partition. The effectiveness of the proposed methodology is tested on a real case study for the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a modelling framework. The IT2-RBF-NN takes advantage of the functional equivalence between type-1 FLSs and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gives rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study of uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify (a) the fuzziness and (b) the ambiguity at each RU during the formation of the rule base via the use of neutrosophic set theory. The aim is to calculate the fuzziness associated with each rule, and then the ambiguity related to each normalised consequence of the fuzzy rules, which result from the overlapping of rules and from one-to-many decision choices respectively.
On the other hand, the second study proposes a new methodology to quantify the entropy and the fuzziness that arise from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to the modelling of two well-known benchmark data sets and to the prediction of mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference.

    Improving Understandability and Uncertainty Modeling of Data Using Fuzzy Logic Systems

    The need for automation, optimality and efficiency has made modern control and monitoring systems extremely complex and data-abundant. However, the complexity of the systems and the abundance of raw data have reduced the understandability and interpretability of the data, which results in reduced state awareness of the system. Furthermore, different levels of uncertainty introduced by sensors and actuators make interpreting and accurately manipulating systems difficult. Classical mathematical methods lack the capability to capture human knowledge and to increase understandability while modeling such uncertainty. Fuzzy logic has been shown to alleviate both of these problems by introducing logic based on vague terms that rely on human-understandable language. The use of linguistic terms and simple consequential rules increases the understandability of system behavior as well as of data. The use of vague terms and the modeling of data from non-discrete prototypes enable the modeling of uncertainty. However, due to recent trends, the primary research on fuzzy logic has diverged from the basic concept of understandability. Furthermore, the high computational cost of robust uncertainty modeling has restricted the use of such fuzzy systems in real-world applications. Thus, the goal of this dissertation is to present algorithms and techniques that improve understandability and uncertainty modeling using fuzzy logic systems. To achieve this goal, this dissertation presents the following major contributions: 1) a novel methodology for generating fuzzy membership functions based on understandability, 2) linguistic summarization of data using if-then consequential rules, and 3) novel shadowed type-2 fuzzy logic systems for uncertainty modeling. Finally, these techniques are applied to real-world systems and data to exemplify their relevance and usage.
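    Linguistic summarization of the kind listed as the second contribution can be sketched in Yager's basic form, "Q objects satisfy P", scored by a fuzzy quantifier. The piecewise-linear "most" quantifier, the data and the crisp predicate below are illustrative simplifications, not the dissertation's method.

```python
def most(p: float) -> float:
    """Membership of a proportion p in a piecewise-linear fuzzy
    quantifier 'most': 0 below 0.3, 1 above 0.8, linear in between."""
    if p >= 0.8:
        return 1.0
    if p <= 0.3:
        return 0.0
    return (p - 0.3) / 0.5

def summary_truth(data, predicate) -> float:
    """Truth degree of the linguistic summary 'Most records satisfy
    <predicate>'; a crisp predicate is used here for brevity, where a
    full treatment would use a fuzzy one."""
    p = sum(1 for d in data if predicate(d)) / len(data)
    return most(p)

durations = [3, 5, 4, 12, 6, 5, 4, 7]           # hypothetical task durations
t = summary_truth(durations, lambda d: d < 8)   # "Most tasks are short"
print(t)   # -> 1.0, since 7 of 8 records satisfy the predicate
```

    The appeal for understandability is that the output is a natural-language sentence with an attached truth degree, rather than an opaque statistic.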

    FINGRAMS analysis of fuzzy rule-based systems under interpretability and accuracy premises

    The objective of this project is to create a new paradigm for the comprehensibility analysis of fuzzy systems. Building on the FINGRAMS methodology, it identifies the selection, or non-selection, of fuzzy rules from a fuzzy rule-based system (FRBS) when these are optimised by a multi-objective genetic process that considers accuracy, interpretability and relevance. The proposed expert system is validated using nine data sets, two linguistic and two scatter-based fuzzy algorithms, four interpretability measures and two formulations of rule relevance. To this end, an expert system based on fuzzy rules is developed to analyse different views of interpretability, accuracy and relevance, together with statistical tests. The results reveal that the performance of the proposed expert system is superior to that of low-relevance rules.
    Departamento de Ingeniería de Sistemas y Automática. Máster en Investigación en Ingeniería de Procesos y Sistemas Industriales.