
    Uncertainty and Interpretability Studies in Soft Computing with an Application to Complex Manufacturing Systems

    In systems modelling and control theory, the benefits of applying neural networks have been extensively studied, particularly in manufacturing processes such as the prediction of the mechanical properties of heat-treated steels. However, modern industrial processes usually involve large amounts of data and a range of non-linear effects and interactions that can hinder model interpretation. In steel manufacturing, for example, understanding the complex mechanisms by which the heat treatment process produces the required mechanical properties is vital. This knowledge is not available via numerical models, so an experienced metallurgist estimates the model parameters needed to obtain the required properties. Such human knowledge and perception can be imprecise, leading to cognitive uncertainty such as vagueness and ambiguity when making decisions. In system classification, this may translate into a system deficiency: for example, small changes in the input attributes may result in a sudden and inappropriate change of class assignment. To address this issue, practitioners and researchers have developed systems that are functionally equivalent to both fuzzy systems and neural networks. Such systems provide a morphology that mimics the human ability to reason via the qualitative aspects of fuzzy information rather than by its quantitative analysis. Furthermore, these models are able to learn from data sets and to describe the associated interactions and non-linearities in the data. However, like neural networks, a neural fuzzy system may suffer from a loss of interpretability and transparency when making decisions, mainly due to the application of adaptive approaches for its parameter identification.
    Since the Radial Basis Function Neural Network (RBF-NN) can be treated as a fuzzy inference engine, this thesis presents several methodologies that quantify different types of uncertainty and their influence on the interpretability and transparency of the RBF-NN during its parameter identification. In particular, three kinds of uncertainty source in relation to the RBF-NN are studied, namely entropy, fuzziness and ambiguity. First, a methodology based on Granular Computing (GrC), neutrosophic sets and the RBF-NN is presented. The objective of this methodology is to quantify, via neutrosophic sets, the hesitation produced during granular compression at the low level of interpretability of the RBF-NN. This study also aims to enhance the distinguishability, and hence the transparency, of the initial fuzzy partition. The effectiveness of the proposed methodology is tested against a real case study for the prediction of the properties of heat-treated steels. Secondly, a new Interval Type-2 Radial Basis Function Neural Network (IT2-RBF-NN) is introduced as a new modelling framework. The IT2-RBF-NN takes advantage of the functional equivalence between type-1 Fuzzy Logic Systems (FLSs) and the RBF-NN to construct an Interval Type-2 Fuzzy Logic System (IT2-FLS) that is able to deal with linguistic uncertainty and perceptions in the RBF-NN rule base. This gives rise to different combinations when optimising the IT2-RBF-NN parameters. Finally, a twofold study for uncertainty assessment at the high level of interpretability of the RBF-NN is provided. On the one hand, the first study proposes a new methodology to quantify (a) the fuzziness and (b) the ambiguity at each receptive unit (RU) and during the formation of the rule base, via neutrosophic set theory. The aim of this methodology is to calculate the fuzziness associated with each rule, and then the ambiguity related to each normalised consequence of the fuzzy rules, which arise from rule overlapping and from one-to-many decision choices respectively. On the other hand, the second study proposes a new methodology to quantify the entropy and the fuzziness that arise from the redundancy phenomenon during parameter identification. To conclude this work, the experimental results obtained by applying the proposed methodologies to two well-known benchmark data sets and to the prediction of the mechanical properties of heat-treated steels led to the publication of three articles in two peer-reviewed journals and one international conference.
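    The thesis abstract above turns on the functional equivalence between the RBF-NN and a type-1 fuzzy inference system, and on measuring the fuzziness carried by each rule. The following Python fragment is a minimal illustrative sketch of that idea only, not the thesis's own implementation: each Gaussian receptive unit is read as a fuzzy rule, the normalised firing strengths play the role of rule weights, and a simple linear index scores their fuzziness. All function names, values and the fuzziness measure are assumptions made for illustration.

import numpy as np

def gaussian_membership(x, centres, widths):
    """Per-rule firing strength: product of Gaussian memberships over the inputs."""
    mu = np.exp(-0.5 * ((x - centres) / widths) ** 2)   # shape (n_rules, n_features)
    return mu.prod(axis=1)                              # shape (n_rules,)

def rbf_fuzzy_inference(x, centres, widths, consequents):
    """RBF-NN forward pass read as weighted-average (fuzzy) defuzzification."""
    w = gaussian_membership(x, centres, widths)
    w_norm = w / (w.sum() + 1e-12)                      # normalised firing strengths
    return float(w_norm @ consequents), w_norm

def fuzziness_index(mu):
    """Linear fuzziness: 0 for crisp memberships (0 or 1), maximal at mu = 0.5."""
    return float(np.mean(1.0 - np.abs(2.0 * mu - 1.0)))

# Toy example: 3 rules (receptive units) over 2 inputs
rng = np.random.default_rng(0)
centres = rng.uniform(0.0, 1.0, (3, 2))
widths = np.full((3, 2), 0.3)
consequents = np.array([10.0, 20.0, 30.0])
y, w_norm = rbf_fuzzy_inference(np.array([0.4, 0.6]), centres, widths, consequents)
print(y, fuzziness_index(w_norm))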

    FIN-DM: A Data Mining Process Model for Financial Services

    Data mining is a set of rules, processes, and algorithms that allow companies to increase revenues, reduce costs, optimize products and customer relationships, and achieve other business goals by extracting actionable insights from the data they collect on a day-to-day basis. Data mining and analytics projects require a well-defined methodology and well-defined processes. Several standard process models for conducting data mining and analytics projects are available. Among them, the most notable and widely adopted standard model is CRISP-DM. It is industry-agnostic and is often adapted to meet sector-specific requirements. Industry-specific adaptations of CRISP-DM have been proposed across several domains, including healthcare, education, industrial and software engineering, and logistics. However, until now there has been no adaptation of CRISP-DM for the financial services industry, which has its own set of domain-specific requirements. This PhD thesis addresses this gap by designing, developing, and evaluating a sector-specific data mining process for financial services (FIN-DM). The thesis investigates how standard data mining processes are used across various industry sectors and in financial services. The examination identified a number of adaptation scenarios of traditional frameworks. It also suggested that these approaches do not pay sufficient attention to turning data mining models into software products integrated into organizations' IT architectures and business processes. In the financial services domain, the main adaptation scenarios discovered concerned technology-centric aspects (scalability), business-centric aspects (actionability), and human-centric aspects (mitigating discriminatory effects) of data mining. Next, a case study in an actual financial services organization revealed 18 perceived gaps in the CRISP-DM process. Using the data and results from these studies, the thesis outlines an adaptation of CRISP-DM for the financial sector, named the Financial Industry Process for Data Mining (FIN-DM). FIN-DM extends CRISP-DM to support privacy-compliant data mining, to tackle AI ethics risks, to fulfill risk management requirements, and to embed quality assurance as part of the data mining life cycle. https://www.ester.ee/record=b547227
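    As a purely illustrative aid (the phase-to-extension mapping below is an assumption, not taken from the thesis), the FIN-DM concerns named in the abstract can be pictured as checkpoints attached to the standard CRISP-DM phases; the short Python sketch prints such a checklist.

CRISP_DM_PHASES = [
    "Business Understanding",
    "Data Understanding",
    "Data Preparation",
    "Modeling",
    "Evaluation",
    "Deployment",
]

# Hypothetical placement of the FIN-DM concerns named in the abstract;
# the actual FIN-DM model defines its own phases and tasks.
FIN_DM_EXTENSIONS = {
    "Data Preparation": ["privacy-compliant data handling"],
    "Modeling": ["AI ethics risk assessment"],
    "Evaluation": ["risk management requirements", "quality assurance"],
    "Deployment": ["integration into IT architecture and business processes"],
}

def fin_dm_checklist():
    """Yield each CRISP-DM phase with its (illustrative) FIN-DM add-ons."""
    for phase in CRISP_DM_PHASES:
        yield phase, FIN_DM_EXTENSIONS.get(phase, [])

if __name__ == "__main__":
    for phase, extras in fin_dm_checklist():
        print(f"{phase}: {', '.join(extras) if extras else 'standard CRISP-DM activities'}")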

    Long-term learning for type-2 neural-fuzzy systems

    The development of a new long-term learning framework for interval-valued neural-fuzzy systems is presented for the first time in this article. The need for such a framework is twofold: to address continuous batch learning of data sets, and to take advantage of the extra degree of freedom that type-2 Fuzzy Logic Systems offer for better model predictive ability. The presented long-term learning framework uses principles of granular computing (GrC) to capture information/knowledge from raw data in the form of interval-valued sets in order to build a computational mechanism that has the ability to adapt to new information in an additive and long-term learning fashion. The latter is to accommodate new input–output mappings and new classes of data without significantly disturbing existing input–output mappings, therefore maintaining existing performance while creating and integrating new knowledge (rules). This is achieved via an iterative algorithmic process, which involves a two-step operation: iterative rule-base growth (capturing new knowledge) and iterative rule-base pruning (removing redundant knowledge) for type-2 rules. The two-step operation helps create a growing but sustainable model structure. The performance of the proposed system is demonstrated using a number of well-known non-linear benchmark functions as well as a highly non-linear multivariate real industrial case study. Simulation results show that the performance of the original model structure is maintained and is comparable to the updated model's performance following the incremental learning routine. The study is concluded by evaluating the performance of the proposed framework in frequent and consecutive model updates, where the balance between model accuracy and complexity is further assessed.
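    The two-step growth/pruning loop described above can be pictured with a deliberately simplified sketch: a rule is added when no existing rule covers a new sample well, and rules whose average activation stays negligible are pruned. This is only a type-1, distance-based caricature of the framework; the interval type-2 machinery, the GrC-based granulation and all thresholds are omitted or assumed.

import numpy as np

class GrowingRuleBase:
    def __init__(self, add_threshold=0.3, prune_threshold=0.05, width=0.5):
        self.centres = []            # one centre per rule
        self.activation_sum = []     # running activation per rule
        self.samples_seen = 0
        self.add_threshold = add_threshold
        self.prune_threshold = prune_threshold
        self.width = width

    def _firing(self, x):
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centres])

    def update(self, x):
        """One incremental step: grow the rule base if the sample is poorly covered."""
        self.samples_seen += 1
        w = self._firing(x) if self.centres else np.array([])
        if w.size == 0 or w.max() < self.add_threshold:
            self.centres.append(np.asarray(x, dtype=float))   # grow: new rule at the sample
            self.activation_sum.append(0.0)
            w = self._firing(x)
        for i, wi in enumerate(w):
            self.activation_sum[i] += float(wi)

    def prune(self):
        """Remove rules whose mean activation has stayed negligible (redundant knowledge)."""
        keep = [i for i, s in enumerate(self.activation_sum)
                if s / max(self.samples_seen, 1) >= self.prune_threshold]
        self.centres = [self.centres[i] for i in keep]
        self.activation_sum = [self.activation_sum[i] for i in keep]

rb = GrowingRuleBase()
for x in np.random.default_rng(1).uniform(0.0, 1.0, (200, 2)):
    rb.update(x)
rb.prune()
print(len(rb.centres), "rules retained")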

    Evolving fuzzy and neuro-fuzzy approaches in clustering, regression, identification, and classification: A Survey

    Major assumptions in computational intelligence and machine learning consist of the availability of a historical dataset for model development, and that the resulting model will, to some extent, handle similar instances during its online operation. However, in many real-world applications these assumptions may not hold, as the amount of previously available data may be insufficient to represent the underlying system, and the environment and the system may change over time. As the amount of data increases, it is no longer feasible to process data efficiently using iterative algorithms, which typically require multiple passes over the same portions of data. Evolving modeling from data streams has emerged as a framework to address these issues properly by self-adaptation, single-pass learning steps, and evolution as well as contraction of model components on demand and on the fly. This survey focuses on evolving fuzzy rule-based models and neuro-fuzzy networks for clustering, classification, regression, and system identification in online, real-time environments where learning and model development should be performed incrementally.
    Igor Ơkrjanc, Jose Antonio Iglesias and Araceli Sanchis would like to thank the Chair of Excellence of Universidad Carlos III de Madrid and the Bank of Santander Program for their support. Igor Ơkrjanc is grateful to the Slovenian Research Agency for the research program P2-0219, Modeling, simulation and control. Daniel Leite acknowledges the Minas Gerais Foundation for Research and Development (FAPEMIG), process APQ-03384-18. Igor Ơkrjanc and Edwin Lughofer acknowledge the support of the "LCM-K2 Center for Symbiotic Mechatronics" within the framework of the Austrian COMET-K2 program. Fernando Gomide is grateful to the Brazilian National Council for Scientific and Technological Development (CNPq) for grant 305906/2014-3.
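    One building block behind the single-pass learning the survey emphasises is a recursive estimator that updates a rule's local linear consequent from a data stream without revisiting stored samples. The sketch below shows weighted recursive least squares in that spirit; it is a generic textbook formulation with invented names and an arbitrarily chosen forgetting factor, not the update rule of any particular method covered by the survey.

import numpy as np

class RecursiveLeastSquares:
    def __init__(self, n_params, forgetting=0.99, init_cov=1e3):
        self.theta = np.zeros(n_params)           # consequent parameters
        self.P = np.eye(n_params) * init_cov      # inverse-covariance-like matrix
        self.lam = forgetting

    def update(self, x, y, weight=1.0):
        """One stream sample (x, y); 'weight' can be a rule's firing strength."""
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = (weight * Px) / (self.lam + weight * (x @ Px))
        error = y - x @ self.theta
        self.theta = self.theta + gain * error
        self.P = (self.P - np.outer(gain, Px)) / self.lam

    def predict(self, x):
        return float(np.asarray(x, dtype=float) @ self.theta)

# Stream y = 2*x1 - x2 + 3, learned in a single pass
rls = RecursiveLeastSquares(n_params=3)
rng = np.random.default_rng(2)
for _ in range(500):
    x1, x2 = rng.uniform(-1.0, 1.0, 2)
    rls.update([x1, x2, 1.0], 2 * x1 - x2 + 3)
print(np.round(rls.theta, 2))   # approaches [2, -1, 3]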

    Fuzzy Logic in Decision Support: Methods, Applications and Future Trends

    During the last decades, the art and science of fuzzy logic have witnessed significant developments and have found applications in many active areas, such as pattern recognition, classification, and control systems. A large body of research has demonstrated the ability of fuzzy logic to deal with vague and uncertain linguistic information. For the purpose of representing human perception, fuzzy logic has been employed as an effective tool in intelligent decision making. Due to the emergence of various studies on fuzzy logic-based decision-making methods, it is necessary to provide a comprehensive overview of published papers in this field and their applications. This paper covers a wide range of both theoretical and practical applications of fuzzy logic in decision making. It is organised into five parts: to explain the role of fuzzy logic in decision making, we first present some basic ideas underlying different types of fuzzy logic and the structure of the fuzzy logic system. Then, we review evaluation methods, prediction methods, decision support algorithms, and group decision-making methods based on fuzzy logic. Applications of these methods are further reviewed. Finally, some challenges and future trends are given from different perspectives. This paper illustrates that the combination of fuzzy logic and decision-making methods has extensive research prospects, and it can help researchers identify the frontiers of fuzzy logic in the field of decision making.
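    As a small, hypothetical illustration of the kind of fuzzy-logic-based evaluation method the review covers, the sketch below rates alternatives on several criteria with triangular fuzzy numbers, aggregates them with a fuzzy weighted average, and defuzzifies the result to produce a ranking. The linguistic scale, criteria weights and alternative names are all invented for illustration.

TRIANGULAR_SCALE = {           # (low, mode, high) on a 0-10 scale
    "poor": (0.0, 0.0, 3.0),
    "fair": (2.0, 5.0, 8.0),
    "good": (7.0, 10.0, 10.0),
}

def weighted_average(ratings, weights):
    """Fuzzy weighted average of triangular numbers (component-wise, crisp weights)."""
    total = sum(weights)
    return tuple(sum(w * r[i] for r, w in zip(ratings, weights)) / total
                 for i in range(3))

def defuzzify(tri):
    """Centroid of a triangular fuzzy number."""
    return sum(tri) / 3.0

alternatives = {
    "supplier_A": ["good", "fair", "good"],
    "supplier_B": ["fair", "good", "poor"],
}
criteria_weights = [0.5, 0.3, 0.2]   # e.g. cost, quality, delivery (hypothetical)

scores = {
    name: defuzzify(weighted_average([TRIANGULAR_SCALE[label] for label in labels],
                                     criteria_weights))
    for name, labels in alternatives.items()
}
print(sorted(scores.items(), key=lambda kv: -kv[1]))   # ranked alternatives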
    • 
