66 research outputs found

    A General Framework for Ordering Fuzzy Sets

    Full text link
Abstract. Orderings and rankings of fuzzy sets have turned out to play a fundamental role in various disciplines. Throughout the previous 25 years, many different approaches to this issue have been introduced, ranging from rather simple to quite exotic ones. The aim of this paper is to present a new framework for comparing fuzzy sets with respect to a general class of fuzzy orderings. This approach includes several known techniques based on generalizing the crisp linear ordering of real numbers by means of the extension principle; however, in its general form, it is applicable to any fuzzy subsets of any kind of universe for which a fuzzy ordering is known – no matter whether linear or partial.
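As an illustration of the extension-principle comparison that the paper generalizes, here is a minimal sketch of the possibility degree Poss(A ≤ B) = sup over x ≤ y of min(A(x), B(y)) for discrete fuzzy sets of real numbers. The function name and dict representation are illustrative assumptions, not the paper's framework, which handles arbitrary universes and general fuzzy orderings.

```python
def possibility_leq(a, b):
    """Degree to which fuzzy set a precedes fuzzy set b.

    a, b: dicts mapping universe elements (real numbers) to membership
    degrees in [0, 1]. Lifts the crisp order on the reals via the
    extension principle: Poss(a <= b) = sup_{x <= y} min(a(x), b(y)).
    """
    return max(
        (min(ma, mb) for x, ma in a.items() for y, mb in b.items() if x <= y),
        default=0.0,
    )
```

For triangular-shaped discrete sets such as `{1: 0.5, 2: 1.0, 3: 0.5}` and `{2: 0.5, 3: 1.0, 4: 0.5}`, the first fully precedes the second (degree 1.0), while the reverse comparison only reaches 0.5 through the overlapping tails.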

    Fuzzy Logic Is Not Fuzzy: World-renowned Computer Scientist Lotfi A. Zadeh

    Get PDF
In 1965 Lotfi A. Zadeh published "Fuzzy Sets", his pioneering and controversial paper, which now has almost 100,000 citations; all of Zadeh's papers have been cited over 185,000 times. Starting from the ideas presented in that paper, Zadeh later founded fuzzy logic theory, which has proved to have useful applications, from consumer to industrial intelligent products. We present general aspects of Zadeh's contributions to the development of Soft Computing (SC) and Artificial Intelligence (AI), as well as his important early influence in the world and in Romania. Several early contributions to fuzzy set theory were published by Romanian scientists, such as Grigore C. Moisil (1968), Constantin V. Negoita & Dan A. Ralescu (1974), and Dan Butnariu (1978). In this review we refer to the papers published in "From Natural Language to Soft Computing: New Paradigms in Artificial Intelligence" (2008, Eds.: L.A. Zadeh, D. Tufis, F.G. Filip, I. Dzitac), and also to those from the two special issues (SI) of the International Journal of Computers Communications & Control (IJCCC, founded in 2006 by I. Dzitac, F.G. Filip & M.J. Manolescu; L.A. Zadeh joined the editorial board in 2008). In these two SI, dedicated to the 90th birthday of Lotfi A. Zadeh (2011) and to the 50th anniversary of "Fuzzy Sets" (2015), papers were published by scientists from Algeria, Belgium, Canada, Chile, China, Hungary, Greece, Germany, Japan, Lithuania, Mexico, Pakistan, Romania, Saudi Arabia, Serbia, Spain, Taiwan, the UK, and the USA.

    Interoperability of Traffic Infrastructure Planning and Geospatial Information Systems

    Get PDF
Building Information Modelling (BIM), as a model-based design approach, makes it possible to investigate multiple solutions in the infrastructure planning process. The most important reason for implementing model-based design is to support designers and to increase communication between different design parties. It decentralizes and coordinates team collaboration and facilitates faster and lossless exchange and management of project data across extended teams and external partners over the project lifecycle. Infrastructure comprises the fundamental facilities, services, and installations needed for the functioning of a community or society, such as transportation, roads, communication systems, water and power networks, as well as power plants. Geospatial Information Systems (GIS), as the digital representation of the world, are systems for maintaining, managing, modelling, analyzing, and visualizing world data, including infrastructure. High-level infrastructure suites mostly facilitate analyzing the infrastructure design against international or user-defined standards. Called regulation-based design, this minimizes errors, reduces costly design conflicts, increases time savings, and provides consistent project quality, yet mostly in standalone solutions. Infrastructure tasks usually require both model-based and regulation-based design packages and deal with cross-domain information. However, the corresponding data is split across several domain models. Besides, infrastructure projects demand many decisions at both governmental and private level, considering different data models. Therefore, a lossless flow of project data, as well as of documents such as regulations, across the project team, stakeholders, and governmental and private levels is highly important. Yet infrastructure projects have long been largely absent from product modelling discourses. Thus, as explained in chapter 2, interoperability is needed in infrastructure processes.
The Multimodel (MM) is an interoperability method that enables heterogeneous data models from various domains to be bundled together into a container while keeping their original formats. Existing interoperability methods, including existing MM solutions, cannot satisfactorily fulfill the typical demands of infrastructure information processes, such as dynamic data resources and a huge number of inter-model relations. Therefore, the infrastructure information modelling concept of chapter 3 investigates a method for the loose, rule-based coupling of exchangeable heterogeneous information spaces. This hypothesis extends the existing MM to a rule-based Multimodel, named extended Multimodel (eMM), with semantic rules instead of static links. The semantic rules are used to describe relations between data elements of various models dynamically in a link database. Most of the confusion about geospatial data models arises from their diversity: in some of these data models spatial IDs are the basic identities of entities, while in others there are no IDs. That is why, for geospatial data, the data structure is more important than the data model. There are always spatial indexes that enable access to the geodata. The most important commonality of the data models involved in infrastructure projects is their spatiality. The method of infrastructure information modelling for interoperation in spatial domains, explained in chapter 4, generates interlinks through the spatial identity of entities. Match finding through spatial links enables any data models that share a spatial property to be interlinked. Through such spatial links, each entity receives the spatial information from other data models that is related to the target entity because it shares an equivalent spatial index. This information becomes the virtual properties of the object. The thesis uses the Nearest Neighborhood algorithm for spatial match finding and performs filtering and refining approaches.
To abstract the spatial matching results, hierarchical filtering techniques are used to refine the virtual properties. These approaches focus on two main application areas: the product model and the Level of Detail (LoD). For the eMM suggested in this thesis, a rule-based interoperability method between arbitrary data models of the spatial domain has been developed. The implementation of this method enables data transactions in spatial domains to run losslessly. The system architecture and the implementation, applied to the case study of this thesis, namely infrastructure and geospatial data models, are described in chapter 5. Achieving the aforementioned aims results in reduced whole-lifecycle project costs, increased reliability of the comprehensive fundamental information, and consequently in independent, cost-effective, aesthetically pleasing, and environmentally sensitive infrastructure design.
Table of contents: Introduction (a general view; problem statement; objectives; approach; structure of thesis); Interoperability in Infrastructure Engineering (state of interoperability: GIS and BIM, GIS and infrastructure; main challenges and related work; infrastructure modeling in geospatial context: LandXML infrastructure data standards, CityGML geospatial data standards, LandXML and CityGML; interoperability and Multimodel technology; limitations of existing approaches); Infrastructure Information Modelling (Multimodel for geospatial and infrastructure data models; linking approach, querying and filtering; virtual properties via link model; Multimodel as an interdisciplinary method; using Level of Detail (LoD) for filtering); Spatial Modelling and Processing (spatial identifiers; spatial indexes; tree-based spatial indexes; nearest neighborhood as a basic link method; hierarchical filtering; other functional link methods; advances and limitations of functional link methods); Implementation of the Proposed IIM Method (implementation; case study); Conclusion (summary; discussion of results; future work); Bibliography (books and papers; websites).
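The spatial linking step described in the abstract can be sketched as follows. This is a brute-force stand-in for the thesis's tree-indexed nearest-neighbour search; the function name, the entity dictionaries, and the `max_dist` filter are illustrative assumptions, not the thesis implementation.

```python
import math

def link_by_nearest_neighbour(source, target, max_dist=None):
    """Link each source entity to its nearest target entity in 2-D.

    source, target: dicts mapping entity id -> (x, y) coordinate.
    Returns a dict source_id -> target_id, or -> None when the nearest
    target is farther than the optional max_dist filter (a crude form
    of the refining step applied to candidate spatial links).
    """
    links = {}
    for sid, (sx, sy) in source.items():
        best, best_d = None, float("inf")
        for tid, (tx, ty) in target.items():
            d = math.hypot(sx - tx, sy - ty)
            if d < best_d:
                best, best_d = tid, d
        if max_dist is not None and best_d > max_dist:
            best = None  # filtered out: no spatially related entity
        links[sid] = best
    return links
```

Each linked target then contributes its attributes as "virtual properties" of the source entity; in practice a tree-based spatial index replaces the inner loop so the search scales to large models.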

    A Neutrosophic Clinical Decision-Making System for Cardiovascular Diseases Risk Analysis

    Get PDF
Cardiovascular diseases are the leading cause of death worldwide. Early diagnosis of heart disease can reduce this large number of deaths so that treatment can be carried out. Many decision-making systems have been developed, but they are too complex for medical professionals. To address this, we develop an explainable neutrosophic clinical decision-making system for the timely diagnosis of cardiovascular disease risk. We make our system transparent and easy to understand with the help of explainable artificial intelligence techniques, so that medical professionals can easily adopt it. Our system takes thirty-five symptoms and risk factors as input parameters: gender, age, genetic disposition, smoking, blood pressure, cholesterol, diabetes, body mass index, depression, unhealthy diet, metabolic disorder, physical inactivity, pre-eclampsia, rheumatoid arthritis, coffee consumption, pregnancy, rubella, drugs, tobacco, alcohol, heart defect, previous surgery/injury, thyroid, sleep apnea, atrial fibrillation, heart history, infection, homocysteine level, pericardial cysts, Marfan syndrome, syphilis, inflammation, clots, cancer, and electrolyte imbalance, and estimates the risk of coronary artery disease, cardiomyopathy, congenital heart disease, heart attack, heart arrhythmia, peripheral artery disease, aortic disease, pericardial disease, deep vein thrombosis, heart valve disease, and heart failure. There are five main modules in the system: neutrosophication, knowledge base, inference engine, de-neutrosophication, and explainability. To demonstrate the complete working of our system, we design an algorithm and calculate its time complexity. We also present a new de-neutrosophication formula and compare our results with existing methods.
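As a sketch of the de-neutrosophication step named among the five modules, here is a minimal example using one score function that is common in the literature for single-valued neutrosophic triples. The paper's own new de-neutrosophication formula is not reproduced here, and the threshold and risk labels are illustrative assumptions.

```python
def score(t, i, f):
    # One common de-neutrosophication (score) for a single-valued
    # neutrosophic triple (truth, indeterminacy, falsity), each in [0, 1]:
    #   s = (t + (1 - i) + (1 - f)) / 3
    # High truth, low indeterminacy, and low falsity all push s toward 1.
    return (t + (1 - i) + (1 - f)) / 3

def risk_label(s, threshold=0.5):
    # Crisp output step: map the de-neutrosophied score to a category
    # that can be reported to a clinician (threshold is an assumption).
    return "at risk" if s >= threshold else "low risk"
```

For example, a triple (0.8, 0.2, 0.1) for "heart attack" de-neutrosophies to roughly 0.83 and is labelled "at risk", whereas (0.1, 0.3, 0.9) yields 0.3 and "low risk".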

    Dynamic Fuzzy Rule Interpolation

    Get PDF

    Complex fermatean fuzzy N-soft sets: a new hybrid model with applications

    Get PDF
Decision-making methods play an important role in the real life of human beings and consist of choosing the best options from a set of possible choices. This paper proposes the notion of a complex Fermatean fuzzy N-soft set (CFFNSfS) which, by means of ranking parameters, is capable of handling two-dimensional information related to the degrees of satisfaction and dissatisfaction implicit in the nature of human decisions. We define the fundamental set-theoretic operations of CFFNSfSs and elaborate the CFFSfS associated with a threshold. The algebraic and Yager operations on CFFNSf numbers are also defined. Several algorithms are proposed to demonstrate the applicability of CFFNSfSs to multi-attribute decision making. The proposed algorithms are described and illustrated by several numerical examples. A comparative study then establishes the validity, feasibility, and reliability of the proposed model. This method is compared with the Fermatean fuzzy Yager weighted geometric (FFYwG) and the Fermatean fuzzy Yager weighted average (FFYwA) operators. Further, we develop a CFFNSf-TOPSIS approach by applying a novel CFFNSf weighted average operator and distance measure. The presented technique is designed to identify the most favorable alternative by examining the closeness of all available choices to particular ideal solutions. Afterward, we demonstrate the applicability of the proposed approach by using it to select the best city in the USA for farming. An integrated comparative analysis with the existing Fermatean fuzzy TOPSIS technique is rendered to certify the capability of the established approach. Further, we investigate the rationality and reliability of the presented CFFNSfS and CFFNSf-TOPSIS approach by highlighting its advantages over existing models and TOPSIS approaches. Finally, we describe the conclusions of the whole work. Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under the Operational Programme 2014ES16RFOP009 FEDER 2014-2020 de Castilla y León, action 20007-CL – Apoyo Consorcio BUCLE.
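As background for the ranking used in Fermatean fuzzy decision making, here is a minimal sketch of the standard validity constraint and score function for a plain Fermatean fuzzy pair (membership μ, non-membership ν). The function names are illustrative, and this is not the paper's complex CFFNSf machinery, which additionally carries phase terms and N-soft grades.

```python
def is_fermatean(mu, nu):
    # A pair is a valid Fermatean fuzzy value when mu**3 + nu**3 <= 1,
    # a weaker constraint than the intuitionistic mu + nu <= 1, so more
    # (membership, non-membership) pairs become admissible.
    return mu**3 + nu**3 <= 1.0

def score(mu, nu):
    # Standard Fermatean fuzzy score: mu**3 - nu**3, ranging over [-1, 1];
    # alternatives with higher scores rank as more satisfactory.
    return mu**3 - nu**3
```

For instance, (0.9, 0.6) is not admissible as an intuitionistic pair (0.9 + 0.6 > 1) but is a valid Fermatean pair (0.729 + 0.216 ≤ 1), with score 0.513; this widened domain is what makes the Fermatean setting attractive for aggregation operators such as FFYwG and FFYwA.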

    Data quality issues in electronic health records for large-scale databases

    Get PDF
Data Quality (DQ) in Electronic Health Records (EHRs) plays a decisive role in improving the quality of healthcare services. Addressing DQ issues in EHRs motivates the introduction of an adaptive framework for interoperability and standards in Large-Scale Database (LSDB) management systems. Large data communications are challenging for traditional approaches to satisfying consumers' needs, as data is often not captured directly into the Database Management System (DBMS) in a timely enough fashion to enable its subsequent uses. In addition, such large datasets hold great value for all the fields in the DBMS. EHR technology provides portfolio management systems that allow HealthCare Organisations (HCOs) to deliver a higher quality of care to their patients than is possible with paper-based records. EHRs are in high demand among HCOs for running their daily services, as huge datasets arrive every day. Efficient EHR systems reduce data redundancy as well as system application failures, and make it possible to draw all necessary reports. However, one of the main challenges in developing efficient EHR systems is the inherent difficulty of coherently managing data from diverse heterogeneous sources. It is practically challenging to integrate diverse data into a global schema that satisfies users' needs. The efficient management of EHR systems using an existing DBMS presents challenges because of the incompatibility, and sometimes inconsistency, of data structures. As a result, no common methodological approach currently exists that effectively solves every data integration problem. These DQ challenges raise the need for an efficient way to integrate large EHRs from diverse heterogeneous sources.
To handle and align a large dataset efficiently, a hybrid algorithm combining a fuzzy ontology with a large-scale EHR analysis platform has shown improved accuracy. This study investigated the raised DQ issues and interventions to overcome these barriers and challenges, including the provision of EHRs as they pertain to DQ, and combined features to search, extract, filter, clean, and integrate data to ensure that users can coherently create new, consistent data sets. The study designed a hybrid method based on a fuzzy ontology and performed mathematical simulations based on a Markov chain probability model. Similarity measurement based on the dynamic Hungarian algorithm, developed following the Design Science Research (DSR) methodology, increases the quality of service of HCOs in adaptive frameworks.
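The Hungarian-algorithm similarity matching mentioned above solves an assignment problem: pick a one-to-one pairing of source and target records that maximizes total similarity. A minimal brute-force sketch of that same problem follows; the matrix, function name, and scores are illustrative assumptions, not the study's implementation, and the Hungarian algorithm computes the identical optimum in polynomial time instead of enumerating permutations.

```python
from itertools import permutations

def best_alignment(sim):
    """Optimal one-to-one alignment of records maximising total similarity.

    sim: square matrix, sim[i][j] = similarity of source record i to
    target record j (e.g. between EHR fields from two schemas).
    Returns (assignment, total), where assignment[i] is the target
    matched to source i.
    """
    n = len(sim)
    best, best_total = None, float("-inf")
    for perm in permutations(range(n)):  # feasible only for tiny n
        total = sum(sim[i][perm[i]] for i in range(n))
        if total > best_total:
            best, best_total = perm, total
    return list(best), best_total
```

In an integration pipeline, the resulting assignment tells the cleaning and merging steps which heterogeneous records describe the same entity.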

    Fuzzy Logic

    Get PDF
Fuzzy logic is becoming an essential method for solving problems in all domains. It has a tremendous impact on the design of autonomous intelligent systems. The purpose of this book is to introduce hybrid algorithms, techniques, and implementations of fuzzy logic. The book consists of thirteen chapters highlighting models and principles of fuzzy logic and issues concerning its techniques and implementations. The intended readers of this book are engineers, researchers, and graduate students interested in fuzzy logic systems.

    Soft Computing

    Get PDF
Soft computing is used where a complex problem is not adequately specified for the use of conventional mathematical and computer techniques. Soft computing has numerous real-world applications in domestic, commercial, and industrial situations. This book elaborates on the most recent applications in various fields of engineering.