
    Bounded Rationality and Heuristics in Humans and in Artificial Cognitive Systems

    In this paper I present an analysis of the impact that the notion of "bounded rationality", introduced by Herbert Simon in his book "Administrative Behavior", had on the field of Artificial Intelligence (AI). In particular, by focusing on the field of Automated Decision Making (ADM), I show how the introduction of the cognitive dimension into the study of choice by a rational (natural) agent indirectly led, within AI, to a line of research aiming at the realisation of artificial systems whose decisions rely on powerful shortcut strategies (known as heuristics) that produce "satisficing", i.e. non-optimal, solutions to problem solving. I show how this "heuristic approach" to problem solving allowed AI to tackle problems of combinatorial complexity in real-life situations, and how it still represents an important strategy for the design and implementation of intelligent systems.
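
    As a minimal illustration of the satisficing idea discussed above (a sketch of the general principle, not an algorithm from the paper), the following snippet solves a small 0/1 knapsack instance with a greedy value-density heuristic and stops as soon as an aspiration level is reached, instead of searching the full combinatorial space for the optimum.

```python
# Illustrative sketch only: a "satisficing" greedy heuristic for the
# 0/1 knapsack problem. Instead of searching the full combinatorial space
# for the optimum, it accepts the first solution whose value reaches an
# aspiration level, in the spirit of Simon's bounded rationality.

def satisficing_knapsack(items, capacity, aspiration):
    """items: list of (value, weight) pairs; stops once 'aspiration' is met."""
    # Heuristic ordering: best value-to-weight ratio first.
    ranked = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    chosen, total_value, total_weight = [], 0, 0
    for value, weight in ranked:
        if total_weight + weight <= capacity:
            chosen.append((value, weight))
            total_value += value
            total_weight += weight
        if total_value >= aspiration:      # "good enough" -- stop searching
            break
    return chosen, total_value

if __name__ == "__main__":
    items = [(10, 5), (7, 3), (12, 8), (3, 1)]
    print(satisficing_knapsack(items, capacity=10, aspiration=15))
```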

    Managing Group Decision Making criteria values using Fuzzy Ontologies

    Meeting: 8th International Conference on Information Technology and Quantitative Management (ITQM) - Developing Global Digital Economy after COVID-19
    Most of the available Multi-criteria Group Decision Making methods that deal with a high number of elements usually focus on managing scenarios that have a high number of alternatives and/or experts. Nevertheless, there are also cases in which the number of criteria values is difficult for the experts to tackle. In this paper, a novel Group Decision Making method that employs Fuzzy Ontologies in order to deal with a high number of criteria values is presented. Our method allows the criteria values to be combined in order to generate a reduced set of criteria values that the experts can comfortably deal with. (C) 2021 The Authors. Published by Elsevier B.V. The authors would like to thank the Spanish State Research Agency for its support through project PID2019-103880RB-I00 / AEI / 10.13039/501100011033.
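
    A minimal sketch of the general idea of collapsing a large set of criteria values into a smaller one experts can handle; the representation as triangular fuzzy numbers and the merging rule are assumptions for illustration, not the paper's fuzzy-ontology method.

```python
# Minimal sketch, not the paper's algorithm: collapse a long list of
# criteria values (modelled here as triangular fuzzy numbers (a, b, c))
# into a reduced set by merging values whose cores are closer than a
# similarity threshold.

def reduce_criteria_values(values, threshold=1.0):
    """values: list of (a, b, c) triangles; returns a merged, shorter list."""
    merged = []
    for a, b, c in sorted(values, key=lambda t: t[1]):
        if merged and abs(b - merged[-1][1]) <= threshold:
            # Merge with the previous group by averaging the three points.
            pa, pb, pc = merged[-1]
            merged[-1] = ((pa + a) / 2, (pb + b) / 2, (pc + c) / 2)
        else:
            merged.append((a, b, c))
    return merged

print(reduce_criteria_values([(0, 1, 2), (0.5, 1.5, 2.5), (4, 5, 6), (4.2, 5.3, 6.4)]))
```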

    Fudge: Fuzzy ontology building with consensuated fuzzy datatypes

    An important problem in Fuzzy OWL 2 ontology building is the definition of fuzzy membership functions for real-valued fuzzy sets (so-called fuzzy datatypes in Fuzzy OWL 2 terminology). In this paper, we present a tool, called Fudge, whose aim is to support the consensual creation of fuzzy datatypes by aggregating the specifications given by a group of experts. Fudge is freeware and currently supports several linguistic aggregation strategies, including the convex combination, linguistic OWA, weighted mean and fuzzy OWA, and makes it easy to build in others. We also propose, and have implemented, two novel linguistic aggregation operators based on a left recursive form of the convex combination and of the linguistic OWA.
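
    As an illustration of one of the aggregation strategies named above, the snippet below computes a weighted-mean aggregation of trapezoidal fuzzy datatypes elicited from several experts; the parameter format and example values are assumptions, and this is not Fudge's actual implementation.

```python
# Hedged sketch of the weighted-mean aggregation strategy applied to
# trapezoidal fuzzy datatypes given by several experts as (a, b, c, d)
# parameters. The other strategies (linguistic OWA, convex combination,
# fuzzy OWA) follow the same pattern with different weighting schemes.

def weighted_mean_trapezoid(expert_trapezoids, weights):
    """Aggregate expert-specified trapezoids point-wise with the given weights."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return tuple(
        sum(w * t[i] for w, t in zip(weights, expert_trapezoids))
        for i in range(4)
    )

# Three experts describe the fuzzy datatype "High" over a 0-100 scale.
experts = [(60, 70, 100, 100), (65, 75, 100, 100), (55, 68, 100, 100)]
print(weighted_mean_trapezoid(experts, weights=[0.5, 0.3, 0.2]))
```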

    An Extended Semantic Interoperability Model for Distributed Electronic Health Record Based on Fuzzy Ontology Semantics

    Semantic interoperability of distributed electronic health record (EHR) systems is a crucial problem for querying EHRs and for machine learning projects. The main contribution of this paper is to propose and implement a fuzzy ontology-based semantic interoperability framework for distributed EHR systems. First, a separate standard ontology is created for each input source. Second, a unified ontology is created that merges the previously created ontologies. However, this crisp ontology is not able to answer vague or uncertain queries. Third, to handle this limitation, we extend the integrated crisp ontology into a fuzzy ontology by using a standard methodology and fuzzy logic. The dataset used includes identified data from 100 patients. The resulting fuzzy ontology includes 27 classes, 58 properties, 43 fuzzy datatypes, 451 instances, 8376 axioms, 5232 logical axioms, 1216 declarative axioms, 113 annotation axioms, and 3204 data property assertions. The resulting ontology is tested using real data from the MIMIC-III intensive care unit dataset and real archetypes from openEHR. This fuzzy ontology-based system helps physicians accurately query any required data about patients from distributed locations using near-natural language queries. Domain specialists validated the accuracy and correctness of the obtained results. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2021R1A2B5B02002599).
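
    The sketch below illustrates how a fuzzy datatype can support a near-natural-language query such as "patients with high heart rate"; the membership thresholds and patient identifiers are invented for illustration and are not taken from the paper or from the MIMIC-III dataset.

```python
# Illustrative sketch only: a fuzzy datatype lets a vague query be answered
# with degrees of membership instead of a crisp cut-off.

def high_heart_rate(bpm, low=80.0, high=110.0):
    """Right-shoulder membership function: 0 below 'low', 1 above 'high'."""
    if bpm <= low:
        return 0.0
    if bpm >= high:
        return 1.0
    return (bpm - low) / (high - low)

patients = {"p001": 72, "p002": 95, "p003": 118}
# Rank patients by how strongly they satisfy the vague predicate.
for pid, bpm in sorted(patients.items(), key=lambda kv: -high_heart_rate(kv[1])):
    print(pid, round(high_heart_rate(bpm), 2))
```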

    Viewpoints on emergent semantics

    Authors include: Philippe Cudré-Mauroux and Karl Aberer (editors), Alia I. Abdelmoty, Tiziana Catarci, Ernesto Damiani, Arantxa Illaramendi, Robert Meersman, Erich J. Neuhold, Christine Parent, Kai-Uwe Sattler, Monica Scannapieco, Stefano Spaccapietra, Peter Spyns, and Guy De Tré.
    We introduce a novel view on how to deal with the problems of semantic interoperability in distributed systems. This view is based on the concept of emergent semantics, which sees both the representation of semantics and the discovery of the proper interpretation of symbols as the result of a self-organizing process performed by distributed agents that exchange symbols and whose utilities depend on the proper interpretation of the symbols. This is a complex systems perspective on the problem of dealing with semantics. We highlight some of the distinctive features of our vision and point out preliminary examples of its application.

    Ontology Population via NLP Techniques in Risk Management

    In this paper we propose an NLP-based method for Ontology Population from texts and apply it to the semi-automatic instantiation of a Generic Knowledge Base (Generic Domain Ontology) in the risk management domain. The approach is semi-automatic and uses domain expert intervention for validation. The proposed approach relies on a set of Instance Recognition Rules based on syntactic structures, and on the predicative power of verbs in the instantiation process. It is not domain dependent since it relies heavily on linguistic knowledge. A description of an experiment performed on a part of the ontology of the PRIMA project (supported by the European Community) is given. A first validation of the method is done by populating this ontology with Chemical Fact Sheets from the Environmental Protection Agency. The results of this experiment complete the paper and support the hypothesis that relying on the predicative power of verbs in the instantiation process improves performance.
    Keywords: Information Extraction, Instance Recognition Rules, Ontology Population, Risk Management, Semantic Analysis
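
    A minimal sketch of a verb-driven instance recognition rule; the pattern, the trigger verb and the class names are illustrative assumptions, not the rules used for the PRIMA ontology.

```python
# Minimal sketch: a predicative verb such as "causes" triggers the
# instantiation of the subject as a RiskFactor and the object as a Hazard.
# Rule, verb and class names are assumed for illustration.

import re

RULE = re.compile(r"(?P<subj>[A-Z][\w\s-]+?)\s+causes\s+(?P<obj>[\w\s-]+?)[\.,]")

def populate(text):
    instances = []
    for match in RULE.finditer(text):
        instances.append({"RiskFactor": match.group("subj").strip(),
                          "Hazard": match.group("obj").strip()})
    return instances

text = "Benzene causes acute leukemia. Chlorine gas causes respiratory irritation."
print(populate(text))
```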

    Modeling and improving Spatial Data Infrastructure (SDI)

    Spatial Data Infrastructure (SDI) development is widely known to be a challenging process owing to its complex and dynamic nature. Although great effort has been made to conceptually explain the complexity and dynamics of SDIs, few studies thus far have actually modeled these complexities. In fact, better modeling of SDI complexities will lead to more reliable plans for its development. A state-of-the-art simulation model of SDI development, hereafter referred to as SMSDI, was created by using the system dynamics (SD) technique. The SMSDI enables policy-makers to test various investment scenarios in different aspects of SDI and helps them to determine the optimum policy for further development of an SDI. This thesis begins with the adaptation of the SMSDI to a new case study in Tanzania by using the community of participants concept, and the model is developed further by using fuzzy logic. It is argued that the techniques and models proposed in this part of the study enable SDI planning to be conducted in a more reliable manner, which facilitates receiving the support of stakeholders for the development of the SDI.
    Developing a collaborative platform such as an SDI also highlights the differences among stakeholders, including the heterogeneous data they produce and share. This makes the reuse of spatial data difficult, mainly because the shared data need to be integrated with other datasets and used in applications other than those for which they were originally produced. The integration of authoritative data and Volunteered Geographic Information (VGI), which has lower-level structure and production standards, is a new and challenging area. The second part of this study focuses on proposing techniques to improve the matching and integration of spatial datasets. It is shown that the proposed solutions, which are based on pattern recognition and ontology, can considerably improve the integration of spatial data in SDIs and enable the reuse or multipurpose usage of available data resources.
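
    As a rough illustration of the system dynamics approach mentioned above, the toy stock-and-flow model below simulates how a stock of shared datasets might grow under different yearly investments; all variable names and rates are assumptions and are not taken from SMSDI.

```python
# Toy stock-and-flow sketch (variable names and rates are assumed, not
# taken from SMSDI): a system dynamics view of comparing investment
# scenarios for SDI development.

def simulate_sdi(investment_per_year, years=10, uptake_rate=0.8, decay_rate=0.05):
    """Return the simulated number of shared datasets at the end of each year."""
    shared_datasets = 0.0
    trajectory = []
    for _ in range(years):
        inflow = uptake_rate * investment_per_year   # new datasets from funding
        outflow = decay_rate * shared_datasets       # datasets becoming outdated
        shared_datasets += inflow - outflow
        trajectory.append(round(shared_datasets, 1))
    return trajectory

# Compare two investment scenarios.
print(simulate_sdi(investment_per_year=10))
print(simulate_sdi(investment_per_year=20))
```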