2,775 research outputs found

    Automatic Semantic Causal Map Integration

    Causal map integration helps broaden group members' perspectives and sheds light on the detection of a group's overall cognitive tendencies. However, existing causal map integration approaches are either based on human intervention, which is criticized for researcher bias, or on syntactic mechanisms that lack semantics. To improve current causal map integration methodology and practice, this study proposes the conceptualization and formalization of an innovative approach, automatic semantic causal map integration, grounded in Sowa's Conceptual Graph Theory and Kosko's Fuzzy Knowledge Combination Theory. A system prototype with an example is also illustrated.
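    Kosko-style fuzzy knowledge combination reduces, in its simplest form, to credibility-weighted averaging of the individual maps' weighted adjacency matrices once the maps share a concept vocabulary. The sketch below illustrates only that averaging step; the function name and the uniform default credibilities are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def combine_causal_maps(maps, credibilities=None):
    """Combine individual causal maps (weighted adjacency matrices over a
    shared concept set) into one group map by credibility-weighted
    averaging, in the spirit of Kosko's fuzzy knowledge combination."""
    maps = np.asarray(maps, dtype=float)       # shape: (n_experts, n, n)
    if credibilities is None:
        credibilities = np.ones(len(maps))     # equal trust by default
    w = np.asarray(credibilities, dtype=float)
    w = w / w.sum()                            # normalize credibilities
    return np.tensordot(w, maps, axes=1)       # sum_i w_i * M_i

# two experts' maps over three shared concepts
m1 = [[0, 0.8, 0], [0, 0, 0.5], [0, 0, 0]]
m2 = [[0, 0.4, 0], [0, 0, 0.9], [-0.2, 0, 0]]
group = combine_causal_maps([m1, m2])
```

Conflicting edges (one expert asserts a negative link, another none) simply average toward a weaker group weight, which is the usual behavior of this scheme.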

    An Introduction to Ontology

    Analytical philosophy of the last one hundred years has been heavily influenced by a doctrine to the effect that one can arrive at a correct ontology by paying attention to certain superficial (syntactic) features of first-order predicate logic as conceived by Frege and Russell. More specifically, it is a doctrine to the effect that the key to the ontological structure of reality is captured syntactically in the ‘Fa’ (or, in more sophisticated versions, in the ‘Rab’) of first-order logic, where ‘F’ stands for what is general in reality and ‘a’ for what is individual. Hence “f(a)ntology”. Because predicate logic has exactly two syntactically different kinds of referring expressions—‘F’, ‘G’, ‘R’, etc., and ‘a’, ‘b’, ‘c’, etc.—so reality must consist of exactly two correspondingly different kinds of entity: the general (properties, concepts) and the particular (things, objects), the relation between these two kinds of entity being revealed in the predicate-argument structure of atomic formulas in first-order logic

    Customizable tubular model for n-furcating blood vessels and its application to 3D reconstruction of the cerebrovascular system

    Understanding the 3D cerebral vascular network is one of the pressing issues impacting the diagnostics of various systemic disorders and is helpful in clinical therapeutic strategies. Unfortunately, the existing software in the radiological workstation does not meet the expectations of radiologists, who require a computerized system for detailed, quantitative analysis of the human cerebrovascular system in 3D and a standardized geometric description of its components. In this study, we show a method that uses 3D image data from contrast-enhanced magnetic resonance imaging to create a geometrical reconstruction of the vessels and a parametric description of the reconstructed vessel segments. First, the method isolates the vascular system using controlled morphological growing and performs skeleton extraction and optimization. Then, around the optimized skeleton branches, it creates tubular objects optimized for quality and accuracy of matching with the originally isolated vascular data. Finally, it optimizes the joints on n-furcating vessel segments. As a result, the algorithm gives a complete description of the shape, position in space, position relative to other segments, and other anatomical structures of each cerebrovascular system segment. Our method is highly customizable and in principle allows reconstructing vascular structures from any 2D or 3D data. The algorithm solves shortcomings of currently available methods, including failures to reconstruct the vessel mesh in the proximity of junctions, and is free of mesh collisions in high-curvature vessels. It also introduces a number of optimizations in the vessel skeletonization, leading to a smoother and more accurate model of the vessel network. We have tested the method on 20 datasets from the public magnetic resonance angiography image database and show that the method allows for repeatable and robust segmentation of the vessel network and enables computation of vascular lateralization indices.
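    The pipeline ends with tubular objects built around optimized skeleton branches. As a rough illustration of that representation only (not the authors' algorithm), the sketch below places rings of vertices around a polyline skeleton with a per-point radius; the local-frame construction and all names are my own assumptions.

```python
import numpy as np

def tube_vertices(skeleton, radii, sides=8):
    """Sample ring vertices around each centreline point of a vessel
    segment, approximating a tubular surface (illustrative sketch)."""
    skeleton = np.asarray(skeleton, dtype=float)   # (n, 3) centreline
    verts = []
    for i, (p, r) in enumerate(zip(skeleton, radii)):
        # tangent from neighbouring points (one-sided at the ends)
        a = skeleton[max(i - 1, 0)]
        b = skeleton[min(i + 1, len(skeleton) - 1)]
        t = b - a
        t = t / np.linalg.norm(t)
        # any vector not parallel to t yields a usable normal
        ref = np.array([1.0, 0.0, 0.0]) if abs(t[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        n = np.cross(t, ref)
        n /= np.linalg.norm(n)
        bn = np.cross(t, n)                        # binormal completes the frame
        for k in range(sides):
            ang = 2 * np.pi * k / sides
            verts.append(p + r * (np.cos(ang) * n + np.sin(ang) * bn))
    return np.array(verts)

# straight three-point skeleton along z with constant radius 0.5
ring = tube_vertices([[0, 0, 0], [0, 0, 1], [0, 0, 2]], [0.5, 0.5, 0.5])
```

A real reconstruction would additionally handle frame twisting along curved branches and the joint optimization at n-furcations that the paper describes.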

    Error processes in the integration of digital cartographic data in geographic information systems.

    Errors within a Geographic Information System (GIS) arise from several factors. In the first instance receiving data from a variety of different sources results in a degree of incompatibility between such information. Secondly, the very processes used to acquire the information into the GIS may in fact degrade the quality of the data. If geometric overlay (the very raison d'etre of many GISs) is to be performed, such inconsistencies need to be carefully examined and dealt with. A variety of techniques exist for the user to eliminate such problems, but all of these tend to rely on the geometry of the information, rather than on its meaning or nature. This thesis explores the introduction of error into GISs and the consequences this has for any subsequent data analysis. Techniques for error removal at the overlay stage are also examined and improved solutions are offered. Furthermore, the thesis also looks at the role of the data model and the potential detrimental effects this can have, in forcing the data to be organised into a pre-defined structure

    Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI

    In the last few years, Artificial Intelligence (AI) has achieved notable momentum that, if harnessed appropriately, may deliver the best of expectations over many application sectors. For this to occur in Machine Learning, the entire community stands in front of the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI (namely, expert systems and rule-based models). Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers such prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core.
Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.
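    One widely used family of post-hoc, model-agnostic explanation techniques covered by surveys of this kind is feature attribution. As a concrete, hedged illustration (not taken from the article), the sketch below estimates permutation feature importance: the drop in a model's score when one feature's values are shuffled.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic attribution: the average drop in score when a
    feature column is shuffled estimates that feature's importance."""
    rng = np.random.default_rng(seed)
    base = metric(y, model(X))                 # score on intact data
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])              # destroy feature j's signal
            drops.append(base - metric(y, model(Xp)))
        imp[j] = np.mean(drops)
    return imp

# toy "model" that only uses feature 0, scored by negative MSE
model = lambda X: X[:, 0]
score = lambda y, p: -np.mean((y - p) ** 2)
X = np.arange(20.0).reshape(10, 2)
y = X[:, 0]
imp = permutation_importance(model, X, y, score)
```

Because the toy model ignores feature 1, shuffling that column leaves the score unchanged, so its estimated importance is zero.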

    The Participatory Potential of Fuzzy Cognitive Mapping in The Context of Harmful Algal Blooms in Peru

    The involvement of non-academic actors in research has become a key characteristic of sustainability studies. As part of this trend, modellers increasingly turn to Participatory Modelling to incorporate stakeholders' knowledge, perceptions, norms, and values in the development of formalized, shared representations of social-ecological systems. While stakeholder participation has been shown to have many advantages, its limits are not adequately discussed in the contemporary Participatory Modelling literature. In particular, there is a lack of engagement with insights from fields that have a long participatory research tradition, such as development studies. To address this gap, the thesis employs Fuzzy Cognitive Mapping (FCM), a widely employed form of Participatory Modelling, in a case study in Peru, aiming to map the social-ecological drivers, impacts, and related adaptation strategies in the context of Harmful Algal Blooms, involving diverse groups of local stakeholders. Subsequently, the thesis critically reflects on the participatory knowledge production process, drawing on the sociology and development studies literature. By identifying and discussing the limitations of the participatory approach within this specific case study, the thesis aims to contribute to the development of best practices specific to FCM.
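    Once a fuzzy cognitive map has been elicited from stakeholders, scenarios are typically explored by iterating concept activations through the weighted causal links with a squashing function. A minimal sketch of that standard FCM update, assuming sigmoid squashing (the function name is my own):

```python
import numpy as np

def fcm_run(W, state, steps=20, lam=1.0):
    """Iterate a fuzzy cognitive map: each concept's next activation is
    the sigmoid-squashed, weighted sum of its causes' activations."""
    W = np.asarray(W, dtype=float)   # W[i, j]: causal weight of concept i on j
    s = np.asarray(state, dtype=float)
    for _ in range(steps):
        s = 1.0 / (1.0 + np.exp(-lam * (W.T @ s)))
    return s

# with no causal links, every concept settles at sigmoid(0) = 0.5
steady = fcm_run(np.zeros((3, 3)), [1.0, 0.0, 0.0], steps=5)
```

In participatory use, the interesting output is how the steady state shifts when stakeholders' proposed interventions clamp or change particular concept activations.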

    Comparative Uncertainty Visualization for High-Level Analysis of Scalar- and Vector-Valued Ensembles

    With this thesis, I contribute to the research field of uncertainty visualization, considering parameter dependencies in multi-valued fields and the uncertainty of automated data analysis. Like uncertainty visualization in general, both of these fields are becoming more and more important due to increasing computational power, the growing importance and availability of complex models and collected data, and progress in artificial intelligence. I contribute in the following application areas: Uncertain Topology of Scalar Field Ensembles. The generalization of topology-based visualizations to multi-valued data involves many challenges. An example is the comparative visualization of multiple contour trees, complicated by the random nature of prevalent contour tree layout algorithms. I present a novel approach for the comparative visualization of contour trees: the Fuzzy Contour Tree. Uncertain Topological Features in Time-Dependent Scalar Fields. Tracking features in time-dependent scalar fields is an active field of research, where most approaches rely on the comparison of consecutive time steps. I created a more holistic visualization for time-varying scalar field topology by adapting Fuzzy Contour Trees to the time-dependent setting. Uncertain Trajectories in Vector Field Ensembles. Visitation maps are an intuitive and well-known visualization of uncertain trajectories in vector field ensembles. For large ensembles, however, visitation maps are either not applicable or require extensive computation time. I developed Visitation Graphs, a new representation and data reduction method for vector field ensembles that can be calculated in situ and is an optimal basis for the efficient generation of visitation maps. This is accomplished by shifting calculation time to a pre-processing step. Visually Supported Anomaly Detection in Cyber Security.
Numerous cyber attacks and the increasing complexity of networks and their protection necessitate the application of automated data analysis in cyber security. Due to uncertainty in automated anomaly detection, the results need to be communicated to analysts to ensure appropriate reactions. I introduce a visualization system combining device readings and anomaly detection results: the Security in Process System. To further support analysts, I developed an application-agnostic framework that supports the integration of knowledge assistance and applied it to the Security in Process System. I present this Knowledge Rocks Framework, its application, and the results of evaluations for both the original and the knowledge-assisted Security in Process System. For all presented systems, I provide implementation details, illustrations, and applications.
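    A basic visitation map of the kind discussed above can be sketched in a few lines: each trajectory marks the grid cells it passes through once, and the map stores the fraction of ensemble members visiting each cell. This is a simplified illustration of the classic visitation map, not the Visitation Graphs method itself:

```python
import numpy as np

def visitation_map(trajectories, shape, extent):
    """Fraction of ensemble trajectories that visit each grid cell."""
    (xmin, xmax), (ymin, ymax) = extent
    counts = np.zeros(shape)
    for traj in trajectories:
        visited = set()
        for x, y in traj:
            # map the point into grid-cell indices, clamping at the edge
            i = min(int((x - xmin) / (xmax - xmin) * shape[0]), shape[0] - 1)
            j = min(int((y - ymin) / (ymax - ymin) * shape[1]), shape[1] - 1)
            visited.add((i, j))
        for i, j in visited:            # count each trajectory once per cell
            counts[i, j] += 1
    return counts / len(trajectories)

# two member trajectories on a 2x2 grid over the unit square
vm = visitation_map([[(0.1, 0.1), (0.9, 0.1)], [(0.1, 0.1)]],
                    (2, 2), ((0, 1), (0, 1)))
```

The cost of this direct approach grows with ensemble size and trajectory length, which is precisely the scaling problem that motivates precomputed representations such as Visitation Graphs.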

    An Analysis of the Insertion of Virtual Players in GMABS Methodology Using the Vip-JogoMan Prototype

    The GMABS (Games and Multi-Agent-Based Simulation) methodology was created from the integration of RPG and MABS techniques. This methodology links the dynamic capacity of MABS (Multi-Agent-Based Simulation) with the discussion and learning capacity of RPG (Role-Playing Games). Using GMABS, we have developed two prototypes in the natural resources management domain. The first prototype, called JogoMan (Adamatti et al., 2005), is a paper-based game: all players need to be physically present in the same place at the same time, and there is a minimum number of participants needed to play the game. To avoid this constraint, we have built a second prototype, called ViP-JogoMan (Adamatti et al., 2007), which is an extension of the first one. This second game enables the insertion of virtual players that can substitute for some real players in the game. These virtual players can partially mimic real behaviors, capturing the autonomy, social abilities, reactivity and adaptation of the real players. We have chosen the BDI architecture to model these virtual players, since its paradigm is based on folk psychology; hence, its core concepts easily map to the language that people use to describe their reasoning and actions in everyday life. ViP-JogoMan is a computer-based game in which people play via the Web, players can be in different places, and there is no hard constraint on the minimum number of real players. Our aim in this paper is to present test results obtained with both prototypes, as well as a preliminary discussion of how the insertion of virtual players has affected the game results.
    Keywords: Role-Playing Games, Multi-Agent-Based Simulation, Natural Resources, Virtual Players
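    The BDI (belief-desire-intention) control loop behind such virtual players can be caricatured in a few lines: perceiving updates beliefs, deliberation commits to the first applicable desire, and the committed intention drives the next action. The class below is an illustrative sketch only, with invented names and a trivially simple deliberation rule, not the ViP-JogoMan player model:

```python
class BDIAgent:
    """Minimal belief-desire-intention loop (illustrative sketch)."""

    def __init__(self, desires):
        self.beliefs = {}
        self.desires = desires        # {goal: plan(beliefs) -> action or None}
        self.intention = None

    def perceive(self, observations):
        # fold new observations into the belief base
        self.beliefs.update(observations)

    def deliberate(self):
        # commit to the first desire whose plan is applicable now
        for goal, plan in self.desires.items():
            action = plan(self.beliefs)
            if action is not None:
                self.intention = (goal, action)
                return
        self.intention = None

    def act(self):
        return self.intention[1] if self.intention else "idle"

# a player that reduces resource use only when it believes resources are low
agent = BDIAgent({"conserve": lambda b: "reduce_use" if b.get("resource_low") else None})
agent.perceive({"resource_low": True})
agent.deliberate()
```

Real BDI implementations add intention persistence, plan libraries and reconsideration policies; the point here is only how beliefs, desires and intentions interact per cycle.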

    Meta-learning computational intelligence architectures

    In computational intelligence, the term 'memetic algorithm' has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a 'meme' has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate: 'memetic algorithm' is too specific, and ultimately a misnomer, as much as 'meme' is defined too generally to be of scientific use. In this dissertation, the notion of memes and meta-learning is extended from a computational viewpoint, and the purpose, definitions, design guidelines and architecture for effective meta-learning are explored. The background and structure of meta-learning architectures is discussed, incorporating viewpoints from psychology, sociology, computational intelligence, and engineering. The benefits and limitations of meme-based learning are demonstrated through two experimental case studies -- Meta-Learning Genetic Programming and Meta-Learning Traveling Salesman Problem Optimization. Additionally, the development and properties of several new algorithms are detailed, inspired by the previous case studies. With applications ranging from cognitive science to machine learning, meta-learning has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher-order learning.
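    The 'global search paired with local search' definition of a memetic algorithm can be made concrete with a short sketch: a genetic algorithm supplies the global search, and every offspring is refined by a hill-climbing 'meme'. All names and parameter choices below are illustrative assumptions, demonstrated on a simple sphere function:

```python
import random

def memetic_minimise(f, dim, pop_size=20, gens=40, seed=0):
    """Memetic algorithm sketch: genetic global search in which every
    offspring is refined by a local hill-climb (the 'meme')."""
    rng = random.Random(seed)

    def local_search(x):
        # coordinate-wise hill climbing with a fixed step size
        for _ in range(10):
            i = rng.randrange(dim)
            for step in (-0.1, 0.1):
                y = x[:]
                y[i] += step
                if f(y) < f(x):
                    x = y
        return x

    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)                      # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            # blend crossover plus Gaussian mutation
            child = [(ai + bi) / 2 + rng.gauss(0, 0.3) for ai, bi in zip(a, b)]
            children.append(local_search(child))   # memetic refinement step
        pop = parents + children
    return min(pop, key=f)

best = memetic_minimise(lambda x: sum(v * v for v in x), dim=2)
```

Dropping the `local_search` call turns this into a plain genetic algorithm, which makes the pairing the dissertation's opening definition refers to easy to see.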