
    Information Extraction from Text – Dealing with Imprecise Data


    Examining the Modelling Capabilities of Defeasible Argumentation and non-Monotonic Fuzzy Reasoning

    Knowledge-representation and reasoning methods have been extensively researched within Artificial Intelligence. Among these, argumentation has emerged as an ideal paradigm for inference under uncertainty with conflicting knowledge. Its value has been demonstrated predominantly through analyses of the topological structure of graphs of arguments and of its formal properties. However, limited research exists on examining and comparing its inferential capacity in real-world modelling tasks and against other knowledge-representation and non-monotonic reasoning methods. This study focuses on a novel comparison between defeasible argumentation and non-monotonic fuzzy reasoning when applied to the representation of the ill-defined construct of human mental workload and its assessment. Different argument-based and non-monotonic fuzzy reasoning models have been designed using knowledge bases of incremental complexity containing uncertain and conflicting information provided by a human reasoner. Findings showed that their inferences have moderate convergent and face validity when compared, respectively, to those of an existing baseline instrument for mental workload assessment and to a perception of mental workload self-reported by human participants, confirming that these models reasonably represent the construct under consideration. Furthermore, argument-based models had, on average, a lower mean squared error against the self-reported perception of mental workload than the fuzzy-reasoning models and the baseline instrument. The contribution of this research is to provide scholars interested in formalisms for knowledge representation and non-monotonic reasoning with a novel approach for empirically comparing their inferential capacity.
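    The abstract does not specify the structure of the argument-based models, so the following is only a minimal sketch of Dung-style abstract argumentation, the formalism underlying most defeasible argumentation systems: arguments attack one another and the grounded extension collects those that survive. The argument names about workload and expertise are hypothetical illustrations, not taken from the study.

```python
# Minimal sketch of Dung-style abstract argumentation: computing the grounded
# extension of a small, hypothetical framework about mental workload.
# Arguments and attacks are illustrative only, not taken from the paper.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all already defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:   # all attackers defeated (or none exist)
                accepted.add(a)
                # everything attacked by a newly accepted argument is defeated
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

args = {"workload_is_high", "operator_is_expert", "expertise_report_unreliable"}
atts = {("operator_is_expert", "workload_is_high"),
        ("expertise_report_unreliable", "operator_is_expert")}

print(grounded_extension(args, atts))
# -> {'expertise_report_unreliable', 'workload_is_high'}
```

    Here the attack on "operator_is_expert" reinstates "workload_is_high", illustrating the non-monotonic behaviour that makes argumentation attractive for conflicting knowledge.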

    Fuzzy-based Propagation of Prior Knowledge to Improve Large-Scale Image Analysis Pipelines

    Many automatically analyzable scientific questions are well-posed and offer a variety of a priori information about the expected outcome. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept for the estimation and propagation of uncertainty involved in image analysis operators. This allows using simple processing operators that are suitable for analyzing large-scale 3D+t microscopy images without compromising result quality. Building on fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. All presented concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion and tracking. Furthermore, the functionality of the proposed approach is validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. In particular, the automated analysis of terabyte-scale microscopy data will benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. The generality of the concept, however, also makes it applicable to practically any other field with processing strategies arranged as linear pipelines.
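    The paper's concrete operators are not given in the abstract; the sketch below only illustrates the general idea of encoding prior knowledge as fuzzy membership functions and combining them with a t-norm to score candidate detections. The feature names, trapezoid parameters, and candidate values are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: encode prior knowledge about detections (expected radius and
# expected distance from the embryo surface) as trapezoidal fuzzy membership
# functions and combine them with the min t-norm. Parameter values and the
# feature names are illustrative, not taken from the paper.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 below a, ramps to 1 on [b, c], 0 above d."""
    x = np.asarray(x, dtype=float)
    rise = np.clip((x - a) / max(b - a, 1e-9), 0.0, 1.0)
    fall = np.clip((d - x) / max(d - c, 1e-9), 0.0, 1.0)
    return np.minimum(rise, fall)

# Candidate seed points: (radius in voxels, distance to surface in voxels)
radius = np.array([2.0, 5.0, 6.0, 12.0])
dist   = np.array([30.0, 8.0, 15.0, 5.0])

mu_radius = trapezoid(radius, 3, 4, 8, 10)   # nuclei expected ~4-8 voxels wide
mu_dist   = trapezoid(dist, 0, 0, 20, 40)    # expected close to the surface

fuzzy_score = np.minimum(mu_radius, mu_dist)  # min t-norm: both priors must hold
print(fuzzy_score)                            # low scores flag ambiguous detections
```

    Downstream operators can then keep the fuzzy score attached to each detection instead of a hard accept/reject decision, which is the propagation idea described in the abstract.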

    Representing archaeological uncertainty in cultural informatics

    This thesis sets out to explore, describe, quantify, and visualise uncertainty in a cultural informatics context, with a focus on archaeological reconstructions. For quite some time, archaeologists and heritage experts have been criticising the often too-realistic appearance of three-dimensional reconstructions. They have been highlighting one of the unique features of archaeology: the information we have on our heritage will always be incomplete. This incompleteness should be reflected in digitised reconstructions of the past. This criticism is the driving force behind this thesis. The research examines archaeological theory and inferential process and provides insight into computer visualisation. It describes how these two areas, archaeology and computer graphics, have formed a useful, but often tumultuous, relationship through the years. By examining the uncertainty background of disciplines such as GIS, medicine, and law, the thesis postulates that archaeological visualisation, in order to mature, must move towards archaeological knowledge visualisation. Three sequential areas are proposed through this thesis for the initial exploration of archaeological uncertainty: identification, quantification and modelling. The main contributions of the thesis lie in those three areas. Firstly, through the innovative design, distribution, and analysis of a questionnaire, the thesis identifies the importance of uncertainty in archaeological interpretation and discovers potential preferences among different evidence types. Secondly, the thesis uniquely analyses and evaluates, in relation to archaeological uncertainty, three different belief quantification models. The varying ways in which these mathematical models work are also evaluated through simulated experiments. Comparison of results indicates significant convergence between the models. Thirdly, a novel approach to visualising archaeological uncertainty and evidence conflict is presented, influenced by information visualisation schemes. Lastly, suggestions for future semantic extensions to this research are presented through the design and development of new plugins for a search engine.
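    The three belief quantification models are not named in the abstract. As one hedged illustration of how conflicting archaeological evidence can be quantified, the sketch below applies Dempster's rule of combination from Dempster-Shafer theory; the hypotheses about a reconstructed roof and the mass values are invented for the example.

```python
from itertools import product

# Hedged sketch of Dempster's rule of combination, one common belief
# quantification model for conflicting evidence. The hypotheses ("tiled" vs
# "thatched" roof) and the mass values are purely illustrative.

def combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                 # mass falling on the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

theta = frozenset({"tiled", "thatched"})        # frame of discernment
m_excavation = {frozenset({"tiled"}): 0.6, theta: 0.4}     # tile fragments found
m_texts      = {frozenset({"thatched"}): 0.5, theta: 0.5}  # written sources

m12, k = combine(m_excavation, m_texts)
print(m12, "conflict:", k)   # conflict of 0.3 quantifies the evidence clash
```

    The conflict mass produced by the rule is exactly the kind of quantity that an uncertainty visualisation scheme could map onto a reconstruction.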

    A review of applications of fuzzy sets to safety and reliability engineering

    Safety and reliability are rigorously assessed during the design of dependable systems. Probabilistic risk assessment (PRA) processes are comprehensive, structured and logical methods widely used for this purpose. PRA approaches include, but are not limited to, Fault Tree Analysis (FTA), Failure Mode and Effects Analysis (FMEA), and Event Tree Analysis (ETA). In conventional PRA, failure data about components is required for the purposes of quantitative analysis. In practice, it is not always possible to obtain this data fully, owing to the unavailability of primary observations and the consequent scarcity of statistical data about the failure of components. To handle such situations, fuzzy set theory has been successfully used in novel PRA approaches for safety and reliability evaluation under conditions of uncertainty. This paper presents a review of fuzzy set theory based methodologies applied to safety and reliability engineering, including fuzzy FTA, fuzzy FMEA, fuzzy ETA, fuzzy Bayesian networks, fuzzy Markov chains, and fuzzy Petri nets. Firstly, we describe relevant fundamentals of fuzzy set theory; then we review applications of fuzzy set theory to system safety and reliability analysis. The review shows the context in which each technique may be most appropriate and highlights the overall potential usefulness of fuzzy set theory in addressing uncertainty in safety and reliability engineering.
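    As a hedged illustration of how fuzzy set theory enters such analyses, the sketch below performs a toy fuzzy Fault Tree Analysis: imprecise basic-event failure probabilities are represented as triangular fuzzy numbers and propagated component-wise through AND/OR gates. The event names and probability estimates are invented, and practical fuzzy FTA usually operates on alpha-cuts for tighter bounds.

```python
import numpy as np

# Hedged sketch of fuzzy Fault Tree Analysis: basic-event failure probabilities
# are triangular fuzzy numbers (low, mode, high) instead of crisp values, and
# AND / OR gates are evaluated component-wise on the three parameters.

def and_gate(*events):
    """Output fails only if all inputs fail: product of probabilities."""
    return np.prod(np.vstack(events), axis=0)

def or_gate(*events):
    """Output fails if any input fails: 1 - prod(1 - p)."""
    return 1.0 - np.prod(1.0 - np.vstack(events), axis=0)

pump_failure   = np.array([0.010, 0.020, 0.040])   # imprecise expert estimates
valve_stuck    = np.array([0.005, 0.010, 0.030])
sensor_failure = np.array([0.001, 0.002, 0.005])

# Top event: cooling lost if (pump fails AND valve stuck) OR sensor fails
top = or_gate(and_gate(pump_failure, valve_stuck), sensor_failure)
print(top)   # triangular fuzzy probability of the top event
```

    Keeping the low/mode/high triple all the way to the top event preserves the imprecision of the expert estimates instead of collapsing it into a single point value.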

    Interface Contracts for Workflow+ Models: an Analysis of Uncertainty across Models

    Workflow models are used to rigorously specify and reason about diverse types of processes. The Workflow+ (WF+) framework has been developed to support unified modelling of the control and data in processes, which can be used to derive assurance cases that support certification. However, WF+ has limited support for precise contracts on workflow models, which can enable powerful forms of static analysis and reasoning. In this paper, we propose a mechanism for adding interface contracts to WF+ models, which can thereafter be applied to tracing and reasoning about the uncertainty that arises when combining heterogeneous models. We specifically explore this in terms of design models and assurance case models. We argue that some of the key issues in managing some types of uncertainty can be partly addressed by the use of interface contracts.
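    The WF+ notation itself is not given in the abstract, so the sketch below only illustrates the general idea of an interface contract on a workflow step: preconditions on inputs and postconditions on outputs that are checked when heterogeneous steps are chained. The step names and conditions are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Any

# Hedged sketch of an interface contract on a workflow step: preconditions on
# inputs and postconditions on outputs, checked when steps are chained. The
# step names and conditions are hypothetical, not the WF+ notation itself.

@dataclass
class Contract:
    pre: Callable[[Any], bool]
    post: Callable[[Any], bool]

@dataclass
class Step:
    name: str
    contract: Contract
    run: Callable[[Any], Any]

    def __call__(self, data):
        assert self.contract.pre(data), f"{self.name}: precondition violated"
        out = self.run(data)
        assert self.contract.post(out), f"{self.name}: postcondition violated"
        return out

# A design-model step promises calibrated readings in [0, 1]; an assurance-case
# step relies on that promise through its own precondition.
calibrate = Step("calibrate",
                 Contract(pre=lambda xs: len(xs) > 0,
                          post=lambda xs: all(0.0 <= x <= 1.0 for x in xs)),
                 run=lambda xs: [min(max(x, 0.0), 1.0) for x in xs])

assess = Step("assess",
              Contract(pre=lambda xs: all(0.0 <= x <= 1.0 for x in xs),
                       post=lambda r: isinstance(r, float)),
              run=lambda xs: sum(xs) / len(xs))

print(assess(calibrate([0.2, 1.4, 0.7])))
```

    A mismatch between one step's postcondition and the next step's precondition is then a concrete, checkable marker of the cross-model uncertainty the paper aims to trace.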