
    Verifying a medical protocol with temporal graphs: The case of a nosocomial disease

    Objective: Our contribution focuses on the implementation of a formal verification approach for medical protocols with graphical temporal reasoning paths to facilitate the understanding of verification steps. Materials and methods: Formal medical guideline specifications and background knowledge are represented through conceptual graphs, and reasoning is based on graph homomorphism. These materials explain the underlying principles and rationale that guide the verifications. Results: The proposal is illustrated using a medical protocol defining guidelines for the monitoring and prevention of nosocomial infections. Such infections, which are acquired in the hospital, increase morbidity and mortality and add noticeably to the economic burden. An evaluation of the graphical verification found that this method improves both clinical knowledge and the quality of the actions taken. Discussion: Because they are conceptual graphs, the diagram-based representations can be translated into computational tree logic. However, diagrams are much more natural and human-readable, emphasizing both theoretical and practical consistency. Conclusion: The proposed approach allows for the visual modeling of temporal reasoning and a formalization of knowledge that can assist in the diagnosis and treatment of nosocomial infections and related clinical problems. To our knowledge, this is the first work to emphasize temporal situation modeling in conceptual graphs. It also delivers a formal verification method for clinical guideline analysis.
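    The reasoning operation the abstract relies on, projection of one conceptual graph into another, is an ordinary graph homomorphism. The following minimal sketch illustrates the idea on a made-up protocol fragment; the concept labels and the temporal "before" relation are hypothetical, not taken from the paper.

```python
from itertools import product

def homomorphisms(query, target):
    """Find mappings of query nodes to target nodes that preserve node
    labels and relation edges (a projection of query into target)."""
    q_nodes = sorted(query["nodes"])
    # Candidate target nodes for each query node must carry the same label.
    candidates = [
        [t for t, lbl in target["nodes"].items() if lbl == query["nodes"][q]]
        for q in q_nodes
    ]
    results = []
    for combo in product(*candidates):
        mapping = dict(zip(q_nodes, combo))
        # Every query edge must map onto an existing target edge.
        if all((mapping[a], rel, mapping[b]) in target["edges"]
               for (a, rel, b) in query["edges"]):
            results.append(mapping)
    return results

# Hypothetical background knowledge: an infection event precedes isolation.
target = {
    "nodes": {"t1": "Infection", "t2": "Isolation"},
    "edges": {("t1", "before", "t2")},
}
# Query: is there an Infection temporally followed by an Isolation?
query = {
    "nodes": {"q1": "Infection", "q2": "Isolation"},
    "edges": {("q1", "before", "q2")},
}
print(homomorphisms(query, target))  # one projection: q1->t1, q2->t2
```

    A query succeeds exactly when at least one projection exists, which is what makes the verification steps visually traceable: each mapping can be drawn as arrows between the two graphs.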

    On Cognitive Preferences and the Plausibility of Rule-based Models

    It is conventional wisdom in machine learning and data mining that logical models such as rule sets are more interpretable than other models, and that among such rule-based models, simpler models are more interpretable than more complex ones. In this position paper, we question this latter assumption by focusing on one particular aspect of interpretability, namely the plausibility of models. Roughly speaking, we equate the plausibility of a model with the likeliness that a user accepts it as an explanation for a prediction. In particular, we argue that, all other things being equal, longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models, which is typically necessary for learning powerful discriminative models, may not be suitable when it comes to user acceptance of the learned models. To that end, we first recapitulate evidence for and against this postulate, and then report the results of an evaluation in a crowd-sourcing study based on about 3,000 judgments. The results do not reveal a strong preference for simple rules, whereas we can observe a weak preference for longer rules in some domains. We then relate these results to well-known cognitive biases such as the conjunction fallacy, the representativeness heuristic, or the recognition heuristic, and investigate their relation to rule length and plausibility.
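    The per-domain analysis the abstract reports (a weak preference for longer rules in some domains, not others) amounts to tallying pairwise judgments within each domain. The sketch below shows that aggregation on invented data; the domain names and judgment counts are illustrative, not the study's dataset.

```python
from collections import Counter

# Each judgment records which of two candidate rules (one short, one long)
# a crowd worker found more plausible. Data below is made up.
judgments = [
    {"domain": "quality", "preferred": "long"},
    {"domain": "quality", "preferred": "long"},
    {"domain": "quality", "preferred": "short"},
    {"domain": "mushroom", "preferred": "short"},
    {"domain": "mushroom", "preferred": "short"},
    {"domain": "mushroom", "preferred": "long"},
]

def preference_by_domain(judgments):
    """Majority preference (short vs. long rule) per domain."""
    tally = {}
    for j in judgments:
        tally.setdefault(j["domain"], Counter())[j["preferred"]] += 1
    return {d: c.most_common(1)[0][0] for d, c in tally.items()}

print(preference_by_domain(judgments))  # per-domain majority preferences
```

    Splitting by domain rather than pooling all judgments is what lets a topic-specific preference for longer rules surface instead of being averaged away.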

    Reasoning on Controversial Science Issues in Science Education and Science Communication

    The ability to make evidence-based decisions, and hence to reason on questions concerning scientific and societal aspects, is a crucial goal in science education and science communication. However, science denial poses a constant challenge for society and education. Controversial science issues (CSI) encompass scientific knowledge rejected by the public as well as socioscientific issues, i.e., societal issues grounded in science that are frequently applied to science education. Generating evidence-based justifications for claims is central in scientific and informal reasoning. This study aims to describe attitudes and their justifications within the argumentations of a random online sample (N = 398) when reasoning informally on selected CSI. Following a deductive-inductive approach and qualitative content analysis of written open-ended answers, we identified five types of justifications based on a fine-grained category system. The results suggest a topic-specificity of justifications referring to specific scientific data, while justifications appealing to authorities tend to be common across topics. Subjective, and therefore normative, justifications were slightly related to conspiracy ideation and a general rejection of the scientific consensus. The category system could be applied to other CSI topics to help clarify the relation between scientific and informal reasoning in science education and communication.

    Visualization for Recommendation Explainability: A Survey and New Perspectives

    Providing system-generated explanations for recommendations represents an important step towards transparent and trustworthy recommender systems. Explainable recommender systems provide a human-understandable rationale for their outputs. Over the last two decades, explainable recommendation has attracted much attention in the recommender systems research community. This paper aims to provide a comprehensive review of research efforts on visual explanation in recommender systems. More concretely, we systematically review the literature on explanations in recommender systems based on four dimensions, namely explanation goal, explanation scope, explanation style, and explanation format. Recognizing the importance of visualization, we approach the recommender system literature from the angle of explanatory visualizations, that is, using visualizations as a display style of explanation. As a result, we derive a set of guidelines that might be constructive for designing explanatory visualizations in recommender systems and identify perspectives for future work in this field. The aim of this review is to help recommendation researchers and practitioners better understand the potential of visually explainable recommendation research and to support them in the systematic design of visual explanations in current and future recommender systems.
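    The four review dimensions named in the abstract (goal, scope, style, format) can be read as a small classification record attached to each surveyed explanation design. The sketch below makes that concrete; the example values are illustrative placeholders, not the survey's actual taxonomy entries.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExplanationDesign:
    """One point in the four-dimensional review space from the abstract."""
    goal: str    # what the explanation is for, e.g. "transparency"
    scope: str   # what is explained, e.g. "output"
    style: str   # how the rationale is derived, e.g. "content-based"
    format: str  # how it is displayed, e.g. "visual" or "textual"

# Classifying a hypothetical surveyed system along the four dimensions:
design = ExplanationDesign(goal="transparency", scope="output",
                           style="content-based", format="visual")
print(design)
```

    Filtering a corpus of such records on `format == "visual"` is precisely the slice of the literature this survey focuses on.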

    Logic-based assessment of the compatibility of UMLS ontology sources

    Background: The UMLS Metathesaurus (UMLS-Meta) is currently the most comprehensive effort for integrating independently-developed medical thesauri and ontologies. UMLS-Meta is being used in many applications, including PubMed and ClinicalTrials.gov. The integration of new sources combines automatic techniques, expert assessment, and auditing protocols. The automatic techniques currently in use, however, are mostly based on lexical algorithms and often disregard the semantics of the sources being integrated. Results: In this paper, we argue that UMLS-Meta’s current design and auditing methodologies could be significantly enhanced by taking into account the logic-based semantics of the ontology sources. We provide empirical evidence suggesting that UMLS-Meta in its 2009AA version contains a significant number of errors; these errors become immediately apparent if the rich semantics of the ontology sources is taken into account, manifesting themselves as unintended logical consequences that follow from the ontology sources together with the information in UMLS-Meta. We then propose general principles and specific logic-based techniques to effectively detect and repair such errors. Conclusions: Our results suggest that the methodologies employed in the design of UMLS-Meta are not only very costly in terms of human effort, but also error-prone. The techniques presented here can be useful for both reducing human effort in the design and maintenance of UMLS-Meta and improving the quality of its contents.
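    The failure mode the abstract describes, an unintended logical consequence that only appears once mapped sources are combined, can be sketched with plain subclass axioms. The concept names, axioms, and synonym mapping below are invented for illustration and are not actual UMLS-Meta content.

```python
def closure(axioms):
    """Transitive closure of subclass (is-a) axioms, as (sub, super) pairs."""
    closed = set(axioms)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

# Two hypothetical sources, each consistent on its own.
source1 = {("Hemorrhage", "Finding")}
source2 = {("Bleeding", "Procedure")}
disjoint = {("Finding", "Procedure")}  # top-level categories never overlap

# A lexical mapping asserts Hemorrhage and Bleeding are synonyms,
# so merging renames Bleeding to Hemorrhage:
merged = source1 | {("Hemorrhage", b) if a == "Bleeding" else (a, b)
                    for (a, b) in source2}

inferred = closure(merged)
# Unintended consequence: one concept now sits under two disjoint categories.
conflicts = [(x, p, q) for (x, p) in inferred for (y, q) in inferred
             if x == y and (p, q) in disjoint]
print(conflicts)
```

    Neither source alone entails the conflict; it follows only from the sources plus the mapping, which is why purely lexical integration can miss it while a reasoner flags it immediately.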