9 research outputs found
Clinical evidence framework for Bayesian networks
There is poor uptake of prognostic decision support models by clinicians, regardless of their accuracy. There is evidence that this results from doubts about the basis of the model, as the evidence behind clinical models is often not clear to anyone other than their developers. In this paper, we propose a framework for representing the evidence base of a Bayesian network (BN) decision support model. The aim of this evidence framework is to present all the clinical evidence alongside the BN itself. The framework is capable of presenting supporting and conflicting evidence, as well as evidence associated with relevant but excluded factors. It also allows the completeness of the evidence to be queried. We illustrate this framework using a BN previously developed to predict acute traumatic coagulopathy, a potentially fatal disorder of blood clotting, at the early stages of trauma care.
A General Framework for Representing, Reasoning and Querying with Annotated Semantic Web Data
We describe a generic framework for representing and reasoning with annotated Semantic Web data, a task becoming more important with the recent increase in inconsistent and unreliable meta-data on the web. We formalise the annotated language, the corresponding deductive system, and address the query answering problem. Previous contributions on specific RDF annotation domains are encompassed by our unified reasoning formalism, as we show by instantiating it on (i) temporal, (ii) fuzzy, and (iii) provenance annotations. Moreover, we provide a generic method for combining multiple annotation domains, making it possible to represent, e.g., temporally-annotated fuzzy RDF. Furthermore, we address the development of a query language -- AnQL -- that is inspired by SPARQL, including several features of SPARQL 1.1 (subqueries, aggregates, assignment, solution modifiers), along with the formal definitions of their semantics.
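The domain-combination idea can be sketched in a few lines: treat each annotation domain as a set of values with a conjunction operation, and build the product domain pointwise. This is an illustrative sketch only; the function names and the single-interval simplification are ours, not the paper's formalism:

```python
# Illustrative sketch: combining a fuzzy annotation domain with a temporal
# one to obtain temporally-annotated fuzzy statements. Names and the
# single-interval simplification are ours, not the paper's formalism.

def fuzzy_meet(x, y):
    """Conjunction of fuzzy degrees in [0, 1]."""
    return min(x, y)

def interval_meet(a, b):
    """Conjunction of time intervals = intersection (None if empty)."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start <= end else None

def product_meet(meet1, meet2):
    """Combine two annotation domains into a product domain, pointwise."""
    def meet(x, y):
        p, q = meet1(x[0], y[0]), meet2(x[1], y[1])
        return None if p is None or q is None else (p, q)
    return meet

fuzzy_temporal_meet = product_meet(fuzzy_meet, interval_meet)

# Conjoining two annotated statements about the same triple yields the
# weaker fuzzy degree over the overlapping validity interval:
print(fuzzy_temporal_meet((0.8, (2000, 2005)), (0.6, (2003, 2010))))
# (0.6, (2003, 2005))
```

Disjoint intervals yield `None`, i.e. the combined annotation is empty, which is how an inconsistent conjunction surfaces in this toy model.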
10th Interuniversity Doctoral Seminar in Wirtschaftsinformatik (Business Information Systems), July 2009
Begun in 2000, the Interuniversity Doctoral Seminar in Wirtschaftsinformatik has become a fine tradition. It started with the participation of the universities of Leipzig and Halle-Wittenberg. Since 2003 the seminar has been held jointly with the University of Jena, and this year the Technische Universität Dresden and the TU Bergakademie Freiberg are taking part for the first time. The aim of the interuniversity doctoral seminars is an exchange of ideas, beyond the boundaries of one's own institute, on current research topics addressed in doctoral projects. Because the presentations also emphasize research design, all doctoral students have the opportunity to receive important feedback and suggestions from a broad audience at an early stage of their work. The present research papers contain eleven contributions to this year's doctoral seminar in Jena. They cover a wide field, from data mining and knowledge management, through the support of business processes, to RFID technology. As a typical "hyphenated" branch of computer science, Wirtschaftsinformatik has a reputation for thematic breadth. The dissertation projects from five universities demonstrate this impressively.
Fusing Automatically Extracted Annotations for the Semantic Web
This research focuses on the problem of semantic data fusion. Although various solutions have been developed in the research communities focusing on databases and formal logic, the choice of an appropriate algorithm is non-trivial because the performance of each algorithm and its optimal configuration parameters depend on the type of data to which the algorithm is applied. In order to be reusable, the fusion system must be able to select appropriate techniques and use them in combination.
Moreover, because of the varying reliability of data sources and of the algorithms performing fusion subtasks, uncertainty is an inherent feature of semantically annotated data and has to be taken into account by the fusion system. Finally, schema heterogeneity can have a negative impact on fusion performance. To address these issues, we propose KnoFuss: an architecture for Semantic Web data integration based on the principles of problem-solving methods. Algorithms dealing with different fusion subtasks are represented as components of a modular architecture, and their capabilities are described formally. This allows the architecture to select appropriate methods and configure them depending on the data being processed. In order to handle uncertainty, we propose a novel algorithm based on Dempster-Shafer belief propagation. KnoFuss employs this algorithm to reason about uncertain data and method results in order to refine the fused knowledge base. Tests show that these solutions lead to improved fusion performance. Finally, we address the problem of data fusion in the presence of schema heterogeneity. We extend the KnoFuss framework to exploit the results of automatic schema alignment tools and propose our own schema matching algorithm aimed at facilitating data fusion in the Linked Data environment. Experiments with this approach show a substantial improvement in performance in comparison with public data repositories.
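The uncertainty handling described above rests on Dempster-Shafer theory; its core combination rule can be sketched generically. This is a textbook formulation over frozensets, not KnoFuss's actual implementation, and the example numbers are invented:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets over the same frame of discernment."""
    combined, conflict = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Normalise by the non-conflicting mass.
    return {a: m / (1.0 - conflict) for a, m in combined.items()}

# Two sources judging whether two instances co-refer (invented masses):
same, unsure = frozenset({"same"}), frozenset({"same", "different"})
m = dempster_combine({same: 0.7, unsure: 0.3}, {same: 0.6, unsure: 0.4})
print(m[same])   # agreement reinforces the 'same' hypothesis (≈ 0.88)
```

Note how two moderately confident, agreeing sources yield a combined belief higher than either source alone, which is what makes the rule useful for fusing independent evidence.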
Bayesian Networks for Evidence Based Clinical Decision Support.
Evidence-based medicine (EBM) is defined as the use of the best available evidence for decision making, and it has been the predominant paradigm in clinical decision making for the last 20 years. EBM requires evidence from multiple sources to be combined, as published results may not be directly applicable to individual patients. For example, randomised controlled trials (RCTs) often exclude patients with comorbidities, so a clinician has to combine the results of the RCT with evidence about comorbidities, using their clinical knowledge of how disease, treatment and comorbidities interact with each other. Bayesian networks (BNs) are well suited to assisting clinicians in making evidence-based decisions, as they can combine knowledge, data and other sources of evidence. The graphical structure of a BN is suitable for representing knowledge about the mechanisms linking diseases, treatments and comorbidities, and the strength of the relations in this structure can be learned from data and published results. However, there is still a lack of techniques that systematically use knowledge, data and published results together to build BNs.
This thesis advances techniques for using knowledge, data and published results to develop and refine BNs for assisting clinical decision-making. In particular, the thesis presents four novel contributions. First, it proposes a method of combining knowledge and data to build BNs that reason in a way that is consistent with knowledge and data, by allowing the BN model to include variables that cannot be measured directly. Second, it proposes techniques to build BNs that provide decision support by combining the evidence from meta-analyses of published studies with clinical knowledge and data. Third, it presents an evidence framework that supplements clinical BNs by representing the description and source of the medical evidence supporting each element of a BN. Fourth, it proposes a knowledge engineering method for abstracting a BN structure by showing how each abstraction operation changes the knowledge encoded in the structure. These novel techniques are illustrated by a clinical case study in trauma care. The aim of the case study is to provide decision support in the treatment of mangled extremities by using clinical expertise, data and published evidence about the subject. The case study was done in collaboration with the trauma unit of the Royal London Hospital.
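The kind of evidence combination described above can be illustrated with a toy calculation: marginalising a comorbidity out of a small conditional probability table. All numbers and variable names here are invented for illustration and are not the thesis's model:

```python
# Toy sketch: combining an RCT-style treatment effect with separate
# knowledge about a comorbidity in a two-parent BN fragment.
# All probabilities below are invented for illustration.

P_COMORBIDITY = 0.2                     # prior from clinical knowledge

# P(recovery | treated, comorbidity): the no-comorbidity rows mimic an
# RCT population; the comorbidity rows mimic an expert adjustment.
P_RECOVERY = {
    (True, False): 0.80,
    (True, True): 0.55,
    (False, False): 0.50,
    (False, True): 0.30,
}

def p_recovery(treated):
    """Marginalise out the comorbidity: sum_c P(r | t, c) * P(c)."""
    return (P_RECOVERY[(treated, False)] * (1 - P_COMORBIDITY)
            + P_RECOVERY[(treated, True)] * P_COMORBIDITY)

print(p_recovery(True))    # ≈ 0.75 = 0.80 * 0.8 + 0.55 * 0.2
```

The point of the sketch is that the RCT figure (0.80) is not applied directly to an individual patient; it is tempered by the clinician's separate knowledge of the comorbidity, exactly the combination problem the abstract describes.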
Alignment Incoherence in Ontology Matching
Ontology matching is the process of generating alignments between ontologies. An alignment is a set of correspondences; each correspondence links concepts and properties from one ontology to concepts and properties from another. Alignments are thus the key component for enabling the integration of knowledge bases described by different ontologies. For several reasons, alignments often contain erroneous correspondences. Some of these errors can result in logical conflicts with other correspondences; in such a case the alignment is referred to as an incoherent alignment.
The relevance of alignment incoherence, and strategies to resolve it, are at the center of this thesis. After an introduction to the syntax and semantics of ontologies and alignments, the importance of alignment coherence is discussed from different perspectives. On the one hand, it is argued that alignment incoherence always coincides with the incorrectness of correspondences. On the other hand, it is demonstrated that the use of incoherent alignments results in severe problems for different types of applications.
The main part of this thesis is concerned with techniques for resolving alignment incoherence, i.e., how to find a coherent subset of an incoherent alignment that is to be preferred over other coherent subsets. The underlying theory is the theory of diagnosis. In particular, two specific types of diagnoses, referred to as local optimal and global optimal diagnoses, are proposed. Computing a diagnosis is a challenge for two reasons. First, different types of reasoning techniques are required to determine that an alignment is incoherent and to find the subsets (conflict sets) that cause the incoherence. Second, given a set of conflict sets, computing a global optimal diagnosis is a hard problem. In this thesis several algorithms are suggested to solve these problems efficiently.
In the last part of this thesis, the previously developed algorithms are applied to the scenarios of
- evaluating alignments by computing their degree of incoherence;
- repairing incoherent alignments by computing different types of diagnoses;
- selecting a coherent alignment from a rich set of matching hypotheses;
- supporting the manual revision of an incoherent alignment.
In the course of discussing the experimental results, it becomes clear that it is possible to create a coherent alignment without a negative impact on the alignment's quality. Moreover, the results show that taking alignment incoherence into account has a positive impact on the precision of the alignment, and that the proposed approach can help a human save effort in the revision process.
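A much-simplified version of the repair step can be sketched as greedy removal over conflict sets. This is illustrative only; the thesis's local and global optimal diagnoses are more sophisticated, and the names and numbers below are invented:

```python
def greedy_diagnosis(conflict_sets, confidence):
    """From each unresolved conflict set, remove the correspondence with
    the lowest confidence, until every conflict set is resolved.
    A rough greedy stand-in for a diagnosis, not an optimal one."""
    removed = set()
    for cs in conflict_sets:
        if not (set(cs) & removed):      # conflict not yet resolved
            removed.add(min(cs, key=lambda c: confidence[c]))
    return removed

# Two conflict sets sharing the low-confidence correspondence c2:
conflicts = [{"c1", "c2"}, {"c2", "c3"}]
scores = {"c1": 0.9, "c2": 0.4, "c3": 0.8}
print(greedy_diagnosis(conflicts, scores))   # {'c2'} resolves both
```

Removing the shared low-confidence correspondence resolves both conflicts at once; finding the removal set that minimises the total confidence lost is the hard optimisation the thesis addresses.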
Towards a system of concepts for Family Medicine. Multilingual indexing in General Practice/Family Medicine in the era of the Semantic Web
UNIVERSITY OF LIÈGE, BELGIUM
Executive Summary
Faculty of Medicine
Département Universitaire de Médecine Générale.
Unité de recherche Soins Primaires et Santé
Doctor in biomedical sciences
Towards a system of concepts for Family Medicine.
Multilingual indexing in General Practice/Family Medicine in the era
of the Semantic Web
by Dr. Marc JAMOULLE
Introduction
This thesis is about giving visibility to the often overlooked work of family physicians and, consequently, about grey literature in General Practice and Family Medicine (GP/FM). It often seems that conference organizers do not think of GP/FM as a knowledge-producing discipline that deserves active dissemination. A conference is organized, but not much is done with the knowledge shared at these meetings; in turn, that knowledge cannot be reused or reapplied. This thesis is also about indexing. To retrieve knowledge, indexing is mandatory. We must prepare tools that will automatically index the thousands of abstracts that family doctors produce each year in various languages. Finally, this work is about semantics¹. It is an introduction to health terminologies, ontologies, semantic data, and linked open data. All are expressions of the next step: the Semantic Web for health care data. Concepts, units of thought expressed by terms, will be our target and must be expressible in multiple languages. In turn, three areas of knowledge are at stake in this study: (i) Family Medicine as a pillar of primary health care, (ii) computational linguistics, and (iii) health information systems.
Aim
• To identify knowledge produced by general practitioners (GPs) by improving the annotation of grey literature in Primary Health Care
• To propose an experimental indexing system, acting as a draft for a standardized table of contents of GP/FM
• To improve the searchability of repositories of grey literature in GP/FM.
¹ For specific terms, see the Glossary, page 257.
Methods
The first step aimed to design the taxonomy by identifying relevant concepts in a compiled corpus of GP/FM texts. We studied the concepts identified in nearly two thousand conference communications by GPs. The relevant concepts belong to fields that focus on GP/FM activities (e.g. teaching, ethics, management, or environmental hazard issues).
The second step was the development of an on-line, multilingual terminological resource for each category of the resulting taxonomy, named Q-Codes. We designed this terminology in the form of a lightweight ontology, accessible on-line for readers and ready for use by computers on the Semantic Web. It is also fit for the Linked Open Data universe.
Results
We propose 182 Q-Codes in an on-line multilingual database (10 languages) (www.hetop.eu/Q), each acting as a search filter for Medline. Q-Codes are also available in the form of Uniform Resource Identifiers (URIs) and are exportable in the Web Ontology Language (OWL). The International Classification of Primary Care (ICPC) is linked to the Q-Codes to form the Core Content Classification in General Practice/Family Medicine (3CGP). So far, 3CGP is used by humans in pedagogy, in bibliographic studies, and in indexing congresses, master's theses and other forms of grey literature in GP/FM. Use by computers is being tested in automatic classifiers, annotators and natural language processing.
Discussion
To the best of our knowledge, this is the first attempt to expand the ICPC coding system with an extension for family physicians' contextual issues, thus covering the non-clinical content of practice. It remains to be proven that our proposed terminology will help in dealing with more complex systems, such as MeSH, to support information storage and retrieval activities. However, this exercise is proposed as a first step in the creation of an ontology of GP/FM and as an opening to the complex world of Semantic Web technologies.
Conclusion
We expect that the creation of this terminological resource for indexing abstracts and for facilitating Medline searches by general practitioners, researchers and students in medicine will reduce the loss of knowledge in the domain of GP/FM. In addition, through better indexing of the grey literature (congress abstracts, master's and doctoral theses), we hope to enhance the accessibility of research results and give visibility to the invisible work of family physicians.