Report on the EHCR (Deliverable 26.2)
This deliverable is the second for Workpackage 26. The first, submitted after
Month 12, summarised the areas of research that the partners had identified as
being relevant to the semantic indexing of the EHR. This second one reports
progress on the key threads of work identified by the partners during the project to
contribute towards semantically interoperable and processable EHRs.
This report provides a set of short summaries on key topics that have emerged as
important, and to which the partners are able to make strong contributions. Some of
these are also being extended via two new EU Framework 6 proposals that include
WP26 partners: this is also a measure of the success of this Network of Excellence.
Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback
Continuous improvement in industrial processes is increasingly a key element of competitiveness for industrial systems. In this framework, the management of experience feedback is designed to build, analyze and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve processes and products. During problem-solving processes, the intellectual investment of experts is often considerable and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution relates to the structuring of a cognitive experience feedback framework, which allows a flexible exploitation of expert knowledge during problem-solving processes and the reuse of the collected experience. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graphs formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) of conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed formalization of experience feedback processes in the transport industry sector.
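The abstract above names the Transferable Belief Model as its fusion mechanism. A minimal sketch of the TBM's unnormalised conjunctive combination rule follows; the toy frame of discernment and the two expert mass functions are invented illustrations, not data from the paper.

```python
from itertools import product

def conjunctive_combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with the TBM
    conjunctive rule; no normalisation, so any mass landing on the empty
    set measures the conflict between the two sources."""
    combined = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        combined[inter] = combined.get(inter, 0.0) + wa * wb
    return combined

# Toy example: two experts assess possible root causes of a process failure.
frame = frozenset({"machine", "operator", "material"})
m_expert1 = {frozenset({"machine"}): 0.6, frame: 0.4}
m_expert2 = {frozenset({"machine", "operator"}): 0.7, frame: 0.3}

fused = conjunctive_combine(m_expert1, m_expert2)
```

Because the two sources agree here, no mass falls on the empty set and the fused belief concentrates on the "machine" cause.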
Proceedings ICPW'07: 2nd International Conference on the Pragmatic Web, 22-23 Oct. 2007, Tilburg: NL
The Foundational Model of Anatomy Ontology
Anatomy is the structure of biological organisms. The term also denotes the scientific
discipline devoted to the study of anatomical entities and the structural and
developmental relations that obtain among these entities during the lifespan of an
organism. Anatomical entities are the independent continuants of biomedical reality on
which physiological and disease processes depend, and which, in response to etiological
agents, can transform themselves into pathological entities. For these reasons, hard copy
and in silico information resources in virtually all fields of biology and medicine, as a
rule, make extensive reference to anatomical entities. Because of the lack of a
generalizable, computable representation of anatomy, developers of computable
terminologies and ontologies in clinical medicine and biomedical research have
represented anatomy from their own, more or less divergent, viewpoints. The resulting heterogeneity
presents a formidable impediment to correlating human anatomy not only across
computational resources but also with the anatomy of model organisms used in
biomedical experimentation. The Foundational Model of Anatomy (FMA) is being
developed to fill the need for a generalizable anatomy ontology, which can be used and
adapted by any computer-based application that requires anatomical information.
Moreover it is evolving into a standard reference for divergent views of anatomy and a
template for representing the anatomy of animals. A distinction is made between the FMA
ontology as a theory of anatomy and the implementation of this theory as the FMA
artifact. In either sense of the term, the FMA is a spatial-structural ontology of the
entities and relations which together form the phenotypic structure of the human
organism at all biologically salient levels of granularity. Making use of explicit
ontological principles and sound methods, it is designed to be understandable by human
beings and navigable by computers. The FMA’s ontological structure provides for
machine-based inference, enabling powerful computational tools of the future to reason
with biomedical data.
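The machine-based inference the abstract describes rests on structural relations such as part-of. The sketch below shows the simplest form of such reasoning, transitive closure over a partonomy; the tiny hierarchy is an illustrative assumption, not an excerpt of the actual FMA artifact.

```python
# Toy "regional part of" assertions, invented for illustration only.
part_of = {
    "left ventricle": "heart",
    "heart": "thoracic cavity",
    "thoracic cavity": "thorax",
}

def all_parents(entity, relation):
    """Walk the direct part-of links to obtain every transitive parent,
    i.e. the kind of inference a spatial-structural ontology enables."""
    parents = []
    while entity in relation:
        entity = relation[entity]
        parents.append(entity)
    return parents

# Inference: the left ventricle is (transitively) part of the thorax.
chain = all_parents("left ventricle", part_of)
```

A real ontology would of course distinguish several part-of flavours and carry many more relation types; the point is only that explicit, computable relations make such traversals mechanical.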
Medical WordNet: A new methodology for the construction and validation of information resources for consumer health
A consumer health information system must be able to comprehend both expert and non-expert medical vocabulary and to map between the two. We describe an ongoing
project to create a new lexical database called Medical WordNet (MWN), consisting of
medically relevant terms used by and intelligible to non-expert subjects and supplemented by a corpus of natural-language sentences that is designed to provide
medically validated contexts for MWN terms. The corpus derives primarily from online health information sources targeted to consumers, and involves two sub-corpora, called Medical FactNet (MFN) and Medical BeliefNet (MBN), respectively. The former consists of statements accredited as true on the basis of a rigorous process of validation, the latter of statements which non-experts believe to be true. We summarize the MWN / MFN / MBN project, and describe some of its applications.
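The core requirement stated above, mapping between expert and non-expert vocabulary, can be sketched as a bidirectional lexicon lookup. The mini-lexicon below is invented for illustration and is not actual MWN data.

```python
# Toy lay-to-expert lexicon; entries are illustrative assumptions only.
lay_to_expert = {
    "heart attack": "myocardial infarction",
    "high blood pressure": "hypertension",
}
expert_to_lay = {v: k for k, v in lay_to_expert.items()}

def translate(term):
    """Map a term across the expert/lay divide in either direction,
    returning the term unchanged if no mapping is known."""
    t = term.lower()
    return lay_to_expert.get(t) or expert_to_lay.get(t) or t
```

A consumer health system would sit such a mapping behind both query interpretation (lay in, expert out) and answer generation (expert in, lay out).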
Developing a European grid infrastructure for cancer research: vision, architecture and services
Life sciences are currently at the centre of an information revolution. The nature and amount of information now available open up areas of research that were once in the realm of science fiction. During this information revolution, data-gathering capabilities have greatly surpassed data-analysis techniques. Data integration across heterogeneous data sources and data aggregation across different aspects of the biomedical spectrum are therefore at the centre of current biomedical and pharmaceutical R&D.
Knowledge-based modelling applied to synucleinopathies
The adoption of telemedicine technologies has enabled collaborative programs involving a variety of links among distributed medical structures, health officials and professionals. The use of telemedicine for the transmission of medical data, and the possibility for several distant physicians to share their knowledge of given medical cases, provides clear benefits but also raises several unsolved conceptual and technical challenges. The seamless exchange of, and access to, medical information between medical structures, health professionals, and patients is a prerequisite for the harmonious development of this new medical practice. This paper proposes a new approach to semantic interoperability for enabling mutual understanding of the terminologies and concepts used. The proposed approach is based on conceptual graphs and supports collaborative activities by describing how different health specialists can apply appropriate strategies to eliminate differential medical diagnoses. Intelligent analysis strategies are used to narrow down and pinpoint medical disorders. The proposed model is fully verified by a case study in the context of elderly patients, specifically dealing with synucleinopathies, a group of neurodegenerative diseases that includes Parkinson's disease (PD), dementia with Lewy bodies (DLB), pure autonomic failure (PAF) and multiple system atrophy (MSA).
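The differential-diagnosis narrowing described above can be sketched as matching observed findings against per-disorder profiles. The symptom sets below are toy assumptions for illustration, not clinical knowledge taken from the paper.

```python
# Invented, highly simplified finding profiles for the four disorders
# named in the abstract; illustrative assumptions only.
profiles = {
    "PD":  {"parkinsonism", "rest tremor"},
    "DLB": {"parkinsonism", "visual hallucinations", "fluctuating cognition"},
    "MSA": {"parkinsonism", "autonomic failure", "cerebellar signs"},
    "PAF": {"autonomic failure"},
}

def narrow(findings):
    """Keep only the disorders whose profile contains every observed
    finding, i.e. eliminate incompatible differential diagnoses."""
    return sorted(d for d, p in profiles.items() if findings <= p)

candidates = narrow({"parkinsonism", "autonomic failure"})
```

In the paper's setting this elimination step is driven by conceptual-graph reasoning over shared terminologies rather than plain set inclusion, but the narrowing logic is the same.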
Development and implementation of clinical guidelines: an artificial intelligence perspective
Clinical practice guidelines in paper format are still the preferred form of delivery of medical knowledge and recommendations to healthcare professionals. Their current support and development process have well-identified limitations, to which the healthcare community has been continuously searching for solutions. Artificial intelligence may create the conditions and provide the tools to address many, if not all, of these limitations. This paper presents a comprehensive and up-to-date review of computer-interpretable guideline approaches, namely Arden Syntax, GLIF, PROforma, Asbru, GLARE and SAGE. It also provides an assessment of how well these approaches respond to the challenges posed by paper-based guidelines, and addresses topics of artificial intelligence that could provide a solution to the shortcomings of clinical guidelines. Among the topics addressed are expert systems, case-based reasoning, medical ontologies and reasoning under uncertainty, with a special focus on methodologies for assessing the quality of information when managing incomplete information. Finally, an analysis is made of the fundamental requirements of a guideline model and of the importance that standard terminologies and models for clinical data have in the semantic and syntactic interoperability between a guideline execution engine and the software tools used in clinical settings. A line of research is also proposed that includes the development of an ontology for clinical practice guidelines and a decision model for a guideline-based expert system that manages non-compliance with clinical guidelines and uncertainty. This work is funded by national funds through the FCT – Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) within project PEst-OE/EEI/UI0752/2011.
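A computer-interpretable guideline, as reviewed above, ultimately encodes decision steps as condition/action rules. The sketch below shows one such step in plain Python, far simpler than Arden Syntax or PROforma; the threshold, field name and recommendation strings are illustrative assumptions, not content of any real guideline.

```python
def hypertension_step(patient):
    """One toy decision step of a guideline: returns a recommendation,
    explicitly handling the incomplete-information case the review
    highlights (a missing measurement triggers a data-gathering action)."""
    sbp = patient.get("systolic_bp")      # mmHg; field name is assumed
    if sbp is None:
        return "measure blood pressure"   # incomplete information
    if sbp >= 140:                        # illustrative threshold
        return "consider antihypertensive therapy"
    return "routine follow-up"
```

Formalisms such as PROforma add to this bare rule an explicit task model, argumentation for and against each candidate action, and an execution engine that walks the guideline as a plan.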
Validating archetypes for the Multiple Sclerosis Functional Composite
Background: Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet been sufficiently addressed. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods: A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results: Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. Conclusions: The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to the necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.
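An archetype constrains the reference model by bounding what a data element may contain. The sketch below checks one toy element, a timed-walk result in seconds, against such constraints; the units, range, and field names are illustrative assumptions, not values from the published Multiple Sclerosis Functional Composite archetypes.

```python
# Illustrative archetype-style constraints on a single quantity element.
constraints = {"units": "s", "lower": 0.0, "upper": 180.0}

def validate_element(value, units, c=constraints):
    """Return a list of constraint violations for one data element,
    mimicking the kind of check an archetype validator performs."""
    errors = []
    if units != c["units"]:
        errors.append("units must be " + c["units"])
    if not (c["lower"] <= value <= c["upper"]):
        errors.append("value out of archetype range")
    return errors

result = validate_element(7.5, "s")   # a plausible walk time: no errors
```

Real archetype tooling validates whole compositions against ADL-defined constraints, including occurrences, terminology bindings and nested structures, but each leaf check has this shape.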