4,503 research outputs found
Mistakes in medical ontologies: Where do they come from and how can they be detected?
We present the details of a methodology for quality assurance in large medical terminologies and describe three algorithms that can help terminology developers and users to identify potential mistakes. The methodology is based in part on linguistic criteria and in part on logical and ontological principles governing sound classifications. We conclude by outlining the results of applying the methodology in the form of a taxonomy of the different types of errors and potential errors detected in SNOMED CT.
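The abstract leaves the three algorithms unspecified. As an illustration only, one widely used structural check of this kind flags redundant is-a links, i.e. direct links already implied by a longer path through the hierarchy. A minimal sketch (hypothetical code, not the authors' algorithms):

```python
# Illustrative quality-assurance check on an is-a hierarchy: find direct
# is-a links that are already implied by a longer path (redundant links).
# Hypothetical sketch; not the algorithms described in the paper.

def transitive_parents(term, is_a, seen=None):
    """Return all ancestors of `term` reachable through the is-a graph."""
    if seen is None:
        seen = set()
    for parent in is_a.get(term, ()):
        if parent not in seen:
            seen.add(parent)
            transitive_parents(parent, is_a, seen)
    return seen

def redundant_links(is_a):
    """Direct (child, parent) links also reachable via another parent."""
    redundant = []
    for child, parents in is_a.items():
        for parent in parents:
            if any(parent in transitive_parents(other, is_a)
                   for other in parents if other != parent):
                redundant.append((child, parent))
    return redundant

# Toy hierarchy: the direct link to "lung disease" is redundant, because
# it already follows from "bacterial pneumonia" is-a "pneumonia".
hierarchy = {
    "bacterial pneumonia": ["pneumonia", "lung disease"],
    "pneumonia": ["lung disease"],
}
print(redundant_links(hierarchy))  # [('bacterial pneumonia', 'lung disease')]
```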
Evaluating the semantic web: a task-based approach
The increased availability of online knowledge has led to the design of several algorithms that solve a variety of tasks by harvesting the Semantic Web, i.e. by dynamically selecting and exploring a multitude of online ontologies. Our hypothesis is that the performance of such novel algorithms implicitly provides an insight into the quality of the ontologies used, and thus opens the way to a task-based evaluation of the Semantic Web. We have investigated this hypothesis by studying the lessons learnt about online ontologies when used to solve three tasks: ontology matching, folksonomy enrichment, and word sense disambiguation. Our analysis leads to a suite of conclusions about the status of the Semantic Web, which highlight a number of strengths and weaknesses of the semantic information available online and complement the findings of other analyses of the Semantic Web landscape.
Document generality: its computation for ranking
The increased variety of information makes it critical to retrieve documents that are not only relevant but also broad enough to cover as many different aspects of a given topic as possible. The increased variety of users also makes it critical to retrieve documents that are jargon-free and easy to understand rather than specific technical materials. In this paper, we propose a new concept, namely document generality computation. Generality is of fundamental importance to information retrieval: document generality is the state or quality of a document being general. We compute document generality with a domain-ontology method that analyzes the scope and semantic cohesion of the concepts appearing in the text. For test purposes, the proposed approach is then applied to improving the performance of document ranking in biomedical information retrieval. The retrieved documents are re-ranked by a combined score of similarity and the closeness of the documents' generality to that of the query. The experiments have shown that our method can work on a large-scale biomedical text corpus, OHSUMED (Hersh, Buckley, Leone & Hickam 1994), a subset of the MEDLINE collection containing 348,566 medical journal references and 101 test queries, with encouraging performance.
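The re-ranking step described above can be sketched as follows, assuming per-document similarity and generality scores in [0, 1]; the weighting scheme and the interpolation parameter are illustrative, not the paper's actual formulas:

```python
# Illustrative combined-score re-ranking: mix query similarity with how
# close each document's generality is to the query's generality.
# Hypothetical weighting; not the paper's actual scoring function.

def rerank(docs, query_generality, alpha=0.7):
    """docs: list of (doc_id, similarity, generality); all scores in [0, 1]."""
    def combined(doc):
        _, sim, gen = doc
        closeness = 1.0 - abs(gen - query_generality)  # 1.0 = same generality
        return alpha * sim + (1.0 - alpha) * closeness
    return [doc_id for doc_id, _, _ in
            sorted(docs, key=combined, reverse=True)]

# A very specific but highly similar document can be out-ranked by a
# slightly less similar document whose generality matches the query's.
docs = [("specific", 0.9, 0.1), ("general", 0.8, 0.5)]
print(rerank(docs, query_generality=0.5, alpha=0.5))  # ['general', 'specific']
```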
SNOMED CT standard ontology based on the ontology for general medical science
Background: Systematized Nomenclature of Medicine–Clinical Terms (SNOMED CT, hereafter abbreviated SCT) is a comprehensive medical terminology used for standardizing the storage, retrieval, and exchange of electronic health data. Some efforts have been made to capture the contents of SCT as Web Ontology Language (OWL), but these efforts have been hampered by the size and complexity of SCT.
Method: Our proposal here is to develop an upper-level ontology and to use it as the basis for defining the terms in SCT in a way that will support quality assurance of SCT, for example, by allowing consistency checks of definitions and the identification and elimination of redundancies in the SCT vocabulary. Our proposed upper-level SCT ontology (SCTO) is based on the Ontology for General Medical Science (OGMS).
Results: The SCTO is implemented in OWL 2 to support automatic inference and consistency checking. The approach will allow integration of SCT data with data annotated using Open Biomedical Ontologies (OBO) Foundry ontologies, since the use of OGMS will ensure consistency with the Basic Formal Ontology, which is the top-level ontology of the OBO Foundry. Currently, the SCTO contains 304 classes, 28 properties, 2,400 axioms, and 1,555 annotations. It is publicly available through BioPortal at http://bioportal.bioontology.org/ontologies/SCTO/.
Conclusion: The resulting ontology can enhance the semantics of clinical decision support systems and semantic interoperability among distributed electronic health records. In addition, the populated ontology can be used for the automation of mobile health applications.
Integration of the DOLCE top-level ontology into the OntoSpec methodology
This report describes a new version of the OntoSpec methodology for ontology
building. Defined by the LaRIA Knowledge Engineering Team (University of
Picardie Jules Verne, Amiens, France), OntoSpec aims at helping builders to
model ontological knowledge (upstream of formal representation). The
methodology relies on a set of rigorously-defined modelling primitives and
principles. Its application leads to the elaboration of a semi-informal
ontology, which is independent of knowledge representation languages. We
recently enriched the OntoSpec methodology by endowing it with a new resource,
the DOLCE top-level ontology defined at the LOA (IST-CNR, Trento, Italy). The
goal of this integration is to provide modellers with additional help in
structuring application ontologies, while maintaining independence
vis-à-vis formal representation languages. In this report, we first provide
an overview of the OntoSpec methodology's general principles and then describe
the DOLCE re-engineering process. A complete version of DOLCE-OS (i.e. a
specification of DOLCE in the semi-informal OntoSpec language) is presented in
an appendix.
Using background knowledge for ontology evolution
One of the current bottlenecks for automating ontology evolution is establishing the right links between newly arising information and the existing knowledge in the ontology. Most existing approaches rely mainly on the user to capture and represent new knowledge. Our ontology evolution framework aims to reduce or even eliminate user input through the use of background knowledge. In this paper, we show how various sources of background knowledge can be exploited for relation discovery. We perform a relation discovery experiment focusing on the use of WordNet and Semantic Web ontologies as sources of background knowledge. We back our experiment with a thorough analysis that highlights various issues around improving and validating relation discovery in the future, which will directly improve the task of automatically performing ontology changes during evolution.
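As a toy illustration of the relation-discovery idea (not the paper's framework), the sketch below stands in for WordNet with a hand-coded hypernym table and proposes subclass links between a new term and concepts already in the ontology; all names and the relation label are hypothetical:

```python
# Toy sketch of relation discovery from background knowledge. A hand-coded
# hypernym table stands in for WordNet; the real framework queries WordNet
# and online Semantic Web ontologies instead.

BACKGROUND_HYPERNYMS = {  # term -> hypernyms, as a WordNet-like source gives
    "dog": {"canine", "mammal", "animal"},
    "cat": {"feline", "mammal", "animal"},
}

def discover_relations(new_term, ontology_concepts):
    """Propose links between `new_term` and existing ontology concepts."""
    hypernyms = BACKGROUND_HYPERNYMS.get(new_term, set())
    return [(new_term, "subClassOf", concept)
            for concept in ontology_concepts if concept in hypernyms]

print(discover_relations("dog", ["mammal", "vehicle"]))
# [('dog', 'subClassOf', 'mammal')]
```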
OWL-POLAR: A Framework for Semantic Policy Representation and Reasoning
Peer reviewed preprint
Who Cares about Axiomatization? Representation, Invariance, and Formal Ontologies
The philosophy of science of Patrick Suppes is centered on two important notions that are
part of the title of his recent book (Suppes 2002): Representation and Invariance.
Representation is important because when we embrace a theory we implicitly choose a way to
represent the phenomenon we are studying. Invariance is important because, since invariants
are the only things that are constant in a theory, in a way they give the "objective" meaning of
that theory.
Every scientific theory gives a representation of a class of structures and studies the invariant
properties holding in that class of structures. In Suppes' view, the best way to define this class
of structures is via axiomatization. This is because a class of structures is given by a
definition, and that same definition establishes the properties that a single structure
must possess in order to belong to the class. These properties correspond to the axioms of a
logical theory.
In Suppes' view, the best way to characterize a scientific structure is by giving a
representation theorem for its models and singling out the invariants in the structure.
Thus, we can say that the philosophy of science of Patrick Suppes consists in the application
of the axiomatic method to scientific disciplines.
What I want to argue in this paper is that this application of the axiomatic method is also at
the basis of a new approach that is being increasingly applied to the study of computer
science and information systems, namely the approach of formal ontologies.
The main task of an ontology is that of making explicit the conceptual structure underlying a
certain domain. By "making explicit the conceptual structure" we mean singling out the most
basic entities populating the domain and writing axioms expressing the main properties of
these primitives and the relations holding among them.
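A standard example (not drawn from the paper): a formal ontology that takes parthood as a primitive binary relation $P$ typically constrains it with mereological axioms such as

```latex
\forall x \; P(x,x)
  \quad \text{(reflexivity: everything is part of itself)} \\
\forall x \forall y \; \bigl( P(x,y) \wedge P(y,x) \rightarrow x = y \bigr)
  \quad \text{(antisymmetry)} \\
\forall x \forall y \forall z \; \bigl( P(x,y) \wedge P(y,z) \rightarrow P(x,z) \bigr)
  \quad \text{(transitivity)}
```

Exactly as with Suppes' axiomatized theories, these axioms pick out the class of structures that count as admissible interpretations of the primitive.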
So, in both cases, axiomatization is the main tool used to characterize the object of
inquiry, whether that object is a scientific theory (in Suppes' approach) or an information
system (in formal ontologies).
In the following section I will present the view of Patrick Suppes on the philosophy of science
and the axiomatic method, in section 3 I will survey the theoretical issues underlying the work
that is being done in formal ontologies and in section 4 I will draw a comparison of these two
approaches and explore similarities and differences between them.