Meta-tools for software development and knowledge acquisition
The effectiveness of tools that support software development depends strongly on the match between the tools and their task. Knowledge-acquisition (KA) tools constitute a class of development tools targeted at knowledge-based systems. Generally, KA tools that are custom-tailored for particular application domains are more effective than general KA tools that cover a large class of domains. The high cost of custom-tailoring KA tools manually has encouraged researchers to develop meta-tools for KA tools. Current research issues in meta-tools for knowledge acquisition are the specification styles, or meta-views, used for target KA tools, and the relationships between the specification entered in the meta-tool and other specifications for the target program under development. We examine different types of meta-views and meta-tools. Our current project is to provide meta-tools that produce KA tools from multiple specification sources, for instance, from a task analysis of the target application.
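A minimal sketch of the meta-tool idea described above, in Python: a declarative domain specification (standing in for a task analysis) is turned into a domain-tailored KA tool. All names here (DomainSpec, Slot, generate_ka_tool) are illustrative assumptions, not the paper's actual system.

```python
# Illustrative sketch: a "meta-tool" that takes a declarative domain
# specification and generates a custom-tailored knowledge-acquisition (KA)
# tool for that domain. Names and structure are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Slot:
    name: str          # attribute the KA tool should elicit, e.g. "symptom"
    prompt: str        # domain-specific question shown to the expert
    value_type: type   # expected type of the elicited value


@dataclass
class DomainSpec:
    domain: str
    slots: list[Slot]  # derived, e.g., from a task analysis of the application


def generate_ka_tool(spec: DomainSpec):
    """Return a domain-tailored elicitation function: the 'target KA tool'."""
    def ka_tool() -> dict:
        knowledge = {}
        print(f"Knowledge acquisition for: {spec.domain}")
        for slot in spec.slots:
            raw = input(f"{slot.prompt} ")
            knowledge[slot.name] = slot.value_type(raw)
        return knowledge
    return ka_tool


if __name__ == "__main__":
    spec = DomainSpec(
        domain="infectious-disease diagnosis",
        slots=[Slot("symptom", "Name a key symptom:", str),
               Slot("severity", "Severity on a 1-5 scale:", int)],
    )
    acquire = generate_ka_tool(spec)   # the meta-tool's output
    # Calling acquire() would interactively collect domain knowledge from an expert.
```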
Towards Interoperability of Biomedical Ontologies
Report on Dagstuhl Seminar 07132, Schloss Dagstuhl, March 27-30, 2007
Making Metadata More FAIR Using Large Language Models
With the global increase in experimental data artifacts, harnessing them in a unified fashion runs into a major stumbling block: bad metadata. To bridge this gap, this work presents a Natural Language Processing (NLP) informed application, called FAIRMetaText, that compares metadata. Specifically, FAIRMetaText analyzes the natural language descriptions of metadata and provides a mathematical similarity measure between two terms. This measure can then be utilized for analyzing varied metadata, by suggesting terms for compliance or by grouping similar terms to identify replaceable terms. The efficacy of the algorithm is presented qualitatively and quantitatively on publicly available research artifacts and demonstrates large gains across metadata-related tasks through an in-depth study of a wide variety of Large Language Models (LLMs). This software can drastically reduce the human effort in sifting through various natural language metadata while employing several experimental datasets on the same topic.
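A minimal sketch of the general idea of comparing natural-language metadata descriptions via a similarity measure, here using sentence embeddings and cosine similarity. This is an illustration under assumed tooling (the sentence-transformers model name is an arbitrary choice), not the FAIRMetaText implementation.

```python
# Sketch: embed the natural-language description of each metadata term and
# use cosine similarity as the similarity measure between the two terms.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # arbitrary embedding model

def metadata_similarity(desc_a: str, desc_b: str) -> float:
    """Cosine similarity between two metadata term descriptions."""
    emb = model.encode([desc_a, desc_b])
    a, b = emb[0], emb[1]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: two descriptions that likely refer to replaceable terms.
print(metadata_similarity(
    "age of the participant at the time of sample collection",
    "subject age in years when the specimen was taken"))
```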
WebProtégé: A Cloud-Based Ontology Editor
We present WebProtégé, a tool to develop ontologies represented in the Web Ontology Language (OWL). WebProtégé is a cloud-based application that allows users to collaboratively edit OWL ontologies, and it is available for use at https://webprotege.stanford.edu. WebProtégé currently hosts more than 68,000 OWL ontology projects and has over 50,000 user accounts. In this paper, we detail the main new features of the latest version of WebProtégé.
Discovering Beaten Paths in Collaborative Ontology-Engineering Projects using Markov Chains
Biomedical taxonomies, thesauri and ontologies, such as the International Classification of Diseases (ICD) as a taxonomy or the National Cancer Institute Thesaurus as an OWL-based ontology, play a critical role in acquiring, representing and processing information about human health. With increasing adoption and relevance, biomedical ontologies have also grown significantly in size. For example, the 11th revision of the ICD, currently under active development by the WHO, contains nearly 50,000 classes representing a vast variety of diseases and causes of death. This growth in size has been accompanied by an evolution in the way ontologies are engineered. Because no single individual has the expertise to develop such large-scale ontologies, ontology-engineering projects have evolved from small-scale efforts involving just a few domain experts to large-scale projects that require effective collaboration between dozens or even hundreds of experts, practitioners and other stakeholders. Understanding how these stakeholders collaborate will enable us to improve the editing environments that support such collaborations. We uncover how large ontology-engineering projects, such as the ICD in its 11th revision, unfold by analyzing the usage logs of five biomedical ontology-engineering projects of varying sizes and scopes using Markov chains. We discover intriguing interaction patterns (e.g., which properties users subsequently change) that suggest that large collaborative ontology-engineering projects are governed by a few general principles that determine and drive development. From our analysis, we identify commonalities and differences between the projects that have implications for project managers, ontology editors, developers and contributors working on collaborative ontology-engineering projects and tools in the biomedical domain.
Comment: Published in the Journal of Biomedical Informatics.
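A hedged sketch of the style of analysis described above: estimating a first-order Markov chain over edit actions from a usage log, so one can ask which action users most often perform next. The log format and action names are assumptions for illustration, not the paper's actual data schema.

```python
# Sketch: build a transition-probability table P(next action | current action)
# from ordered edit sessions, the basic ingredient of a first-order Markov chain.
from collections import defaultdict

# Each inner list is one user's session as an ordered sequence of edit actions
# (illustrative data, not real usage logs).
sessions = [
    ["edit_title", "edit_definition", "edit_synonym", "edit_definition"],
    ["create_class", "edit_title", "edit_definition"],
    ["edit_synonym", "edit_definition", "edit_title"],
]

counts = defaultdict(lambda: defaultdict(int))
for session in sessions:
    for current, following in zip(session, session[1:]):
        counts[current][following] += 1

# Normalize counts row-wise into transition probabilities.
transitions = {
    current: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for current, nexts in counts.items()
}

for current, nexts in transitions.items():
    most_likely = max(nexts, key=nexts.get)
    print(f"after {current!r}, most likely next action: {most_likely!r}")
```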
How orthogonal are the OBO Foundry ontologies?
Background: Ontologies in biomedicine facilitate information integration, data exchange, search and query of biomedical data, and other critical knowledge-intensive tasks. The OBO Foundry is a collaborative effort to establish a set of principles for ontology development with the eventual goal of creating a set of interoperable reference ontologies in the domain of biomedicine. One of the key requirements to achieve this goal is to ensure that ontology developers reuse term definitions that others have already created rather than create their own, thereby making the ontologies orthogonal.
Methods: We used a simple lexical algorithm to analyze the extent to which the set of OBO Foundry candidate ontologies identified from September 2009 to September 2010 conforms to this vision. Specifically, we analyzed (1) the level of explicit term reuse in this set of ontologies, (2) the level of overlap, where two ontologies define similar terms independently, and (3) how the levels of reuse and overlap changed during the course of this year.
Results: We found that 30% of the ontologies reuse terms from other Foundry candidates and 96% of the candidate ontologies contain terms that overlap with terms from the other ontologies. While term reuse increased among the ontologies between September 2009 and September 2010, the level of overlap among the ontologies remained relatively constant. Additionally, we analyzed the six ontologies announced as OBO Foundry members on March 5, 2010, and found that their level of overlap was extremely low, but, notably, so was their level of term reuse.
Conclusions: We have created a prototype web application that allows OBO Foundry ontology developers to see which classes from their ontologies overlap with classes from other ontologies in the OBO Foundry (http://obomap.bioontology.org). From our analysis, we conclude that while the OBO Foundry has made significant progress toward orthogonality during the period of this study through increased adoption of explicit term reuse, a large amount of overlap remains among these ontologies. Furthermore, the characteristics of the identified overlap, such as the terms it comprises and its distribution among the ontologies, indicate that achieving orthogonality will be exceptionally difficult, if not impossible.
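A minimal sketch in the spirit of the simple lexical algorithm mentioned above: normalize class labels and report which labels two ontologies define independently (overlap). The label lists and the normalization rules are illustrative assumptions; a real analysis would also distinguish explicit reuse of shared term identifiers from lexical overlap.

```python
# Sketch: lexical overlap between two ontologies' class labels.
import re

def normalize(label: str) -> str:
    """Lowercase a class label, strip punctuation, and collapse separators."""
    stripped = re.sub(r"[^\w\s\-]", "", label)
    return re.sub(r"[\s_\-]+", " ", stripped).strip().lower()

def lexical_overlap(labels_a: list[str], labels_b: list[str]) -> set[str]:
    """Return normalized labels that both ontologies define."""
    return {normalize(l) for l in labels_a} & {normalize(l) for l in labels_b}

# Illustrative label lists, not real OBO Foundry content.
ontology_a = ["Cell membrane", "mitochondrion", "Nucleus"]
ontology_b = ["cell_membrane", "ribosome", "nucleus"]
print(lexical_overlap(ontology_a, ontology_b))   # {'cell membrane', 'nucleus'}
```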
Building a biomedical ontology recommender web service
Background: Researchers in biomedical informatics use ontologies and terminologies to annotate their data in order to facilitate data integration and translational discoveries. As the use of ontologies for annotation of biomedical datasets has risen, a common challenge is to identify ontologies that are best suited to annotating specific datasets. The number and variety of biomedical ontologies is large, and it is cumbersome for a researcher to figure out which ontology to use.
Methods: We present the Biomedical Ontology Recommender web service. The system uses textual metadata or a set of keywords describing a domain of interest and suggests appropriate ontologies for annotating or representing the data. The service makes a decision based on three criteria. The first is coverage, or the ontologies that provide the most terms covering the input text. The second is connectivity, or the ontologies that are most often mapped to by other ontologies. The final criterion is size, or the number of concepts in the ontologies. The service scores the ontologies as a function of the scores of the annotations created using the National Center for Biomedical Ontology (NCBO) Annotator web service. We used all the ontologies from the UMLS Metathesaurus and the NCBO BioPortal.
Results: We compare and contrast our Recommender through an exhaustive functional comparison to previously published efforts. We evaluate and discuss the results of several recommendation heuristics in the context of three real-world use cases. The best recommendation heuristics, rated 'very relevant' by expert evaluators, are the ones based on the coverage and connectivity criteria. The Recommender service (alpha version) is available to the community and is embedded into BioPortal.
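A hedged sketch of the three-criteria scoring idea described above, combining coverage, connectivity and size into a single ontology score. The weights, the normalization and the direction of the size criterion are illustrative assumptions, not the published Recommender's formula; the ontology names are real, but their statistics here are made up.

```python
# Sketch: rank candidate ontologies by a weighted combination of coverage,
# connectivity and size. Weights and normalization are illustrative only.
from dataclasses import dataclass

@dataclass
class OntologyStats:
    name: str
    coverage: float      # fraction of the input text covered by its terms
    connectivity: float  # normalized count of mappings from other ontologies
    size: int            # number of concepts

def score(o: OntologyStats, w_cov=0.6, w_con=0.3, w_size=0.1,
          max_size=500_000) -> float:
    # Assumes larger ontologies score slightly higher; the real criterion
    # may weigh size differently.
    return (w_cov * o.coverage
            + w_con * o.connectivity
            + w_size * min(o.size / max_size, 1.0))

candidates = [
    OntologyStats("SNOMED CT", coverage=0.82, connectivity=0.90, size=350_000),
    OntologyStats("NCI Thesaurus", coverage=0.75, connectivity=0.70, size=120_000),
    OntologyStats("Small domain ontology", coverage=0.40, connectivity=0.10, size=2_000),
]
for o in sorted(candidates, key=score, reverse=True):
    print(f"{o.name}: {score(o):.2f}")
```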