Automating class definitions from OWL to English
Text definitions for entities within bio-ontologies are a cornerstone of the effort to gain a consensus in understanding and usage of those ontologies. Writing these definitions is, however, a considerable effort and there is often a lag between specification of the entities in the ontology and the development of the text-based definitions. As well as these text definitions, there can also be logical descriptions and definitions of an ontology's entities. The goal of natural language generation (NLG) from ontologies is to take the logical description of entities and generate fluent natural language. We should be able to use NLG to automatically provide text-based definitions from an ontology that has logical descriptions of its entities and thus avoid the bottleneck of authoring these definitions by hand. In this paper we present some early work in using NLG to provide such text definitions for the Experimental Factor Ontology (EFO). We present our results, discuss issues in generating text definitions, and highlight some future work.
Automating generation of textual class definitions from OWL to English
Text definitions for entities within bio-ontologies are a cornerstone of the effort to gain a consensus in understanding and usage of those ontologies.
Writing these definitions is, however, a considerable effort and there is often a lag between specification of the main part of an ontology (logical descriptions and
definitions of entities) and the development of the text-based definitions. The goal of natural language generation (NLG) from ontologies is to take the logical description of entities and generate fluent natural language. The application described here uses NLG to automatically provide text-based definitions from an ontology that has logical descriptions of its entities, so avoiding the bottleneck of authoring these definitions by hand.
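As an illustration of the general idea (not the system described in the paper), a recursive verbalizer can walk a logical class expression and emit an English phrase. The tuple encoding and phrase templates below are invented for this sketch:

```python
# Toy verbalizer for OWL-style class expressions (illustrative only; the
# tuple encoding and templates are assumptions made for this sketch).

def verbalize(expr):
    """Recursively render a nested class expression as an English phrase."""
    kind = expr[0]
    if kind == "class":                       # atomic named class
        return expr[1].replace("_", " ")
    if kind == "some":                        # existential restriction
        prop, filler = expr[1], expr[2]
        return f"that {prop.replace('_', ' ')} some {verbalize(filler)}"
    if kind == "and":                         # intersection of expressions
        return " ".join(verbalize(e) for e in expr[1:])
    raise ValueError(f"unknown constructor: {kind}")

def text_definition(name, expr):
    """Wrap the verbalized expression as a dictionary-style definition."""
    return f"A {name.replace('_', ' ')} is a {verbalize(expr)}."

# EquivalentTo: disease and (has_location some lung)
lung_disease = ("and", ("class", "disease"),
                       ("some", "has_location", ("class", "lung")))
print(text_definition("lung_disease", lung_disease))
# → A lung disease is a disease that has location some lung.
```

A real system would additionally handle universal restrictions, cardinalities, and aggregation of repeated properties, which is where most of the fluency issues arise.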
Style Guidelines for Naming and Labeling Ontologies in the Multilingual Web
In the context of the Semantic Web, natural language descriptions associated with ontologies have proven to be of major importance not only to support ontology developers and adopters, but also to assist in tasks such as ontology mapping, information extraction, or natural language generation. In the state-of-the-art we find some attempts to provide guidelines for URI local names in English, and also some disagreement on the use of URIs for describing ontology elements. When trying to extrapolate these ideas to a multilingual scenario, some of these approaches fail to provide a valid solution. On the basis of some real experiences in the translation of ontologies from English into Spanish, we provide a preliminary set of guidelines for naming and labeling ontologies in a multilingual scenario
Ontologies on the semantic web
As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The "Semantic Web" was touted by its developers as equally revolutionary but has not yet achieved anything like the Web's exponential uptake. This 17,000-word survey article explores why this might be so, from a perspective that bridges both philosophy and IT
RegenBase: a knowledge base of spinal cord injury biology for translational research.
Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org
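The automated linking described above depends on shared identifiers across statements. A minimal pure-Python sketch (with invented CURIE-style identifiers, not RegenBase's actual scheme) of how consistent identifiers let triple patterns join across sources:

```python
# Minimal triple-store sketch. The identifiers below are invented CURIEs,
# not actual RegenBase, CHEBI, or PubChem terms.

triples = {
    ("rb:assay1", "rb:testsCompound", "chebi:12345"),
    ("rb:assay1", "rb:measures",      "rb:NeuriteOutgrowth"),
    ("pubchem:cid678", "owl:sameAs",  "chebi:12345"),   # cross-database link
}

def match(store, s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return sorted(t for t in store
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# Because both statements reuse the identifier "chebi:12345", a single
# pattern query joins the assay fact with the external database link.
print(match(triples, o="chebi:12345"))
```

In practice this role is played by SPARQL basic graph pattern matching over RDF, but the joining mechanism is the same: identical identifiers in different statements.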
Recommended from our members
Generating Natural Language Explanations For Entailments In Ontologies
Building an error-free and high-quality ontology in OWL (Web Ontology Language)---the latest standard ontology language endorsed by the World Wide Web Consortium---is not an easy task for domain experts, who usually have limited knowledge of OWL and logic. One sign of an erroneous ontology is the occurrence of undesired inferences (or entailments), often caused by interactions among (apparently innocuous) axioms within the ontology. This suggests the need for a tool that allows developers to inspect why such an entailment follows from the ontology in order to debug and repair it.
This thesis aims to address the above problem by advancing knowledge and techniques in generating explanations for entailments in OWL ontologies. We build on earlier work on identifying minimal subsets of the ontology from which an entailment can be drawn---known technically as justifications. Our main focus is on planning (at a logical level) an explanation that links a justification (premises) to its entailment (conclusion); we also consider how best to express the explanation in English. Among other innovations, we propose a method for assessing the understandability of explanations, so that the easiest can be selected from a set of alternatives.
Our findings make a theoretical contribution to Natural Language Generation and Knowledge Representation. They could also play a practical role in improving the explanation facilities in ontology development tools, considering especially the requirements of users who are not expert in OWL
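One way the selection step could look (a toy proxy, not the thesis's actual understandability model) is to score each justification by a crude complexity measure and pick the minimum:

```python
# Toy selection of the "easiest" justification for an entailment.
# The complexity measure (token count over axioms rendered as strings)
# is a stand-in assumption, not the model developed in the thesis.

def complexity(justification):
    """Crude proxy: total token count across the justification's axioms."""
    return sum(len(axiom.split()) for axiom in justification)

def easiest(justifications):
    """Pick the justification predicted to be easiest to understand."""
    return min(justifications, key=complexity)

j1 = ["Cat SubClassOf Animal", "Animal SubClassOf not Plant"]
j2 = ["Cat SubClassOf Animal and (eats some Meat) and not Plant"]
print(easiest([j1, j2]))
```

The thesis's point is precisely that such surface measures are insufficient, which motivates an empirically grounded understandability model; the sketch only shows where that model plugs in.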
Semantic-Based Process Mining Technique for Annotation and Modelling of Domain Processes
Semantic technologies aim to represent information or models in formats that are not just machine-readable but also machine-understandable. To this effect, this paper shows how the semantic concepts can be layered on top of the derived models to provide a more contextual analysis of the models through the conceptualization method. Technically, the method involves augmentation of the informative value of the resulting models by semantically annotating the process elements with concepts that they represent in real-time settings, and then linking them to an ontology in order to allow for a more abstract analysis of the extracted logs or models. The work illustrates the method using the case study of a learning process domain. Consequently, the results show that a system which is formally encoded with semantic labelling (annotation), semantic representation (ontology) and semantic reasoning (reasoner) has the capacity to lift the process mining and analysis from the syntactic to a more conceptual level.
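A minimal sketch of the annotate-then-abstract step (the event labels, concepts, and is-a hierarchy are invented for illustration, not taken from the paper's case study):

```python
# Toy semantic annotation of an event log. The labels, concepts, and
# is-a hierarchy below are assumptions made for this sketch.

ANNOTATION = {               # event label -> ontology concept
    "submit quiz":   "Assessment",
    "watch lecture": "ContentDelivery",
}
IS_A = {                     # concept -> parent concept
    "Assessment":      "LearningActivity",
    "ContentDelivery": "LearningActivity",
}

def ancestors(concept):
    """The concept plus all its parents, most specific first."""
    chain = [concept]
    while concept in IS_A:
        concept = IS_A[concept]
        chain.append(concept)
    return chain

def events_under(log, concept):
    """Events whose annotated concept falls under the given concept."""
    return [e for e in log if concept in ancestors(ANNOTATION[e])]

log = ["submit quiz", "watch lecture", "submit quiz"]
# Query at the conceptual level instead of matching raw labels:
print(events_under(log, "LearningActivity"))
```

This is the "lifting" the paper describes: once labels are tied to concepts, queries and analyses range over the hierarchy rather than the literal strings in the log; a description-logic reasoner replaces the hand-rolled `ancestors` walk.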
Automatic generation of natural language service descriptions from OWL-S service descriptions
As the web grows in both size and diversity, there is an increased need to automate aspects of its use such as service coordination (e.g., discovery, composition and execution). Semantic web services combine semantic web and web service
technologies, providing the support for automatic service coordination. Semantic web services are described using semantic languages (e.g., OWL-S) and can be automatically processed by intelligent agents (agent based coordination).
This dissertation aims at enhancing the service coordination process, building upon well-understood and widespread practices on natural language generation.
Automated service coordination relies on the existence of formal service descriptions (semantic languages, such as OWL-S or WSML). The use of web services by people is essentially associated with the discovery, composition and execution of services that match their needs. According to the person's will, the discovered or composed service is or is not executed. This decision can only be made if the person understands the description of the service. Therefore, it is necessary that formal descriptions be converted into more natural descriptions, adequate to human comprehension.
This dissertation contributes to empower the users (knowledge engineers and common citizens) of service coordination systems with the capability to better understand and decide about discovered or composed services without the need of understanding the formal language in which the semantic web service is described. We implemented a software program capable of generating natural language service descriptions from OWL-S description. It is a template-based natural language generation system that receives the OWL-S description of a service as input and converts it into an English description.
This system will leverage the use of service coordination technology by people and allow them to have a more active role in the various stages of the service coordination process
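A template-based generator of this kind can be sketched in a few lines (the service record, field names, and template are invented for illustration; the actual system extracts inputs and outputs from the OWL-S profile):

```python
# Template-based NLG sketch. The record layout and template are
# assumptions; a real system would parse them from an OWL-S description.

TEMPLATE = ("{name} is a service that takes {inputs} "
            "and returns {outputs}.")

def describe(service):
    """Fill the English template from a parsed service description."""
    return TEMPLATE.format(
        name=service["name"],
        inputs=" and ".join(service["inputs"]),
        outputs=" and ".join(service["outputs"]),
    )

book_price = {
    "name": "BookPriceService",
    "inputs": ["a book title", "a currency"],
    "outputs": ["the book's price"],
}
print(describe(book_price))
# → BookPriceService is a service that takes a book title and a currency
#   and returns the book's price.
```

Template-based generation trades fluency for predictability: the output is always grammatical for inputs the templates anticipate, which suits service descriptions with a fixed structure of inputs, outputs, preconditions and effects.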
Ontology learning for the semantic deep web
Ontologies could play an important role in assisting users in their search for Web pages. This dissertation considers the problem of constructing natural ontologies that support users in their Web search efforts and increase the number of relevant Web pages that are returned. To achieve this goal, this thesis suggests combining the Deep Web information, which consists of dynamically generated Web pages and cannot be indexed by the existing automated Web crawlers, with ontologies, resulting in the Semantic Deep Web. The Deep Web information is exploited in three different ways: extracting attributes from the Deep Web data sources automatically, generating domain ontologies from the Deep Web automatically, and extracting instances from the Deep Web to enhance the domain ontologies. Several algorithms for the above-mentioned tasks are presented. Experimental results suggest that the proposed methods assist users with finding more relevant Web sites. Another contribution of this dissertation includes developing a methodology to evaluate existing general purpose ontologies using the Web as a corpus. The quality of ontologies (QoO) is quantified by analyzing existing ontologies to get numeric measures of how natural their concepts and their relationships are. This methodology was first applied to several major, popular ontologies, such as WordNet, OpenCyc and the UMLS. Subsequently the domain ontologies developed in this research were evaluated from the naturalness perspective.