Lightweight Ontologies
Ontologies are explicit specifications of conceptualizations. They are often thought of as directed graphs whose nodes represent concepts and whose edges represent relations between concepts. The notion of concept is understood as defined in Knowledge Representation, i.e., as a set of objects or individuals. This set is called the concept extension or the concept interpretation. Concepts are often lexically defined, i.e., they have natural language names which are used to describe the concept extensions (e.g., concept mother denotes the set of all female parents). Therefore, when ontologies are visualized, their nodes are often shown with corresponding natural language concept names. The backbone structure of the ontology graph is a taxonomy in which the relations are "is-a", whereas the remaining structure of the graph supplies auxiliary information about the modeled domain and may include relations like "part-of", "located-in", "is-parent-of", and many others.
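As a small illustration of this graph view (the concept and relation names below are our own toy examples, not drawn from any published ontology), a lightweight ontology can be sketched as a set of labeled edges, with the "is-a" backbone traversed transitively:

```python
# Sketch of a lightweight ontology as a directed, relation-labeled graph.
# All names here are illustrative examples, not a real ontology.
class Ontology:
    def __init__(self):
        self.edges = {}  # (child, relation) -> set of parent concepts

    def add(self, source, relation, target):
        self.edges.setdefault((source, relation), set()).add(target)

    def ancestors(self, concept, relation="is-a"):
        """Transitive closure along one relation (the taxonomy backbone)."""
        seen, stack = set(), [concept]
        while stack:
            node = stack.pop()
            for parent in self.edges.get((node, relation), ()):
                if parent not in seen:
                    seen.add(parent)
                    stack.append(parent)
        return seen

onto = Ontology()
onto.add("mother", "is-a", "parent")   # taxonomy backbone edge
onto.add("parent", "is-a", "person")
onto.add("hand", "part-of", "body")    # auxiliary relation
print(sorted(onto.ancestors("mother")))  # -> ['parent', 'person']
```

The backbone/auxiliary split shows up directly in the traversal: only "is-a" edges are followed when computing a concept's place in the taxonomy.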
Extending a geo-catalogue with matching capabilities
To achieve semantic interoperability, geo-spatial applications need to be equipped with tools able to understand user terminology that is typically different from the one enforced by standards. In this paper we summarize our experience in providing a semantic extension to the geo-catalogue of the Autonomous Province of Trento (PAT) in Italy. The semantic extension is based on the adoption of the S-Match semantic matching tool and on the use of a specifically designed faceted ontology codifying domain specific knowledge. We also briefly report our experience in the integration of the ontology with the geo-spatial ontology GeoWordNet
Save up to 99% of your time in mapping validation
Identifying semantic correspondences between different vocabularies has been recognized as a fundamental step towards achieving interoperability. Several manual and automatic techniques have been recently proposed. Fully manual approaches are very precise, but extremely costly. Conversely, automatic approaches tend to fail when domain specific background knowledge is needed. Consequently, they typically require a manual validation step. Yet, when the number of computed correspondences is very large, the validation phase can be very expensive. To mitigate these problems, we propose to compute the minimal set of correspondences, which we call the minimal mapping, that is sufficient to compute all the others. We show that by concentrating on such correspondences we can save up to 99% of the manual checks required for validation.
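The redundancy intuition behind minimal mappings can be sketched roughly as follows (a simplified reading with invented function names, assuming tree-shaped taxonomies and subsumption-style correspondences): a correspondence is redundant when another correspondence between a more general source concept and a more specific target concept already entails it.

```python
# Toy sketch of redundancy among subsumption correspondences (a, b),
# read as "source concept a is subsumed by target concept b".
# (a, b) is entailed by (a2, b2) when a is-a ... a2 in the source
# taxonomy and b2 is-a ... b in the target taxonomy.
def ancestors(taxonomy, node):
    """taxonomy: dict child -> parent (a tree); ancestors including node."""
    out = {node}
    while node in taxonomy:
        node = taxonomy[node]
        out.add(node)
    return out

def minimal_mapping(correspondences, source_tax, target_tax):
    minimal = []
    for (a, b) in correspondences:
        entailed = any(
            (a2, b2) != (a, b)
            and a2 in ancestors(source_tax, a)   # a2 is at or above a
            and b in ancestors(target_tax, b2)   # b is at or above b2
            for (a2, b2) in correspondences
        )
        if not entailed:
            minimal.append((a, b))
    return minimal

src = {"mother": "parent"}          # mother is-a parent
tgt = {"human": "agent"}            # human is-a agent
corr = [("mother", "agent"), ("parent", "human")]
print(minimal_mapping(corr, src, tgt))  # -> [('parent', 'human')]
```

Here ("mother", "agent") is dropped because ("parent", "human") entails it: mother is-a parent, and human is-a agent. Validating only the minimal set is what yields the savings the abstract reports.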
Enriching videos with light semantics
This paper describes an ongoing prototypical framework to annotate and retrieve web videos with light semantics. The proposed framework reuses many existing vocabularies along with a video model. The knowledge is captured from three different information spaces (media content, context, document). We also describe ways to extract the semantic content descriptions from the existing user-generated content using multiple approaches of linguistic processing and Named Entity Recognition, which are later identified with DBpedia resources to establish meanings for the tags. Finally, the implemented prototype is described with multiple search interfaces and retrieval processes. Evaluation on semantic enrichment shows a considerable improvement in content description (for 50% of videos).
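The tag-to-DBpedia linking step can be pictured with a toy stand-in (this is not the paper's pipeline: a real system would use NER and a live DBpedia lookup, whereas here a tiny hard-coded gazetteer plays both roles for illustration):

```python
# Illustrative sketch only: a hard-coded gazetteer stands in for
# NER + DBpedia resource lookup. The URIs follow DBpedia's naming
# scheme but the table itself is invented for this example.
GAZETTEER = {
    "berlin": "http://dbpedia.org/resource/Berlin",
    "barack obama": "http://dbpedia.org/resource/Barack_Obama",
}

def link_tags(tags):
    """Map free-text video tags to entity URIs where a match exists."""
    links = {}
    for tag in tags:
        uri = GAZETTEER.get(tag.strip().lower())
        if uri:
            links[tag] = uri
    return links

print(link_tags(["Berlin", "sunset"]))
# -> {'Berlin': 'http://dbpedia.org/resource/Berlin'}
```

Tags with no known entity ("sunset") simply stay unlinked, which matches the abstract's observation that only a share of videos gain enriched descriptions.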
A Metadata-Enabled Scientific Discourse Platform
Scientific papers and scientific conferences are still, despite the emergence of several new dissemination technologies, the de-facto standard in which scientific knowledge is consumed and discussed. While there is no shortage of services and platforms that aid this process (e.g. scholarly search engines, websites, blogs, conference management programs), a widely accepted platform used to capture and enrich the interactions of a research community has yet to appear. As such, we aim to create new ways for the members of research communities, and interested people around them, to interact before, during and after their conferences. Furthermore, to serve as a base for these interactions, we want not only to obtain, format and manage a body of legacy and new papers related to this community, but also to aggregate useful information and services into the environment of a discourse platform.
Constructing a lattice of Infectious Disease Ontologies from a Staphylococcus aureus isolate repository
A repository of clinically associated Staphylococcus aureus (Sa) isolates is used to semi-automatically generate a set of application ontologies for specific subfamilies of Sa-related disease. Each such application ontology is compatible with the Infectious Disease Ontology (IDO) and uses resources from the Open Biomedical Ontology (OBO) Foundry. The set of application ontologies forms a lattice structure beneath the IDO-Core and IDO-extension reference ontologies. We show how this lattice can be used to define a strategy for the construction of a new taxonomy of infectious disease incorporating genetic, molecular, and clinical data. We also outline how faceted browsing and query of annotated data is supported using a lattice application ontology.
Engineering ontologies: Foundations and theories from philosophy and logical theory
Ontology as a branch of philosophy is the science of what is, of the kinds and structures of objects, properties, events, processes and relations in every area of reality. "Ontology" is often used by philosophers as a synonym for "metaphysics" (literally: "what comes after the Physics"), a term which was used by early students of Aristotle to refer to what Aristotle himself called "first philosophy". The term "ontology" (or ontologia) was itself coined in 1613, independently, by two philosophers, Rudolf Göckel (Goclenius), in his Lexicon philosophicum, and Jacob Lorhard (Lorhardus), in his Theatrum philosophicum. The first occurrence in English recorded by the OED appears in Bailey's dictionary of 1721, which defines ontology as "an Account of being in the Abstract".
Drag it together with Groupie: making RDF data authoring easy and fun for anyone
One of the foremost challenges towards realizing a "Read-write Web of Data" [3] is making it possible for everyday computer users to easily find, manipulate, create, and publish data back to the Web so that it can be made available for others to use. However, many aspects of Linked Data make authoring and manipulation difficult for "normal" (i.e., non-coder) end-users. First, data can be high-dimensional, having arbitrarily many properties per "instance", and interlinked to arbitrarily many other instances in many different ways. Second, collections of Linked Data tend to be vastly more heterogeneous than typical structured databases, where instances are kept in uniform collections (e.g., database tables). Third, while highly flexible, reducing all structures to a graph comes at the cost of verbosity: even simple structures can appear complex. Finally, many of the concepts involved in Linked Data authoring, for example the terms used to define ontologies, are highly abstract and foreign to regular citizen-users. To counter this complexity we have devised a drag-and-drop direct manipulation interface that makes authoring Linked Data easy, fun, and accessible to a wide audience. Groupie allows users to author data simply by dragging blobs representing entities onto other entities to compose relationships, establishing one relational link at a time. Since the underlying representation is RDF, Groupie facilitates the inclusion of references to entities and properties defined elsewhere on the Web through integration with popular Linked Data indexing services. Finally, to make it easy for new users to build upon others' work, Groupie provides a communal space where all data sets created by users can be shared, cloned and modified, allowing individual users to help each other model complex domains and thereby leverage collective intelligence.
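The one-link-at-a-time authoring model maps naturally onto RDF's triple structure: each drag gesture produces a single (subject, predicate, object) statement. A minimal sketch (not Groupie's actual implementation; the store, method names, and CURIE-style identifiers are illustrative):

```python
# Sketch of one-link-at-a-time RDF authoring: every drag-and-drop
# gesture becomes exactly one (subject, predicate, object) triple.
# Names and prefixes (ex:, foaf:) are illustrative only.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def link(self, subject, predicate, obj):
        """One drag gesture = one relational link = one triple."""
        self.triples.add((subject, predicate, obj))

    def about(self, subject):
        """All (predicate, object) pairs describing a subject."""
        return {(p, o) for (s, p, o) in self.triples if s == subject}

store = TripleStore()
store.link("ex:alice", "foaf:knows", "ex:bob")
store.link("ex:alice", "foaf:name", '"Alice"')
print(sorted(store.about("ex:alice")))
# -> [('foaf:knows', 'ex:bob'), ('foaf:name', '"Alice"')]
```

Because the store is just a set of triples, heterogeneous instances with arbitrarily many properties need no fixed schema, which is exactly the flexibility (and the verbosity) the abstract describes.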
Building Lightweight Ontologies for Faceted Search with Named Entity Recognition: Case WarMemoirSampo
Peer reviewed
SWI-Prolog and the Web
Where Prolog is commonly seen as a component in a Web application that is
either embedded or communicates using a proprietary protocol, we propose an
architecture where Prolog communicates to other components in a Web application
using the standard HTTP protocol. By avoiding embedding in external Web servers
development and deployment become much easier. To support this architecture, in
addition to the transfer protocol, we must also support parsing, representing
and generating the key Web document types such as HTML, XML and RDF.
This paper motivates the design decisions in the libraries and extensions to
Prolog for handling Web documents and protocols. The design has been guided by
the requirement to handle large documents efficiently. The described libraries
support a wide range of Web applications ranging from HTML and XML documents to
Semantic Web RDF processing.
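The architectural point, that the logic component answers standard HTTP requests itself rather than being embedded in an external web server, can be illustrated in Python rather than Prolog (a stand-in sketch: the handler, path, and payload are invented, and SWI-Prolog's actual HTTP libraries look quite different):

```python
# Stand-in sketch (Python, not Prolog) of a component that speaks
# standard HTTP directly instead of being embedded in a web server.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The component serves its own responses over plain HTTP.
        body = b"answered directly by the logic component"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def start(port=0):
    """Serve on a background thread; returns the bound server object."""
    server = HTTPServer(("localhost", port), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Because the component owns the protocol end-to-end, it can be developed and deployed without configuring or embedding into an external web server, which is the ease-of-deployment argument the abstract makes.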
To appear in Theory and Practice of Logic Programming (TPLP). Comment: 31 pages, 24 figures and 2 tables.