The Semantic Web: Apotheosis of annotation, but what are its semantics?
This article discusses what kind of entity the proposed Semantic Web (SW) is, principally by reference to the relationship of natural language structure to knowledge representation (KR). There are three distinct views on this issue. The first is that the SW is basically a renaming of the traditional AI KR task, with all its problems and challenges. The second view is that the SW will be, at a minimum, the World Wide Web with its constituent documents annotated so as to yield their content, or meaning structure, more directly. This view makes natural language processing central as the procedural bridge from texts to KR, usually via some form of automated information extraction. The third view is that the SW is about trusted databases as the foundation of a system of Web processes and services. There's also a fourth view, which is much more difficult to define and discuss: If the SW just keeps moving as an engineering development and is lucky, then real problems won't arise. This article is part of a special issue called Semantic Web Update.
Learning Services Based on Formal Concept Reasoning
A formal foundation of automated service discovery for the Semantic Web is proposed. The approach is based on the formalization of the problem using an agent-oriented programming language (ConGolog), as well as on the use of Formal Concept Analysis as a tool for knowledge extraction. Ministerio de Educación y Ciencia TIN 2004-0388
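The core operation of Formal Concept Analysis is deriving the closed (extent, intent) pairs of a context. A minimal sketch, using a hypothetical service-discovery context where objects are services and attributes are advertised capabilities (the names and data are illustrative, not from the paper):

```python
from itertools import chain, combinations

# Hypothetical formal context: service -> set of capability attributes.
context = {
    "WeatherService": {"forecast", "rest"},
    "GeoService": {"maps", "rest"},
    "MapTiles": {"maps"},
}

def intent(objs):
    """Attributes shared by every object in objs (all attributes if objs is empty)."""
    all_attrs = set().union(*context.values())
    return set.intersection(*(context[o] for o in objs)) if objs else all_attrs

def extent(attrs):
    """Objects that possess every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# A formal concept is a pair (extent, intent) closed under the two derivation maps.
concepts = set()
for objs in chain.from_iterable(combinations(context, r) for r in range(len(context) + 1)):
    e = extent(intent(set(objs)))
    concepts.add((frozenset(e), frozenset(intent(e))))
```

The resulting set of concepts forms the concept lattice on which reasoning of the kind described in the abstract can operate.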
An Ontology and Semantic Web Service for Quantum Chemistry Calculations.
The purpose of this article is to present an ontology, termed OntoCompChem, for quantum chemistry calculations as performed by the Gaussian quantum chemistry software, as well as a semantic web service named MolHub. The OntoCompChem ontology has been developed based on the semantics of concepts specified in the CompChem convention of Chemical Markup Language (CML) and by extending the Gainesville Core (GNVC) ontology. MolHub is developed in order to establish semantic interoperability between different tools used in quantum chemistry and thermochemistry calculations, and as such is integrated into the J-Park Simulator (JPS), a multidomain interactive simulation platform and expert system. It uses the OntoCompChem ontology and implements a formal language based on propositional logic as a part of its query engine, which verifies satisfiability through reasoning. This paper also presents a NASA polynomial use-case scenario to demonstrate semantic interoperability between Gaussian and a tool for thermodynamic data calculations within MolHub. This project is supported by the National Research Foundation (NRF), Prime Minister’s Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme, and by the Alexander von Humboldt Foundation.
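The NASA polynomial form mentioned in the use case expresses heat capacity as a polynomial in temperature: Cp/R = a1 + a2·T + a3·T² + a4·T³ + a5·T⁴ (the first five of the seven coefficients). A minimal sketch of evaluating it, with illustrative coefficients that are not fitted to any real species:

```python
R = 8.314462618  # universal gas constant, J/(mol*K)

def heat_capacity(coeffs, T):
    """Cp in J/(mol*K) from the first five NASA-polynomial coefficients at temperature T (K)."""
    a1, a2, a3, a4, a5 = coeffs[:5]
    return R * (a1 + a2 * T + a3 * T**2 + a4 * T**3 + a5 * T**4)

# Illustrative (not physically fitted) coefficients for a near-ideal diatomic gas:
cp = heat_capacity([3.5, 1e-3, 0.0, 0.0, 0.0], 300.0)
```

In a pipeline like the one described, such coefficients would be read from semantically annotated Gaussian output rather than hard-coded.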
Gasping for AIR: Why we need Linked Rules and Justifications on the Semantic Web
The Semantic Web is a distributed model for publishing, utilizing and extending structured information using Web protocols. One of the main goals of this technology is to automate the retrieval and integration of data and to enable the inference of interesting results. This automation requires logics and rule languages that make inferences, choose courses of action, and answer questions. The openness of the Web, however, leads to several issues including the handling of inconsistencies, integration of diverse information, and the determination of the quality and trustworthiness of the data. AIR is a Semantic Web-based rule language that provides this functionality while focusing on generating and tracking explanations for its inferences and actions as well as conforming to Linked Data principles. AIR supports Linked Rules, which allow rules to be combined, re-used and extended in a manner similar to Linked Data. Additionally, AIR explanations themselves are Semantic Web data so they can be used for further reasoning. In this paper we present an overview of AIR, discuss its potential as a Web rule language by providing examples of how its features can be leveraged for different inference requirements, and describe how justifications are represented and generated. This material is based upon work supported by the National Science Foundation under Award No. CNS-0831442, by the Air Force Office of Scientific Research under Award No. FA9550-09-1-0152, and by Intelligence Advanced Research Projects Activity under Award No. FA8750-07-2-0031.
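The key idea of tracking justifications alongside inferences can be illustrated with a toy forward-chaining step. This is not the AIR engine or its N3 syntax; it is a hedged sketch, with a hypothetical rule and hypothetical facts, of deriving a triple while recording which antecedent triples justify it:

```python
# Hypothetical facts as (subject, predicate, object) triples.
facts = {("alice", "worksFor", "MIT"), ("MIT", "locatedIn", "Cambridge")}

# Hypothetical rule: ?p worksFor ?o  AND  ?o locatedIn ?c  =>  ?p basedIn ?c
def apply_rule(facts):
    """Derive basedIn triples, mapping each to the antecedents that justify it."""
    derived = {}
    for (p, r1, o) in facts:
        if r1 != "worksFor":
            continue
        for (o2, r2, c) in facts:
            if r2 == "locatedIn" and o2 == o:
                derived[(p, "basedIn", c)] = [(p, r1, o), (o2, r2, c)]
    return derived

justifications = apply_rule(facts)
# Each derived triple carries the antecedent triples that produced it,
# which is what makes the explanation itself further reasonable data.
```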
Application of universal ontology of geographic space in a subset of the first-order predicate calculus
Spatial data sources, such as the geodetic reference system, administrative spatial units, addresses and topographic maps, serve as a base for geo-referencing most dependent thematic spatial databases. The marketing strategy of the surveying profession towards users of the spatial data infrastructure should lie in the design of an integrative semantic reference system to be used within the Semantic Web, or so-called Web 3.0. The main motivation for our research was to demonstrate the possibilities of automating tool development for more efficient and more sensible approaches to querying information within web-published spatial data. In contemporary research there are several solutions offered as upgrades of basic GIS systems with knowledge presented in the form of ontologies. We are therefore faced with a new generation of GIS technology, which has been named "intelligent GIS". In this article, we present a method of modelling the semantic reference system as an application of the ontology of geographic space in a subset of the first-order predicate calculus. Such a semantic network of geographic space represents the foundation for semantic data analyses and data integration in distributed information systems. Our application is based on methods of machine learning and the use of the Prolog programming language.
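The kind of first-order rule such an ontology encodes in Prolog (e.g. "if X is part of Y and Y is part of Z, then X is part of Z") can be sketched as a transitive-closure computation. The place names below are illustrative, not taken from the article:

```python
# Hypothetical part-of facts over geographic units.
part_of = {("Ljubljana", "Slovenia"), ("Slovenia", "Europe")}

def transitive_closure(pairs):
    """Repeatedly apply part_of(X,Y) & part_of(Y,Z) -> part_of(X,Z) until fixpoint."""
    closure = set(pairs)
    while True:
        new = {(a, c) for (a, b) in closure for (b2, c) in closure if b == b2} - closure
        if not new:
            return closure
        closure |= new

places = transitive_closure(part_of)
```

In Prolog the same inference is a single recursive clause; the fixpoint loop above makes the derivation explicit.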
A Mediation Framework for Web Services in a Distributed Healthcare Information System
Conceptualizing a distributed Healthcare Information System is an important step toward the enhancement of clinical decision support systems. In this paper, we propose a semantic mediation of Web Services interfaces for a distributed healthcare system. Our proposal is an approach based on Web Services technology and their mediation in a Peer-to-Peer environment. This approach constitutes the foundation for the set-up of a mediation framework built around the JXTA P2P architecture, applied to the cardiology domain in collaboration with the National Institute of Health and Medical Research (INSERM, ERM 107). To achieve our goal, we used the OWL-S language as a means of describing the semantics of Web Services interfaces, and the JXTA distributed architecture. © 2005 IEEE
Resource description framework triples entity formations using statistical language model
A method for formatting unstructured sentences from a source corpus into a specific knowledge representation such as RDF is needed. A method for RDF entity formation from a paragraph of text using a statistical language model based on N-grams is introduced. The implementation of RDF entity formation is applied to natural language queries for information retrieval of Islamic knowledge. 300 concepts from the English translation of the Holy Quran with 350 relationships are used as a knowledge base. We evaluate our approach on a collection of queries from the Islamic Research Foundation website, 82 queries in total, and compare the performance against the previous method used in FREyA. The results show the proposed method improved the accuracy of the natural language formulation analysis by 17.07%, as tested on the search strategy. Recall and precision increased by 7% and 3% respectively. Keywords: semantic web; N-gram; ontology; statistical model
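The statistical backbone of such an approach is an N-gram model over the corpus. A minimal bigram (N=2) sketch with maximum-likelihood estimates, using a hypothetical one-sentence corpus rather than the paper's actual data:

```python
from collections import Counter

# Hypothetical corpus fragment; the paper uses the English Quran translation.
corpus = "who created the heavens and the earth".split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def bigram_prob(w1, w2):
    """Maximum-likelihood estimate of P(w2 | w1) from raw counts."""
    return bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
```

Such conditional probabilities let the system rank candidate subject-predicate-object segmentations of a query before mapping the segments to RDF entities.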
Sense and reference on the web
This thesis builds a foundation for the philosophy of the Web by examining the crucial question: What does a Uniform Resource Identifier (URI) mean? Does it have a sense, and can it refer to things? A philosophical and historical introduction to the Web explains the primary purpose of the Web as a universal information space for naming and accessing information via URIs. A terminology, based on distinctions in philosophy, is employed to define precisely what is meant by information, language, representation, and reference. These terms are then employed to create a foundational ontology and principles of Web architecture. From this perspective, the Semantic Web is then viewed as the application of the principles of Web architecture to knowledge representation. However, the classical philosophical problems of sense and reference that have been the source of debate within the philosophy of language return. Three main positions are inspected: the logicist position, as exemplified by the descriptivist theory of reference and the first-generation Semantic Web; the direct reference position, as exemplified by Putnam and Kripke's causal theory of reference and the second-generation Linked Data initiative; and a Wittgensteinian position that views the Semantic Web as yet another public language. After identifying the public language position as the most promising, a solution of using people's everyday use of search engines as relevance feedback is proposed as a Wittgensteinian way to determine the sense of URIs. This solution is then evaluated on a sample of the Semantic Web discovered using queries from a hypertext search engine query log. The results are evaluated, and the technique of using relevance feedback from hypertext Web searches to determine relevant Semantic Web URIs in response to user queries is shown to considerably improve baseline performance. Future work for the Web that follows from our argument and experiments is detailed, and outlines of a future philosophy of the Web are laid out.
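The relevance-feedback idea can be sketched under simplifying assumptions: score candidate Semantic Web URIs by term overlap with text drawn from the hypertext results users engaged with. The URIs, terms, and Jaccard scoring below are illustrative, not the thesis's actual method or data:

```python
# Hypothetical terms harvested from hypertext results users found relevant.
feedback_terms = {"eiffel", "tower", "paris", "landmark"}

# Hypothetical candidate URIs with terms from their descriptions.
candidates = {
    "http://dbpedia.org/resource/Eiffel_Tower": {"eiffel", "tower", "paris"},
    "http://dbpedia.org/resource/Eiffel_(programming_language)": {"eiffel", "language"},
}

def score(uri_terms):
    """Jaccard overlap between the feedback terms and a URI's description terms."""
    return len(feedback_terms & uri_terms) / len(feedback_terms | uri_terms)

ranked = sorted(candidates, key=lambda u: score(candidates[u]), reverse=True)
```

The point of the sketch is the direction of flow: everyday search behaviour, rather than formal description or causal baptism, disambiguates which resource a query is about.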
Easing the questioning of semantic biomedical data
Researchers have been using semantic technologies as essential tools to structure knowledge. This is particularly relevant in the biomedical domain, where large datasets are continuously generated. Semantic technologies offer the ability to describe data and to map and link distributed repositories, creating a network where the search interface is a single entry point. However, the increasing number of publicly available semantic data repositories is creating new challenges related to their exploration. Despite being both human- and machine-readable, these technologies are much more challenging for end-users. Querying services usually require mastering formal languages, and that knowledge is beyond the typical user's expertise, a critical issue in the adoption of semantic web information systems. In particular, the questioning of biomedical data presents specific challenges for which there are still no mature proposals for production environments. This paper presents a solution to query biomedical semantic databases using natural language. The system sits at the intersection between semantic parsing and the use of templates. It makes it possible to extract information in a user-friendly way for users who are not experts in semantic queries. FCT - Portuguese Foundation for Science and Technology supports Arnaldo Pereira (Ph.D. Grant PD/BD/142877/2018).
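The template idea can be sketched as slotting parsed question fragments into a fixed SPARQL pattern. This is a hedged illustration, not the paper's system: the lexicon entries, URIs, and template are all hypothetical:

```python
# One hypothetical template: look up a property of an entity.
TEMPLATE = "SELECT ?value WHERE {{ <{entity}> <{prop}> ?value }}"

# Hypothetical lexicon mapping question phrases to ontology terms.
lexicon = {
    "aspirin": "http://example.org/drug/Aspirin",
    "side effects": "http://example.org/prop/sideEffect",
}

def question_to_sparql(question):
    """Fill the template with the first entity and property phrases found in the question."""
    entity = next(v for k, v in lexicon.items()
                  if k in question and "/drug/" in v)
    prop = next(v for k, v in lexicon.items()
                if k in question and "/prop/" in v)
    return TEMPLATE.format(entity=entity, prop=prop)

query = question_to_sparql("what are the side effects of aspirin")
```

A real system would replace the keyword lookup with a semantic parser and carry many templates, but the user never sees the SPARQL either way.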
BiTRDF: Extending RDF for BiTemporal Data
The Internet is not only a platform for communication, transactions, and cloud storage, but it is also a large knowledge store where people as well as machines can create, manipulate, infer, and make use of data and knowledge. The Semantic Web was developed for this purpose. It aims to help machines understand the meaning of data and knowledge so that machines can use the data and knowledge in decision making. The Resource Description Framework (RDF) forms the foundation of the Semantic Web, which is organized as the Semantic Web Layer Cake. RDF is limited and can only express a binary relationship in the (subject, predicate, object) format. Expressing higher-order relationships, however, requires reification, which is very cumbersome. Naturally time-varying data is very common and cannot be represented by binary relationships alone. We first surveyed approaches that use reification or extend RDF for higher-order relationships. Then we proposed a new data model, BiTemporal RDF (BiTRDF), that incorporates both valid time and transaction time explicitly into standard RDF resources. We defined the BiTRDF model with its elements, vocabulary, semantics, and entailment, and the BiTemporal SPARQL (BiT-SPARQL) query language. We discussed the foundation for implementing BiTRDF and also explored different approaches to implementing the BiTRDF model. We concluded this thesis with potential research directions. This thesis lays the foundation for a new approach to easily embed one or more additional dimensions, such as temporal data, spatial data, probabilistic data, confidence levels, etc.
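The cost of reification that motivates the thesis can be made concrete. A sketch, with illustrative vocabulary that is not the BiTRDF syntax: standard RDF needs a bundle of extra triples to time-stamp one statement, whereas a bitemporal model attaches valid time and transaction time directly to the fact.

```python
# One plain RDF triple: (subject, predicate, object).
triple = ("ex:Alice", "ex:worksFor", "ex:MIT")

# Standard reification: several extra triples just to talk about that one statement.
reified = [
    ("ex:stmt1", "rdf:type", "rdf:Statement"),
    ("ex:stmt1", "rdf:subject", "ex:Alice"),
    ("ex:stmt1", "rdf:predicate", "ex:worksFor"),
    ("ex:stmt1", "rdf:object", "ex:MIT"),
    ("ex:stmt1", "ex:validFrom", "2019"),      # valid time
    ("ex:stmt1", "ex:recordedFrom", "2020"),   # transaction time
]

# BiTRDF-style idea (illustrative shape, not the thesis's concrete syntax):
# carry the valid-time and transaction-time intervals with the fact itself.
bitemporal_fact = triple + (("2019", "now"), ("2020", "now"))
```

Six auxiliary triples versus one annotated fact is exactly the asymmetry the proposed model removes.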