Generating adaptive hypertext content from the semantic web
Accessing and extracting knowledge from online documents is crucial for the realisation of the Semantic Web and the provision of advanced knowledge services.
The Artequakt project is an ongoing investigation tackling these issues to facilitate the creation of tailored biographies from information harvested from the web.
In this paper we present the methods we currently use to model, consolidate and store knowledge extracted from the web so that it can be re-purposed as adaptive content. We look at how Semantic Web technology could be used within this process, and also how such techniques might be used to provide content for publication via the Semantic Web.
Definitions in ontologies
Definitions vary according to the context of use and the target audience. They must be made relevant for each context to fulfill their cognitive and linguistic goals, which involves adapting their logical structure, type of content, and form to each context of use. We examine from these perspectives the case of definitions in ontologies.
Ontea: Platform for Pattern Based Automated Semantic Annotation
Automated annotation of web documents is a key challenge of the Semantic Web effort. Semantic metadata can be created manually or with automated annotation or tagging tools. The automated semantic annotation tools with the best results are built on various machine learning algorithms, which require training sets. An alternative approach is pattern-based semantic annotation built on natural language processing, information retrieval or information extraction methods. This paper presents Ontea, a platform for automated semantic annotation or semantic tagging. An implementation based on regular expression patterns is presented together with an evaluation of its results, as well as an extensible architecture for integrating pattern-based approaches. Most existing semi-automatic annotation solutions have not proven their usability on large-scale data such as the web or email communication, yet the Semantic Web can be exploited only once computer-understandable metadata reaches a critical mass. We therefore also present an approach to large-scale pattern-based annotation.
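The idea of regular-expression-based semantic annotation can be illustrated with a minimal sketch; the semantic types and patterns below are our own illustrative assumptions, not Ontea's actual pattern set:

```python
import re

# Map semantic types to regular expressions (illustrative patterns only).
PATTERNS = {
    "Email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "Year":  re.compile(r"\b(?:19|20)\d{2}\b"),
    "Money": re.compile(r"\b\d+(?:\.\d{2})?\s?EUR\b"),
}

def annotate(text):
    """Return (semantic type, matched string) annotations found in the text."""
    annotations = []
    for sem_type, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            annotations.append((sem_type, match.group()))
    return annotations

print(annotate("Contact jan.novak@example.org about the 2006 budget of 500 EUR."))
# → [('Email', 'jan.novak@example.org'), ('Year', '2006'), ('Money', '500 EUR')]
```

Each matched span, once typed this way, can be emitted as semantic metadata attached to the source document; a real platform would add pattern configuration, overlap resolution, and large-scale crawling around this core loop.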
Ontologies on the semantic web
As an informational technology, the World Wide Web has enjoyed spectacular success. In just ten years it has transformed the way information is produced, stored, and shared in arenas as diverse as shopping, family photo albums, and high-level academic research. The "Semantic Web" was touted by its developers as equally revolutionary but has not yet achieved anything like the Web's exponential uptake. This 17,000-word survey article explores why this might be so, from a perspective that bridges both philosophy and IT.
Comprehensive service semantics and light-weight Linked Services: towards an integrated approach
Semantics are used to mark up a wide variety of data-centric Web resources, but are not used in significant numbers to annotate online services, despite considerable research dedicated to Semantic Web Services (SWS). This is partially due to the complexity of comprehensive SWS models aiming at the automation of service-oriented tasks such as discovery, composition, and execution. This has led to the emergence of a new approach, dubbed Linked Services, which is based on simplified service models that are easier to populate and interpret, and accessible even to non-experts. However, such Minimal Service Models do not yet cover all execution-related aspects of service automation and merely aim at enabling more comprehensive service search and clustering. Thus, in this paper, we describe our approach of combining the strengths of both distinct approaches to modelling Semantic Web Services, "lightweight" Linked Services and "heavyweight" SWS automation, into a coherent SWS framework. In addition, we present an implementation of our approach based on existing SWS tools, together with a proof-of-concept prototype used within the EU project NoTube.
Dwelling on ontology - semantic reasoning over topographic maps
The thesis builds upon the hypothesis that the spatial arrangement of topographic features, such as buildings, roads and other land cover parcels, indicates how land is used. The aim is to make this kind of high-level semantic information explicit within topographic data. There is an increasing need to share and use data for a wider range of purposes, and to make data more definitive, intelligent and accessible. Unfortunately, we still encounter a gap between low-level data representations and the high-level concepts that typify human qualitative spatial reasoning. The thesis adopts an ontological approach to bridge this gap and to derive functional information by using the standard reasoning mechanisms offered by logic-based knowledge representation formalisms. It formulates a framework for the processes involved in interpreting land use information from topographic maps. Land use is a high-level abstract concept, but it is also an observable fact intimately tied to geography. By decomposing this relationship, the thesis establishes a one-to-one mapping between high-level conceptualisations derived from human knowledge and real-world entities represented in the data. Based on a middle-out approach, it develops a conceptual model that incrementally links different levels of detail, and thereby derives coarser, more meaningful descriptions from more detailed ones. The thesis verifies its proposed ideas by implementing an ontology describing the land use "residential area" in the ontology editor Protégé. By asserting knowledge about high-level concepts such as types of dwellings, urban blocks and residential districts, as well as individuals that link directly to topographic features stored in the database, the reasoner successfully infers instances of the defined classes. Despite current technological limitations, ontologies are a promising way forward in the manner we handle and integrate geographic data, especially with respect to how humans conceptualise geographic space.
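The middle-out idea of deriving coarser descriptions from more detailed ones can be sketched in plain Python; this is a toy analogue of the kind of inference the thesis performs with OWL and Protégé, and the class names, feature types and the majority threshold below are our own illustrative assumptions, not the thesis's ontology:

```python
# Toy middle-out classification: buildings -> urban block -> district.
# The "dwelling" feature type and the simple-majority rule are assumptions.

def is_residential_block(buildings):
    """An urban block counts as residential if more than half its buildings are dwellings."""
    dwellings = sum(1 for b in buildings if b["type"] == "dwelling")
    return dwellings > len(buildings) / 2

def is_residential_district(blocks):
    """A district counts as residential if every one of its urban blocks is residential."""
    return all(is_residential_block(b) for b in blocks)

block_a = [{"type": "dwelling"}, {"type": "dwelling"}, {"type": "shop"}]
block_b = [{"type": "dwelling"}, {"type": "garage"}]
print(is_residential_district([block_a, block_b]))  # → False: block_b fails the majority test
```

An OWL reasoner generalises this pattern: instead of hand-written functions, class definitions over asserted individuals let the reasoner infer block and district membership automatically.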
- …