Ontology-based Information Extraction with SOBA
In this paper we describe SOBA, a sub-component of the SmartWeb multi-modal dialog system. SOBA is a component for ontology-based information extraction from soccer web pages for the automatic population of a knowledge base that can be used for domain-specific question answering. SOBA realizes a tight connection between the ontology, the knowledge base and the information extraction component. The originality of SOBA lies in the fact that it extracts information from heterogeneous sources such as tabular structures, text and image captions in a semantically integrated way. In particular, it stores extracted information in a knowledge base, and in turn uses the knowledge base to interpret and link newly extracted information with respect to already existing entities.
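The interplay the abstract describes (extracted information is stored in a knowledge base, and the knowledge base is then used to link new extractions to existing entities) can be illustrated with a minimal sketch. This is an invented toy, not SOBA's actual code: the `KnowledgeBase` class and its name-normalization matching are assumptions made for illustration.

```python
# Hypothetical sketch (not SOBA's implementation): linking a newly
# extracted mention against entities already in a knowledge base.
class KnowledgeBase:
    def __init__(self):
        self.entities = {}  # normalized name -> attribute dict

    def add_or_link(self, mention, attrs):
        """Link a mention to an existing entity if one matches,
        otherwise create a new entity; merge attributes either way."""
        key = mention.strip().lower()
        entity = self.entities.setdefault(key, {"name": mention})
        entity.update(attrs)
        return entity

kb = KnowledgeBase()
# First extraction, e.g. from a tabular match report:
kb.add_or_link("Michael Ballack", {"team": "Germany"})
# Later extraction, e.g. from an image caption; this is linked
# to the already existing entity rather than creating a new one:
e = kb.add_or_link("michael ballack", {"position": "midfielder"})
print(e)
```

Real systems would use richer disambiguation than lowercase name matching, but the point is the same: the knowledge base itself mediates how new extractions are interpreted.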
Mapping languages: an analysis of comparative characteristics
RDF generation processes are becoming more interoperable, reusable, and maintainable due to the increased usage of mapping languages: languages used to describe how to generate an RDF graph from (semi-)structured data. This gives rise to new mapping languages, each with different characteristics. However, it is not clear which mapping language is fit for a given task; thus, a comparative framework is needed. In this paper, we investigate a set of mapping languages that exhibit complementary characteristics, and present an initial set of comparative characteristics based on requirements put forward by the reference works of those mapping languages. An initial investigation found 9 broad characteristics, classified into 3 categories. To further formalize and complete the set of characteristics, further investigation is needed, requiring a joint effort of the community.
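To make the core idea concrete (a declarative description of how rows of (semi-)structured data become RDF triples) here is a toy sketch in the spirit of such mapping languages. The dictionary-based mapping format, the example URIs, and the row data are all invented for illustration; real mapping languages such as RML define far richer constructs.

```python
# Illustrative toy only: a declarative mapping applied to one row of
# tabular data, emitting N-Triples lines. The mapping format is invented.
MAPPING = {
    "subject": "http://example.org/player/{id}",
    "predicates": {
        "http://xmlns.com/foaf/0.1/name": "{name}",
        "http://example.org/vocab/team": "{team}",
    },
}

def apply_mapping(mapping, row):
    """Generate N-Triples lines for one data row."""
    subject = mapping["subject"].format(**row)
    triples = []
    for predicate, template in mapping["predicates"].items():
        obj = template.format(**row)
        triples.append(f'<{subject}> <{predicate}> "{obj}" .')
    return triples

row = {"id": "7", "name": "Ada", "team": "Rovers"}
for t in apply_mapping(MAPPING, row):
    print(t)
```

The comparative characteristics the paper studies concern exactly such design choices: how subjects, predicates, and literals are templated, and which source formats a language can address.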
The generation of e-learning exercise problems from subject ontologies
The teaching/learning of cognitive skills, such as problem-solving, is an important goal in most forms of education. In well-structured subject areas, certain exercise problem types may be precisely described by means of machine-processable knowledge structures or ontologies. These ontologies can readily be used to generate individual problem examples for the student, where each problem consists of a question and its solution. An example is given from the subject domain of computer databases.
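The generation step described above can be sketched in a few lines. The tiny database-domain "ontology" below and the question template are invented for illustration, not taken from the paper; the point is only that a machine-processable knowledge structure suffices to produce question/solution pairs automatically.

```python
# Hedged sketch: generating an exercise (question + solution) from a
# small machine-processable knowledge structure. Schema is invented.
import random

ONTOLOGY = {
    "Employee": {"primary_key": "emp_id",
                 "columns": ["emp_id", "name", "salary"]},
    "Department": {"primary_key": "dept_id",
                   "columns": ["dept_id", "dept_name"]},
}

def generate_problem(ontology, rng=random):
    """Produce one (question, solution) exercise from the ontology."""
    table = rng.choice(sorted(ontology))  # pick a table at random
    question = f"Which column is the primary key of table {table}?"
    solution = ontology[table]["primary_key"]
    return question, solution

q, s = generate_problem(ONTOLOGY, random.Random(0))
print(q)
print("Solution:", s)
```

Because question and solution are derived from the same structure, each student can receive a different but automatically gradable problem instance.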
Artequakt: Generating tailored biographies from automatically annotated fragments from the web
The Artequakt project seeks to automatically generate narrative biographies of artists from knowledge that has been extracted from the Web and maintained in a knowledge base. An overview of the system architecture is presented here, and the three key components of that architecture are explained in detail, namely knowledge extraction, information management and biography construction. Conclusions are drawn from the initial experiences of the project and future progress is detailed.
Visual exploration of semantic-web-based knowledge structures
Humans have a curious nature and seek a better understanding of the world. Data, information, and knowledge became assets of our modern society through the information technology revolution in the form of the internet. However, with the growing size of accumulated data, new challenges emerge, such as searching and navigating in these large collections of data, information, and knowledge. Current developments in academic and industrial contexts target the corresponding challenges using Semantic Web technologies. The Semantic Web is an extension of the Web and provides machine-readable representations of knowledge for various domains. These machine-readable representations allow intelligent machine agents to understand the meaning of the data and information, and enable additional inference of new knowledge.
Generally, the Semantic Web is designed for information exchange and its processing, and does not focus on presenting such semantically enriched data to humans. Visualizations support exploration, navigation, and understanding of data by exploiting humans' ability to comprehend complex data through visual representations. In the context of Semantic-Web-Based knowledge structures, various visualization methods and tools are available, and new ones are being developed every year. However, suitable visualizations are highly dependent on individual use cases and targeted user groups.
In this thesis, we investigate visual exploration techniques for Semantic-Web-Based knowledge structures by addressing the following challenges: i) how to engage various user groups in modeling such semantic representations; ii) how to facilitate understanding using customizable visual representations; and iii) how to ease the creation of visualizations for various data sources and different use cases. The achieved results indicate that visual modeling techniques facilitate the engagement of various user groups in ontology modeling. Customizable visualizations enable users to adjust visualizations to their current needs and provide different views on the data. Additionally, customizable visualization pipelines enable rapid visualization generation for various use cases, data sources, and user groups.
Automated Development of Semantic Data Models Using Scientific Publications
The traditional methods for analyzing information in digital documents have evolved with the ever-increasing volume of data. Some challenges in analyzing scientific publications include the lack of a unified vocabulary and a defined context, different standards and formats in presenting information, various types of data, and diverse areas of knowledge. These challenges hinder detecting, understanding, comparing, sharing, and querying information rapidly.
I design a dynamic conceptual data model with common elements found in publications from any domain, such as context, metadata, and tables. To enhance the models, I use related definitions contained in ontologies and on the Internet. This dissertation therefore generates semantically enriched data models from digital publications based on Semantic Web principles, which allow people and computers to work cooperatively. Finally, this work uses a vocabulary and ontologies to generate a structured characterization and to organize the data models. This organization allows information from publications to be integrated, shared, managed, compared, and contrasted.