
    A Case Study on Linked Data for University Courses

    Obuda University wanted to build a linked dataset describing its courses in each semester. The concepts to be covered included curricula, subjects, courses, semesters and educators. A particular use case also required the description of lecture rooms and events. Although several ontologies exist for the mentioned domains, selecting a set of ontologies fitting our use case was not an easy task. After identifying the problems, we created the Ontology for Linked Open University Data (OLOUD) to fill the gaps between the re-used ontologies. OLOUD acts as glue for a selection of existing ontologies, and thus enables us to formulate SPARQL queries for a wide range of practical questions from university students. OLOUD integrates data from several sources and provides personal timetables, navigation and other types of help for students and lecturers.
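The kind of triple-pattern matching that underlies such SPARQL queries can be sketched in a few lines. This is a minimal illustration only: the URIs and property names below are invented for the example and are not the actual OLOUD vocabulary.

```python
# Course data as RDF-style (subject, predicate, object) triples.
# All identifiers here are illustrative, not the real OLOUD terms.
triples = [
    ("ex:course42", "rdf:type", "oloud:Course"),
    ("ex:course42", "oloud:subject", "ex:linearAlgebra"),
    ("ex:course42", "oloud:semester", "ex:2024spring"),
    ("ex:course42", "oloud:lecturer", "ex:drKovacs"),
    ("ex:course42", "oloud:room", "ex:roomB107"),
]

def query(data, subject=None, predicate=None, obj=None):
    """Return triples matching the pattern (None acts as a wildcard),
    mimicking a single SPARQL triple pattern."""
    return [
        (s, p, o) for (s, p, o) in data
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# Analogous to: SELECT ?room WHERE { ex:course42 oloud:room ?room }
rooms = [o for (_, _, o) in query(triples, "ex:course42", "oloud:room")]
# rooms == ["ex:roomB107"]
```

A real SPARQL engine joins many such patterns and evaluates them against a persistent triple store, but the matching idea is the same.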

    An Approach to Publish Scientific Data of Open-Access Journals Using Linked Data Technologies

    The Semantic Web encourages digital libraries, including open-access journals, to collect, link and share their data across the Web in order to ease its processing by machines and humans, yielding better queries and results. Linked Data technologies enable connecting related data across the Web using the principles and recommendations set out by Tim Berners-Lee in 2006. Many universities develop knowledge through scholarship and research, adopting open-access policies for the generated knowledge and using several channels to disseminate information. Open-access journals collect, preserve and publish peer-reviewed scientific information in digital form for a particular academic discipline, and thus have great potential for exchanging and spreading their data by linking it to external resources with Linked Data technologies. Linked Data can increase those benefits with better queries about the resources and their relationships. This paper reports a process for publishing scientific data on the Web using Linked Data technologies; methodological guidelines are presented together with the related activities. The proposed process was applied by extracting data from a university Open Journal Systems installation and publishing it in a SPARQL endpoint using the open-source edition of OpenLink Virtuoso. In this process, the use of open standards facilitates the creation, development and exploitation of knowledge. This research has been partially supported by the Prometeo project of SENESCYT, Ecuadorian Government, and by CEDIA (Consorcio Ecuatoriano para el Desarrollo de Internet Avanzado) through the project "Platform for publishing library bibliographic resources using Linked Data technologies".

    Transforming Library Catalogs into Linked Data

    Traditionally, in most digital library environments, resources are discovered mostly through the harvesting and indexing of metadata content. Such search and retrieval services provide effective ways for people to find items of interest, but they lack the ability to lead users to potentially related resources or to support more complex queries. In contrast, the Semantic Web encourages institutions, including libraries, to collect, link and share their data across the Web in order to ease its processing by machines and humans, offering better queries and results and increasing the visibility and interoperability of the data. Linked Data technologies enable connecting related data across the Web using the principles and recommendations set out by Tim Berners-Lee in 2006: URIs (Uniform Resource Identifiers) are used as identifiers for objects, and RDF (Resource Description Framework) is used to represent links. Today, libraries are giving increasing importance to the Semantic Web in a variety of ways, such as creating metadata models and publishing Linked Data from authority files, bibliographic catalogs, digital project information or crowdsourced information from other projects like Wikipedia. This paper reports a process for publishing library metadata on the Web using Linked Data technologies. The proposed process was applied to extract metadata from a university library, represent it in RDF format and publish it through a SPARQL endpoint (an interface to a knowledge base). The library metadata for one subject were linked to external sources such as other libraries and then related to the bibliographies of course syllabi in order to discover missing subjects and new or outdated bibliography.
    In this process, the use of open standards facilitates the exploitation of knowledge from libraries. This research has been partially supported by the Prometeo project of SENESCYT, Ecuadorian Government, by CEDIA (Consorcio Ecuatoriano para el Desarrollo de Internet Avanzado) through the project "Platform for publishing library bibliographic resources using Linked Data technologies", and by the project GEODAS-BI (TIN2012-37493-C03-03) supported by the Ministry of Economy and Competitiveness of Spain (MINECO).
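The catalog-to-RDF step described above can be sketched as a simple field-to-property mapping emitted in N-Triples syntax. The Dublin Core property URIs are real, but the record fields and base URI below are illustrative assumptions, not the paper's actual mapping.

```python
# Map a flat catalog record to RDF triples serialized as N-Triples.
DC = "http://purl.org/dc/terms/"  # Dublin Core Terms namespace

def record_to_ntriples(record_id, record, base="http://example.org/book/"):
    """Emit one N-Triples line per known field of a catalog record.
    The base URI and field names are hypothetical."""
    subject = f"<{base}{record_id}>"
    field_to_property = {
        "title":   f"<{DC}title>",
        "creator": f"<{DC}creator>",
        "date":    f"<{DC}date>",
    }
    lines = []
    for field, prop in field_to_property.items():
        if field in record:
            lines.append(f'{subject} {prop} "{record[field]}" .')
    return "\n".join(lines)

nt = record_to_ntriples("123", {"title": "Semantic Web Primer",
                                "creator": "Antoniou"})
```

A production pipeline would also escape literals, type dates, and mint stable URIs, but the shape of the transformation is the same.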

    A software processing chain for evaluating thesaurus quality

    Thesauri are knowledge models commonly used for information classification and retrieval, whose structure is defined by standards that describe the main features that concepts and relations must have. However, following these standards requires deep knowledge of the field the thesaurus is going to cover and experience in thesaurus creation. To help with this task, this paper describes a software processing chain that provides different validation components that evaluate the quality of the main thesaurus features.
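Two validation components in the spirit of such a chain can be sketched as follows. The data model and the specific checks (SKOS-style preferred labels, acyclic broader relations) are assumptions for illustration, not the paper's actual components.

```python
# Toy thesaurus validation: each check returns the offending concept ids.

def check_labels(concepts):
    """Every concept should carry a preferred label (cf. skos:prefLabel)."""
    return [cid for cid, c in concepts.items() if not c.get("prefLabel")]

def check_broader_cycles(concepts):
    """Hierarchical (broader) relations should not form cycles."""
    def reaches_self(start):
        seen, stack = set(), list(concepts[start].get("broader", []))
        while stack:
            cur = stack.pop()
            if cur == start:
                return True
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(concepts.get(cur, {}).get("broader", []))
        return False
    return [cid for cid in concepts if reaches_self(cid)]

thesaurus = {
    "animals": {"prefLabel": "Animals"},
    "mammals": {"prefLabel": "Mammals", "broader": ["animals"]},
    "cats":    {"prefLabel": "", "broader": ["mammals"]},
}
missing = check_labels(thesaurus)         # ["cats"]
cycles = check_broader_cycles(thesaurus)  # []
```

A full chain would run many such components and aggregate their reports into a quality score.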

    SMART Protocols: seMAntic represenTation for experimental protocols

    Two important characteristics of science are reproducibility and clarity. Through rigorous practices, scientists explore aspects of the world that they can reproduce under carefully controlled experimental conditions. Clarity, complementing reproducibility, provides unambiguous descriptions of results in a mechanical or mathematical form. Both pillars depend on well-structured and accurate descriptions of scientific practices, which are normally recorded in experimental protocols, scientific workflows, etc. Here we present SMART Protocols (SP), our ontology-based approach for representing experimental protocols, and our contribution to clarity and reproducibility. SP delivers an unambiguous description of the processes by means of which data are produced; by doing so, we argue, it facilitates reproducibility. Moreover, SP is intended to be part of e-science infrastructures. SP results from the analysis of 175 protocols; from this dataset, we extracted common elements. From our analysis, we identified document, workflow and domain-specific aspects in the representation of experimental protocols. The ontology is available at http://purl.org/net/SMARTprotoco

    Building an ontology catalogue for smart cities

    Apart from providing semantics and reasoning power to data, ontologies enable and facilitate interoperability across heterogeneous systems or environments. A good practice when developing ontologies is to reuse as much knowledge as possible, both to increase interoperability by reducing heterogeneity across models and to reduce development effort. Ontology registries, indexes and catalogues facilitate the tasks of finding, exploring and reusing ontologies by collecting them from different sources. This paper presents an ontology catalogue for smart cities and related domains. The catalogue is based on curated metadata and incorporates ontology evaluation features. It represents the first such approach within this community and should be highly useful both for new ontology developments and for describing and annotating existing ontologies.

    Developing ontologies for representing data about key performance indicators

    Multiple indicators are of interest in smart cities, at different scales and for different stakeholders. In open environments such as the Web, or when indicator information has to be interchanged across systems, contextual information (e.g., unit of measurement, measurement method) should be transmitted together with the data; the lack of such information can cause undesirable effects. Describing the data by means of ontologies increases interoperability among datasets and applications. However, methodological guidance is crucial during ontology development in order to turn the art of modelling into an engineering activity. In this paper, we present a methodological approach for modelling data about key performance indicators and their context, with an application example of the guidelines.
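The point that context must travel with the value can be made concrete with a small data structure. The field names below are illustrative assumptions, not the paper's actual ontology terms.

```python
# An indicator observation that carries its contextual information,
# so a bare number is never interchanged on its own.
from dataclasses import dataclass

@dataclass(frozen=True)
class IndicatorObservation:
    indicator: str   # e.g. "energy-per-capita" (hypothetical id)
    value: float
    unit: str        # unit of measurement, e.g. "kWh"
    method: str      # how the value was obtained
    period: str      # temporal context

def comparable(a: IndicatorObservation, b: IndicatorObservation) -> bool:
    """Two observations are only meaningfully comparable when their
    contextual information matches."""
    return (a.indicator == b.indicator
            and a.unit == b.unit
            and a.method == b.method)

obs1 = IndicatorObservation("energy-per-capita", 5200.0, "kWh", "metered", "2023")
obs2 = IndicatorObservation("energy-per-capita", 4.9, "MWh", "estimated", "2023")
# comparable(obs1, obs2) is False: mismatched unit and method would make
# a direct numeric comparison misleading.
```

An ontology-based representation encodes the same constraint declaratively, so that any consumer of the data can check it.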

    Detecting Good Practices and Pitfalls when Publishing Vocabularies on the Web

    The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies, which bring semantics to the data being published. These ontologies should be evaluated at different stages, both during their development and at publication time. Publishing, sharing and facilitating the (re)use of the resulting model is as important as correctly modelling the intended part of the world to be captured in an ontology. In this paper, 11 evaluation characteristics with respect to publishing, sharing and facilitating reuse are proposed. In particular, 6 good practices and 5 pitfalls are presented, together with their associated detection methods. In addition, a grid-based rating system is generated. Both contributions, the set of evaluation characteristics and the grid system, could be useful for ontologists, either to reuse existing LD vocabularies or to check the one being built.
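A detection method of this kind reduces, at its simplest, to checking a vocabulary's publication metadata for reuse-critical fields. The specific fields checked below are an assumption for illustration; they are not the paper's actual list of 11 characteristics.

```python
# Toy good-practice detector: flag publication metadata whose absence
# would hinder reuse of a vocabulary. Field names are hypothetical.
REQUIRED_METADATA = ["license", "label", "namespace_uri"]

def detect_issues(vocab_metadata):
    """Return the reuse-critical metadata fields that are missing or empty."""
    return [key for key in REQUIRED_METADATA if not vocab_metadata.get(key)]

vocab = {
    "label": "Example Vocabulary",
    "namespace_uri": "http://example.org/ns#",
}
issues = detect_issues(vocab)  # ["license"]: no license declared
```

Real detectors would dereference the namespace URI, check content negotiation, and inspect the ontology document itself, but each check ultimately yields a pass/fail signal that can feed a rating grid.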

    The Current Landscape of Pitfalls in Ontologies

    A growing number of ontologies are already available thanks to development initiatives in many different fields. In such ontology developments, developers must tackle a wide range of difficulties and handicaps, which can result in anomalies in the resulting ontologies; ontology evaluation therefore plays a key role in ontology development projects. OOPS! is an online tool that automatically detects pitfalls, considered as potential errors or problems, and thus may help ontology developers to improve their ontologies. To gain insight into the existence of pitfalls and to assess whether there are differences among ontologies developed by novices, a random set of already-scanned ontologies, and existing well-known ones, 406 OWL ontologies were analysed against OOPS!'s 21 pitfalls, of which 24 ontologies were also examined manually for the detected pitfalls. The various analyses performed show only minor differences between the three sets of ontologies, thereby providing a general landscape of pitfalls in ontologies.