
    A Modular Semantic Annotation Framework: CellML Metadata Specifications 2.0

    In the last decade or so, model encoding efforts such as CellML and SBML have greatly facilitated model availability. However, as the complexity of models increases, the utility of these models can vary. The addition of semantic information is crucial to transforming mathematical models from esoteric objects into informative resources.

We have developed a metadata specification framework to better enable the annotation of CellML models with metadata. The framework consists of a core specification describing, in general terms, how annotations should be attached using RDF/XML, and satellite specifications covering several domains of immediate interest, using elements from the Dublin Core, FOAF (Friend-Of-A-Friend), BIBO (Bibliographic Ontology), MIRIAM URNs and Biomodels Qualifiers.

We also describe what we see as several emerging challenges in the field, uncovered during the application of this annotation scheme to mathematical models.
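As a rough illustration of the kind of annotation the framework targets (the variable name and description below are invented, not taken from the specification itself), a BioModels qualifier can link a model variable to an ontology term through a MIRIAM URN in RDF/XML:

```xml
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dcterms="http://purl.org/dc/terms/"
         xmlns:bqbiol="http://biomodels.net/biology-qualifiers/">
  <!-- Annotation attached to a (hypothetical) CellML variable by its cmeta:id -->
  <rdf:Description rdf:about="#membrane_voltage">
    <dcterms:description>Transmembrane potential of the cell</dcterms:description>
    <!-- BioModels qualifier pointing at a Gene Ontology term via a MIRIAM URN -->
    <bqbiol:isVersionOf rdf:resource="urn:miriam:obo.go:GO%3A0042391"/>
  </rdf:Description>
</rdf:RDF>
```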

    LORE: A Compound Object Authoring and Publishing Tool for Literary Scholars based on the FRBR

    4th International Conference on Open Repositories. This presentation was part of the session: Conference Presentations. Date: 2009-06-04, 10:30 AM – 12:00 PM.
This paper presents LORE (Literature Object Re-use and Exchange), a lightweight tool designed to enable scholars and teachers of literature to author, edit and publish OAI-ORE-compliant compound information objects that encapsulate related digital resources and bibliographic records. LORE provides a graphical user interface for creating, labelling and visualizing typed relationships between individual objects using terms from a bibliographic ontology based on the IFLA FRBR. After creating a compound object, users can attach metadata and publish it to a Fedora repository (as an RDF graph) where it can be searched, retrieved, edited and re-used by others. LORE has been developed in the context of the Australian Literature Resource project (AustLit) and hence focuses on compound objects for teaching and research within the Australian literature studies community.
Funding: NCRIS National eResearch Architecture Taskforce (NeAT).
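A minimal sketch (with invented URIs) of the kind of OAI-ORE compound object LORE publishes — an Aggregation of related resources described by a Resource Map — might look as follows in Turtle:

```turtle
@prefix ore:     <http://www.openarchives.org/ore/terms/> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# Hypothetical compound object aggregating two related resources
<http://example.org/lore/agg1> a ore:Aggregation ;
    dcterms:title "Teaching set: an Australian novel and a review" ;
    ore:aggregates <http://example.org/text1> ,
                   <http://example.org/review1> .

# The Resource Map that describes (and is published for) the aggregation
<http://example.org/lore/rem1> a ore:ResourceMap ;
    ore:describes <http://example.org/lore/agg1> .
```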

    Crowdsourcing Linked Data on listening experiences through reuse and enhancement of library data

    Research has approached the practice of musical reception in a multitude of ways, such as the analysis of professional critique, sales figures and psychological processes activated by the act of listening. Studies in the Humanities, on the other hand, have been hindered by the lack of structured evidence of actual experiences of listening as reported by the listeners themselves, a concern that has been voiced since the early Web era. It was, however, assumed that such evidence existed, albeit in pure textual form, but could not be leveraged until it was digitised and aggregated. The Listening Experience Database (LED) responds to this research need by providing a centralised hub for evidence of listening in the literature. Not only does LED support search and reuse across nearly 10,000 records, but it also provides machine-readable structured data of the knowledge around the contexts of listening. To take advantage of the mass of formal knowledge that already exists on the Web concerning these contexts, the entire framework adopts Linked Data principles and technologies. This also allows LED to directly reuse open data from the British Library for the source documentation that is already published. Reused data are re-published as open data with enhancements obtained by expanding over the model of the original data, such as the partitioning of published books and collections into individual stand-alone documents. The database was populated through crowdsourcing and seamlessly incorporates data reuse from the very early data entry phases. As the sources of the evidence often contain vague, fragmentary, or uncertain information, facilities were put in place to generate structured data out of such fuzziness.
Alongside elaborating on these functionalities, this article provides insights into the most recent features of the latest instalment of the dataset and portal, such as the interlinking with the MusicBrainz database, the relaxation of geographical input constraints through text mining, and the plotting of key locations in an interactive geographical browser.
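The dataset's schema is not reproduced in this abstract, so the following query is purely illustrative (the `led:` namespace and property names are invented); it shows the general shape of a SPARQL query one might pose against a listening-experience dataset of this kind:

```sparql
PREFIX led: <http://example.org/led/terms#>

SELECT ?experience ?listener ?work
WHERE {
  ?experience a led:ListeningExperience ;
              led:hasListener ?listener ;   # who reported the experience
              led:hasWorkHeard ?work .      # what was being listened to
}
LIMIT 10
```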

    DDB-EDM to FaBiO: The Case of the German Digital Library

    Cultural heritage portals have the goal of providing users with seamless access to all their resources. This paper introduces initial efforts for a user-oriented restructuring of the German Digital Library (DDB). At present, cultural heritage objects (CHOs) in the DDB are modeled using an extended version of the Europeana Data Model (DDB-EDM), which negatively impacts usability and exploration. These challenges can be addressed by exploiting ontologies and building a knowledge graph from the DDB’s voluminous collection. Towards this goal, an alignment of bibliographic metadata from DDB-EDM to the FRBR-Aligned Bibliographic Ontology (FaBiO) is presented.
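As an illustration of what such an alignment yields (the URIs are invented, and the project's actual mapping rules are not given in this abstract), a single cultural heritage object might be re-expressed with FaBiO's FRBR-aligned classes, separating the abstract work from its expressions and manifestations:

```turtle
@prefix fabio:   <http://purl.org/spar/fabio/> .
@prefix frbr:    <http://purl.org/vocab/frbr/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# Hypothetical CHO split into FRBR levels
<http://example.org/ddb/work/faust> a fabio:Work ;
    dcterms:title "Faust" .

<http://example.org/ddb/expr/faust-de> a fabio:Expression ;
    frbr:realizationOf <http://example.org/ddb/work/faust> .

<http://example.org/ddb/man/faust-1808> a fabio:Manifestation ;
    frbr:embodimentOf <http://example.org/ddb/expr/faust-de> .
```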

    Publishing a Scorecard for Evaluating the Use of Open-Access Journals Using Linked Data Technologies

    Open access journals collect, preserve and publish scientific information in digital form, but it is still difficult, not only for users but also for digital libraries, to evaluate the usage and impact of this kind of publication. This problem can be tackled by introducing Key Performance Indicators (KPIs), allowing us to objectively measure the performance of the journals against the objectives pursued. In addition, Linked Data technologies constitute an opportunity to enrich the information provided by KPIs, connecting them to relevant datasets across the web. This paper describes a process to develop and publish a scorecard on the semantic web, based on the ISO 2789:2013 standard, using Linked Data technologies in such a way that it can be linked to related datasets. Furthermore, methodological guidelines are presented together with their associated activities. The proposed process was applied to the open journal system of a university, including the definition of the KPIs linked to the institutional strategies; the extraction, cleaning and loading of data from the data sources into a data mart; the transformation of the data into RDF (Resource Description Framework); and the publication of the data by means of a SPARQL endpoint using the OpenLink Virtuoso application. Additionally, the RDF Data Cube vocabulary has been used to publish the multidimensional data on the web. The visualization was made using CubeViz, a faceted browser, to present the KPIs in interactive charts.
This work has been partially supported by the Prometeo Project by SENESCYT, Ecuadorian Government.
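For concreteness, a single KPI observation in the RDF Data Cube vocabulary might look like the following sketch (the `ex:` dimension and measure properties are invented; the paper's actual cube structure is not given in the abstract):

```turtle
@prefix qb:  <http://purl.org/linked-data/cube#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:  <http://example.org/kpi#> .

# One observation: download count for one journal in one reference period
<http://example.org/obs/j1-2014-downloads> a qb:Observation ;
    qb:dataSet   <http://example.org/dataset/journal-kpis> ;
    ex:journal   <http://example.org/journal/1> ;   # dimension
    ex:refPeriod "2014"^^xsd:gYear ;                # dimension
    ex:downloads 1250 .                             # measure
```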

    Mapping subjectivity: performing people-centered vocabulary alignment

    This paper describes a mapping of linked data vocabularies in the area of person-related information. Aligning vocabulary terms may help curb the problem of property proliferation that occurs in linked data environments. It also facilitates the process of choosing semantics for vocabulary extensions and integration in the context of linked data applications. Although a work in progress, this investigation provides support for semantic integration and for knowledge sharing and reuse in the area of personal information representation. It also offers an opportunity to reflect on a new generation of knowledge organization systems, such as linked data vocabularies, that have started to populate the web and are converging with new representation models and discovery tools in libraries and other cultural heritage institutions.
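A sketch of the kind of person-related alignment statements involved (illustrative only; the paper's actual correspondences are the substance of the mapping work and are not reproduced here):

```turtle
@prefix owl:    <http://www.w3.org/2002/07/owl#> .
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix foaf:   <http://xmlns.com/foaf/0.1/> .
@prefix schema: <http://schema.org/> .

# Whether two properties are truly equivalent, or one merely
# specializes the other, is exactly the judgement such a mapping must make.
foaf:name     owl:equivalentProperty schema:name .
foaf:birthday rdfs:subPropertyOf     schema:birthDate .
```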

    An Approach to Publish Statistics from Open-Access Journals Using Linked Data Technologies

    The Semantic Web encourages digital libraries, including open access journals, to collect, link and share their data across the web in order to ease processing by machines and humans and to yield better queries and results. Linked Data technologies enable connecting structured data across the web using the principles and recommendations set out by Tim Berners-Lee in 2006. Several universities develop knowledge, through scholarship and research, under open access policies and use several channels to disseminate information. Open access journals collect, preserve and publish scientific information in digital form using a peer review process. Evaluating the usage of this kind of publication requires statistics that are linked to external resources to give better information about the resources and their relationships. Statistics expressed in a data mart facilitate queries about the history of journal usage by several criteria. These data, linked to other datasets, provide further information, such as the topics of the research, the origin of the authors, the relation to national plans, and the relations to study curricula. This paper reports a process for publishing an open access journal data mart on the Web using Linked Data technologies in such a way that it can be linked to related datasets. Furthermore, methodological guidelines are presented with related activities. The proposed process was applied by extracting statistical data from a university open journal system and publishing it through a SPARQL endpoint using the open source edition of OpenLink Virtuoso. In this process, the use of open standards facilitates the creation, development and exploitation of knowledge. The RDF Data Cube vocabulary has been used as a model for publishing the multidimensional data on the Web. The visualization was made using CubeViz, a faceted browser that filters observations to be presented interactively in charts.
The proposed process helps to publish statistical datasets in a straightforward way.
This work has been partially supported by the Prometeo Project by SENESCYT, Ecuadorian Government.
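Once published, such a cube can be queried at the SPARQL endpoint. A hypothetical query (the dataset URI and `ex:` properties are invented, not the paper's actual schema) retrieving yearly visit counts might be:

```sparql
PREFIX qb: <http://purl.org/linked-data/cube#>
PREFIX ex: <http://example.org/stats#>

SELECT ?year ?visits
WHERE {
  ?obs a qb:Observation ;
       qb:dataSet <http://example.org/dataset/journal-usage> ;
       ex:refPeriod ?year ;   # dimension: reference period
       ex:visits ?visits .    # measure: number of visits
}
ORDER BY ?year
```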