146 research outputs found

    A panoramic view on metadata application profiles of the last decade

    This paper describes a study carried out to map the panorama of metadata Application Profiles (AP): (i) which AP have been developed so far; (ii) which types of institution have developed these AP; (iii) what the application domains of these AP are; (iv) which Metadata Schemes (MS) these AP use; (v) which application domains have been producing MS; (vi) which Syntax Encoding Schemes (SES) and Vocabulary Encoding Schemes (VES) these AP use; and finally (vii) whether these AP have followed the Singapore Framework (SF). We found that: (i) 74 AP exist; (ii) the AP are mostly developed by the scientific community; (iii) the ‘Learning Objects’ domain is the most intensive producer of AP; (iv) Dublin Core metadata vocabularies are the most used, across all application domains, while IEEE LOM is the second most used but only inside the ‘Learning Objects’ application domain; (v) the most intensive producer of MS is the ‘Libraries and Repositories’ domain; (vi) 13 distinct SES and 90 distinct VES were used; and (vii) five of the 74 AP found follow the SF. This work is sponsored by FEDER funds through the Competitivity Factors Operational Programme (COMPETE) and by national funds through the Foundation for Science and Technology (FCT) within the scope of the project FCOMP01-0124-FFEDER-022674.
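    To make concrete what "reusing a metadata scheme in an application profile" means, here is a minimal, hypothetical record that borrows properties from Dublin Core. The dcterms namespace and property names are the real DCMI Terms vocabulary; the record values and short-name mapping are invented for illustration.

```python
import json

# A hypothetical metadata record conforming to an application profile
# that reuses Dublin Core (DCMI Terms) properties. The @context maps
# local short names onto the real dcterms namespace; the values are
# invented for illustration.
record = {
    "@context": {
        "dcterms": "http://purl.org/dc/terms/",
        "title": "dcterms:title",
        "creator": "dcterms:creator",
        "subject": "dcterms:subject",
        "issued": "dcterms:issued",
    },
    "title": "A panoramic view on metadata application profiles",
    "creator": "Example Author",
    "subject": ["metadata", "application profiles"],
    "issued": "2014",
}

doc = json.dumps(record, indent=2)
print(doc)
```

    The point of the profile is exactly this indirection: the local field names can be anything, but every field is anchored to a shared, well-defined vocabulary term.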

    A Model to Represent Nomenclatural and Taxonomic Information as Linked Data. Application to the French Taxonomic Register, TAXREF

    Taxonomic registers are key tools to help us comprehend the diversity of nature. Publishing such registers in the Web of Data, following the standards and best practices of Linked Open Data (LOD), is a way of integrating multiple data sources into a world-scale, biological knowledge base. In this paper, we present an ongoing work aimed at the publication of TAXREF, the French national taxonomic register, on the Web of Data. Far beyond the mere translation of the TAXREF database into LOD standards, we show that the key point of this endeavor is the design of a model capable of capturing the two coexisting yet distinct realities underlying taxonomic registers, namely the nomenclature (the rules for naming biological entities) and the taxonomy (the description and characterization of these biological entities). We first analyze different modelling choices made to represent some international taxonomic registers as LOD, and we underline the issues that arise from these differences. Then, we propose a model aimed at tackling these issues. This model separates nomenclature from taxonomy, it is flexible enough to accommodate the ever-changing scientific consensus on taxonomy, and it adheres to the philosophy underpinning the Semantic Web standards. Finally, using the example of TAXREF, we show that the model enables interlinking with third-party LOD data sets, whether they represent nomenclatural or taxonomic information.
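    The name/taxon separation the abstract describes can be sketched with a handful of triples. All URIs and predicates below are invented placeholders, not the actual TAXREF-LD vocabulary; the sketch only illustrates the modelling idea.

```python
# Hypothetical triples illustrating the separation the paper argues for:
# the scientific *name* (a nomenclatural entity) and the *taxon* (a
# taxonomic concept) are distinct resources linked by a property.
triples = [
    ("ex:name/lutra-lutra", "rdf:type", "ex:ScientificName"),
    ("ex:name/lutra-lutra", "ex:nameString", "Lutra lutra (Linnaeus, 1758)"),
    ("ex:taxon/60630", "rdf:type", "ex:Taxon"),
    ("ex:taxon/60630", "ex:acceptedName", "ex:name/lutra-lutra"),
]

# Because name and taxon are separate resources, a change in scientific
# consensus only re-points ex:acceptedName; the name resource itself,
# being a published nomenclatural fact, never has to change.
names = [o for s, p, o in triples if p == "ex:acceptedName"]
print(names)
```

    This is what makes the model stable under shifting taxonomy: third parties can safely link to name resources even while taxon boundaries are revised.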

    Bioschemas & Schema.org: a Lightweight Semantic Layer for Life Sciences Websites

    Abstract of a poster presented at the TDWG 2018 conference on Biodiversity Information Standards.
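    The "lightweight semantic layer" Bioschemas promotes is, in practice, a block of Schema.org JSON-LD embedded in a web page. The sketch below uses the real schema.org Dataset type; the values and URL are invented.

```python
import json

# A minimal JSON-LD block of the kind Bioschemas recommends embedding
# in a life-sciences web page. schema.org's Dataset type is real; the
# example values are invented.
markup = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example specimen collection",
    "description": "Digitised specimens from an example herbarium.",
    "url": "https://example.org/collection",
    "license": "https://creativecommons.org/licenses/by/4.0/",
}

# In a real page this string sits inside a script tag, where search
# engines and harvesters can read it without any dedicated API.
html_snippet = (
    '<script type="application/ld+json">'
    + json.dumps(markup)
    + "</script>"
)
print(html_snippet)
```

    Because the markup piggybacks on ordinary HTML, a site gains machine-readable semantics without exposing a SPARQL endpoint or changing its backend, which is what makes the layer "lightweight".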

    Semantic annotation of natural history collections

    Large collections of historical biodiversity expeditions are housed in natural history museums throughout the world. Potentially they can serve as rich sources of data for cultural-historical and biodiversity research. However, they exist as only partially catalogued specimen repositories and images of unstructured, non-standardised, hand-written text and drawings. Although many archival collections have been digitised, disclosing their content is challenging. They refer to historical place names and outdated taxonomic classifications and are written in multiple languages. Efforts to transcribe the hand-written text can make the content accessible, but semantically describing and interlinking the content would further facilitate research. We propose a semantic model that serves to structure the named entities in natural history archival collections. In addition, we present an approach for the semantic annotation of these collections whilst documenting their provenance. This approach serves as an initial step for an adaptive learning approach for semi-automated extraction of named entities from natural history archival collections. The applicability of the semantic model and the annotation approach is demonstrated using image scans from a collection of 8,000 field book pages gathered by the Committee for Natural History of the Netherlands Indies between 1820 and 1850, and evaluated together with domain experts from the field of natural and cultural history.
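    A semantic annotation with provenance, as described above, can be sketched as a single structured record. The shape below is loosely modelled on the W3C Web Annotation data model, but every URL, identifier, and field value is invented; it is not the paper's actual model.

```python
# A hypothetical annotation linking a named entity on a scanned
# field-book page to a structured identifier, while recording who made
# the annotation and when (provenance). Loosely modelled on the W3C
# Web Annotation data model; all values are invented.
annotation = {
    "type": "Annotation",
    "target": {
        "source": "https://example.org/fieldbook/page/123.jpg",
        "selector": {"type": "TextQuoteSelector", "exact": "Buitenzorg"},
    },
    "body": {
        "purpose": "identifying",
        # Historical place name resolved to a modern gazetteer entry.
        "value": "https://example.org/gazetteer/Bogor",
    },
    "creator": "annotator-01",
    "created": "2018-05-04",
}

print(annotation["target"]["selector"]["exact"])
```

    Keeping the creator and date on every annotation is what allows a semi-automated pipeline to be audited later: machine-proposed and expert-confirmed entities remain distinguishable.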

    Digging Into Data White Paper:Trading Consequences

    Scholars interested in nineteenth century global economic history face a voluminous historical record. Conventional approaches to primary source research on the economic and environmental implications of globalised commodity flows typically restrict researchers to specific locations or a small handful of commodities. By taking advantage of cutting edge computational tools, the project was able to address much larger data sets for historical research, and thereby provides historians with the means to develop new data driven research questions. In particular, this project has demonstrated that text mining techniques applied to tens of thousands of documents about nineteenth century commodity trading can yield a novel understanding of how economic forces connected distant places all over the globe and how efforts to generate wealth from natural resources impacted on local environments. The large scale findings that result from the application of these new methodologies would be barely feasible using conventional research methods. Moreover, the project vividly demonstrates how the digital humanities can benefit from transdisciplinary collaboration between humanists, computational linguists and information visualisation experts.
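    A toy sketch of the kind of entity co-occurrence counting such text mining rests on: tally how often a commodity and a place name appear in the same sentence. The corpus, lexicons, and matching here are invented and far simpler than the project's actual pipeline, which used full named-entity recognition.

```python
from collections import Counter

# Invented lexicons and a two-sentence toy corpus.
commodities = {"rubber", "cinchona", "sugar"}
places = {"Singapore", "Liverpool", "Ceylon"}

sentences = [
    "Shipments of rubber from Singapore reached Liverpool in record volume.",
    "Cinchona plantations in Ceylon supplied bark to Liverpool markets.",
]

# Count each (commodity, place) pair that co-occurs within a sentence.
pairs = Counter()
for sentence in sentences:
    tokens = [w.strip(".,") for w in sentence.split()]
    found_commodities = {t.lower() for t in tokens} & commodities
    found_places = set(tokens) & places
    for c in found_commodities:
        for p in found_places:
            pairs[(c, p)] += 1

print(pairs.most_common(3))
```

    Scaled to tens of thousands of documents, exactly this kind of aggregate is what lets historians see trade linkages between distant places that no close reading of individual sources could reveal.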

    Integrating institutional repositories into the Semantic Web

    The Web has changed the face of scientific communication, and the Semantic Web promises new ways of adding value to research material by making it more accessible to automatic discovery, linking, and analysis. Institutional repositories contain a wealth of information which could benefit from the application of this technology. In this thesis I describe the problems inherent in the informality of traditional repository metadata, and propose a data model based on the Semantic Web which will support more efficient use of this data, with the aim of streamlining scientific communication and promoting effective use of institutional research output.
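    The "informality of traditional repository metadata" can be made concrete with a small sketch: a flat record stores the author as a bare string, while a linked-data restructuring gives the author a resource of their own. The vocabulary prefixes (dcterms, foaf) name real vocabularies, but the URIs and the model are invented for illustration, not the thesis's actual data model.

```python
# Flat repository record: the creator is just a string, so two records
# by the same person cannot be reliably linked or deduplicated.
flat = {"title": "Example thesis", "creator": "Smith, J."}

# Linked-data restructuring (hypothetical URIs): the author becomes a
# resource with its own identifier, which other records can reuse.
triples = [
    ("ex:item/1", "dcterms:title", "Example thesis"),
    ("ex:item/1", "dcterms:creator", "ex:person/j-smith"),
    ("ex:person/j-smith", "foaf:name", "Smith, J."),
]

# Automatic discovery becomes a graph lookup rather than fuzzy string
# matching over name variants like "Smith, J." vs "J. Smith".
by_smith = [
    s for s, p, o in triples
    if p == "dcterms:creator" and o == "ex:person/j-smith"
]
print(by_smith)
```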

    Ontologies to integrate learning design and learning content

    Commentary on: Chapter 8, Basic Design Procedures for E-learning Courses (Sloep, Hummel & Manderveld, 2005). Abstract: The paper presents an ontology-based approach to integrate learning designs and learning object content. The main goal is to increase the level of reusability of learning designs by enabling the use of a given learning design with different content. We first define a three-part conceptual model that introduces an intermediary level between learning design and learning objects called the learning object context. We then use ontologies to facilitate the representation of these concepts: LOCO is a new ontology for IMS-LD, ALOCoM is an existing ontology for learning objects, and LOCO-Cite is a new ontology for the contextual model. Building the LOCO ontology required correcting some inconsistencies in the present IMS-LD Information Model. Finally, we illustrate the usefulness of the proposed approach with three use cases: finding a teaching method based on domain-related competencies, searching for learning designs based on domain-independent competencies, and creating user recommendations for both learning objects and learning designs. Editors: Colin Tattersall and Rob Koper.
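    The three-part model can be sketched as triples: a learning design activity connects to a learning object only through an intermediate context, so the same design can be re-bound to different content. The URIs and predicate names below are invented approximations of the idea, not the actual LOCO/LOCO-Cite terms.

```python
# Hypothetical triples for the three-level model: design -> context ->
# learning object. Reusing a design with new content only requires a
# new context node; the design itself is untouched.
triples = [
    ("ex:design/act1", "loco:hasContext", "ex:ctx/1"),
    ("ex:ctx/1", "loco:usesObject", "ex:obj/algebra-quiz"),
    ("ex:design/act1", "loco:hasContext", "ex:ctx/2"),
    ("ex:ctx/2", "loco:usesObject", "ex:obj/geometry-quiz"),
]

def objects_for(design):
    """All learning objects reachable from a design via its contexts."""
    ctxs = [o for s, p, o in triples if s == design and p == "loco:hasContext"]
    return [o for s, p, o in triples if s in ctxs and p == "loco:usesObject"]

print(objects_for("ex:design/act1"))
```

    Without the context level, each (design, content) pairing would need its own copy of the design; with it, reuse is a matter of adding two triples.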

    Towards Interoperable Research Infrastructures for Environmental and Earth Sciences

    This open access book summarises the latest developments on data management in the EU H2020 ENVRIplus project, which brought together more than 20 environmental and Earth science research infrastructures into a single community. It provides readers with a systematic overview of the common challenges faced by research infrastructures and how a ‘reference model guided’ engineering approach can be used to achieve greater interoperability among such infrastructures in the environmental and Earth sciences. The 20 contributions in this book are structured in five parts on the design, development, deployment, operation and use of research infrastructures. Part one provides an overview of the state of the art of research infrastructure and relevant e-Infrastructure technologies; part two discusses the reference model guided engineering approach; part three presents the software and tools developed for common data management challenges; part four demonstrates the software via several use cases; and the last part discusses sustainability and future directions.

    Technological roadmap on AI planning and scheduling

    At the beginning of the new century, Information Technologies had become basic and indispensable constituents of the production and preparation processes for all kinds of goods and services, and are thereby largely influencing both the working and private life of nearly every citizen. This development will continue and grow further with the continually increasing use of the Internet in production, business, science, education, and everyday societal and private undertakings. Recent years have shown, however, that a dramatic enhancement of software capabilities is required when aiming to continuously provide advanced and competitive products and services in all these fast developing sectors. This includes the development of intelligent systems – systems that are more autonomous, flexible, and robust than today’s conventional software. Intelligent Planning and Scheduling is a key enabling technology for intelligent systems. It has been developed and matured over the last three decades and has successfully been employed for a variety of applications in commerce, industry, education, medicine, public transport, defense, and government. This document reviews the state-of-the-art in key application and technical areas of Intelligent Planning and Scheduling. It identifies the most important research, development, and technology transfer efforts required in the coming 3 to 10 years and shows the way forward to meet these challenges in the short-, medium- and longer-term future. The roadmap has been developed under the regime of PLANET – the European Network of Excellence in AI Planning. This network, established by the European Commission in 1998, is the co-ordinating framework for research, development, and technology transfer in the field of Intelligent Planning and Scheduling in Europe. A large number of people have contributed to this document, including the members of PLANET, non-European international experts, and a number of independent expert peer reviewers. All of them are acknowledged in a separate section of this document. Intelligent Planning and Scheduling is a far-reaching technology. Accepting the challenges and progressing along the directions pointed out in this roadmap will enable a new generation of intelligent application systems in a wide variety of industrial, commercial, public, and private sectors.
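    For readers unfamiliar with the field, "planning" in the roadmap's sense means searching for a sequence of actions that transforms an initial state into one satisfying a goal. Below is a minimal STRIPS-style planner using breadth-first search; the domain (a robot fetching a box) is invented for illustration and bears no relation to any PLANET system.

```python
from collections import deque

# Each action maps to (preconditions, add-list, delete-list), all sets
# of ground facts -- the classic STRIPS representation.
actions = {
    "move(A,B)": ({"at(robot,A)"}, {"at(robot,B)"}, {"at(robot,A)"}),
    "move(B,A)": ({"at(robot,B)"}, {"at(robot,A)"}, {"at(robot,B)"}),
    "pick(B)": ({"at(robot,B)", "at(box,B)"}, {"holding(box)"}, {"at(box,B)"}),
}

def plan(init, goal):
    """Breadth-first search over states; returns a shortest action list."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:  # every goal fact holds
            return steps
        for name, (pre, add, dele) in actions.items():
            if pre <= state:  # action applicable
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # goal unreachable

result = plan({"at(robot,A)", "at(box,B)"}, {"holding(box)"})
print(result)
```

    Real planners replace this blind search with heuristics, partial-order reasoning, and temporal and resource constraints, which is precisely where the research challenges identified in the roadmap lie.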