
    Requirements and services for metadata management

    Knowledge-intensive applications pose new challenges to metadata management, including distribution, access control, uniformity of access, and evolution over time. The authors identify general requirements for metadata management and describe a simple model and service, focused on RDF metadata, that address these requirements.
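    A minimal sketch of the kind of RDF metadata record such a service might manage, here using rdflib; the resource URI and the choice of Dublin Core terms are illustrative assumptions, not the authors' model:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, RDF

# Illustrative resource and metadata record (URIs are hypothetical).
g = Graph()
doc = URIRef("http://example.org/resources/report-42")
g.add((doc, RDF.type, DCTERMS.BibliographicResource))
g.add((doc, DCTERMS.title, Literal("Quarterly report")))
g.add((doc, DCTERMS.creator, Literal("A. Author")))
g.add((doc, DCTERMS.modified, Literal("2001-06-01")))

# Uniformity of access: any client can query the same graph with SPARQL.
for row in g.query(
    "SELECT ?title WHERE { ?r <http://purl.org/dc/terms/title> ?title }"
):
    print(row.title)
```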

    Aggregation and management of metadata in the context of Europeana

    The creation of connected content and the linking of metadata are basic requirements for the realisation of the semantic web. Semantic linkage of data enables the joint search of heterogeneous databases and facilitates future machine learning. The present article outlines the metadata management and metadata linking activities of the European Digital Library. A short overview of the current core research areas and implementation strategies in this field is presented. Various projects and metadata services tailored to natural history data, regional cultural heritage data, and audio collections are described.
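    As a hedged illustration of the semantic-linkage idea (generic RDF, not Europeana's actual Data Model), records for the same object held in two different databases can be joined through an owl:sameAs link and then searched together:

```python
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, OWL

# Hypothetical records from two heterogeneous providers.
museum = URIRef("http://museum.example/object/77")
archive = URIRef("http://archive.example/item/abc")

g = Graph()
g.add((museum, DCTERMS.title, Literal("Herbarium sheet, 1887")))
g.add((archive, DCTERMS.spatial, Literal("Berlin")))

# The semantic link that makes joint search possible.
g.add((museum, OWL.sameAs, archive))

# A joint query sees properties from both sources at once.
q = """
SELECT ?title ?place WHERE {
  ?a owl:sameAs ?b .
  ?a dcterms:title ?title .
  ?b dcterms:spatial ?place .
}"""
for row in g.query(q, initNs={"owl": OWL, "dcterms": DCTERMS}):
    print(row.title, "/", row.place)
```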

    FAIR Data Commons / Essential Services and Tools for Metadata Management Supporting Science

    A sophisticated ensemble of services and tools enables high-level management of research data and research metadata in science. On a technical level, research datasets need to be registered, preserved, and made interactively accessible using repositories that meet the specific requirements of scientists in terms of flexibility and performance. These requirements are fulfilled by the Base Repo and the MetaStore of the KIT Data Manager Framework. In our data management architecture, data and metadata are represented as FAIR Digital Objects that are machine actionable. The Typed PID Maker and the FAIR Digital Object Lab support the creation and management of data objects. Other tools enable editing of metadata documents, annotation of data and metadata, building collections of data objects, and creating controlled vocabularies. Information systems such as the Metadata Standards Catalog and the Data Collections Explorer help researchers select domain-specific metadata standards and schemas and identify data collections of interest. Infrastructure developers search the Catalog of Repository Systems for information on modern repository systems, and the FAIR Digital Object Cookbook for recipes for creating FAIR Digital Objects. Existing knowledge about metadata management, services, tools, and information systems has been applied to create research data management architectures for a variety of fields, including digital humanities, materials science, biology, and nanoscience. For Scanning Electron Microscopy, Transmission Electron Microscopy, and Magnetic Resonance Imaging, metadata schemas were developed in close cooperation with domain specialists and incorporated into the research data management architectures.

    This research has been supported by the research program ‘Engineering Digital Futures’ of the Helmholtz Association of German Research Centers, the Helmholtz Metadata Collaboration (HMC) Platform, the German National Research Data Infrastructure (NFDI), the German Research Foundation (DFG), and the Joint Lab “Integrated Model and Data Driven Materials Characterization (MDMC)”. This project has also received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101007417, within the framework of the NFFA-Europe Pilot (NEP) Joint Activities.
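    A minimal sketch of what a machine-actionable FAIR Digital Object record could look like. The field names, PID format, and URLs below are assumptions for illustration, not the actual output of the Typed PID Maker or the Base Repo API:

```python
import json

# Hypothetical FAIR Digital Object record: a persistent identifier
# bound to a type and to locations of the data and its metadata.
fdo = {
    "pid": "21.T11148/0000-example",   # hypothetical handle-style PID
    "type": "21.T11148/dataset",       # type PID, resolvable to a schema
    "dataLocation": "https://repo.example/api/v1/objects/42/data",
    "metadataLocation": "https://repo.example/api/v1/objects/42/metadata",
    "checksum": "sha256:9f86d081...",
}

# Machine actionability: a client inspects the type first and decides
# how to process the object, without human intervention.
def is_dataset(record: dict) -> bool:
    return record["type"].endswith("/dataset")

print(json.dumps(fdo, indent=2))
print("dataset?", is_dataset(fdo))
```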

    Research data support at UNCG: A metadata perspective [slides]

    Slides from a presentation discussing the integration of support for data set sharing and management into an established institutional repository system, within the larger framework of an academic library that is working to expand its research support services in a climate of budget cuts and decreased resources. Through a partnership between the Odum Institute at UNC Chapel Hill and NC DOCKS, UNCG’s institutional repository (http://libres.uncg.edu/ir/uncg/), UNCG enhances the discoverability of institutional research data and provides a local option to assist researchers in fulfilling requirements set by funding agencies and data management plans. The expansion of research support and data set services at UNCG has brought increased needs for metadata support, both in the electronic systems associated with the project and in providing training and support to researchers and liaison librarians. The presenter gives an overview of project roles, workflows, challenges, and lessons learned, with a focus on metadata-related issues. This presentation was delivered online on June 2, 2015 as part of the ALCTS Metadata Interest Group ALA Virtual Preconference: "Planning for the Evolving Role of Metadata Services."

    Next-Generation EU DataGrid Data Management Services

    We describe the architecture and initial implementation of the next generation of Grid Data Management Middleware in the EU DataGrid (EDG) project. The new architecture stems from our experience and the user requirements gathered during two years of running our initial set of Grid Data Management Services. All of our new services are based on the Web Service technology paradigm, very much in line with the emerging Open Grid Services Architecture (OGSA). We have modularized our components and invested considerable effort in building a secure, extensible, and robust service, starting from the design and supported by a streamlined build and testing framework. Our service components are: Replica Location Service, Replica Metadata Service, Replica Optimization Service, Replica Subscription, and high-level replica management. The service security infrastructure is fully GSI-enabled, hence compatible with the existing Globus Toolkit 2-based services; moreover, it allows for fine-grained authorization mechanisms that can be adjusted depending on the service semantics. Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003. 8 pages, LaTeX; the file contains all LaTeX sources, and figures are in the directory "figures".
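    A hedged, in-memory sketch of the core idea behind the Replica Location and Replica Optimization Services: mapping a logical file name to its physical replicas and choosing one by a cost estimate. The class, the URLs, and the cost function are illustrative assumptions, not the EDG interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class ReplicaCatalog:
    # Logical file name -> physical replica URLs (illustrative model).
    mapping: dict[str, list[str]] = field(default_factory=dict)

    def register(self, lfn: str, pfn: str) -> None:
        self.mapping.setdefault(lfn, []).append(pfn)

    def locate(self, lfn: str) -> list[str]:
        return self.mapping.get(lfn, [])

def pick_replica(pfns: list[str], cost) -> str:
    # "Optimization": choose the replica with the lowest estimated cost.
    return min(pfns, key=cost)

catalog = ReplicaCatalog()
catalog.register("lfn:/higgs/run1.dat", "gsiftp://cern.example/run1.dat")
catalog.register("lfn:/higgs/run1.dat", "gsiftp://fnal.example/run1.dat")

# Hypothetical cost: prefer the site with lower measured latency.
latency = {"cern.example": 12, "fnal.example": 85}
best = pick_replica(
    catalog.locate("lfn:/higgs/run1.dat"),
    cost=lambda pfn: latency[pfn.split("/")[2]],
)
print(best)
```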

    The DAMES Metadata Approach

    The DAMES project will provide high-quality data management services to the social science research community, based on an e-social science infrastructure. The infrastructure is supported by the collection and use of metadata to describe datasets and other social science resources. This report reviews the metadata requirements of the DAMES services, reviews a number of metadata standards, and discusses how the selected standards can be used to support the DAMES services. The kinds of metadata focussed upon in this report include metadata for describing social science microdatasets and other resources such as data analysis processing instruction files, metadata for grouping and linking datasets, and metadata for describing the provenance of data as it is transformed through analytical procedures. The social science metadata standards reviewed include:
    • The Common Warehouse Metamodel (CWM)
    • The Data Documentation Initiative (DDI) versions 2 and 3
    • Dublin Core
    • Encoded Archival Description (EAD)
    • e-Government Metadata Standard (e-GMS)
    • ELSST and HASSET
    • MAchine-Readable Cataloging (MARC)
    • Metadata Encoding and Transmission Standard (METS)
    • MetaDater
    • Open Archives Initiative (OAI)
    • Open Archival Information System (OAIS)
    • Statistical Data and Metadata Exchange (SDMX)
    • Text Encoding Initiative (TEI)
    The review concludes that DDI version 3.0 is the most appropriate standard for the DAMES project and explains how best to integrate it, as sketched below. This includes a description of how to capture metadata upon resource registration, upgrade the metadata from accessible resources available through the GEODE project, use the metadata for resource discovery, and generate provenance metadata during data transformation procedures. In addition, a “metadata wizard” is described to help with data management activities.
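    A schematic sketch of generating a DDI-flavoured study description with provenance in Python. The element names below are simplified illustrations and would not validate against the real DDI 3.0 schema, which uses several namespaces and a much richer hierarchy:

```python
import xml.etree.ElementTree as ET

# Simplified, illustrative structure; not schema-valid DDI 3.0.
study = ET.Element("StudyUnit", id="dames-example-001")
citation = ET.SubElement(study, "Citation")
ET.SubElement(citation, "Title").text = "Example social science microdataset"
coverage = ET.SubElement(study, "Coverage")
ET.SubElement(coverage, "TemporalCoverage").text = "2001-2008"

# Provenance of a derived dataset: record the transformation applied.
prov = ET.SubElement(study, "Provenance")
step = ET.SubElement(prov, "TransformationStep")
step.set("tool", "recode_script.sps")   # hypothetical processing file
step.text = "Recoded occupation codes to a harmonised classification"

print(ET.tostring(study, encoding="unicode"))
```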

    eBank UK: linking research data, scholarly communication and learning

    This paper includes an overview of the changing landscape of scholarly communication and describes outcomes from the innovative eBank UK project, which seeks to build links from e-research through to e-learning. As an introduction, the scholarly knowledge cycle is described, and the role of digital repositories and aggregator services in linking datasets from Grid-enabled projects to e-prints and on to peer-reviewed articles, as resources in portals and Learning Management Systems, is assessed. The development outcomes from the eBank UK project are presented, including the distributed information architecture, requirements for common ontologies, data models, metadata schemas, open linking technologies, provenance, and workflows. Some emerging challenges for the future are presented in conclusion.

    Research Data and the Duke Digital Repository: Building a Service-Driven Data Curation Workflow

    This year, Duke University Libraries launched the Duke Digital Repository (DDR), a service that supports the activities of the University's faculty, researchers, students, and library staff by preserving, securing, and providing access to digital resources. The DDR is built on three program areas: research data, scholarly publications, and library collections. The research data area of the DDR reflects a new focus on research data management services at Duke, which includes preservation-level storage in the DDR as well as consulting and instruction. The new program of services is supported by four new staff members (two research data management consultants and two digital repository content analysts), as well as a CLIR postdoctoral fellow. Along with two subject specialist librarians and the head of Data and Visualization Services, we form the Research Data Working Group at Duke Libraries, which provides policy, procedure, and technical recommendations on data curation and management issues related to the library’s repository program. We are currently piloting new services, policies, and workflows to promote data sharing and curation, quality metadata, and reproducible research, offering consultations and workshops on research data management best practices. This poster reports on the organizational structures, data sharing policies, metadata requirements, and workflows for data ingest that are currently in place.

    Standard Formatted Data Units: Control Authority Operations

    The purpose of this document is to illustrate how a Control Authority (CA) might operate. The document is an interpretation and expansion of the concept found in the CA Procedures Recommendation. The CA is described in terms of the functions it performs for the management and control of data descriptions (metadata). Functions pertaining to the organization of Member Agency Control Authority Offices (MACAOs), e.g., creating and disbanding them, are not discussed. The document also provides an illustrative operational view of a CA through scenarios describing the interaction between the roles involved in collecting, controlling, and accessing registered metadata. The roles interacting with the CA are identified by their actions in requesting and responding to requests for metadata, and by the type of information exchanged. The scenarios and examples presented in this document are illustrative only; they represent possible interactions supported by either a manual or an automated system. The scenarios also identify requirements for an automated system, expressed by identifying the information to be exchanged and the services that a CA may provide for that exchange.
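    A hedged sketch of the registration interaction such scenarios describe: a submitter requests registration of a data description, the CA assigns a control identifier, and a consumer later retrieves the description by that identifier. The class and the identifier scheme are illustrative assumptions, not the Recommendation's protocol:

```python
import itertools

class ControlAuthority:
    """Illustrative registry of data descriptions (metadata); not the
    actual CCSDS Control Authority procedures."""

    def __init__(self, agency_code: str):
        self.agency_code = agency_code
        self._counter = itertools.count(1)
        self._registry: dict[str, str] = {}

    def register(self, description: str) -> str:
        # Respond to a registration request with a control identifier.
        ident = f"{self.agency_code}{next(self._counter):04d}"
        self._registry[ident] = description
        return ident

    def retrieve(self, ident: str) -> str:
        # Respond to an access request for a registered description.
        return self._registry[ident]

ca = ControlAuthority(agency_code="XMPL")
ident = ca.register("Format of telemetry packet, version 2")
print(ident, "->", ca.retrieve(ident))
```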

    Tracking Data in Open Learning Environments

    The collection and management of learning traces, metadata about the actions that students perform while they learn, is a core topic in the domain of Learning Analytics. In this paper, we present a simple architecture for collecting and managing learning traces. We describe the requirements, the different components of the architecture, and our experiences with the successful deployment of the architecture in two case studies: a blended learning university course and an enquiry-based learning secondary school course. For flexibility and configurability, the architecture relies on trackers, collecting agents that fetch data from external services. In addition, we discuss how our architecture meets the requirements of different learning environments, and we close with critical reflections and remarks on future work.
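    A minimal sketch of the tracker idea: a collecting agent that pulls raw events from an external service and normalises them into uniform learning-trace records. The record fields and the stand-in fetch function are assumptions for illustration, not the paper's implementation:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LearningTrace:
    # Uniform trace record: who did what, to which resource, when.
    actor: str
    verb: str
    obj: str
    timestamp: str

def fetch_external_events() -> list[dict]:
    # Stand-in for a call to an external service (forum, wiki, LMS, ...).
    return [
        {"user": "s123", "action": "posted", "target": "forum/thread/9"},
        {"user": "s456", "action": "viewed", "target": "wiki/page/intro"},
    ]

def track() -> list[LearningTrace]:
    # The tracker normalises heterogeneous events into uniform traces.
    now = datetime.now(timezone.utc).isoformat()
    return [
        LearningTrace(e["user"], e["action"], e["target"], now)
        for e in fetch_external_events()
    ]

for trace in track():
    print(trace)
```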