Pathways: Augmenting interoperability across scholarly repositories
In the emerging eScience environment, repositories of papers, datasets,
software, etc., should be the foundation of a global and natively-digital
scholarly communications system. The current infrastructure falls far short of
this goal. Cross-repository interoperability must be augmented to support the
many workflows and value-chains involved in scholarly communication. This will
not be achieved through the promotion of a single repository architecture or
content representation, but instead requires an interoperability framework to
connect the many heterogeneous systems that will exist.
We present a simple data model and service architecture that augments
repository interoperability to enable scholarly value-chains to be implemented.
We describe an experiment that demonstrates how the proposed infrastructure can
be deployed to implement the workflow involved in the creation of an overlay
journal over several different repository systems (Fedora, aDORe, DSpace and
arXiv).
Comment: 18 pages. Accepted for International Journal on Digital Libraries special issue on Digital Libraries and eScience.
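The overlay-journal experiment described above can be illustrated with a toy sketch. The class names and identifiers below are invented for illustration; they are not the paper's actual data model, only a minimal rendering of the idea that an overlay journal selects objects that keep living in their source repositories:

```python
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    """A repository-neutral surrogate for an item held in any repository."""
    identifier: str   # e.g. an OAI identifier (hypothetical here)
    repository: str   # e.g. "arXiv", "DSpace", "Fedora", "aDORe"
    metadata: dict = field(default_factory=dict)

@dataclass
class OverlayJournalIssue:
    """An overlay journal issue is an ordered selection of objects that
    continue to live in their source repositories."""
    title: str
    accepted: list = field(default_factory=list)

    def accept(self, obj: DigitalObject) -> None:
        self.accepted.append(obj)

issue = OverlayJournalIssue(title="eScience Special Issue")
issue.accept(DigitalObject("oai:example.org:paper-1", "DSpace",
                           {"title": "Sample paper"}))
issue.accept(DigitalObject("oai:example.org:paper-2", "arXiv",
                           {"title": "Another paper"}))
print(len(issue.accepted))  # → 2
```

The point of the sketch is that acceptance into the journal changes only the overlay's state, never the underlying repositories.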
The aDORe Federation Architecture
The need to federate repositories emerges in two distinctive scenarios. In
one scenario, scalability-related problems in the operation of a repository
reach a point beyond which continued service requires parallelization and hence
federation of the repository infrastructure. In the other scenario, multiple
distributed repositories manage collections of interest to certain communities
or applications, and federation is an approach to present a unified perspective
across these repositories. The high-level, 3-Tier aDORe federation architecture
can be used as a guideline to federate repositories in both cases. This paper
describes the architecture, consisting of core interfaces for federated
repositories in Tier-1, two shared infrastructure components in Tier-2, and a
single-point of access to the federation in Tier-3. The paper also illustrates
two large-scale deployments of the aDORe federation architecture: the aDORe
Archive repository (over 100,000,000 digital objects) at the Los Alamos
National Laboratory and the Ghent University Image Repository federation
(multiple terabytes of image files).
Comment: 43 pages, 4 figures, 2 tables.
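The 3-Tier split can be sketched in miniature. The class and method names below are assumptions made for illustration and do not reproduce the actual aDORe interfaces: Tier-1 repositories expose retrieval, a Tier-2 shared component records which repository holds which identifier, and Tier-3 offers a single point of access:

```python
class Repository:
    """Tier-1: one federated repository exposing a core get/put interface
    (interface names are illustrative, not the aDORe specification)."""
    def __init__(self, name):
        self.name = name
        self.objects = {}
    def put(self, identifier, obj):
        self.objects[identifier] = obj
    def get(self, identifier):
        return self.objects[identifier]

class IdentifierLocator:
    """Tier-2: shared infrastructure mapping identifiers to repositories."""
    def __init__(self):
        self.index = {}
    def register(self, identifier, repo):
        self.index[identifier] = repo
    def locate(self, identifier):
        return self.index[identifier]

class FederationGateway:
    """Tier-3: single point of access that resolves, then retrieves."""
    def __init__(self, locator):
        self.locator = locator
    def get(self, identifier):
        return self.locator.locate(identifier).get(identifier)

repo = Repository("archive-node-07")
repo.put("info:example/obj-1", b"...object bytes...")
locator = IdentifierLocator()
locator.register("info:example/obj-1", repo)
gateway = FederationGateway(locator)
print(gateway.get("info:example/obj-1"))  # → b'...object bytes...'
```

The same gateway works whether the repositories behind it are parallelized shards of one archive (the scalability scenario) or independent institutional collections (the unified-perspective scenario).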
Core Services in the Architecture of the National Digital Library for Science Education (NSDL)
We describe the core components of the architecture for the National
Science, Mathematics, Engineering, and Technology Education Digital Library (NSDL).
Over time the NSDL will include heterogeneous users, content, and services. To
accommodate this, a design for a technical and organizational infrastructure has
been formulated based on the notion of a spectrum of interoperability. This
paper describes the first phase of the interoperability infrastructure
including the metadata repository, search and discovery services, rights
management services, and user interface portal facilities.
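A metadata repository in an architecture like this is typically populated and exposed via the OAI-PMH harvesting protocol. As a minimal sketch (the base URL below is a placeholder, not a live NSDL endpoint), a harvesting service builds its requests like so:

```python
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL.

    verb and metadataPrefix are required by the protocol; a set may
    optionally restrict the harvest to one part of the collection.
    """
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec
    return f"{base_url}?{urlencode(params)}"

url = list_records_url("https://example.org/oai", set_spec="physics")
print(url)
# → https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc&set=physics
```

Search and discovery services then index the Dublin Core records returned by such requests, which is one concrete point on the "spectrum of interoperability" the paper describes.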
Suggested measures for deploying IIIF in Swiss cultural heritage institutions
This white paper was written as part of the Towards IIIF-Compliance Knowledge in Switzerland (TICKS) project, conducted at the Haute école de gestion de Genève (HEG-GE) between March 2018 and February 2019, which originated in the recognition that the International Image Interoperability Framework (IIIF) ecosystem was not sufficiently known and deployed in the Swiss cultural heritage field. The white paper starts with the main principles of IIIF, notably presenting the different technical specifications, or application programming interfaces (APIs), produced by the IIIF community, as well as the platforms of Swiss projects and institutions that have deployed IIIF. Moving from the general to the specific, it then provides a generic step-by-step graph for IIIF deployment and six more detailed use cases, each reflecting different needs of the GLAM (Galleries, Libraries, Archives, Museums) sector and giving concrete implementation measures. Finally, the document contains recommendations for further action, as well as information on the possible reuse of this document for other regions of the world or for other scientific fields.
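Among the specifications mentioned above, the IIIF Image API addresses image regions with a parameterised URI of the form {base}/{identifier}/{region}/{size}/{rotation}/{quality}.{format}. A small helper can make the pattern concrete; the server and identifier below are placeholders, not a real Swiss institution's endpoint:

```python
def iiif_image_url(base, identifier, region="full", size="max",
                   rotation="0", quality="default", fmt="jpg"):
    """Assemble a IIIF Image API request URL from its path parameters."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# A 1000x1000-pixel region, scaled to 500 pixels wide:
url = iiif_image_url("https://iiif.example.ch/image", "ms-0042",
                     region="0,0,1000,1000", size="500,")
print(url)
# → https://iiif.example.ch/image/ms-0042/0,0,1000,1000/500,/0/default.jpg
```

Because every compliant server answers the same URI pattern, any IIIF viewer can zoom, crop, and compare images across institutions without custom integration, which is the interoperability argument the white paper makes.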
Forgotten as data – remembered through information. Social memory institutions in the digital age: the case of the Europeana Initiative
The study of social memory has emerged as a rich field of research closely linked
to cultural artefacts, communication media and institutions as carriers of a past
that transcends the horizon of the individual’s lifetime. Within this domain of
research, the dissertation focuses on memory institutions (libraries, archives,
museums) and the shifts they are undergoing as the outcome of digitization and
the diffusion of online media. Very little is currently known about the impact that
digitality and computation may have on social memory institutions, specifically,
and social memory, more generally – an area of study that would benefit from,
but has so far been mostly overlooked by, information systems research.
The dissertation finds its point of departure in the conceptualization of
information as an event that occurs through the interaction between an observer
and the observed – an event that cannot be stored as information but merely as
data. In this context, memory is conceived as an operation that filters, thus
forgets, the singular details of an information event by making it comparable to
other events according to abstract classification criteria. Against this backdrop,
memory institutions are institutions of forgetting as they select, order and
preserve a canon of cultural heritage artefacts.
Supported by evidence from a case study on the Europeana initiative (a
digitization project of European libraries, archives and museums), the
dissertation reveals a fundamental shift in the field of memory institutions. The
case study demonstrates the disintegration of 1) the cultural heritage artefact, 2)
its standard modes of description and 3) the catalogue as such into a steadily
accruing assemblage of data and metadata. Dismembered into bits and bytes,
cultural heritage needs to be re-membered through the emulation of recognizable
cultural heritage artefacts and momentary renditions of order. In other words,
memory institutions forget as binary-based data and remember through
computational information.
Analysis of the Usability of Automatically Enriched Cultural Heritage Data
This chapter presents the potential of interoperability and standardised data
publication for cultural heritage resources, with a focus on community-driven
approaches and web standards for usability. The Linked Open Usable Data (LOUD)
design principles, which rely on JSON-LD as lingua franca, serve as the
foundation.
We begin by exploring the significant advances made by the International
Image Interoperability Framework (IIIF) in promoting interoperability for
image-based resources. The principles and practices of IIIF have paved the way
for Linked Art, which expands the use of linked data by demonstrating how it
can easily facilitate the integration and sharing of semantic cultural heritage
data across portals and institutions.
To provide a practical demonstration of the concepts discussed, the chapter
highlights the implementation of LUX, the Yale Collections Discovery platform.
LUX serves as a compelling case study for the use of linked data at scale,
demonstrating the real-world application of automated enrichment in the
cultural heritage domain.
Rooted in empirical study, the analysis presented in this chapter delves into
the broader context of community practices and semantic interoperability. By
examining the collaborative efforts and integration of diverse cultural
heritage resources, the research sheds light on the potential benefits and
challenges associated with LOUD.
Comment: This is the preprint version of a chapter submitted to be included in the book "Decoding Cultural Heritage: a critical dissection and taxonomy of human creativity through digital tools", to be published by Springer Nature. The chapter is currently undergoing peer review for potential inclusion in the book.
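A flavour of what LOUD means in practice: Linked Art publishes cultural heritage records as JSON-LD that is both semantically precise and easy for developers to consume. The record below follows the general shape of the Linked Art model, but the identifiers and label are made up for illustration:

```python
import json

# Minimal Linked Art-style JSON-LD record (illustrative identifiers).
record = {
    "@context": "https://linked.art/ns/v1/linked-art.json",
    "id": "https://example.org/object/1",
    "type": "HumanMadeObject",
    "_label": "Painting of a landscape",
    "identified_by": [
        {"type": "Name", "content": "Painting of a landscape"}
    ],
}

# Serialised, this is plain JSON any web client can parse, while the
# @context grounds each key in a shared cultural heritage ontology.
print(json.dumps(record, indent=2))
```

JSON-LD as lingua franca is precisely the trade-off the LOUD principles name: usable for an ordinary web developer, yet still linked data for aggregation across portals such as LUX.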
The International Image Interoperability Framework (IIIF): raising awareness of the user benefits for scholarly editions
The International Image Interoperability Framework (IIIF), an initiative born in 2011, defines a set of common application programming interfaces (APIs) to retrieve, display, manipulate, compare, and annotate digitised and born-digital images. Upon implementation, these technical specifications have offered institutions and end users alike new possibilities. In Switzerland, only a handful of organizations and projects have collaborated with the IIIF community. For instance, e-codices, the Virtual Manuscript Library, implemented the two core IIIF APIs (Image API and Presentation API) in December 2014. Since then, no other Swiss collection has fully complied with the IIIF specifications to make true interoperability possible. The NIE-INE project, overseen by the University of Basel and funded by Swissuniversities, has aimed to build a national platform for scientific editions. NIE-INE and IIIF share a common rationale: both advocate a flexible and consistent technical architecture and aim to provide a high-quality user experience (UX) in content delivery. Remote and in-person usability tests were conducted on the Universal Viewer (UV) and Mirador, two IIIF-compliant image viewers deployed by many IIIF implementers, in order to assess user satisfaction and efficiency as well as perceived usability. NIE-INE was the target audience of the usability testing, with a view to evaluating how scholarly research and the wider scientific community could benefit from leveraging IIIF-compliant technology. To conclude this bachelor's thesis, a set of recommendations based on the usability testing results was drawn up for the development teams of both viewers, the IIIF community, and the NIE-INE team.
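Viewers such as Mirador and the Universal Viewer load resources through the second core specification, the Presentation API, whose central document is a manifest. The skeleton below follows the shape of a Presentation API 3.0 manifest; all URLs, the label, and the canvas dimensions are placeholders, not a real edition:

```python
# Skeleton of a IIIF Presentation API 3.0 manifest, the document a
# IIIF-compliant viewer loads to display a digitised edition.
manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/edition-1/manifest",
    "type": "Manifest",
    "label": {"en": ["Sample scholarly edition"]},
    "items": [  # one Canvas per digitised page
        {
            "id": "https://example.org/iiif/edition-1/canvas/1",
            "type": "Canvas",
            "height": 3000,
            "width": 2000,
        }
    ],
}

print(manifest["type"], len(manifest["items"]))  # → Manifest 1
```

Because both viewers consume the same manifest format, the usability comparison in the thesis could hold the content constant and vary only the viewing interface.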
An MPEG-7 scheme for semantic content modelling and filtering of digital video
Part 5 of the MPEG-7 standard specifies Multimedia Description Schemes (MDS); that is, the format multimedia content models should conform to in order to ensure interoperability across multiple platforms and applications. However, the standard does not specify how the content or the associated model may be filtered. This paper proposes an MPEG-7 scheme which can be deployed for digital video content modelling and filtering. The proposed scheme, COSMOS-7, produces rich and multi-faceted semantic content models and supports a content-based filtering approach that only analyses content relating directly to the preferred content requirements of the user. We present details of the scheme, the front-end systems used for content modelling and filtering, and our experiences with a number of users.
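The content-based filtering idea can be illustrated with a toy model. COSMOS-7's actual MPEG-7 descriptions are far richer than this; the segments, concept labels, and matching rule below are invented solely to show how filtering restricts analysis to content matching the user's preferences:

```python
# Toy semantic content model: each video segment carries concept
# annotations (labels are illustrative, not MPEG-7 descriptors).
segments = [
    {"start": 0,  "end": 30, "concepts": {"goal", "crowd"}},
    {"start": 30, "end": 60, "concepts": {"interview"}},
    {"start": 60, "end": 95, "concepts": {"goal", "replay"}},
]

def filter_segments(segments, preferred):
    """Keep only segments whose annotations intersect the user's
    preferred concepts, so unrelated content is never analysed."""
    return [s for s in segments if s["concepts"] & preferred]

hits = filter_segments(segments, {"goal"})
print([(s["start"], s["end"]) for s in hits])  # → [(0, 30), (60, 95)]
```

The efficiency argument in the abstract corresponds to the early discard here: segments that cannot match the user's preferences drop out before any deeper processing.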