Economics and Engineering for Preserving Digital Content
Progress towards practical long-term preservation seems to be stalled. Preservationists cannot afford specially developed technology, but must exploit what is created for the marketplace.
Economic and technical facts suggest that most preservation work should be shifted from repository institutions to information producers and consumers. Prior publications describe solutions for all known conceptual challenges of preserving a single digital object, but do not deal with software development or scaling to large collections. Much of the document handling software needed is available. It has, however, not yet been selected, adapted, integrated, or
deployed for digital preservation. The daily tools of both information producers and information consumers can be extended to embed preservation packaging without much burdening these users.
We describe a practical strategy for detailed design and implementation. Document handling is intrinsically complicated because of human sensitivity to communication nuances. Our engineering section therefore starts by discussing how project managers can master the many pertinent details.
Education alignment
This essay reviews recent developments in embedding data
management and curation skills into information technology,
library and information science, and research-based
postgraduate courses in various national contexts. The essay
also investigates means of joining up formal education with
professional development training opportunities more
coherently. The potential for using professional internships as a
means of improving communication and understanding between
disciplines is also explored. A key aim of this essay is to identify
what level of complementarity is needed across various
disciplines to most effectively and efficiently support the entire
data curation lifecycle.
A Study on Provenance Description of Metadata Schemas for Improving the Long-term Usability of Metadata in Networked Information Environments
University of Tsukuba, 201
DePICT : a conceptual model for digital preservation
Digital Preservation addresses a significant threat to our cultural and economic foundation: the loss of access to valuable and, sometimes, unique information that is captured in digital form, through obsolescence, deterioration, or loss of information about how to access the contents. Digital Preservation has been defined as “The series of managed activities necessary to ensure continued access to digital materials for as long as necessary” (Jones, Beagrie, 2001/2008). This thesis develops a conceptual model of the core concepts and constraints that appear in digital preservation - DePICT (Digital PreservatIon ConceptualisaTion). This includes a conceptual model of the digital preservation domain, a top-level vocabulary for the concepts in the model, an in-depth analysis of the role of digital object properties, characteristics, and the constraints that guide digital preservation processes, and of how properties, characteristics and constraints affect the interaction of digital preservation services. In addition, it presents a machine-interpretable XML representation of this conceptual model to support automated digital preservation tools. Previous preservation models have focused on preserving technical properties of digital files. Such an approach limits the choices of preservation actions and does not fully reflect preservation activities in practice. Organisations consider properties that go beyond technical aspects and that encompass a wide range of factors that influence and guide preservation processes, including organisational, legal, and financial ones. Consequently, it is necessary to be able to handle ‘digital’ objects in a very wide sense, including abstract objects, such as intellectual entities and collections, in addition to the files and sets of files that create renditions of logical objects that are normally considered.
In addition, we find that not only the digital objects' properties, but also the properties of the environments in which they exist, guide digital preservation processes. Furthermore, organisations use risk-based analysis for their preservation strategies, policies and preservation planning. They combine information about risks with an understanding of actions that are expected to mitigate the risks. Risk and action specifications can be dependent on properties of the actions, as well as on properties of objects or environments which form the input and output of those actions. The model presented here supports this view explicitly. It links risks with the actions that mitigate them and expresses them in stakeholder-specific constraints. Risks, actions and constraints are top-level entities in this model. In addition, digital objects and environments are top-level entities on an equal level. Models that do not have this property limit the choice of preservation actions to ones that transform a file in order to mitigate a risk. Establishing environments as top-level entities enables us to treat risks to objects, environments, or a combination of both. The DePICT model is the first conceptual model in the Digital Preservation domain that supports a comprehensive, whole life-cycle approach for dynamic, interacting preservation processes, rather than taking the customary and more limited view that is concerned with the management of digital objects once they are stored in a long-term repository. (EThOS - Electronic Theses Online Service, United Kingdom)
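The entity structure the abstract describes (digital objects, environments, risks, actions, and constraints as equal top-level entities, with risks linked to the actions that mitigate them) can be sketched in code. This is a minimal illustrative sketch only; all class, field, and instance names below are invented for illustration and are not the thesis's actual XML vocabulary.

```python
from dataclasses import dataclass, field

# Illustrative sketch of DePICT's five top-level entities as described in the
# abstract. All names here are hypothetical, not the thesis's own vocabulary.

@dataclass
class DigitalObject:
    identifier: str
    properties: dict = field(default_factory=dict)  # may include legal, financial, organisational facets

@dataclass
class Environment:
    identifier: str
    properties: dict = field(default_factory=dict)

@dataclass
class Risk:
    name: str
    threatens: list = field(default_factory=list)   # object and/or environment identifiers

@dataclass
class Action:
    name: str
    mitigates: list = field(default_factory=list)   # names of risks this action addresses

@dataclass
class Constraint:
    stakeholder: str
    applies_to: str                                  # a risk or action name

def mitigating_actions(risk, actions):
    """Return the actions whose 'mitigates' list names the given risk."""
    return [a for a in actions if risk.name in a.mitigates]

# A risk may threaten an object, an environment, or both, since both are
# top-level entities on an equal footing.
risk = Risk("format-obsolescence", threatens=["doc-001", "env-legacy-os"])
actions = [Action("migrate-format", mitigates=["format-obsolescence"]),
           Action("refresh-storage", mitigates=["media-decay"])]
print([a.name for a in mitigating_actions(risk, actions)])
```

The point of the sketch is the linkage: because risks, actions, objects, and environments are all first-class, a mitigation need not be a file transformation.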
The State of the Art in Cartograms
Cartograms combine statistical and geographical information in thematic maps,
where areas of geographical regions (e.g., countries, states) are scaled in
proportion to some statistic (e.g., population, income). Cartograms make it
possible to gain insight into patterns and trends in the world around us and
have been very popular visualizations for geo-referenced data for over a
century. This work surveys cartogram research in visualization, cartography and
geometry, covering a broad spectrum of different cartogram types: from the
traditional rectangular and table cartograms, to Dorling and diffusion
cartograms. A particular focus is the study of the major cartogram dimensions:
statistical accuracy, geographical accuracy, and topological accuracy. We
review the history of cartograms, describe the algorithms for generating them,
and consider task taxonomies. We also review quantitative and qualitative
evaluations, and we use these to arrive at design guidelines and research
challenges.
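The defining property of a cartogram stated above, region areas scaled in proportion to a statistic, can be illustrated numerically. This is a toy sketch with invented region data, not any of the surveyed algorithms: each region's target area is its share of the statistic applied to the total map area, and the linear scale factor is the square root of the area ratio, since area grows with the square of linear size.

```python
import math

# Toy sketch of cartogram area scaling: target area proportional to a
# statistic (here, population). Region names and figures are invented.
regions = {            # name: (current map area in km^2, population)
    "A": (100.0, 1_000_000),
    "B": (400.0, 1_000_000),
    "C": (100.0, 4_000_000),
}

total_area = sum(area for area, _ in regions.values())
total_stat = sum(pop for _, pop in regions.values())

scales = {}
for name, (area, pop) in regions.items():
    target = total_area * pop / total_stat      # area proportional to statistic
    scales[name] = math.sqrt(target / area)     # linear factor for each side
    print(f"{name}: target area {target:.1f} km^2, linear scale {scales[name]:.2f}")
```

Region B keeps the same population as A on four times the area, so it shrinks; C doubles in linear size. Achieving such targets while preserving shapes and adjacencies is exactly the tension among the statistical, geographical, and topological accuracy dimensions the survey examines.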
Proceedings of the 12th International Conference on Digital Preservation
The 12th International Conference on Digital Preservation (iPRES) was held on November 2-6, 2015 in Chapel Hill, North Carolina, USA. There were 327 delegates from 22 countries. The program included 12 long papers, 15 short papers, 33 posters, 3 demos, 6 workshops, 3 tutorials and 5 panels, as well as several interactive sessions and a Digital Preservation Showcase
From the oceans to the cloud: Opportunities and challenges for data, models, computation and workflows.
© The Author(s), 2019. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Vance, T. C., Wengren, M., Burger, E., Hernandez, D., Kearns, T., Medina-Lopez, E., Merati, N., O'Brien, K., O'Neil, J., Potemra, J. T., Signell, R. P., & Wilcox, K. From the oceans to the cloud: Opportunities and challenges for data, models, computation and workflows. Frontiers in Marine Science, 6(211), (2019), doi:10.3389/fmars.2019.00211.
Advances in ocean observations and models mean increasing flows of data. Integrating observations between disciplines over spatial scales from regional to global presents challenges. Running ocean models and managing the results is computationally demanding. The rise of cloud computing presents an opportunity to rethink traditional approaches. This includes developing shared data processing workflows utilizing common, adaptable software to handle data ingest and storage, and an associated framework to manage and execute downstream modeling. Working in the cloud presents challenges: migration of legacy technologies and processes, cloud-to-cloud interoperability, and the translation of legislative and bureaucratic requirements for “on-premises” systems to the cloud. To respond to the scientific and societal needs of a fit-for-purpose ocean observing system, and to maximize the benefits of more integrated observing, research on utilizing cloud infrastructures for sharing data and models is underway. Cloud platforms and the services/APIs they provide offer new ways for scientists to observe and predict the ocean’s state. High-performance mass storage of observational data, coupled with on-demand computing to run model simulations in close proximity to the data, tools to manage workflows, and a framework to share and collaborate, enables a more flexible and adaptable observation and prediction computing architecture.
Model outputs are stored in the cloud and researchers either download subsets for their interest/area or feed them into their own simulations without leaving the cloud. Expanded storage and computing capabilities make it easier to create, analyze, and distribute products derived from long-term datasets. In this paper, we provide an introduction to cloud computing, describe current uses of the cloud for management and analysis of observational data and model results, and describe workflows for running models and streaming observational data. We discuss topics that must be considered when moving to the cloud: costs, security, and organizational limitations on cloud use. Future uses of the cloud via computational sandboxes and the practicalities and considerations of using the cloud to archive data are explored. We also consider the ways in which the human elements of ocean observations are changing – the rise of a generation of researchers whose observations are likely to be made remotely rather than hands-on – and how their expectations and needs drive research towards the cloud. In conclusion, visions of a future where cloud computing is ubiquitous are discussed. This is PMEL contribution 4873.
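The access pattern described, subsetting cloud-hosted model output for a region of interest instead of downloading whole datasets, can be illustrated with a toy standard-library sketch. Real deployments would use array libraries (e.g., xarray) against cloud object storage; here a plain dictionary stands in for the hosted grid, and every name and value is invented.

```python
# Toy illustration of regional subsetting of gridded model output.
# A dict keyed by (lat, lon) stands in for a cloud-hosted dataset;
# values are fabricated sea-surface temperatures.
grid = {(lat, lon): 15.0 + 0.1 * lat - 0.05 * lon
        for lat in range(0, 50, 10)
        for lon in range(0, 100, 20)}

def subset(grid, lat_range, lon_range):
    """Extract only the cells inside a region of interest."""
    (lat0, lat1), (lon0, lon1) = lat_range, lon_range
    return {k: v for k, v in grid.items()
            if lat0 <= k[0] <= lat1 and lon0 <= k[1] <= lon1}

region = subset(grid, (10, 30), (20, 60))   # a researcher's study area
print(len(grid), "cells total;", len(region), "in subset")
```

When the subsetting (or a full simulation consuming the output) runs next to the data in the cloud, only the small result leaves the platform, which is the economy the paper's workflow discussion turns on.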