32 research outputs found

    Digitale bestanden overzichtelijk ordenen (Organising digital files in an orderly way)


    Increasing information feed in the process of structural steel design

    Research initiatives throughout history have shown how a designer typically makes associations and references to a vast amount of knowledge based on experience in order to make decisions. With the increasing use of information systems in our everyday lives, one might imagine an information system that provides designers access to the ‘architectural memories’ of other architectural designers during the design process, in addition to their own physical architectural memory. In this paper, we discuss how the increased adoption of semantic web technologies might advance this idea. We investigate to what extent information can be described with these technologies in the context of structural steel design. This investigation indicates significant possibilities for information reuse in the process of structural steel design and, by extension, in other design contexts as well. However, important obstacles and open questions remain.
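
    As a rough illustration of the kind of description the paper has in mind, the sketch below records a single steel member as RDF triples using the rdflib library. The vocabulary (the EX namespace and its property names) and the example values are assumptions made up for this illustration, not the ontology used in the paper.

        # Describe one structural steel member as RDF so other tooling could query or reuse it.
        from rdflib import Graph, Literal, Namespace, RDF
        from rdflib.namespace import XSD

        EX = Namespace("http://example.org/steel#")   # hypothetical vocabulary
        g = Graph()
        g.bind("ex", EX)

        beam = EX["beam-B12"]
        g.add((beam, RDF.type, EX.SteelBeam))
        g.add((beam, EX.profile, Literal("HEA 300")))
        g.add((beam, EX.spanInMetres, Literal(7.2, datatype=XSD.decimal)))
        g.add((beam, EX.designedBy, EX["designer-42"]))

        print(g.serialize(format="turtle"))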

    A Multi-Relational Network to Support the Scholarly Communication Process

    The general purpose of the scholarly communication process is to support the creation and dissemination of ideas within the scientific community. At a finer granularity, there exist multiple stages which, when confronted by a member of the community, have different requirements and therefore different solutions. In order to take a researcher's idea from an initial inspiration to a community resource, the scholarly communication infrastructure may be required to 1) provide a scientist with initial seed ideas; 2) form a team of well-suited collaborators; 3) locate the most appropriate venue to publish the formalized idea; 4) determine the most appropriate peers to review the manuscript; and 5) disseminate the end product to the most interested members of the community. Through the various delineations of this process, the requirements of each stage are tied solely to the multi-functional resources of the community: its researchers, its journals, and its manuscripts. It is within the collection of these resources and their inherent relationships that the solutions to scholarly communication are to be found. This paper describes an associative network composed of multiple scholarly artifacts that can be used as a medium for supporting the scholarly communication process. Comment: keywords: digital libraries and scholarly communication.
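
    To make the idea of an associative, multi-relational network concrete, the sketch below builds a small graph of researchers, manuscripts, and a journal with typed edges and walks it to suggest reviewers. The node names, relation labels, and the reviewer heuristic are illustrative assumptions, not the paper's actual model or data.

        # A toy multi-relational network: nodes are scholarly artifacts, edges carry a relation type.
        import networkx as nx

        g = nx.MultiDiGraph()
        g.add_edge("alice", "paper-1", relation="authored")
        g.add_edge("bob", "paper-2", relation="authored")
        g.add_edge("paper-2", "paper-1", relation="cites")
        g.add_edge("paper-1", "JCDL", relation="publishedIn")

        def candidate_reviewers(graph, manuscript):
            # Follow "cites" edges to prior work, then "authored" edges back to people.
            reviewers = set()
            for _, cited, data in graph.out_edges(manuscript, data=True):
                if data.get("relation") != "cites":
                    continue
                for author, _, d in graph.in_edges(cited, data=True):
                    if d.get("relation") == "authored":
                        reviewers.add(author)
            return reviewers

        print(candidate_reviewers(g, "paper-2"))   # {'alice'}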

    Archive Ingest and Handling Test

    The Archive Ingest and Handling Test (AIHT) was a Library of Congress (LC) sponsored research project administered by Information Systems and Support Inc. (ISS). The project featured five participants: Old Dominion University Computer Science Department; Harvard University Library; Johns Hopkins University Library; Stanford University Library; and the Library of Congress. All five participants received identical disk drives containing copies of the 911.gmu.edu web site, a collection of 9/11 materials maintained by George Mason University (GMU). The purpose of the AIHT experiment was to perform archival forensics to determine the nature of the archive, ingest it, simulate at least one of the file formats going out of scope, export a copy of the archive, and import another version of the archive. The AIHT is further described in Shirky (2005).
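
    The archival-forensics step amounts to taking stock of what is actually on the received drive before deciding how to ingest it or which formats to treat as going out of scope. The sketch below is one possible first pass, assuming only a local copy of the archive at a placeholder path; it is not taken from any participant's report.

        # Walk a copy of the archive and tally apparent file formats.
        import os
        import mimetypes
        from collections import Counter

        def format_inventory(root):
            counts = Counter()
            for dirpath, _, filenames in os.walk(root):
                for name in filenames:
                    mime, _ = mimetypes.guess_type(name)
                    counts[mime or "unknown"] += 1
            return counts

        if __name__ == "__main__":
            for mime, n in format_inventory("./aiht-archive").most_common():
                print(f"{n:6d}  {mime}")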

    The multi-faceted use of the OAI-PMH in the LANL Repository

    This paper focuses on the multifaceted use of the OAI-PMH in a repository architecture designed to store digital assets at the Research Library of the Los Alamos National Laboratory (LANL), and to make the stored assets available in a uniform way to various downstream applications. In the architecture, the MPEG-21 Digital Item Declaration Language is used as the XML-based format to represent complex digital objects. Upon ingestion, these objects are stored in a multitude of autonomous OAI-PMH repositories. An OAI-PMH compliant Repository Index keeps track of the creation and location of all those repositories, whereas an Identifier Resolver keeps track of the location of individual objects. An OAI-PMH Federator is introduced as a single point of access for downstream harvesters. It hides the complexity of the environment from those harvesters, and allows them to obtain transformations of stored objects. While the proposed architecture is described in the context of the LANL library, the paper also touches on its more general applicability.
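
    For readers unfamiliar with OAI-PMH, the sketch below shows roughly how a downstream harvester would page through such a single point of access using the protocol's standard verb and resumptionToken mechanics. The endpoint URL is a placeholder, and the code is a generic OAI-PMH client sketch, not LANL's actual harvesting software.

        # Page through an OAI-PMH endpoint with ListIdentifiers and resumption tokens.
        import urllib.parse
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI = "{http://www.openarchives.org/OAI/2.0/}"

        def list_identifiers(base_url, metadata_prefix="oai_dc"):
            token = None
            while True:
                params = {"verb": "ListIdentifiers"}
                if token:
                    params["resumptionToken"] = token
                else:
                    params["metadataPrefix"] = metadata_prefix
                url = base_url + "?" + urllib.parse.urlencode(params)
                with urllib.request.urlopen(url) as resp:
                    tree = ET.parse(resp)
                for header in tree.iter(OAI + "header"):
                    yield header.findtext(OAI + "identifier")
                token_el = tree.find(".//" + OAI + "resumptionToken")
                token = token_el.text if token_el is not None else None
                if not token:
                    break

        # Example usage (placeholder endpoint):
        # for oai_id in list_identifiers("https://repository.example.org/oai"):
        #     print(oai_id)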

    Repository Replication Using NNTP and SMTP

    We present the results of a feasibility study using shared, existing, network-accessible infrastructure for repository replication. We investigate how dissemination of repository contents can be "piggybacked" on top of existing email and Usenet traffic. Long-term persistence of the replicated repository may be achieved thanks to current policies and procedures which ensure that mail messages and news posts are retrievable for evidentiary and other legal purposes for many years after the creation date. While the preservation issues of migration and emulation are not addressed with this approach, it does provide a simple method of refreshing content with unknown partners. Comment: This revised version has 24 figures and a more detailed discussion of the experiments conducted by us.
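
    At its simplest, the piggybacking idea reduces to wrapping a repository record as an ordinary mail message so that existing SMTP infrastructure carries and retains it. The sketch below assumes placeholder addresses, server, and subject convention; it is a schematic of the idea, not the experiment reported in the paper.

        # Wrap one repository record as a mail message and hand it to an SMTP server.
        import smtplib
        from email.message import EmailMessage

        def mail_record(record_id, payload: bytes, smtp_host="mail.example.org"):
            msg = EmailMessage()
            msg["From"] = "repository@example.org"
            msg["To"] = "archive-feed@example.org"
            msg["Subject"] = "repository-record " + record_id
            msg.set_content("Replicated repository record; see attachment.")
            msg.add_attachment(payload, maintype="application",
                               subtype="octet-stream", filename=record_id + ".xml")
            with smtplib.SMTP(smtp_host) as smtp:
                smtp.send_message(msg)

        # mail_record("oai:repo:1234", b"<record>...</record>")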

    MIT's CWSpace project: packaging metadata for archiving educational content in DSpace

    This paper describes work in progress on the research project CWSpace, sponsored by the MIT and Microsoft Research iCampus program, to investigate the metadata standards and protocols required to archive the course materials found in MIT’s OpenCourseWare (OCW) into MIT’s institutional repository DSpace. The project goal is “to harvest and digitally archive OCW learning objects, and make them available to learning management systems by using Web Services interfaces on top of DSpace.” The larger vision is one of complex digital objects (CDOs) successfully interoperating amongst MIT’s various learning management systems and learning object repositories, providing archival preservation and persistent identifiers for educational materials, as well as providing the means for richer shared discovery and dissemination mechanisms for those materials. The paper describes work to date on the analysis of the content packaging metadata standards METS (Metadata Encoding and Transmission Standard) and especially IMS-CP (IMS Global Learning Consortium, Content Packaging), and issues faced in the development and use of profiles, extensions, and external schema for these standards. Also addressed are the anticipated issues in the preparation of transformations from one standard to another, noting the importance of well-defined profiles in making that feasible. The paper also briefly touches on the DSpace development work that will be undertaken to provide new import and export functionalities, as the technical specifications for these will largely be determined by the packaging metadata profiles that are developed. Note that the degree of interoperability considered herein might be referred to as “first level,” as this paper addresses the packaging metadata only, which in turn is the carrier or envelope for the descriptive (and other kinds of) metadata. It will no doubt be an even more challenging task to ensure interoperability at what might be referred to as the “second level,” that of semantic metadata.
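
    To give a flavour of what packaging metadata as a "carrier or envelope" means in practice, the sketch below emits a skeletal METS document with a file section and a structural map for a couple of course files. The element choices and file names are illustrative assumptions only; the CWSpace profiles and the IMS-CP side of the mapping are far richer than this.

        # Build a skeletal METS wrapper: a fileSec listing the files and a structMap pointing at them.
        import xml.etree.ElementTree as ET

        METS = "http://www.loc.gov/METS/"
        XLINK = "http://www.w3.org/1999/xlink"
        ET.register_namespace("mets", METS)
        ET.register_namespace("xlink", XLINK)

        def minimal_mets(object_id, files):
            mets = ET.Element("{%s}mets" % METS, {"OBJID": object_id})
            file_sec = ET.SubElement(mets, "{%s}fileSec" % METS)
            grp = ET.SubElement(file_sec, "{%s}fileGrp" % METS, {"USE": "content"})
            struct = ET.SubElement(mets, "{%s}structMap" % METS)
            div = ET.SubElement(struct, "{%s}div" % METS, {"TYPE": "course"})
            for i, href in enumerate(files, start=1):
                f = ET.SubElement(grp, "{%s}file" % METS, {"ID": "FILE%d" % i})
                ET.SubElement(f, "{%s}FLocat" % METS,
                              {"LOCTYPE": "URL", "{%s}href" % XLINK: href})
                ET.SubElement(div, "{%s}fptr" % METS, {"FILEID": "FILE%d" % i})
            return ET.tostring(mets, encoding="unicode")

        print(minimal_mets("ocw-example-course", ["lectures/lec01.pdf", "psets/ps1.pdf"]))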

    Scientific Models: A User-oriented Approach to the Integration of Scientific Data and Digital Libraries

    Many scientific communities are struggling with the challenge of how to manage the terabytes of data they are producing, often on a daily basis. Scientific models are the primary method for representing and encapsulating expert knowledge in many disciplines. Scientific models could also provide a mechanism for publishing and sharing scientific results, for teaching complex scientific concepts, and for the selective archival, curation and preservation of scientific data. As such, they also provide a bridge for collaboration between Digital Libraries and eScience. In this paper I describe research being undertaken within the FUSION project at the University of Queensland to enable scientists to construct, publish and manage scientific model packages that encapsulate and relate the raw data to its associated contextual and provenance metadata, processing steps, derived information and publications. This work involves extending tools and services that have come out of the Digital Libraries domain to support e-Science requirements.
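
    One lightweight way to picture such a model package is as a manifest that ties the raw data to its processing steps, derived products, provenance, and publications. The sketch below uses invented field names and example values; it is not the FUSION project's actual schema.

        # A schematic manifest for a scientific model package, serialized as JSON.
        import json
        from dataclasses import dataclass, field, asdict

        @dataclass
        class ModelPackage:
            title: str
            raw_data: list          # URIs of source datasets
            processing_steps: list  # ordered processing or workflow descriptions
            derived_products: list  # URIs of derived datasets or figures
            publications: list      # DOIs or citations
            provenance: dict = field(default_factory=dict)

        pkg = ModelPackage(
            title="Example coastal model run",
            raw_data=["http://example.org/data/raw-currents.nc"],
            processing_steps=["de-spike", "harmonic analysis"],
            derived_products=["http://example.org/data/residual-currents.nc"],
            publications=["(citation placeholder)"],
            provenance={"instrument": "ADCP", "run_date": "unknown"},
        )

        print(json.dumps(asdict(pkg), indent=2))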