
    An archiving model for a hierarchical information storage environment

    We consider an archiving model for a database consisting of secondary and tertiary storage devices in which the query rate for a record declines as it ages. We propose a 'dynamic' archiving policy based on the number of records and the age of the records in the secondary device. We analyze the cases in which the number of new records inserted into the system over time is either constant or follows a Poisson process. For both scenarios, we characterize the properties of the policy parameters and provide optimization results when the objective is to minimize the average record retrieval time. Furthermore, we propose a simple heuristic method for obtaining near-optimal policies in large databases when the record query rate declines exponentially with time. The effectiveness of the heuristic is tested via a numerical experiment. Finally, we examine the behavior of performance measures such as the average record retrieval time and the hit rate as system parameters are varied. © 2000 Elsevier Science B.V. All rights reserved.
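    As a rough illustration of the kind of policy the abstract describes (not the authors' actual model), the sketch below simulates a two-tier store in which a record's query rate decays exponentially with age and records older than a threshold are archived to the slower tertiary device; the parameter values, the constant insertion rate, and the one-dimensional threshold search are all assumptions made for illustration.

```python
import math

# Illustrative parameters (not taken from the paper).
LAMBDA0 = 5.0      # initial query rate per record (queries per unit time)
BETA = 0.8         # exponential decay rate of the query rate with record age
T_SEC = 0.01       # retrieval time from the secondary (disk) device
T_TER = 1.00       # retrieval time from the tertiary (tape/optical) device
HORIZON = 50       # simulated time units
INSERT_RATE = 10   # new records inserted per unit time (constant-insertion case)

def avg_retrieval_time(age_threshold: int) -> float:
    """Age-based archiving: records older than `age_threshold` live on tertiary storage."""
    total_time, total_queries = 0.0, 0.0
    for birth in range(HORIZON):
        for _ in range(INSERT_RATE):
            for t in range(birth, HORIZON):
                age = t - birth
                queries = LAMBDA0 * math.exp(-BETA * age)   # expected queries at this age
                device_time = T_SEC if age <= age_threshold else T_TER
                total_time += queries * device_time
                total_queries += queries
    return total_time / total_queries

# Crude one-dimensional search over the archiving age threshold.
best = min(range(HORIZON), key=avg_retrieval_time)
print(f"best age threshold ~ {best}, avg retrieval time ~ {avg_retrieval_time(best):.3f}")
```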

    LAGOVirtual: A Collaborative Environment for the Large Aperture GRB Observatory

    We present the LAGOVirtual Project: an ongoing project to develop a platform for collaboration within the Large Aperture GRB Observatory (LAGO). This continental-wide observatory is devised to detect the high-energy (around 100 GeV) component of Gamma Ray Bursts, using the single particle technique in arrays of Water Cherenkov Detectors (WCD) at high mountain sites (Chacaltaya, Bolivia, 5300 m a.s.l.; Pico Espejo, Venezuela, 4750 m a.s.l.; Sierra Negra, Mexico, 4650 m a.s.l.). The platform will allow the LAGO collaboration to share data and computing resources across its different sites. It can also generate synthetic data by simulating showers with the AIRES application, and store/preserve distributed data files collected by the WCD at the LAGO sites. The present article concerns the implementation of a LAGO-DR prototype adapting DSpace, with a hierarchical structure (i.e. country, then institution, followed by collections that contain the metadata and data files) for the captured/simulated data. This structure was generated using the community, sub-community, collection, item model available in the DSpace software. Each member institution/country of the project has the appropriate permissions on the system to publish information (descriptive metadata and associated data files). The platform can also associate multiple files with each data item (data from the instruments, graphics, post-processed data, etc.). Comment: Second EELA-2 Conference, Choroni, Venezuela, November 25th to 27th, 200
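    The country → institution → collection → item layout mentioned above can be pictured with the small sketch below; it uses plain Python dataclasses rather than the actual DSpace API, and the names, metadata fields and file names are illustrative assumptions, not the project's real schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Item:                        # one captured or simulated data set
    metadata: Dict[str, str]       # descriptive metadata (site, altitude, run date, ...)
    files: List[str]               # associated files: raw data, graphics, post-processed data

@dataclass
class Collection:
    name: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Community:                   # country (top level) or institution (sub-community)
    name: str
    subcommunities: List["Community"] = field(default_factory=list)
    collections: List[Collection] = field(default_factory=list)

# Hypothetical hierarchy: country -> institution -> collection -> items.
bolivia = Community("Bolivia", subcommunities=[
    Community("Example Institution", collections=[
        Collection("Chacaltaya WCD data", items=[
            Item(metadata={"site": "Chacaltaya", "altitude_m": "5300"},
                 files=["run_0001.dat", "run_0001.png"]),
        ]),
    ]),
])
```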

    BlogForever: D3.1 Preservation Strategy Report

    This report describes preservation planning approaches and strategies recommended by the BlogForever project as a core component of a weblog repository design. More specifically, we start by discussing why we would want to preserve weblogs in the first place and what exactly it is that we are trying to preserve. We then present a review of past and present work and highlight why current practices in web archiving do not adequately address the needs of weblog preservation. We make three distinct contributions in this volume: a) we propose transferable practical workflows for applying a combination of established metadata and repository standards in developing a weblog repository; b) we provide an automated approach to identifying significant properties of weblog content, based on the notion of communities, and discuss how this affects previous strategies; and c) we propose a sustainability plan that draws upon community knowledge through innovative repository design.

    Measuring usability for application software using the quality in use integration measurement model

    User interfaces of application software are designed to make user interaction as efficient and as simple as possible. Market acceptance of any application software is determined by the usability of its user interfaces. A poorly designed user interface will have little value no matter how powerful the program is. It is therefore important to measure usability during the system development lifecycle in order to avoid user disappointment. Various methods and standards that help measure usability have been developed; however, these methods define usability inconsistently, which makes software engineers hesitant to implement them. The Quality in Use Integrated Measurement (QUIM) model is a consolidated approach for measuring usability through 10 factors, 26 criteria, and 127 metrics. It is a hierarchical model that decomposes usability into factors, criteria, and metrics, which helps developers with little or no background in usability metrics. Among the 127 metrics of QUIM, essential efficiency (EE) is the most specific metric used to measure the usability of user interfaces through an equation. This study presents a comparative analysis of three case studies that use the QUIM model to measure usability in terms of EE: (1) a Public University Registration System, (2) a Restaurant Menu Ordering System, and (3) an ATM system. The comparison is based on the percentage of EE for each element of the use cases in each use case diagram. The results revealed that the user interface design for the Restaurant Menu Ordering System scored the highest percentage of EE, thus proving to be the most user-friendly application software among its counterparts.
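    For reference, the snippet below shows how essential efficiency is conventionally computed (EE = 100 × essential steps / enacted steps, following Constantine and Lockwood's definition); the step counts are invented for illustration and are not taken from the three case studies.

```python
def essential_efficiency(essential_steps: int, enacted_steps: int) -> float:
    """EE = 100 * (steps in the essential use-case narrative) / (steps the user must enact)."""
    return 100.0 * essential_steps / enacted_steps

# Hypothetical "place order" use case: 4 essential steps,
# but the concrete interface requires 5 user actions.
print(f"EE = {essential_efficiency(4, 5):.1f}%")   # -> EE = 80.0%
```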

    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23 through 25, 1991 at the NASA/Goddard Space Flight Center, are presented. The program included a keynote address, invited technical papers, and selected technical presentations providing a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Distributed Computing Grid Experiences in CMS

    The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access from anywhere in the world to the data, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.
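    As a rough back-of-the-envelope illustration of what a sustained 25 Hz reconstruction rate implies for data distribution (the per-event size below is an assumed placeholder, not a figure from the paper):

```python
# Back-of-the-envelope throughput for a sustained 25 Hz reconstruction rate.
EVENT_RATE_HZ = 25
SECONDS_PER_DAY = 24 * 3600
EVENT_SIZE_MB = 1.5          # assumed placeholder, not from the paper

events_per_day = EVENT_RATE_HZ * SECONDS_PER_DAY
data_per_day_tb = events_per_day * EVENT_SIZE_MB / 1e6

print(f"{events_per_day:,} events/day, ~{data_per_day_tb:.1f} TB/day to distribute")
```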