
    Use of HSM with Relational Databases

    Hierarchical storage management (HSM) systems have evolved into a critical component of large information storage operations. They are built on the concept of using a hierarchy of storage technologies to balance performance and cost. In general, they migrate data from expensive high-performance storage to inexpensive low-performance storage based on frequency of use. The predominant usage characteristic is that frequency of use declines with age, in most cases quite rapidly. As a result, HSM provides an economical means of managing and storing massive volumes of data. Inherent in HSM systems is system-managed storage, where the system performs most of the work with minimal involvement from operations personnel. This automation is generally extended to include backup and recovery, data duplexing to provide high availability, and catastrophic recovery through the use of off-site storage.
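    The age-based migration policy described above can be sketched in a few lines. This is a minimal illustration, not an implementation from the paper: the tier names, thresholds, and `FileRecord` structure are all assumptions chosen for the example.

    ```python
    # Hypothetical sketch of an HSM migration policy: data "cools" with age,
    # so files whose time since last access exceeds their tier's threshold
    # are demoted one step toward cheaper, slower storage.
    from dataclasses import dataclass

    TIERS = ["ssd", "disk", "tape"]                # fast/expensive -> slow/cheap
    DEMOTE_AFTER_DAYS = {"ssd": 30, "disk": 180}   # illustrative age thresholds

    @dataclass
    class FileRecord:
        name: str
        tier: str
        days_since_access: int

    def migrate(files):
        """Demote each file one tier when it exceeds its tier's age threshold."""
        for f in files:
            limit = DEMOTE_AFTER_DAYS.get(f.tier)
            if limit is not None and f.days_since_access > limit:
                f.tier = TIERS[TIERS.index(f.tier) + 1]
        return files

    files = [
        FileRecord("hot.dat", "ssd", 2),
        FileRecord("stale.dat", "ssd", 45),
        FileRecord("old.dat", "disk", 400),
    ]
    migrate(files)
    print([(f.name, f.tier) for f in files])
    # -> [('hot.dat', 'ssd'), ('stale.dat', 'disk'), ('old.dat', 'tape')]
    ```

    A real HSM system would also track frequency of use rather than age alone, and would drive the duplexing and backup steps mentioned in the abstract from the same automated policy engine.
    
    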

    Magnetic tape

    The move to visualization and image processing in data systems is increasing the demand for larger and faster mass storage systems. The technology of choice is magnetic tape. This paper briefly reviews the technology's past, present, and projected future. A case is made for standards and their value to users.

    The IEEE mass storage system reference model

    The IEEE Reference Model for Mass Storage Systems provides a basis for the development of standards for storage systems. The model identifies the high-level abstractions that underlie modern storage systems. The model itself does not attempt to provide implementation specifications. Its main purpose is to permit the development of individual standards within a common framework. High Energy Physics has consistently been on the leading edge of technology, and mass storage is no exception. This paper describes the IEEE MSS Reference Model in the HEP context and examines how it could be used to help solve the data management problems of HEP. (Originally published in CERN Yellow Report 94-06.) These are the notes from a series of lectures given at the 1993 CERN School of Computing. They have been extracted from the scanned PDF document, converted to MS Word using a free online tool, and then saved as PDF. No attempt has been made to correct typographical or other errors in the original text.

    NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, volume 1

    Papers and viewgraphs from the conference are presented. This conference served as a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disks and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe, among other things, integrated mass storage systems that are expected to become available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    High volume data storage architecture analysis

    A High Volume Data Storage Architecture Analysis was conducted. The results, presented in this report, will be applied to problems of high-volume data requirements such as those anticipated for the Space Station Control Center. High-volume data storage systems at several different sites were analyzed for archive capacity, storage hierarchy and migration philosophy, and retrieval capabilities. Proposed architectures were solicited from the sites selected for in-depth analysis. Model architectures for a hypothetical data archiving system, for a high-speed file server, and for high-volume data storage are attached.

    Stakeholder Status And The Effects On Product Usability

    Organizations dedicate considerable resources to developing software tools and often use usability studies to improve their products. Unfortunately, usability studies can be costly, and some companies use internal employees as participants in an effort to improve their product while controlling costs. However, the effects of using internal employees on the identification and implementation of usability study requirements are not yet understood. This field research was conducted at a Fortune 100 company and investigates the relationship between participants' organizational status and the implementation of usability requirements they identify for an Enterprise Storage Resource Management product. A theoretical model based on status characteristics theory is proposed. Regression analysis suggests that organizational status is a significant indicator of the likelihood of usability study requirement implementation. The organizational status of an individual should be carefully considered when soliciting product feedback.
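    The kind of regression the abstract refers to can be illustrated with a toy example. This sketch uses synthetic, hand-made data, not the study's dataset, and a simple least-squares fit rather than whatever model the authors actually used; the variable names are assumptions.

    ```python
    # Illustrative sketch (not the paper's data or model): regress whether a
    # requirement was implemented (0/1) on the participant's organizational
    # status score, then inspect the slope of the fitted line.
    import numpy as np

    # Synthetic observations: (status score, requirement implemented?)
    status = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5], dtype=float)
    implemented = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1], dtype=float)

    # Fit implemented ~ b0 + b1 * status via np.polyfit (degree-1 polynomial);
    # polyfit returns coefficients highest degree first.
    b1, b0 = np.polyfit(status, implemented, 1)
    print(f"slope={b1:.2f}, intercept={b0:.2f}")
    # A positive slope indicates that higher-status participants' requirements
    # were more likely to be implemented in this toy sample.
    ```

    For a binary outcome like this, a logistic regression would be the more conventional choice; the linear fit is used here only to keep the sketch dependency-light.
    
    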

    Building a COTS archive for satellite data

    The goal of the NOAA/NESDIS Active Archive was to provide a method of access to an online archive of satellite data. The archive had to manage and store the data, let users interrogate the archive, and allow users to retrieve data from the archive. Practical issues of the system design, such as implementation time, cost, and operational support, were examined in addition to the technical issues. There was a fixed window of opportunity to create an operational system, along with budget and staffing constraints. Therefore, the technical solution had to be designed and implemented subject to the constraints imposed by these practical issues. The NOAA/NESDIS Active Archive came online in July of 1994, meeting all of its original objectives.

    Database machines in support of very large databases

    Software database management systems were developed in response to the needs of early data processing applications. Database machine research developed as a result of certain performance deficiencies of these software systems. This thesis discusses the history of database machines designed to improve the performance of database processing and focuses primarily on the Teradata DBC/1012, the only successfully marketed database machine that supports very large databases today. Also reviewed is IBM's response to the performance needs of its database customers; this response has taken the form of improvements in both software and hardware support for database processing. In conclusion, an analysis is made of the future of database machines, in particular the DBC/1012, in light of recent IBM enhancements and its immense customer base.