17 research outputs found

    Goddard Conference on Mass Storage Systems and Technologies, volume 2

    Papers and viewgraphs from the conference are presented. Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low-end helical scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing (file, granule, data set, or some similar object) as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

    Mission and data operations IBM 360 user's guide

    The Mission and Data Operations (M and DO) IBM 360 computer systems are introduced. The hardware and software status is discussed, along with standard processors and user libraries. Data management techniques are presented, as well as machine independence, debugging facilities, and overlay considerations.

    Two relational DBMS: a comparison

    Call number: LD2668 .R4 CMSC 1987 G37. Master of Science, Computing and Information Science.

    z/OS Internet Integration


    A shared-disk parallel cluster file system

    Dissertation submitted for the degree of Doctor of Informatics at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

    Today, clusters are the de facto cost-effective platform both for high-performance computing (HPC) and for IT environments. HPC and IT are quite different environments, and their differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, which are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems, either general-purpose or shared-disk cluster file systems (CFSs). These specialised file systems perform very well in their target environments, provided that applications do not require certain lateral features: parallel file systems offer no file locking, and CFSs offer no high-performance writes over cluster-wide shared files. In brief, none of the above approaches provides high levels of reliability and performance to both worlds.

    Our pCFS proposal is a contribution towards changing this situation: the rationale is to take advantage of the best of both, the reliability of cluster file systems and the high performance of parallel file systems. We do not claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough for broad usage, e.g., traditional as well as HPC applications, support of clustered DBMS engines that may run over regular files, and video streaming. pCFS' main ideas include:

    · Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, was never used either in SAN-based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.

    · Fine-grain locking, whereby processes running across distinct nodes may define non-overlapping byte-range regions in a file (instead of locking the whole file) and access them in parallel, reading and writing over those regions at the infrastructure's full speed (provided that no major metadata changes are required).

    A prototype was built on top of GFS (a Red Hat shared-disk CFS): GFS' kernel code was slightly modified, and two kernel modules and a user-level daemon were added. In the prototype, fine-grain locking is fully implemented, and a cluster-wide coherent cache is maintained through the movement of data (page fragments) over the LAN. Our benchmarks for non-overlapping writers over a single file shared among processes running on different nodes show that pCFS' bandwidth is 2 times greater than NFS', while being comparable to that of the Parallel Virtual File System (PVFS), with both requiring about 10 times more CPU. pCFS' bandwidth also surpasses GFS' (600 times for small record sizes, e.g., 4 KB, decreasing to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.

    Funding: Lusitania, Companhia de Seguros S.A.; Programa IBM Shared University Research (SUR).
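    The fine-grain locking described above maps directly onto POSIX byte-range (advisory) locks. As a minimal sketch of the access pattern pCFS is designed to accelerate, not of pCFS' own kernel implementation, the following Python fragment has each of several writers lock and fill a disjoint byte range of one shared file; on pCFS the writers would run on distinct cluster nodes, and the path and region size here are hypothetical.

        # Non-overlapping writers over one shared file, coordinated with
        # POSIX byte-range locks. Path and region size are illustrative.
        import fcntl
        import os
        from multiprocessing import Process

        PATH = "/mnt/shared/bigfile"   # hypothetical cluster-FS mount
        REGION = 4 * 1024 * 1024       # 4 MB per writer

        def writer(rank: int) -> None:
            fd = os.open(PATH, os.O_RDWR | os.O_CREAT, 0o644)
            start = rank * REGION
            # Exclusive advisory lock on this writer's range only; ranks
            # lock disjoint ranges, so none of them ever blocks another.
            fcntl.lockf(fd, fcntl.LOCK_EX, REGION, start, os.SEEK_SET)
            os.pwrite(fd, bytes([rank % 256]) * REGION, start)
            fcntl.lockf(fd, fcntl.LOCK_UN, REGION, start, os.SEEK_SET)
            os.close(fd)

        if __name__ == "__main__":
            procs = [Process(target=writer, args=(r,)) for r in range(4)]
            for p in procs:
                p.start()
            for p in procs:
                p.join()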

    Goddard Conference on Mass Storage Systems and Technologies, Volume 1

    Copies of nearly all of the technical papers and viewgraphs presented at the Goddard Conference on Mass Storage Systems and Technologies, held in September 1992, are included. The conference served as an informational exchange forum for topics primarily relating to the ingestion and management of massive amounts of data and the attendant problems (data ingestion rates now approach the order of terabytes per day). Discussion topics include the IEEE Mass Storage System Reference Model, data archiving standards, high-performance storage devices, magnetic and magneto-optic storage systems, magnetic and optical recording technologies, high-performance helical scan recording systems, and low-end helical scan tape drives. Additional topics addressed the evolution of the identifiable unit for processing purposes as data ingestion rates increase dramatically, and the present state of the art in mass storage technology.

    Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies, held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center, March 23-26, 1998. As one of an ongoing series, this conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.

    Integrating legacy mainframe systems: architectural issues and solutions

    For more than 30 years, mainframe computers have been the backbone of computing systems throughout the world; even today it is estimated that some 80% of the world's data is held on such machines. However, new business requirements and pressure from evolving technologies, such as the Internet, are pushing these existing systems to their limits, and they are reaching breaking point. The banking and financial sectors in particular have relied on mainframes the longest to do their business, and as a result it is they that feel these pressures the most. In recent years there have been various solutions for enabling the re-engineering of these legacy systems. It quickly became clear that completely rewriting them was not possible, so various integration strategies emerged. Of these, the CORBA standard from the Object Management Group emerged as the strongest, providing a standards-based solution that enables mainframe applications to become peers in a distributed computing environment. However, the requirements did not stop there: the mainframe systems were reliable, secure, scalable, and fast, so any integration strategy had to ensure that the new distributed systems did not lose any of these benefits. Various patterns, or general solutions to the problem of meeting these requirements, have arisen, and this research looks at applying some of these patterns to mainframe-based CORBA applications. The purpose of this research is to examine some of the issues involved in making mainframe-based legacy applications interoperate with newer object-oriented technologies.
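    A pattern that recurs in this kind of integration is the legacy wrapper (adapter): a thin object-oriented facade that exposes a fixed-format mainframe transaction as a typed method call, so that distributed clients never handle raw record layouts. The sketch below illustrates only the general pattern, in Python rather than CORBA IDL; the transaction code, field layout, and send_transaction transport are invented for the example.

        # Hypothetical legacy-wrapper facade over a record-oriented host
        # transaction. All names and field widths are illustrative.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Balance:
            account: str
            amount_cents: int

        class AccountFacade:
            """Exposes a fixed-format 'BAL' host transaction as a typed call."""

            def __init__(self, send_transaction: Callable[[str], str]):
                # send_transaction ships a raw request record to the host
                # (e.g., via a gateway) and returns the raw reply record.
                self._send = send_transaction

            def get_balance(self, account: str) -> Balance:
                request = f"BAL{account:>10}"      # 3-char TRX code + padded key
                reply = self._send(request)
                # Assumed reply layout: cols 0-9 account, cols 10-21 amount.
                return Balance(account=reply[0:10].strip(),
                               amount_cents=int(reply[10:22]))

        # Usage with a stub standing in for the real host transport:
        if __name__ == "__main__":
            stub = lambda record: record[3:13] + f"{123456:>12}"
            print(AccountFacade(stub).get_balance("ACC0042"))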

    Parallel replication for distributed video-on-demand systems.

    Lie, Wai-Kwok Peter. Thesis (M.Phil.)--Chinese University of Hong Kong, 1997. Includes bibliographical references (leaves 79-83).

    Contents:
    Abstract
    Acknowledgments
    Chapter 1. Introduction
    Chapter 2. Background & Related Work
        2.1 Early Work on Multimedia Servers
        2.2 Compression of Multimedia Data
        2.3 Multimedia File Systems
        2.4 Scheduling Support for Multimedia Systems
        2.5 Inter-media Synchronization
        2.6 Related Work on Replication in VOD Systems
    Chapter 3. System Model
    Chapter 4. Replication Methodology
        4.1 Replication Triggering Policy
        4.2 Source & Target Nodes Selection Policies
        4.3 Replication Policies
            4.3.1 Policy 1: Injected Sequential Replication
            4.3.2 Policy 2: Piggybacked Sequential Replication
            4.3.3 Policy 3: Injected Parallel Replication
            4.3.4 Policy 4: Piggybacked Parallel Replication
            4.3.5 Policy 5: Injected & Piggybacked Parallel Replication
            4.3.6 Policy 6: Multi-Source Injected & Piggybacked Parallel Replication
        4.4 Dereplication Policy
    Chapter 5. Distributed Architecture for VOD Server
        5.1 Server Node
        5.2 Movie Manager
        5.3 Metadata Manager
        5.4 Protocols for Distributed VOD Architecture
            5.4.1 Protocol for Servicing New Customers
            5.4.2 Protocol for Servicing Existing Customers
            5.4.3 Protocol for Single/Multi-Source Injected & Parallel Replication
            5.4.4 Protocol for Dereplication
        5.5 Failure Handling
            5.5.1 Handling of Server Node Failures
            5.5.2 Handling of Movie Manager Failures
    Chapter 6. Results
        6.1 Performance Metric
        6.2 Simulation Environment
        6.3 Results of Experiments without Dereplication
            6.3.1 Comparison of Different Replication Policies
            6.3.2 Effect of Early Acceptance/Migration
            6.3.3 Answer to the Resources Consumption Tradeoff Issue
            6.3.4 Effect of Varying Movie Popularity Skewness
            6.3.5 Effect of Varying Replication Threshold
            6.3.6 Comparison of Different Target Node Selection Policies
        6.4 Overall Impact of Dynamic Replication
    Chapter 7. Comparison with BSR-based Policy
    Chapter 8. Conclusions
        8.1 Summary
        8.2 Future Research Directions
    Bibliography
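    The contents above centre on popularity-driven dynamic replication of movies across server nodes, with a triggering policy, source/target node selection, and a replication threshold (Chapter 4 and Section 6.3.5). As a hedged sketch of what a threshold-based trigger of this general kind might look like, with every number and rule below invented for illustration rather than taken from the thesis:

        # Hypothetical threshold-based replication trigger for a
        # distributed VOD cluster; numbers and rules are illustrative.
        from dataclasses import dataclass, field

        @dataclass
        class Node:
            name: str
            capacity: int                  # max concurrent streams
            active: int = 0                # streams currently served
            movies: set = field(default_factory=set)

            def load(self) -> float:
                return self.active / self.capacity

        def maybe_replicate(movie, nodes, threshold=0.9):
            """Replicate when every holder of `movie` is near saturation;
            pick the least-loaded non-holder as the target node."""
            holders = [n for n in nodes if movie in n.movies]
            if not holders or min(n.load() for n in holders) < threshold:
                return None                # some holder still has headroom
            candidates = [n for n in nodes if movie not in n.movies]
            if not candidates:
                return None                # movie is already everywhere
            target = min(candidates, key=Node.load)
            target.movies.add(movie)       # the actual data copy is the slow
            return target                  # part; the policies differ in how

        if __name__ == "__main__":
            cluster = [Node("n0", 100, 95, {"movieA"}),
                       Node("n1", 100, 40), Node("n2", 100, 70)]
            print(maybe_replicate("movieA", cluster))  # replicates to n1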