
    Proceedings of the NSSDC Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications

    The proceedings of the National Space Science Data Center (NSSDC) Conference on Mass Storage Systems and Technologies for Space and Earth Science Applications, held July 23-25, 1991, at the NASA Goddard Space Flight Center, are presented. The program includes a keynote address, invited technical papers, and selected technical presentations, providing a broad forum for the discussion of a number of important issues in the field of mass storage systems. Topics include magnetic disk and tape technologies, optical disk and tape, software storage and file management systems, and experiences with the use of a large, distributed storage system. The technical presentations describe integrated mass storage systems that are expected to be available commercially. Also included is a series of presentations from Federal Government organizations and research institutions covering their mass storage requirements for the 1990s.

    Fourth NASA Goddard Conference on Mass Storage Systems and Technologies

    This report contains copies of all technical papers received in time for publication just prior to the Fourth Goddard Conference on Mass Storage Systems and Technologies, held March 28-30, 1995, at the University of Maryland University College Conference Center in College Park, Maryland. This series of conferences continues to serve as a unique medium for the exchange of information on topics relating to the ingestion and management of substantial amounts of data and the attendant problems involved. This year's discussion topics include new storage technology, stability of recorded media, performance studies, storage system solutions, the National Information Infrastructure (Infobahn), the future of storage technology, and lessons learned from various projects. There will also be an update on the IEEE Mass Storage System Reference Model Version 5, on which the final vote was taken in July 1994.

    High Availability and Scalability of Mainframe Environments using System z and z/OS as example

    Mainframe computers are the backbone of industrial and commercial computing, hosting the most relevant and critical data of businesses. One of the most important mainframe environments is IBM System z with the operating system z/OS. This book introduces the mainframe technology of System z and z/OS with respect to high availability and scalability, and highlights how these properties are provided at different levels of the hardware and software stack to satisfy the needs of large IT organizations.

    A comparison of management of virtual machines with z/VM and ESX Server

    Virtualization and virtual machines are becoming more and more important for businesses. By consolidating many servers to run as virtual machines on a single host, companies can save considerable amounts of money, as sketched below. The savings come from better utilization of the hardware and from having less hardware that needs maintenance. There are several products for virtualization, and different methods to achieve it. This thesis focuses on comparing VMware ESX Server and z/VM. These products are quite different and run on different hardware. The primary focus of the comparison is the management of the two products. (Master's thesis in Network and System Administration.)
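
    The consolidation argument is easy to make concrete. The back-of-envelope sketch below is not from the thesis; all utilization figures are invented for illustration. It shows why packing lightly loaded servers onto shared hosts cuts the hardware count so sharply:

        # Illustrative consolidation arithmetic -- every number here is
        # an assumption, not a measurement from the thesis.
        import math

        physical_servers = 40   # legacy one-app-per-box servers
        avg_utilization = 0.10  # assumed average CPU load per server
        host_ceiling = 0.70     # target utilization per virtualization host

        # Total demand in "fully busy server" equivalents.
        demand = physical_servers * avg_utilization        # 4.0
        hosts_needed = math.ceil(demand / host_ceiling)    # 6
        print(f"{physical_servers} servers -> {hosts_needed} hosts")

    Fewer hosts means less hardware to buy, power, and maintain, which is exactly where the savings described above come from.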

    Queuing network models and performance analysis of computer systems

    A shared-disk parallel cluster file system

    Dissertation presented to obtain the degree of Doctor of Informatics at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia.

    Today, clusters are the de facto cost-effective platform both for high performance computing (HPC) and for IT environments. HPC and IT are quite different environments, and their differences include, among others, their choices of file systems and storage: HPC favours parallel file systems geared towards maximum I/O bandwidth, which are not fully POSIX-compliant and were devised to run on top of (fault-prone) partitioned storage; conversely, IT data centres favour both external disk arrays (to provide highly available storage) and POSIX-compliant file systems (either general-purpose or shared-disk cluster file systems, CFSs). These specialised file systems perform very well in their target environments, provided that applications do not require certain lateral features, e.g., file locking on parallel file systems, or high-performance writes over cluster-wide shared files on CFSs. In brief, none of the above approaches solves the problem of providing high levels of reliability and performance to both worlds.

    Our pCFS proposal is a contribution towards changing this situation: the rationale is to take advantage of the best of both, the reliability of cluster file systems and the high performance of parallel file systems. We do not claim to provide the absolute best of each, but we aim at full POSIX compliance, a rich feature set, and levels of reliability and performance good enough for broad usage, e.g., traditional as well as HPC applications, support of clustered DBMS engines that may run over regular files, and video streaming. pCFS' main ideas include:

    · Cooperative caching, a technique that has been used in file systems for distributed disks but, as far as we know, was never used either in SAN-based cluster file systems or in parallel file systems. As a result, pCFS may use all infrastructures (LAN and SAN) to move data.

    · Fine-grain locking, whereby processes running across distinct nodes may define non-overlapping byte-range regions in a file (instead of locking the whole file) and access them in parallel, reading and writing over those regions at the infrastructure's full speed (provided that no major metadata changes are required).

    A prototype was built on top of GFS (a Red Hat shared-disk CFS): GFS' kernel code was slightly modified, and two kernel modules and a user-level daemon were added. In the prototype, fine-grain locking is fully implemented, and a cluster-wide coherent cache is maintained through data (page fragment) movement over the LAN. Our benchmarks for non-overlapping writers over a single file shared among processes running on different nodes show that pCFS' bandwidth is 2 times greater than NFS' while being comparable to that of the Parallel Virtual File System (PVFS), both requiring about 10 times more CPU. pCFS' bandwidth also surpasses GFS' (600 times for small record sizes, e.g., 4 KB, decreasing to 2 times for large record sizes, e.g., 4 MB), at about the same CPU usage.

    Funded by Lusitania, Companhia de Seguros S.A., and the IBM Shared University Research (SUR) program.
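
    The fine-grain locking idea maps naturally onto POSIX byte-range locks. The sketch below is not pCFS code (pCFS implements this cluster-wide, inside GFS, via kernel modules); it only shows the single-node POSIX primitive that pCFS generalizes, with the file path, region size, and payload invented for illustration:

        import fcntl
        import os

        REGION = 4096  # bytes per writer's region (illustrative)

        def write_region(path, index, payload):
            """Lock one non-overlapping byte range and write into it.

            Each writer locks only its own region, so writers on distinct
            regions of the same file proceed in parallel -- the behaviour
            pCFS extends across cluster nodes.
            """
            fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
            try:
                offset = index * REGION
                # Exclusive lock on this byte range only, not the whole file.
                fcntl.lockf(fd, fcntl.LOCK_EX, REGION, offset, os.SEEK_SET)
                try:
                    os.pwrite(fd, payload[:REGION], offset)
                finally:
                    fcntl.lockf(fd, fcntl.LOCK_UN, REGION, offset, os.SEEK_SET)
            finally:
                os.close(fd)

        # Writer number 3 owns bytes [12288, 16384) of the shared file.
        write_region("/mnt/shared/data.bin", 3, b"x" * REGION)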

    Cyber-Physical Robustness Enhancement Strategies for Demand Side Energy Systems

    An integrated Cyber-Physical System (CPS) realizes two-way communication between end-users and power generation, in which customers can actively re-shape their consumption profiles to improve the energy efficiency of the grid. However, large-scale deployment of distributed assets and advanced communication infrastructures also increases the risks of grid operation. This thesis aims to enhance the robustness of the entire demand-side system in a cyber-physical environment and develops comprehensive strategies for outage energy management (i.e., community-level scheduling and appliance-level energy management), communication infrastructure development, and cybersecurity controls that counter virus attacks. All these aspects improve the demand-side system's self-supply capability and operational robustness under extreme conditions and dangerous scenarios. The research contributing to this thesis builds a general scheme to enhance the robustness of the CPS demand-side energy system with respect to outages, communication network layouts, and virus intrusions.

    Under a system outage, two layers maximize the duration of self-powered supply in extreme conditions. The study first proposes a resilient energy management system for residential communities (CEMS) that schedules and coordinates the battery energy storage system and the energy consumption of houses/units. It then proposes a hierarchical resilient energy management system (EMS) that fully considers appliance-level local scheduling, taking customer satisfaction and lifestyle preferences into account to form the optimal outcome.

    To further enhance the robustness of the CPS system, a multi-hop wireless remote metering network model for the communication layout of the CPS demand side is proposed. This decreases the number of data centers needed on the demand side and reduces both the security risk of communication and the infrastructure cost of the smart grid for residential energy management. A novel evolutionary aggregation algorithm (EAA) is proposed to obtain the minimum number and the locations of the local data centers required to connect all smart meters.

    Finally, the potential for virus attacks is also studied. A trade-off strategy for confronting viruses in a system with numerous network nodes is proposed: the allocation of antivirus programs is studied so as to avoid system crashes and minimize potential damage, and a DOWNHILL-TRADE OFF algorithm is proposed to find an appropriate allocation strategy under the time evolution of the expected network state. Simulations are conducted using data from the Smart Grid, Smart City national demonstration project trials.
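
    To make the outage-management layer concrete, here is a deliberately simplified sketch, not the thesis' CEMS formulation: appliance names, wattages, and priorities are all invented. It shows the core idea of shedding the lowest-priority loads so that a fixed battery budget lasts the whole outage window:

        # Greedy outage scheduler: keep the highest-priority appliances
        # whose combined draw fits the battery's sustainable output.
        # All figures are illustrative assumptions, not thesis data.

        def schedule_outage(appliances, battery_wh, outage_hours):
            """appliances: list of (name, watts, priority); higher = keep longer."""
            sustainable_w = battery_wh / outage_hours
            kept, draw = [], 0.0
            for name, watts, _prio in sorted(appliances, key=lambda a: -a[2]):
                if draw + watts <= sustainable_w:
                    kept.append(name)
                    draw += watts
            return kept, draw

        appliances = [
            ("fridge", 150, 10),  # highest priority
            ("router", 20, 8),
            ("lights", 60, 5),
            ("tv", 100, 2),       # lowest priority: shed first
        ]
        kept, draw = schedule_outage(appliances, battery_wh=2000, outage_hours=8)
        print(kept, f"sustained draw: {draw:.0f} W")  # tv is shed

    The thesis' hierarchical EMS additionally weighs customer satisfaction and lifestyle preferences, which a real scheduler would fold into the priority values.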

    Credibility-based Binary Feedback Model for Grid Resource Planning

    In commercial grids, grid service providers (GSPs) improve their profitability by maintaining the smallest possible set of resources that meets client demand. Their goal is to maximize profit by optimizing resource planning, and to achieve it they require feedback from clients in order to estimate demand for their service. The objective of this research is to develop an approach for building a useful value profile for a collection of heterogeneous grid clients. The approach uses binary feedback as the theoretical framework for building the value profile, which can serve as a proxy for a demand function representing clients' willingness to pay for grid resources. However, clients may require incentives to provide feedback and deterrents from selfish behavior, such as misrepresenting their true preferences to obtain superior services at lower cost. To address this concern, we use credibility mechanisms to detect untruthful feedback and penalize insincere or biased clients, and we use game theory to study how cooperation can emerge. In this dissertation, we propose the use of credibility-based binary feedback to build value profiles that GSPs can use to plan their resources economically. The use of value profiles aims to benefit both GSPs and clients, and helps to accelerate the adoption of commercial grids.
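
    As a rough illustration of how credibility can down-weight untruthful binary feedback, consider the sketch below. It is a hypothetical toy model, not the dissertation's mechanism: the aggregation rule, the credibility update, and all names are invented.

        # Toy credibility-weighted binary feedback. Each client reports
        # 1 (willing to pay) or 0 at some price point; the provider
        # weights reports by credibility and erodes the credibility of
        # clients who repeatedly disagree with the weighted consensus.

        def weighted_consensus(reports, credibility):
            """reports: {client: 0 or 1} -> weighted share reporting 1."""
            total = sum(credibility[c] for c in reports)
            if total == 0:
                return 0.5
            return sum(credibility[c] * v for c, v in reports.items()) / total

        def update_credibility(reports, credibility, lr=0.2):
            consensus = round(weighted_consensus(reports, credibility))
            for client, vote in reports.items():
                agree = 1.0 if vote == consensus else 0.0
                # Pull credibility toward 1 on agreement, toward 0 otherwise.
                credibility[client] += lr * (agree - credibility[client])

        cred = {"a": 0.9, "b": 0.9, "c": 0.9}
        for _ in range(5):  # client "c" misreports every round
            update_credibility({"a": 1, "b": 1, "c": 0}, cred)
        print({k: round(v, 2) for k, v in cred.items()})  # c's weight decays

    In a GSP's resource-planning loop, the weighted consensus at each price point would then serve as the demand estimate feeding the value profile.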