246 research outputs found

    Adding Persistence to Main Memory Programming

    Unlocking the true potential of new persistent memories (PMEMs) requires eliminating traditional persistent I/O abstractions altogether by introducing persistent semantics directly into main memory programming. Such a programming model elevates failure atomicity to a first-class application property alongside in-memory data layout, concurrency control, and fault tolerance, and therefore requires a redesign of programming abstractions for both program correctness and maximum performance. To address these challenges, this thesis proposes a set of system software designs that integrate persistence with main memory programming and makes the following contributions. First, the thesis proposes a PMEM-aware I/O runtime, NVStream, that supports fast, durable streaming I/O. NVStream uses a memory-based I/O interface that integrates with an application's existing I/O data movement operations to accelerate persistent data writes. NVStream carefully designs its persistent storage layout and crash-consistency semantics to match both application and PMEM characteristics. Specifically, it leverages the streaming nature of I/O in HPC workflows to benefit from a log-structured PMEM storage engine that uses relaxed write orderings and append-only failure-atomic semantics to form strongly consistent application checkpoints. Furthermore, the thesis identifies that optimizing the I/O software stack exposes PMEM bandwidth limitations as a bottleneck during parallel HPC I/O writes, and proposes a novel data movement design, PHX. PHX uses alternative network data movement paths available in datacenters to ease the bandwidth pressure on the PMEM memory interconnects, all while maintaining the correctness of the persistent data. Next, the thesis explores the challenges and opportunities of using PMEM for true main-memory persistent programming: a single data domain for both runtime and persistent application state. Such a programming model requires maintaining ACID properties during each and every update to an application's persistent structures. ACID-qualified persistent programming for multi-threaded applications is hard, as the programmer has to reason about both crash-consistency and synchronization (crash-sync) semantics for program correctness. The thesis contributes new understanding of the correctness requirements for mixing different crash-consistency and synchronization protocols, characterizes the performance of different crash-sync realizations across applications and hardware architectures, and draws actionable insights for future designs of PMEM systems. Finally, application state stored on node-local persistent memory remains vulnerable to catastrophic node failures. The thesis therefore proposes a replicated persistent memory runtime, Blizzard, that supports truly fault-tolerant, concurrent, and persistent data-structure programming. Blizzard carefully integrates userspace networking with byte-addressable PMEM to form a fast persistent memory replication runtime. The design also incorporates a replication-aware crash-sync protocol that supports consistent and concurrent updates on persistent data structures. Blizzard gives applications the flexibility to use the data structures that best match their functional requirements, while delivering better performance and providing crucial reliability guarantees missing from existing persistent memory runtimes.
    Ph.D. thesis.
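    The log-structured, append-only failure-atomic write path this abstract describes can be illustrated with a short sketch. The code below is not NVStream's API; it is a minimal, hypothetical file-backed analogue in which os.fsync stands in for a PMEM flush-and-fence (e.g., CLWB plus SFENCE), and a per-record checksum makes each append failure-atomic without requiring any ordering among the bytes inside a record.

        # Hypothetical illustration only: an append-only log whose records commit
        # failure-atomically. Byte ordering inside a record is irrelevant because
        # recovery validates each record with a checksum and discards a torn tail.
        import os
        import struct
        import zlib

        HEADER = struct.Struct("<II")  # (payload_length, crc32 of payload)

        class AppendOnlyLog:
            def __init__(self, path):
                # O_APPEND keeps every write at the tail, matching a log-structured layout.
                self.fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_APPEND, 0o644)

            def append(self, payload: bytes) -> None:
                record = HEADER.pack(len(payload), zlib.crc32(payload)) + payload
                os.write(self.fd, record)  # bytes may reach the medium in any order
                os.fsync(self.fd)          # single durability point per record
                                           # (stand-in for a PMEM flush + fence)

            def recover(self) -> list:
                # Scan from the start; the first record whose checksum fails is a
                # torn, crash-interrupted append, and everything after it is dropped.
                records, offset = [], 0
                while True:
                    header = os.pread(self.fd, HEADER.size, offset)
                    if len(header) < HEADER.size:
                        break
                    length, crc = HEADER.unpack(header)
                    payload = os.pread(self.fd, length, offset + HEADER.size)
                    if len(payload) < length or zlib.crc32(payload) != crc:
                        break
                    records.append(payload)
                    offset += HEADER.size + length
                os.ftruncate(self.fd, offset)  # discard any torn tail before reuse
                return records

    In this simplified picture, a checkpoint commits as one such append: write ordering within the record stays relaxed, yet the checkpoint as a whole is either fully present or ignored after a crash, which is the property the abstract attributes to the append-only failure-atomic design.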

    Cargo Logistics Airlift Systems Study (CLASS). Volume 3: Cross impact between the 1990 market and the air physical distribution systems, book 1

    The interrelations between the infrastructure and the forecast future market are discussed. In addition, using forecasts of market growth as a base, future aircraft and air service concepts are evaluated.

    Optimization in Web Caching: Cache Management, Capacity Planning, and Content Naming

    Caching is fundamental to performance in distributed information retrieval systems such as the World Wide Web. This thesis introduces novel techniques for optimizing performance and cost-effectiveness in Web cache hierarchies. When requests are served by nearby caches rather than distant servers, server loads and network traffic decrease and transactions are faster. Cache system design and management, however, face extraordinary challenges in loosely-organized environments like the Web, where the many components involved in content creation, transport, and consumption are owned and administered by different entities. Such environments call for decentralized algorithms in which stakeholders act on local information and private preferences. In this thesis I consider problems of optimally designing new Web cache hierarchies and optimizing existing ones. The methods I introduce span the Web from point of content creation to point of consumption: I quantify the impact of content-naming practices on cache performance; present techniques for variable-quality-of-service cache management; describe how a decentralized algorithm can compute economically-optimal cache sizes in a branching two-level cache hierarchy; and introduce a new protocol extension that eliminates redundant data transfers and allows “dynamic” content to be cached consistently. To evaluate several of my new methods, I conducted trace-driven simulations on an unprecedented scale. This in turn required novel workload measurement methods and efficient new characterization and simulation techniques. The performance benefits of my proposed protocol extension are evaluated using two extraordinarily large and detailed workload traces collected in a traditional corporate network environment and an unconventional thin-client system. My empirical research follows a simple but powerful paradigm: measure on a large scale an important production environment’s exogenous workload; identify performance bounds inherent in the workload, independent of the system currently serving it; identify gaps between actual and potential performance in the environment under study; and finally devise ways to close these gaps through component modifications or through improved inter-component integration. This approach may be applicable to a wide range of Web services as they mature.
    Ph.D. dissertation, Computer Science and Engineering, University of Michigan.
    http://deepblue.lib.umich.edu/bitstream/2027.42/90029/1/kelly-optimization_web_caching.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/90029/2/kelly-optimization_web_caching.ps.bz
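    One idea above, eliminating redundant transfers by naming content rather than locations, can be sketched briefly. The code below is not the thesis's actual protocol extension; it is a hypothetical client-side cache that indexes payloads by a content digest (assumed to be advertised by the server), so byte-identical bodies reached through different or “dynamic” URLs are transferred over the network only once.

        # Hypothetical sketch of digest-based redundant-transfer elimination.
        # fetch_digest(url) is assumed to return the SHA-256 hex digest of the
        # current response body cheaply (e.g., via metadata); fetch_body(url)
        # performs the full payload transfer.
        import hashlib

        class DigestCache:
            def __init__(self, fetch_digest, fetch_body):
                self.fetch_digest = fetch_digest
                self.fetch_body = fetch_body
                self.bodies = {}  # content digest -> payload bytes

            def get(self, url: str) -> bytes:
                digest = self.fetch_digest(url)   # cheap metadata exchange
                cached = self.bodies.get(digest)
                if cached is not None:            # same bytes already held locally,
                    return cached                 # possibly fetched under another URL
                body = self.fetch_body(url)       # full transfer only on a true miss
                assert hashlib.sha256(body).hexdigest() == digest, "digest mismatch"
                self.bodies[digest] = body
                return body

    A real deployment would piggyback the digest on ordinary response headers rather than add a round trip; the point of the sketch is only that keying the cache by content, not by URL, removes redundant transfers that URL-based caching cannot see.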

    Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1

    The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set, with an emphasis on those missions that drive information systems technology; the existing NASA space-science operations infrastructure; and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization.

    Deep Underground Science and Engineering Laboratory - Preliminary Design Report

    The DUSEL Project has produced the Preliminary Design of the Deep Underground Science and Engineering Laboratory (DUSEL) at the rehabilitated former Homestake mine in South Dakota. The Facility design calls for, on the surface, two new buildings - one a visitor and education center, the other an experiment assembly hall - and multiple repurposed existing buildings. To support underground research activities, the design includes two laboratory modules and additional spaces at a level 4,850 feet underground for physics, biology, engineering, and Earth science experiments. On the same level, the design includes a Department of Energy-shepherded Large Cavity supporting the Long Baseline Neutrino Experiment. At the 7,400-foot level, the design incorporates one laboratory module and additional spaces for physics and Earth science efforts. With input from some 25 science and engineering collaborations, the Project has designed for critical experimental space and infrastructure needs, including space for a suite of multidisciplinary experiments in a laboratory whose projected life span is at least 30 years. From these, a critical suite of experiments is outlined, whose construction will be funded along with the facility. The Facility design permits expansion and evolution, as may be driven by future science requirements, and enables participation by other agencies. The design leverages South Dakota's substantial investment in facility infrastructure, risk retirement, and operation of its Sanford Laboratory at Homestake. The Project is planning education and outreach programs, and has initiated efforts to establish regional partnerships with underserved populations, including regional American Indian and rural populations.

    Data bases and data base systems related to NASA's Aerospace Program: A bibliography with indexes

    This bibliography lists 641 reports, articles, and other documents introduced into the NASA scientific and technical information system during the period January 1, 1981 through June 30, 1982. The directory was compiled to assist in the location of numerical and factual data bases and data base handling and management systems.

    Sixth Goddard Conference on Mass Storage Systems and Technologies Held in Cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems

    This document contains copies of those technical papers received in time for publication prior to the Sixth Goddard Conference on Mass Storage Systems and Technologies, which is being held in cooperation with the Fifteenth IEEE Symposium on Mass Storage Systems at the University of Maryland-University College Inn and Conference Center, March 23-26, 1998. As one of an ongoing series, this Conference continues to provide a forum for discussion of issues relevant to the management of large volumes of data. The Conference encourages all interested organizations to discuss long-term mass storage requirements and experiences in fielding solutions. Emphasis is on current and future practical solutions addressing issues in data management, storage systems and media, data acquisition, long-term retention of data, and data distribution. This year's discussion topics include architecture, tape optimization, new technology, performance, standards, site reports, and vendor solutions. Tutorials will be available on shared file systems, file system backups, data mining, and the dynamics of obsolescence.

    Data systems elements technology assessment and system specifications, issue no. 2

    The ability to satisfy the objectives of future NASA Office of Applications programs depends on technology advances in a number of areas of data systems. The hardware and software technologies of end-to-end systems (data processing elements through ground processing, dissemination, and presentation) are examined in terms of state of the art, trends, and projected developments in the 1980 to 1985 timeframe. Capability is considered in terms of elements that are either commercially available or that can be implemented from commercially available components with minimal development.