Memory Subsystems for Security, Consistency, and Scalability
In response to the continuous demand for processing ever larger datasets, as well as advances in next-generation memory technologies, researchers have been vigorously studying memory-driven computing architectures that allow data-intensive applications to access enormous amounts of pooled non-volatile memory. As applications interact with growing numbers of components and datasets, existing systems struggle to efficiently enforce the principle of least privilege for security. While non-volatile memory can retain data even after a power loss and allows for large main memory capacity, programmers must bear the burden of maintaining the consistency of program memory for fault tolerance, as well as handling huge datasets with traditional yet expensive memory management interfaces for scalability. Today's computer systems have become too sophisticated for existing memory subsystems to satisfy these design requirements. In this dissertation, we introduce three memory subsystems to address challenges in security, consistency, and scalability. Specifically, we propose SMVs to provide threads with fine-grained control over access privileges for a partially shared address space (security), NVthreads to allow programmers to easily leverage non-volatile memory with automatic persistence (consistency), and PetaMem to enable memory-centric applications to freely access memory beyond the traditional process boundary, with support for memory isolation and crash recovery (security, consistency, and scalability).
The Atomic Manifesto: a Story in Four Quarks
This report summarizes the viewpoints and insights gathered in the Dagstuhl Seminar on Atomicity in System Design and Execution, which was attended by 32 people from four different scientific communities: database and transaction processing systems, fault tolerance and dependable systems, formal methods for system design and correctness reasoning, and hardware architecture and programming languages. Each community presents its position in interpreting the notion of atomicity and the existing state of the art, and each community identifies scientific challenges that should be addressed in future work. In addition, the report discusses common themes across communities and strategic research problems that require multiple communities to team up for a viable solution.
The general theme of how to specify, implement, compose, and reason about extended and relaxed notions of atomicity is viewed as a key piece in coping with the pressing issue of building and maintaining highly dependable systems that comprise many components with complex interaction patterns.
AOP: Does it Make Sense? The Case of Concurrency and Failures
Concurrency and failures are fundamental problems in distributed computing. One likes to think that the mechanisms needed to address these problems can be separated from the rest of the distributed application: in modern words, these mechanisms could be aspectized. Does this, however, make sense? This paper relates an experience that conveys our initial and admittedly biased intuition that the answer is in general no. Except for simple academic examples, it is hard and even potentially dangerous to separate concurrency control and failure management from the actual application. We point out that (1) an aspect-oriented language can, much like a macro language, be beneficial for code factorization (but should be reserved to experienced programmers), and (2) concurrency and failures are particularly hard to aspectize because they are usually part of the phenomenon that objects should simulate. They are in this sense different from other concerns, such as tracing, which may be easier to aspectize.
Open Multithreaded Transactions: A Transaction Model for Concurrent Object-Oriented Programming
To read the abstract, please go to my PhD home page.
Efficient Logging in Non-Volatile Memory by Exploiting Coherency Protocols
Non-volatile memory (NVM) technologies such as PCM, ReRAM, and STT-RAM allow processors to write values directly to persistent storage at speeds significantly faster than previous durable media such as hard drives or SSDs. Many applications of NVM are built on a logging subsystem, which enables operations to appear to execute atomically and facilitates recovery from failures. Writes to NVM, however, pass through the processor's memory system, which can delay and reorder them, compromising the correctness and increasing the cost of logging algorithms.

Reordering arises from out-of-order execution in a CPU and from the inter-processor cache coherence protocol. By carefully considering the properties of these reorderings, this paper develops a logging protocol that requires only one round trip to non-volatile memory while avoiding expensive computations. We show how to extend the logging protocol to build a persistent set (hash map) that likewise requires only a single round trip to non-volatile memory for insertion, update, or deletion.
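To see why round trips matter, consider the conventional flush-and-fence pattern that such protocols improve upon. The C sketch below shows a naive log append that pays two persist round trips, one for the payload and one for the commit marker; the entry layout and function names are illustrative, not the paper's actual protocol, and it assumes an x86 CPU with CLWB support.

```c
#include <stdint.h>
#include <stddef.h>
#include <immintrin.h>   /* _mm_clwb, _mm_sfence (compile with -mclwb) */

#define CACHELINE 64

/* One log entry padded to a cache line so the payload and its commit
 * marker occupy a single line. Layout is illustrative. */
typedef struct {
    uint64_t payload;
    uint64_t valid;      /* commit marker: entry is complete once set */
    uint8_t  pad[CACHELINE - 2 * sizeof(uint64_t)];
} log_entry_t;

/* Flush every cache line covering [addr, addr+len) toward NVM, then
 * fence so subsequent stores are ordered after the flushed data. */
static void persist(const void *addr, size_t len) {
    uintptr_t p = (uintptr_t)addr & ~(uintptr_t)(CACHELINE - 1);
    for (; p < (uintptr_t)addr + len; p += CACHELINE)
        _mm_clwb((void *)p);
    _mm_sfence();
}

/* Naive append: persist the payload, then persist the commit marker,
 * so recovery never sees a marked-valid entry with a torn payload.
 * This costs two round trips; the paper's protocol exploits ordering
 * guarantees of the coherence protocol to get down to one. */
void log_append(log_entry_t *slot, uint64_t value) {
    slot->payload = value;
    persist(&slot->payload, sizeof slot->payload);  /* round trip 1 */
    slot->valid = 1;
    persist(&slot->valid, sizeof slot->valid);      /* round trip 2 */
}
```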
Static detection of anomalies in transactional memory programs
Dissertation presented to the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa in fulfillment of the requirements for the degree of Mestre em Engenharia Informática.

Transactional Memory (TM) is an approach to concurrent programming based on the transactional semantics borrowed from database systems. In this paradigm, a transaction is a sequence of actions that appears to execute in a single logical instant, as though it were the only one being executed at that moment. Unlike concurrent systems based on locks, TM does not enforce that a single thread performs the guarded operations. Instead, as in database systems, transactions execute concurrently, and the effects of a transaction are undone in case of a conflict, as though it had never happened. The advantages of TM are an easier and less error-prone programming model, and a potential increase in scalability and performance.

In spite of these advantages, TM is still a young and immature technology and has yet to become an established programming model. It still lacks the paraphernalia of tools and standards we have come to expect from a widely used programming paradigm. Testing and analysis techniques and algorithms for TM programs are also just starting to be addressed by the scientific community, making this leading research work in many of these aspects.

This work aims at statically identifying possible runtime anomalies in TM programs. We addressed both low-level data races in TM programs and high-level anomalies resulting from incorrect splitting of transactions.

We have defined and implemented an approach to detect low-level data races in TM programs by converting all memory transactions into monitor-protected critical regions, synchronized on a newly generated global lock, as sketched below. To validate the approach, we applied our tool to a set of tests, adapted from the literature, that contain well-documented errors.
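A minimal C sketch of that transformation, assuming pthreads; the lock name and the surrounding code are illustrative, not the tool's actual output. Once every transaction is rewritten this way, any off-the-shelf lock-based race detector can analyze the program: accesses to transactional data made outside the global lock show up as candidate data races.

```c
#include <pthread.h>

/* Newly generated global lock: every memory transaction in the
 * program is rewritten into a critical region guarded by this one
 * mutex, so standard lock-based race analysis applies. */
static pthread_mutex_t __tm_global_lock = PTHREAD_MUTEX_INITIALIZER;

static int shared_counter;   /* variable accessed inside transactions */
static int unprotected;      /* variable accessed with no protection  */

/* Original (conceptually):  atomic { shared_counter++; }
 * After the transformation: */
void *worker(void *arg) {
    pthread_mutex_lock(&__tm_global_lock);
    shared_counter++;              /* ordered: inside the critical region */
    pthread_mutex_unlock(&__tm_global_lock);

    unprotected++;                 /* flagged: write with no lock held */
    return arg;
}
```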
We have also defined and implemented a new approach to static detection of high-level concurrency anomalies in TM programs. This new approach works by conservatively tracing transactions and matching the interference between each consecutive pair of transactions against a set of defined anomaly patterns. Once again, the approach was validated with well-documented tests adapted from the literature.
Programming Persistent Memory
Beginning and experienced programmers alike will use this comprehensive guide to persistent memory programming. You will understand how persistent memory brings together several new software/hardware requirements and offers great promise for better performance and faster application startup times, a huge leap forward in byte-addressable capacity compared with current DRAM offerings. This revolutionary new technology gives applications significant performance and capacity improvements over existing technologies. It requires a new way of thinking and developing, which makes it highly disruptive to the IT/computing industry. The industry sectors that will benefit from this technology include, but are not limited to, in-memory and traditional databases, AI, analytics, HPC, virtualization, and big data. Programming Persistent Memory describes the technology and why it is exciting the industry. It covers the operating system and hardware requirements as well as how to create development environments using emulated or real persistent memory hardware. The book explains fundamental concepts; provides an introduction to persistent memory programming APIs for C, C++, JavaScript, and other languages; discusses RDMA with persistent memory; reviews security features; and presents many examples. Source code and examples that you can run on your own systems are included.

What You'll Learn:
- Understand what persistent memory is, what it does, and the value it brings to the industry
- Become familiar with the operating system and hardware requirements to use persistent memory
- Know the fundamentals of persistent memory programming: why it is different from current programming methods, and what developers need to keep in mind when programming for persistence
- Look at persistent memory application development by example using the Persistent Memory Development Kit (PMDK)
- Design and optimize data structures for persistent memory
- Study how real-world applications are modified to leverage persistent memory
- Utilize the tools available for persistent memory programming, application performance profiling, and debugging

Who This Book Is For: C, C++, Java, and Python developers, but also software, cloud, and hardware architects across a broad spectrum of sectors, including cloud service providers, independent software vendors, high-performance computing, artificial intelligence, data analytics, and big data.
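For a taste of the programming model the book teaches, here is a minimal sketch using PMDK's low-level libpmem library, one of the APIs the book covers. The file path is a placeholder for a DAX-mounted filesystem, and error handling is abbreviated.

```c
#include <stdio.h>
#include <string.h>
#include <libpmem.h>   /* PMDK low-level persistence API; link with -lpmem */

int main(void) {
    size_t mapped_len;
    int is_pmem;

    /* Map (creating if absent) a 4 KiB persistent-memory file. */
    char *addr = pmem_map_file("/mnt/pmem/example", 4096,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Store data with ordinary byte-addressable writes... */
    strcpy(addr, "hello, persistent memory");

    /* ...then flush it to the persistence domain. On real persistent
     * memory this uses user-space cache flushes; on a plain mmap'd
     * file it falls back to msync. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);

    pmem_unmap(addr, mapped_len);
    return 0;
}
```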
Providing Easy to Use and Fast Programming Support for Non-Volatile Memories
Non-Volatile Memory (NVM) technologies, such as 3D XPoint, offer DRAM-like performance and byte-addressable access to persistent data. NVMs promise an opportunity for fast, persistent data structures, and a wide range of applications stand to benefit from the performance potential of these technologies. These potential benefits are greatest when applications access NVM directly via load/store instructions rather than conventional file-based interfaces. Directly accessing NVM presents several challenges. In particular, applications need guaranteed consistency and safety semantics to protect their data structures in the face of system failures and programming errors. Implementing data structures that meet these requirements is challenging and error-prone. Existing methods for building persistent data structures require either in-depth code changes to an existing data structure or rewriting the data structure from scratch. Unfortunately, both of these methods are labor-intensive and error-prone. Failure-atomicity libraries and programming language extensions can simplify this task. However, all the proposed solutions either require pervasive changes to existing software or incur unacceptable overheads to runtime performance. As a result, porting legacy applications to leverage NVM is likely to be prohibitively difficult and time-consuming.

This dissertation first presents Breeze, an NVM toolchain that minimizes the changes necessary to enable legacy code to reap the benefits of directly accessing NVM. In contrast to PMDK and NVM-Direct, Breeze reduces the programming effort of porting Memcached and MongoDB by up to 2.8×, while providing equal or superior performance.

Second, it introduces NVHooks, a compiler that automatically annotates NVM accesses and avoids disruptive and error-prone changes to programs. NVHooks reduces the cost of these annotations by applying novel, NVM-specific optimizations to their placement. For our tested benchmarks, NVHooks matches the performance of hand-annotated code while minimizing programmer effort.

Finally, it presents Pronto, a new NVM library that reduces the programming effort required to add persistence to volatile data structures. Pronto uses asynchronous semantic logging (ASL) to allow adding persistence to existing volatile data structures (e.g., C++ Standard Template Library containers) with minor programming effort. ASL moves most durability code off the critical path. Our evaluation shows Pronto data structures outperform highly optimized NVM data structures by a large margin.
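The key idea behind semantic logging is to record which high-level operation ran, rather than the bytes it changed, and to replay those records after a crash. The C sketch below conveys that idea with a hypothetical append-only operation log in front of a trivial volatile structure; the names and layout are illustrative and are not Pronto's actual interface.

```c
#include <stdint.h>
#include <stddef.h>

/* Semantic log: each record names an operation and its argument,
 * not the memory bytes the operation modified. Illustrative layout;
 * in the real design the log lives in NVM and is persisted. */
typedef enum { OP_INSERT } op_t;

typedef struct { op_t op; uint64_t arg; } log_record_t;

typedef struct {
    log_record_t records[1024];
    size_t       tail;
} semantic_log_t;

/* Volatile structure: a running sum stands in for an STL container.
 * Only the log needs to be persistent, not the structure itself. */
typedef struct { uint64_t sum; } volatile_set_t;

void do_insert(semantic_log_t *log, volatile_set_t *s, uint64_t v) {
    /* In Pronto, persisting the record is handled by a background
     * thread, off the critical path (the "asynchronous" part); here
     * it is inlined for clarity. */
    log->records[log->tail++] = (log_record_t){ OP_INSERT, v };
    s->sum += v;   /* the volatile update itself is unchanged */
}

/* Crash recovery: rebuild the volatile structure by replaying the
 * semantic log from the beginning. */
void recover(const semantic_log_t *log, volatile_set_t *s) {
    s->sum = 0;
    for (size_t i = 0; i < log->tail; i++)
        if (log->records[i].op == OP_INSERT)
            s->sum += log->records[i].arg;
}
```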