Memory consistency directed cache coherence protocols for scalable multiprocessors
The memory consistency model, which formally specifies the behavior of the
memory system, is used by programmers to reason about parallel programs. From a
hardware design perspective, weaker consistency models permit various optimizations
in a multiprocessor system: this thesis focuses on designing and optimizing the cache
coherence protocol for a given target memory consistency model.
Traditional directory coherence protocols are designed to be compatible with the
strictest memory consistency model, sequential consistency (SC). When they are used
for chip multiprocessors (CMPs) that provide more relaxed memory consistency models,
such protocols turn out to be unnecessarily strict. Typically, this strictness comes at the
cost of scalability: the per-core storage required for sharer tracking grows with the number
of cores in today's CMPs, most of which are no longer sequentially consistent. The recent
convergence towards programming-language-based relaxed
memory consistency models has sparked renewed interest in lazy cache coherence
protocols. These protocols exploit synchronization information by enforcing coherence
only at synchronization boundaries via self-invalidation. As a result, such protocols do
not require sharer tracking, which benefits scalability. On the downside, such protocols
are only readily applicable to a restricted set of consistency models, such as Release
Consistency (RC), which expose synchronization information explicitly. In particular,
existing architectures with stricter consistency models (such as x86) cannot readily
make use of lazy coherence protocols without either: adapting the protocol to satisfy
the stricter consistency model; or changing the architecture’s consistency model to (a
variant of) RC, typically at the expense of backward compatibility. The first part of
this thesis explores both these options, with a focus on a practical approach satisfying
backward compatibility.
Because of the wide adoption of Total Store Order (TSO) and its variants in x86 and
SPARC processors, and the large body of existing parallel programs written for these architectures, we
first propose TSO-CC, a lazy cache coherence protocol for the TSO memory consistency
model. TSO-CC does not track sharers and instead relies on self-invalidation and
detection of potential acquires (in the absence of explicit synchronization) using
per-cache-line timestamps to efficiently and lazily satisfy the TSO memory consistency
model. Our results show that TSO-CC achieves, on average, performance comparable
to a MESI directory protocol, while TSO-CC’s storage overhead per cache line scales
logarithmically with increasing core count.
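To make the self-invalidation idea concrete, here is a deliberately simplified sketch (hypothetical names and structure, not the actual TSO-CC state machine) of a core that tracks the newest timestamp it has observed per writer and lazily drops a cached line when a newer write from that line's producer may have been missed.

# Illustrative sketch of timestamp-driven self-invalidation in a lazy
# coherence protocol (hypothetical simplification, not TSO-CC itself).

class CacheLine:
    def __init__(self, data, writer_id, write_ts):
        self.data = data
        self.writer_id = writer_id   # core that produced this value
        self.write_ts = write_ts     # writer's timestamp at the time of the write
        self.valid = True

class Core:
    def __init__(self, shared_llc):
        self.llc = shared_llc        # shared last-level cache (source of truth, assumed helper)
        self.l1 = {}                 # private L1: addr -> CacheLine
        self.last_seen_ts = {}       # writer_id -> newest timestamp observed so far

    def read(self, addr):
        line = self.l1.get(addr)
        if line is not None and line.valid:
            # Potential acquire: if we have since observed a newer timestamp from
            # this writer, our copy may be stale, so self-invalidate it lazily.
            newest = self.last_seen_ts.get(line.writer_id, line.write_ts)
            if newest <= line.write_ts:
                return line.data     # still considered up to date
            line.valid = False       # drop the potentially stale copy
        # Miss (or self-invalidated): refetch from the shared cache.
        fresh = self.llc.fetch(addr)
        self.last_seen_ts[fresh.writer_id] = max(
            self.last_seen_ts.get(fresh.writer_id, 0), fresh.write_ts)
        self.l1[addr] = fresh
        return fresh.data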
Next, we propose an approach for the x86-64 architecture, which is a compromise
between retaining the original consistency model and using a more storage efficient
lazy coherence protocol. First, we propose a mechanism to convey synchronization
information via a simple ISA extension, while retaining backward compatibility with
legacy codes and older microarchitectures. Second, we propose RC3 (based on TSO-CC),
a scalable cache coherence protocol for RCtso, the resulting memory consistency
model. RC3 does not track sharers and relies on self-invalidation on acquires. To
satisfy RCtso efficiently, the protocol reduces self-invalidations transitively using per-L1
timestamps only. RC3 outperforms a conventional lazy RC protocol by 12%, achieving
performance comparable to a MESI directory protocol for RC-optimized programs.
RC3’s storage overhead per cache line scales logarithmically with increasing core count
and reduces on-chip coherence storage overheads by 45% compared to TSO-CC.
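The following toy sketch illustrates the flavour of reducing acquire-time self-invalidations using per-L1 timestamps only (again with hypothetical names, not the real RC3 design): an acquire that synchronizes with a release triggers self-invalidation only if the releasing core's timestamp has advanced since the last acquire from it.

# Sketch: acquire-time self-invalidation filtered by per-L1 timestamps
# (hypothetical simplification of the idea, not the actual RC3 protocol).

class AcquireFilter:
    def __init__(self, num_cores):
        self.seen_ts = [0] * num_cores        # last timestamp observed per remote L1

    def on_acquire(self, releaser_id, releaser_ts, l1_cache):
        """Called when this core performs an acquire that synchronizes with a
        release from core `releaser_id` carrying timestamp `releaser_ts`."""
        if releaser_ts <= self.seen_ts[releaser_id]:
            return                             # nothing new from that core: skip self-invalidation
        self.seen_ts[releaser_id] = releaser_ts
        l1_cache.invalidate_shared_lines()     # drop potentially stale shared copies (assumed helper)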
Finally, it is imperative that hardware adheres to the promised memory consistency
model. Consistency-directed coherence protocols cannot be verified against conventional
coherence definitions (e.g. SWMR), and few existing verification methodologies apply.
Furthermore, since the full consistency model serves as the specification, the protocol's
interaction with other system components (e.g. the pipeline) must not be neglected
during verification. Verifying a system with such protocols in the context of its
interacting components is therefore even more important than before. One
common way to do this is by executing tests, in which instruction sequences are generated
for specific threads and their executions are checked for adherence to the consistency
model. It would be extremely beneficial to execute such tests under simulation,
i.e. when the functional design implementation of the hardware is being prototyped.
Most prior verification methodologies, however, target post-silicon environments and
would be too slow when used for simulation-based memory consistency verification.
We propose McVerSi, a test generation framework for fast memory consistency
verification of a full-system design implementation under simulation. Our primary
contribution is a Genetic Programming (GP) based approach to memory consistency test
generation, which relies on a novel crossover function that prioritizes memory operations
contributing to non-determinism, thereby increasing the probability of uncovering
memory consistency bugs. To guide tests towards exercising as much logic as possible,
the simulator’s reported coverage is used as the fitness function. Furthermore, we
increase test throughput by making the test workload simulation-aware. We evaluate
our proposed framework using the Gem5 cycle-accurate simulator in full-system mode
with Ruby (with configurations that use Gem5’s MESI protocol, and our proposed
TSO-CC together with an out-of-order pipeline). We discover 2 new bugs in the MESI
protocol due to the faulty interaction of the pipeline and the cache coherence protocol,
highlighting that even conventional protocols should be verified rigorously in the
context of a full-system. Crucially, these bugs would not have been discovered through
individual verification of the pipeline or the coherence protocol. We study 11 bugs
in total. Our GP-based test generation approach finds all bugs consistently, thereby
providing much stronger guarantees than alternative approaches (pseudo-random test
generation and litmus tests).
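To give a flavour of the GP-based approach, the toy sketch below shows a crossover that biases cut points toward memory operations likely to contribute to non-determinism; the scoring heuristic and all names are illustrative assumptions rather than the McVerSi implementation.

import random

# Toy sketch of a non-determinism-aware crossover for memory-consistency
# test generation (illustrative only; not the McVerSi crossover itself).

def racy_score(op, shared_addrs):
    """Heuristic weight: accesses to shared addresses (especially stores)
    contribute most to non-determinism."""
    if op["addr"] not in shared_addrs:
        return 0.1
    return 2.0 if op["kind"] == "store" else 1.0

def crossover(parent_a, parent_b, shared_addrs, rng=random):
    """Splice two operation sequences, preferring cut points next to
    operations with high non-determinism scores."""
    def pick_cut(ops):
        weights = [racy_score(op, shared_addrs) for op in ops]
        return rng.choices(range(1, len(ops)), weights=weights[1:], k=1)[0]
    cut_a = pick_cut(parent_a)
    cut_b = pick_cut(parent_b)
    return parent_a[:cut_a] + parent_b[cut_b:]

# Example: operations are dicts like {"kind": "store", "addr": 0x40, "value": 1}.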
Object replication in a distributed system
A number of techniques have been proposed for the construction of fault-tolerant
applications. One of these techniques is to replicate vital system resources so that, if one
copy fails, sufficient copies may still remain operational to allow the application to
continue to function. Interactions with replicated resources are inherently more complex
than non-replicated interactions, and hence some form of replication transparency is
necessary. This may be achieved by employing replica consistency protocols to mask replica
failures and maintain consistency of state between functioning replicas.
To achieve consistency between replicas, it is necessary to ensure that all replicas
receive the same set of messages in the same order, despite failures at the senders and
receivers. This can be accomplished by making use of order-preserving reliable
communication protocols. However, we shall show how it can be more efficient to use
unordered reliable communication and to impose ordering at the application level, by
making use of syntactic knowledge of the application.
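The sketch below illustrates, under assumed names and a toy operation format, how ordering can be imposed at the application level on top of unordered reliable delivery: operations known (syntactically) not to modify state are applied as they arrive, while state-modifying operations are buffered and applied in an application-assigned sequence.

# Sketch: imposing order at the application level over unordered reliable
# delivery, using syntactic knowledge of which operations modify state.
# (Illustrative assumption: each operation carries an id, a modifies_state
# flag, and, for writes, an application-assigned sequence number.)

class ReplicaState:
    def __init__(self):
        self.applied = set()          # ids of operations already applied
        self.pending_writes = []      # writes buffered until their turn
        self.next_write_seq = 0       # application-level sequence for writes

    def deliver(self, op):
        """Called whenever the unordered reliable channel delivers an operation."""
        if op["id"] in self.applied:
            return                                    # duplicate delivery
        if not op["modifies_state"]:
            self.apply(op)                            # read-only ops commute: apply immediately
            return
        self.pending_writes.append(op)
        self.pending_writes.sort(key=lambda o: o["seq"])
        # Apply writes strictly in their application-assigned sequence order.
        while self.pending_writes and self.pending_writes[0]["seq"] == self.next_write_seq:
            self.apply(self.pending_writes.pop(0))
            self.next_write_seq += 1

    def apply(self, op):
        self.applied.add(op["id"])
        # ... perform the operation against the replica's object state ...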
This thesis develops techniques for replicating objects: in general this is harder than
replicating data, as objects (which can contain data) can contain calls on other objects.
Handling replicated objects is essentially the same as handling replicated computations,
and presents more problems than simply replicating data. We shall use the concept of the
object to provide transparent replication to users: a user will interact with only a single
object interface which hides the fact that the object is actually replicated.
The main aspects of the replication scheme presented in this thesis have been fully
implemented and tested. This includes the design and implementation of a replicated
object invocation protocol and the algorithms which ensure that (replicated) atomic
actions can manipulate replicated objects.
Research Studentship, Science and Engineering Research Council.
Esprit Project 2267 (Integrated Systems Architecture).
Towards Scalable Real-time Analytics: An Architecture for Scale-out of OLxP Workloads
We present an overview of our work on the SAP HANA Scale-out Extension, a novel distributed database architecture designed to support large-scale analytics over real-time data. This platform permits high-performance OLAP with massive scale-out capabilities, while concurrently allowing OLTP workloads. This dual capability enables analytics over real-time changing data and allows fine-grained, user-specified service level agreements (SLAs) on data freshness. We advocate the decoupling of core database components such as query processing, concurrency control, and persistence, a design choice made possible by advances in high-throughput, low-latency networks and storage devices. We provide full ACID guarantees and build on a logical timestamp mechanism to provide MVCC-based snapshot isolation, while not requiring synchronous updates of replicas. Instead, we use asynchronous update propagation, guaranteeing consistency via timestamp validation. We provide a view into the design and development of a large-scale data management platform for real-time analytics, driven by the needs of modern enterprise customers.
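A minimal sketch of the timestamp-validated snapshot read alluded to above, under assumed names and a toy data model (not the actual HANA implementation): a replica applies updates asynchronously in commit-timestamp order, and a snapshot read waits until the replica has caught up to the snapshot timestamp before selecting the visible version.

import threading

# Toy sketch: MVCC snapshot reads on an asynchronously updated replica,
# validated against a logical timestamp (illustrative assumption only).

class Replica:
    def __init__(self):
        self.applied_ts = 0               # newest commit timestamp applied on this replica
        self.versions = {}                # key -> list of (commit_ts, value)
        self.cv = threading.Condition()

    def apply_update(self, key, value, commit_ts):
        """Asynchronous propagation from the primary, applied in commit order."""
        with self.cv:
            self.versions.setdefault(key, []).append((commit_ts, value))
            self.applied_ts = commit_ts
            self.cv.notify_all()

    def snapshot_read(self, key, snapshot_ts, require_fresh=True):
        """Return the value visible at snapshot_ts; if the replica lags and the
        freshness SLA demands it, block until it has applied up to snapshot_ts."""
        with self.cv:
            if require_fresh:
                self.cv.wait_for(lambda: self.applied_ts >= snapshot_ts)
            visible = [(ts, v) for ts, v in self.versions.get(key, []) if ts <= snapshot_ts]
            return max(visible, default=(0, None))[1]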
Scalable and fault-tolerant data stream processing on multi-core architectures
With increasing data volumes and velocity, many applications are shifting from the classical “process-after-store” paradigm to a stream processing model: data is produced and consumed as continuous streams. Stream processing captures latency-sensitive applications as diverse as credit card fraud detection and high-frequency trading. These applications are expressed as queries of algebraic operations (e.g., aggregation) over the most recent data using windows, i.e., finite evolving views over the input streams. To guarantee correct results, streaming applications require precise window semantics (e.g., temporal ordering) for operations that maintain state.
While high processing throughput and low latency are performance desiderata for stateful streaming applications, achieving both poses challenges. Computing the state of overlapping windows causes redundant aggregation operations: incremental execution (i.e., reusing previous results) reduces latency but prevents parallelization; at the same time, parallelizing window execution for stateful operations with precise semantics demands ordering guarantees and state access coordination. Finally, streams and state must be recovered to produce consistent and repeatable results in the event of failures.
Given the rise of shared-memory multi-core CPU architectures and high-speed networking, we argue that it is possible to address these challenges in a single node without compromising window semantics, performance, or fault-tolerance. In this thesis, we analyze, design, and implement stream processing engines (SPEs) that achieve high performance on multi-core architectures. To this end, we introduce new approaches for in-memory processing that address the previous challenges: (i) for overlapping windows, we provide a family of window aggregation techniques that enable computation sharing based on the algebraic properties of aggregation functions; (ii) for parallel window execution, we balance parallelism and incremental execution by developing abstractions for both and combining them in a novel design; and (iii) for reliable single-node execution, we enable strong fault-tolerance guarantees without sacrificing performance by reducing the required disk I/O bandwidth using a novel persistence model. We combine the above to implement an SPE that processes hundreds of millions of tuples per second with sub-second latencies. These results reveal the opportunity to reduce resource and maintenance footprint by replacing cluster-based SPEs with single-node deployments.
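As a small illustration of exploiting algebraic properties to share work across overlapping windows, the sketch below maintains a sliding-window sum incrementally because sum is invertible; the class and parameters are hypothetical, not the engine built in the thesis.

from collections import deque

# Sketch: incremental sliding-window aggregation for an invertible function
# (sum), reusing work across overlapping windows instead of recomputing them.

class SlidingSum:
    def __init__(self, window_size, slide):
        self.window_size = window_size   # window length in tuples
        self.slide = slide               # tuples between successive results
        self.buffer = deque()
        self.running_sum = 0
        self.since_last_emit = 0

    def insert(self, value):
        self.buffer.append(value)
        self.running_sum += value        # add the new tuple's contribution
        if len(self.buffer) > self.window_size:
            self.running_sum -= self.buffer.popleft()   # invert the expired tuple
        self.since_last_emit += 1
        if len(self.buffer) == self.window_size and self.since_last_emit >= self.slide:
            self.since_last_emit = 0
            return self.running_sum      # emit the aggregate for this window
        return None                      # no window result due yet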
True shared memory architecture for next-generation multi-GPU systems
Machine learning (ML) is now omnipresent in all spheres of life. The use of deep neural networks (DNNs) for ML has gained popularity over the past few years. This is because DNNs are capable of efficiently solving complex problems such as image processing, object detection, language processing, etc. To train these DNN workloads, graphics processing units (GPUs) have become the most widely used platform. A GPU can support a large number of parallel threads that execute simultaneously to achieve a very high throughput. However, as the sizes of the DNN workloads grow, a single GPU is no longer adequate to provide fast training, and developers resort to using multi-GPU (MGPU) systems that can reduce the training time significantly. Consequently, to keep pace with the growth of DNN applications, GPU vendors are actively developing novel and efficient MGPU systems.
To better understand the challenges associated with designing MGPU systems for DNN workloads, in this thesis, we first present our efforts to understand the behavior of DNN workloads, in particular their training, on MGPU systems. Using the DNN workloads as benchmarks, we observe the evolution of MGPU system architecture. Based on our profiling and characterization of DNN workloads on existing high-performance MGPU systems, we identify the computation- and communication-intensiveness of the DNN workloads and the hardware- and software-level inefficiencies present in the existing MGPU systems. We find that data movement across multiple GPUs and the high cost of remote data access (leading to NUMA effects), data duplication and inefficient use of GPU memory (leading to memory capacity issues), and the complexity of programming MGPUs pose serious limitations on the execution of ever-scaling DNN workloads on MGPU systems.
To overcome the limitations of existing MGPU systems, we propose to unify the main memory of the GPUs to design an MGPU system with true shared memory (MGPU-TSM). Our proposed MGPU-TSM system demonstrates a significant performance boost (3.8× for a 4-GPU system) over the best-performing existing MGPU system, because it eliminates NUMA effects and the need for data duplication. To provide seamless data sharing across multiple GPUs and ease the programming of MGPU-TSM, we propose a lightweight coherence protocol called MGCC. MGCC is a timestamp-based protocol that provides both intra- and inter-GPU coherence. We implement a number of hardware features, including a unified memory controller, a request tracker, and a timestamp storage unit, to support MGCC. Using both standard and synthetic stress benchmarks, we evaluate the MGPU-TSM system with MGCC under sequential as well as relaxed consistency. Our evaluation of a 4-GPU system using the MGPUSim simulator suggests that our proposed coherent MGPU system achieves up to 3.8× better performance than the current best-performing MGPU system, while the stress tests using synthetic benchmarks suggest that MGCC incurs up to 46.1% performance overhead.
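As a purely illustrative aside (assumed page size and mapping, not the MGPU-TSM design), a unified shared address space can be realized by interleaving pages across the GPUs' local memories, so any GPU can address any page without duplicating it.

# Sketch: page-interleaved mapping from a unified physical address space to
# the GPU whose local memory holds that page (illustrative assumption only).

PAGE_SIZE = 4096

def home_gpu(physical_addr, num_gpus):
    """GPU whose local memory backs the page containing physical_addr."""
    page = physical_addr // PAGE_SIZE
    return page % num_gpus

def local_offset(physical_addr, num_gpus):
    """Offset of the address within the owning GPU's local memory."""
    page = physical_addr // PAGE_SIZE
    return (page // num_gpus) * PAGE_SIZE + (physical_addr % PAGE_SIZE)

# Example: in a 4-GPU system, unified address 0x3000 lives on GPU 3 at local offset 0x0.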
Efficient Precise Dynamic Data Race Detection for CPU and GPU
Data races are notorious bugs. They introduce non-determinism into program behavior and complicate program semantics, making it challenging to debug parallel programs. To make parallel programming easier, efficient data race detection has been a research topic for the last decades. However, existing data race detectors either sacrifice precision or incur high overhead, limiting their applicability to real-world applications and scenarios. This dissertation proposes approaches to improve the performance of dynamic data race detection without undermining precision, by identifying and removing metadata redundancy dynamically. This dissertation also explores ways to make it practical to detect data races dynamically for GPU programs, which have a programming and execution model quite different from that of CPU workloads. Further, this dissertation shows how the structured synchronization model of GPU programs can simplify the algorithm design of data race detection for GPUs, and how the unique patterns in GPU workloads enable an efficient implementation of the algorithm, yielding a high-performance dynamic data race detector for GPU programs.
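For background, the sketch below shows a textbook vector-clock (happens-before) race check for a single shared variable; it is not the detector proposed in the dissertation, but it indicates the kind of per-variable metadata whose redundancy the dissertation targets.

# Sketch: vector-clock happens-before race check (standard textbook scheme,
# illustrative only; not the detector proposed in this dissertation).

class VarMeta:
    def __init__(self, num_threads):
        self.last_writes = [0] * num_threads   # clock of each thread's last write
        self.last_reads = [0] * num_threads    # clock of each thread's last read

class RaceChecker:
    def __init__(self, num_threads):
        self.clocks = [[0] * num_threads for _ in range(num_threads)]
        self.meta = {}

    def _meta(self, var):
        return self.meta.setdefault(var, VarMeta(len(self.clocks)))

    def on_write(self, tid, var):
        m, c = self._meta(var), self.clocks[tid]
        for t in range(len(c)):
            # A prior access by another thread not ordered before this write is a race.
            if t != tid and (m.last_writes[t] > c[t] or m.last_reads[t] > c[t]):
                print(f"race on {var}: thread {t} vs write by thread {tid}")
        c[tid] += 1
        m.last_writes[tid] = c[tid]

    def on_read(self, tid, var):
        m, c = self._meta(var), self.clocks[tid]
        for t in range(len(c)):
            if t != tid and m.last_writes[t] > c[t]:
                print(f"race on {var}: write by thread {t} vs read by thread {tid}")
        c[tid] += 1
        m.last_reads[tid] = c[tid]

    def on_sync(self, from_tid, to_tid):
        """Release->acquire edge: to_tid's clock joins from_tid's clock."""
        self.clocks[to_tid] = [max(a, b) for a, b in
                               zip(self.clocks[to_tid], self.clocks[from_tid])]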
Parallel-Architecture Simulator Development Using Hardware Transactional Memory
To address the need for a simpler parallel programming model, Transactional Memory (TM) has been developed and promises good parallel performance with easy-to-write parallel code. Unlike lock-based approaches, with TM programmers do not need to explicitly specify and manage the synchronization among threads. Instead, programmers simply mark code segments as transactions, and the TM system manages the concurrency control for them. TM can be implemented either in software (STM) or hardware (HTM). STMs are more flexible but suffer from serious performance overheads, whereas HTMs are faster but limited by hardware space constraints. We present an implementation of an HTM system, based on an existing protocol (Scalable-TCC), over a full-system simulator. We provide a memory system that allows for a configurable number of cache entries, associativity, cache-line size, and all the access timings in the memory hierarchy, combined with a powerful statistics system that provides all the information necessary to draw conclusions from the transactional executions. We evaluate our HTM system using applications that cover a wide range of transactional behaviours and demonstrate that it scales efficiently up to 32 processors.
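The programmer-facing model can be illustrated with a small software emulation (a sketch only; the system described above implements TM in hardware on a full-system simulator): a code region is marked as a transaction and transparently re-executed if it aborts.

import threading

# Conceptual sketch of the programmer-facing TM model: mark a region as a
# transaction and let the TM system handle concurrency control. Emulated
# here with a global lock and a retry loop purely for illustration; conflict
# detection in an HTM happens in hardware, not via this lock.

class ConflictAbort(Exception):
    """Raised by a (hypothetical) validation step when a conflict is detected."""

_commit_lock = threading.Lock()

def atomic(txn_body, max_retries=8):
    """Run txn_body(); transparently re-execute it if it aborts due to a conflict."""
    for _ in range(max_retries):
        try:
            with _commit_lock:           # stand-in for hardware conflict detection
                return txn_body()
        except ConflictAbort:
            continue                     # abort: retry the whole transaction
    raise RuntimeError("transaction failed too many times")

# Programmer's view: no explicit locks around the shared update.
counter = {"value": 0}
atomic(lambda: counter.update(value=counter["value"] + 1))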
Reliable massively parallel symbolic computing: fault tolerance for a distributed Haskell
As the number of cores in manycore systems grows exponentially, the number of failures is
also predicted to grow exponentially. Hence massively parallel computations must be able to
tolerate faults. Moreover, new approaches to language design and system architecture are needed
to address the resilience of massively parallel heterogeneous architectures.
Symbolic computation has underpinned key advances in Mathematics and Computer Science,
for example in number theory, cryptography, and coding theory. Computer algebra software
systems facilitate symbolic mathematics. Developing these at scale has its own distinctive
set of challenges, as symbolic algorithms tend to employ complex irregular data and control
structures. SymGridParII is a middleware for parallel symbolic computing on massively parallel
High Performance Computing platforms. A key element of SymGridParII is a domain specific
language (DSL) called Haskell Distributed Parallel Haskell (HdpH). It is explicitly designed for
scalable distributed-memory parallelism, and employs work stealing to load balance dynamically
generated irregular task sizes.
To investigate providing scalable fault tolerant symbolic computation, we design, implement
and evaluate a reliable version of HdpH, HdpH-RS. Its reliable scheduler detects and handles
faults, using task replication as a key recovery strategy. The scheduler supports load balancing
with a fault tolerant work stealing protocol. The reliable scheduler is invoked with two fault
tolerance primitives for implicit and explicit work placement, and 10 fault tolerant parallel
skeletons that encapsulate common parallel programming patterns. The user is oblivious to
many failures; they are instead handled by the scheduler.
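A toy sketch of the supervised-future idea (in Python for illustration; HdpH-RS itself is a Haskell DSL and its reliable scheduler is far more involved): the supervisor retains the task so that, if the node executing it fails before the future is filled, the task can be replicated on another node.

# Toy sketch of a supervised future recovered by task replication
# (illustrative names; not the HdpH-RS scheduler).

class NodeFailure(Exception):
    """Raised when the node executing a task is detected as failed."""

def run_on(node, task):
    """Hypothetical stand-in for remote execution of `task` on `node`."""
    return task()

class SupervisedFuture:
    def __init__(self, task, nodes):
        self.task = task        # the supervisor keeps the task so it can be replicated
        self.nodes = nodes      # candidate nodes, in placement order
        self.value = None
        self.full = False

    def get(self):
        """Evaluate the future, replicating the task on another node after a failure."""
        for node in self.nodes:
            if self.full:
                break
            try:
                self.value = run_on(node, self.task)
                self.full = True
            except NodeFailure:
                continue        # node died: replicate the task on the next node
        if not self.full:
            raise RuntimeError("all candidate nodes failed")
        return self.value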
An operational semantics describes small-step reductions on states. A simple abstract machine
for scheduling transitions and task evaluation is presented. It defines the semantics of
supervised futures, and the transition rules for recovering tasks in the presence of failure. The
transition rules are demonstrated with a fault-free execution, and three executions that recover
from faults.
The fault tolerant work stealing protocol has been abstracted into a Promela model. The SPIN
model checker is used to exhaustively search the state space of this automaton to
validate a key resiliency property of the protocol. It asserts that an initially empty supervised
future on the supervisor node will eventually be full in the presence of all possible combinations
of failures.
The performance of HdpH-RS is measured using five benchmarks. Supervised scheduling
achieves a speedup of 757 with explicit task placement and 340 with lazy work stealing when
executing Summatory Liouville on up to 1400 cores of an HPC architecture. Moreover, supervision
overheads remain consistently low when scaling up to 1400 cores. Low recovery overheads are observed in
the presence of frequent failure when lazy on-demand work stealing is used. A Chaos Monkey
mechanism has been developed for stress testing resiliency with random failure combinations.
All unit tests pass in the presence of random failure, terminating with the expected results.
SoK: Diving into DAG-based Blockchain Systems
Blockchain plays an important role in cryptocurrency markets and technology
services. However, high latency and low scalability limit their adoption and
application in classic designs. Reconstructed blockchain systems have been proposed
to avoid the waste of competing transactions caused by linearly sequenced blocks.
These systems instead structure transactions/blocks in the form of a Directed
Acyclic Graph (DAG) and consequently rebuild upper-layer components, including
consensus, incentives, etc. The promise of DAG-based blockchain systems is to enable
fast confirmation (completing transactions within milliseconds) and high
scalability (attaching transactions in parallel) without significantly
compromising security. However, this field still lacks systematic work that
summarises the DAG technique. To bridge the gap, this Systematization of
Knowledge (SoK) provides a comprehensive analysis of DAG-based blockchain
systems. Through deconstructing open-sourced systems and reviewing academic
research, we identify the main components and characteristic properties of these systems,
and outline the approach to establishing a DAG. With this in hand, we analyze the
security and performance of several leading systems, followed by discussions
and comparisons with concurrent (blockchain scaling) techniques. We further
identify open challenges to highlight the potential of DAG-based solutions
and indicate promising directions for future research.
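As a generic illustration of the structural difference from a linear chain (names and tip selection are assumptions; concrete DAG systems differ in tip selection and consensus), a new transaction can be attached by referencing several existing tips, so independent transactions can be appended in parallel.

import hashlib, random, time

# Sketch: appending transactions to a DAG ledger by referencing multiple
# parent tips (generic illustration, not any specific system).

class DagLedger:
    def __init__(self):
        self.units = {"genesis": {"parents": [], "payload": None}}
        self.tips = {"genesis"}                 # units not yet referenced by anyone

    def attach(self, payload, fanout=2):
        parents = random.sample(sorted(self.tips), k=min(fanout, len(self.tips)))
        uid = hashlib.sha256(
            (repr(parents) + repr(payload) + repr(time.time())).encode()).hexdigest()[:16]
        self.units[uid] = {"parents": parents, "payload": payload}
        self.tips.difference_update(parents)    # the chosen tips are now referenced
        self.tips.add(uid)
        return uid

ledger = DagLedger()
a = ledger.attach("tx: alice -> bob")
b = ledger.attach("tx: carol -> dave")          # can be attached in parallel with `a`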