TANDEM: taming failures in next-generation datacenters with emerging memory
The explosive growth of online services to unprecedented scale has made modern datacenters highly prone to failures. Taming these failures hinges on fast and correct recovery that minimizes service interruptions.
To be recoverable, applications must take additional measures during failure-free execution to maintain a recoverable state of their data and computation logic. However, these precautionary measures have
severe implications for performance, correctness, and programmability, making recovery incredibly challenging to realize in practice.
Emerging memory, particularly non-volatile memory (NVM) and disaggregated memory (DM), offers a promising opportunity to achieve fast recovery with maximum performance. However, incorporating these technologies into datacenter architecture presents significant challenges: their architectural attributes differ significantly from those of traditional memory devices, introducing new semantic challenges for
implementing recovery and complicating correctness and programmability.
Can emerging memory enable fast, performant, and correct recovery in the datacenter? This thesis aims to answer this question while addressing the associated challenges.
When architecting datacenters with emerging memory, system architects face four key challenges: (1) how to guarantee correct semantics; (2) how to efficiently enforce correctness with optimal performance; (3) how to validate end-to-end correctness, including recovery; and (4) how to preserve programmer productivity (programmability).
This thesis aims to address these challenges through the following approaches: (a)
defining precise consistency models that formally specify correct end-to-end semantics
in the presence of failures (consistency models also play a crucial role in programmability); (b) developing new low-level mechanisms to efficiently enforce the prescribed models given the capabilities of emerging memory; and (c) creating robust testing frameworks to validate end-to-end correctness and recovery.
We start our exploration with non-volatile memory (NVM), which offers fast persistence capabilities directly accessible through the processor’s load-store (memory) interface. Notably, these capabilities can be leveraged to enable fast recovery for Log-Free Data Structures (LFDs) while maximizing performance. However, due to the complexity of modern cache hierarchies, data rarely persists in any specific order, jeopardizing recovery and correctness. Recovery therefore needs primitives that explicitly control the order of updates to NVM, known as persistency models. We outline the precise specification of a novel persistency model, Release Persistency (RP), that provides a consistency guarantee for LFDs on what remains in non-volatile memory upon failure. To enforce RP efficiently, we propose a novel microarchitectural mechanism,
lazy release persistence (LRP). Using standard LFD benchmarks, we show that LRP achieves fast recovery while incurring minimal performance overhead.
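To make the ordering hazard concrete, here is a minimal sketch in C of a persistent stack push with explicit data-before-pointer persist ordering, using x86's clwb/sfence instructions (compile with -mclwb). The node layout and allocator are illustrative assumptions; RP attaches this ordering to the release store, and LRP's contribution is enforcing it lazily in hardware rather than with the eager fences shown here.

```c
/* Minimal sketch of explicit persist ordering for a log-free stack push.
 * A crash between step 1 and step 2 leaves the stack recoverable; without
 * the ordering, the head pointer could reach NVM before the node it
 * points to. malloc() is a stand-in for an NVM allocator. */
#include <immintrin.h>
#include <stdatomic.h>
#include <stdlib.h>

typedef struct node { int value; struct node *next; } node_t;

static _Atomic(node_t *) head;          /* assume head and nodes live in NVM */

static void persist(const void *p) {
    _mm_clwb((void *)p);                /* write back the cache line */
    _mm_sfence();                       /* order it before later stores */
}

void push(int value) {
    node_t *n = malloc(sizeof *n);      /* stand-in for an NVM allocator */
    n->value = value;
    node_t *old = atomic_load_explicit(&head, memory_order_acquire);
    do {
        n->next = old;
        persist(n);                     /* step 1: node contents reach NVM */
    } while (!atomic_compare_exchange_weak_explicit(
                 &head, &old, n,
                 memory_order_release,  /* the release RP gives persist semantics */
                 memory_order_acquire));
    persist(&head);                     /* step 2: only then the head pointer */
}
```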
We continue our discussion with memory disaggregation, which decouples memory from traditional monolithic servers and offers a promising pathway to very high availability in replicated in-memory data stores. Achieving such availability hinges on transaction protocols that can efficiently handle recovery in this setting, where
compute and memory are independent. However, there is a challenge: disaggregated memory (DM) does not support RPC-style protocols, mandating one-sided transaction protocols. Exacerbating the problem, one-sided transactions expose critical low-level
ordering to architects, posing a threat to correctness. We present a highly available transaction protocol, Pandora, that is specifically designed to achieve fast recovery in disaggregated key-value stores (DKVSes).
Pandora is the first one-sided transactional protocol that ensures correct, non-blocking, and fast recovery in a DKVS. Experiments on our implementation demonstrate that Pandora achieves fast recovery and high availability while causing minimal disruption to services.
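Pandora's wire protocol is not spelled out here, but the low-level ordering hazard that one-sided transactions expose can be illustrated with a shared-memory simulation of a lock-and-version commit, where each step stands in for an RDMA verb (CAS or WRITE) whose completion order must be pinned down. The record layout and helper names below are hypothetical, not Pandora's actual design.

```c
/* Hypothetical sketch of the ordering a one-sided commit must enforce.
 * On real disaggregated memory, each step is an RDMA verb issued by the
 * compute node, and the issuer must await each completion before the
 * next step, or a recovering node can observe a torn record. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    _Atomic uint64_t lock;     /* 0 = free, else owning transaction id */
    _Atomic uint64_t version;  /* even = stable, odd = mid-update */
    char value[64];
} kv_slot;

bool one_sided_commit(kv_slot *slot, uint64_t txid, const char *new_val) {
    uint64_t expect = 0;
    /* Step 1: remote CAS acquires the slot lock. */
    if (!atomic_compare_exchange_strong(&slot->lock, &expect, txid))
        return false;                   /* conflict: abort and retry */
    /* Step 2: mark the record unstable BEFORE overwriting it, so a
     * recovering node that reads an odd version discards the value. */
    atomic_fetch_add(&slot->version, 1);
    /* Step 3: remote WRITE of the payload. */
    strncpy(slot->value, new_val, sizeof slot->value - 1);
    /* Step 4: mark stable, THEN release the lock; letting these two
     * steps reorder is exactly the hazard a protocol must rule out. */
    atomic_fetch_add(&slot->version, 1);
    atomic_store(&slot->lock, 0);
    return true;
}
```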
Finally, we introduce a novel targeted litmus-testing framework, DART, to validate the end-to-end correctness of transactional protocols with recovery. Using DART’s targeted testing capabilities, we found several critical bugs in Pandora, highlighting the need for robust end-to-end testing methods in the design loop to iteratively fix correctness bugs. Crucially, DART is lightweight and black-box, requiring no intervention from programmers.
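As a flavor of what a recovery litmus test exercises, the invented example below crashes a commit at every step boundary and asserts that recovery only ever observes an allowed outcome. It illustrates the idea in miniature; it is not DART's API or test suite.

```c
/* Invented recovery litmus test in the black-box spirit: simulate a
 * crash after each step of a 3-step commit and check that recovery
 * yields only an allowed state (old value or new value, never torn). */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct { uint64_t version; char value[8]; } slot;

/* Run the commit but "crash" (return) after `steps` of its 3 updates. */
static void commit_until(slot *s, int steps) {
    if (steps > 0) s->version += 1;          /* mark unstable */
    if (steps > 1) strcpy(s->value, "new");  /* overwrite payload */
    if (steps > 2) s->version += 1;          /* mark stable */
}

static void recover_and_check(slot *s) {
    if (s->version % 2 == 1)                 /* torn record: discard it */
        strcpy(s->value, "old");             /* stand-in for rollback */
    /* Allowed post-recovery outcomes: exactly "old" or "new". */
    assert(strcmp(s->value, "old") == 0 || strcmp(s->value, "new") == 0);
}

int main(void) {
    for (int crash_at = 0; crash_at <= 3; crash_at++) {
        slot s = { .version = 0, .value = "old" };
        commit_until(&s, crash_at);          /* inject the crash point */
        recover_and_check(&s);
    }
    puts("all crash points yield an allowed state");
    return 0;
}
```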
ControlPULP: A RISC-V On-Chip Parallel Power Controller for Many-Core HPC Processors with FPGA-Based Hardware-In-The-Loop Power and Thermal Emulation
High-Performance Computing (HPC) processors are nowadays integrated into
Cyber-Physical Systems demanding complex and high-bandwidth closed-loop power
and thermal control strategies. To efficiently satisfy real-time multi-input
multi-output (MIMO) optimal power requirements, high-end processors integrate
an on-die power controller system (PCS).
While traditional PCSs are based on a simple microcontroller (MCU)-class
core, more scalable and flexible PCS architectures are required to support
advanced MIMO control algorithms for managing the ever-increasing number of
cores, power states, and process, voltage, and temperature variability.
This paper presents ControlPULP, an open-source, HW/SW RISC-V parallel PCS
platform consisting of a single-core MCU with fast interrupt handling coupled
with a scalable multi-core programmable cluster accelerator and a specialized
DMA engine for the parallel acceleration of real-time power management
policies. ControlPULP relies on FreeRTOS to schedule a reactive power control
firmware (PCF) application layer.
We demonstrate ControlPULP in a power management use-case targeting a
next-generation 72-core HPC processor. We first show that the multi-core
cluster accelerates the PCF, achieving 4.9x speedup compared to single-core
execution, enabling more advanced power management algorithms within the
control hyper-period at a small area overhead, about 0.1% of the area of a
modern HPC CPU die. We then assess the PCS and PCF by designing an FPGA-based,
closed-loop emulation framework that leverages the heterogeneous SoCs paradigm,
achieving DVFS tracking with a mean deviation within 3% of the plant's thermal
design power (TDP) against a software-equivalent model-in-the-loop approach.
Finally, we show that the proposed PCF compares favorably with an
industry-grade control algorithm under computationally intensive workloads.
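As an illustration of the kind of policy such a PCF layer runs, the sketch below shows a periodic per-core PID-style control step that trims frequency against a thermal cap. The cap, gains, DVFS range, and sensor/actuator stubs are invented for the example; this is not ControlPULP's firmware.

```c
/* Generic sketch of a reactive power-control step: one PI controller
 * per core adjusts frequency toward a thermal cap. Sensor reads and
 * actuation are simulated; a real PCF runs such policies periodically
 * (e.g., as FreeRTOS tasks) within the control hyper-period. */
#include <stdio.h>

#define NUM_CORES 72
#define T_CAP_C   85.0   /* per-core thermal cap in degrees C (assumed) */
#define F_MIN_MHZ 800.0
#define F_MAX_MHZ 3200.0

static double temp_c[NUM_CORES];   /* stand-ins for sensor reads */
static double freq_mhz[NUM_CORES];
static double integ[NUM_CORES];    /* integral state per core */

static void control_step(double kp, double ki) {
    for (int c = 0; c < NUM_CORES; c++) {
        double err = T_CAP_C - temp_c[c];   /* >0 means thermal headroom */
        integ[c] += err;
        double f = freq_mhz[c] + kp * err + ki * integ[c];
        if (f < F_MIN_MHZ) f = F_MIN_MHZ;   /* clamp to the DVFS range */
        if (f > F_MAX_MHZ) f = F_MAX_MHZ;
        freq_mhz[c] = f;                    /* actuate (stub) */
    }
}

int main(void) {
    for (int c = 0; c < NUM_CORES; c++) { temp_c[c] = 70.0; freq_mhz[c] = 2000.0; }
    for (int tick = 0; tick < 5; tick++)    /* five control periods */
        control_step(0.5, 0.01);
    printf("core 0 frequency after 5 ticks: %.0f MHz\n", freq_mhz[0]);
    return 0;
}
```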
Towards Scalable OLTP Over Fast Networks
Online Transaction Processing (OLTP) underpins real-time data processing in many mission-critical applications, from banking to e-commerce.
These applications typically issue short-duration, latency-sensitive transactions that demand immediate processing.
High-volume applications, such as Alibaba's e-commerce platform, achieve peak transaction rates as high as 70 million transactions per second, exceeding the capacity of a single machine.
Instead, distributed OLTP database management systems (DBMS) are deployed across multiple powerful machines.
Historically, such distributed OLTP DBMSs have been primarily designed to avoid network communication, a paradigm largely unchanged since the 1980s.
However, fast networks challenge the conventional belief that network communication is the main bottleneck.
In particular, emerging network technologies, like Remote Direct Memory Access (RDMA), radically alter how data can be accessed over a network.
RDMA's primitives allow direct access to the memory of a remote machine within an order of magnitude of local memory access.
This development invalidates the notion that network communication is the primary bottleneck.
Given that traditional distributed database systems have been designed with the premise that the network is slow, they cannot efficiently exploit these fast network primitives, which requires us to reconsider how we design distributed OLTP systems.
This thesis focuses on the challenges RDMA presents and its implications on the design of distributed OLTP systems.
First, we examine distributed architectures to understand data access patterns and scalability in modern OLTP systems.
Drawing on these insights, we advocate a distributed storage engine optimized for high-speed networks.
The storage engine serves as the foundation of a database, ensuring efficient data access through three central components: indexes, synchronization primitives, and buffer management (caching).
With the introduction of RDMA, the landscape of data access has undergone a significant transformation.
This requires a comprehensive redesign of the storage engine components to exploit the potential of RDMA and similar high-speed network technologies.
Thus, as the second contribution, we design RDMA-optimized tree-based indexes that are especially applicable to disaggregated databases for accessing remote data efficiently.
We then turn our attention to the unique challenges of RDMA.
One-sided RDMA, one of the network primitives introduced by RDMA, presents a performance advantage in enabling remote memory access while bypassing the remote CPU and the operating system.
This allows the remote CPU to process transactions uninterrupted, with no requirement to be involved in network communication. However, this means specialized one-sided RDMA synchronization primitives are required, since the traditional CPU-driven primitives are bypassed.
We found that existing RDMA one-sided synchronization schemes are unscalable or, even worse, fail to synchronize correctly, leading to hard-to-detect data corruption.
As our third contribution, we address this issue by offering guidelines to build scalable and correct one-sided RDMA synchronization primitives.
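One pattern consistent with such guidelines is optimistic, version-validated reading, in which readers never take a remote lock and instead retry until they observe a stable version (a seqlock-style scheme). The sketch below simulates it in shared memory, with atomics standing in for RDMA READ/CAS verbs; the record layout is an assumption for illustration.

```c
/* Simulated sketch of version-validated one-sided reads: readers
 * retry until two version reads match and are even, so they never
 * lock the record remotely. Atomics stand in for RDMA verbs. */
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    _Atomic uint64_t version;   /* odd while a writer is mid-update */
    char payload[56];
} record;

/* One-sided read: loop until a consistent snapshot is observed. */
void read_record(record *r, char out[56]) {
    for (;;) {
        uint64_t v1 = atomic_load(&r->version);
        if (v1 & 1) continue;               /* writer in progress */
        memcpy(out, r->payload, 56);        /* stands in for RDMA READ */
        uint64_t v2 = atomic_load(&r->version);
        if (v1 == v2) return;               /* versions match: snapshot ok */
    }
}

void write_record(record *r, const char in[56]) {
    atomic_fetch_add(&r->version, 1);       /* odd: readers back off */
    memcpy(r->payload, in, 56);
    atomic_fetch_add(&r->version, 1);       /* even: publish new value */
}
```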
Finally, recognizing that maintaining all data in memory becomes economically unattractive, we propose a distributed buffer manager design that efficiently utilizes cost-effective NVMe flash storage.
By leveraging low-latency RDMA messages, our buffer manager provides a transparent memory abstraction, accessing the aggregated DRAM and NVMe storage across nodes.
Central to our approach is a distributed caching protocol that dynamically caches data.
With this approach, our system can outperform RDMA-enabled in-memory distributed databases while managing larger-than-memory datasets efficiently.
Joint Time-and Event-Triggered Scheduling in the Linux Kernel
There is increasing interest in using Linux in the real-time domain due to
the emergence of cloud and edge computing, the need to decrease costs, and the
growing number of complex functional and non-functional requirements of
real-time applications. Linux presents a valuable opportunity as it has rich
hardware support, an open-source development model, a well-established
programming environment, and avoids vendor lock-in. Although Linux was
initially developed as a general-purpose operating system, some real-time
capabilities have been added to the kernel over many years to increase its
predictability and reduce its scheduling latency. Unfortunately, Linux
currently has no support for time-triggered (TT) scheduling, which is widely
used in the safety-critical domain for its determinism, low run-time scheduling
latency, and strong isolation properties. We present an enhancement of the
Linux scheduler as a new low-overhead TT scheduling class to support offline
table-driven scheduling of tasks on multicore Linux nodes. Inspired by the Slot
shifting algorithm, we complement the new scheduling class with a low overhead
slot shifting manager running on a non-time-triggered core to provide
guaranteed execution time to real-time aperiodic tasks by using the slack of
the time-triggered tasks and avoiding high-overhead table regeneration for
adding new periodic tasks. Furthermore, we evaluate our implementation on
server-grade hardware with an Intel Xeon Scalable Processor.
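The slot-shifting idea admits a compact illustration: an offline table assigns tasks to time slots, and an aperiodic job is admitted only if the interval up to its deadline contains enough spare (idle) capacity. The table below is invented, and real slot shifting also shifts time-triggered jobs within their flexibility windows, which this sketch omits.

```c
/* Simplified illustration of slot-shifting admission: consume idle
 * slots of the offline table to guarantee an aperiodic job, without
 * regenerating the table. Table contents are invented. */
#include <stdbool.h>
#include <stdio.h>

#define SLOTS 8
#define IDLE  (-1)

static int table[SLOTS] = { 0, 1, IDLE, 0, IDLE, 1, IDLE, 0 };

/* Slack of the window [now, deadline): number of idle slots in it. */
static int slack(int now, int deadline) {
    int s = 0;
    for (int t = now; t < deadline && t < SLOTS; t++)
        if (table[t] == IDLE) s++;
    return s;
}

/* Admit an aperiodic job needing `wcet` slots by `deadline`: only if
 * enough slack exists, then fill idle slots; TT tasks keep theirs. */
static bool admit_aperiodic(int now, int deadline, int wcet, int task_id) {
    if (slack(now, deadline) < wcet) return false;  /* would miss deadline */
    for (int t = now; t < deadline && wcet > 0; t++)
        if (table[t] == IDLE) { table[t] = task_id; wcet--; }
    return true;
}

int main(void) {
    printf("admitted: %s\n", admit_aperiodic(1, 7, 2, 9) ? "yes" : "no");
    for (int t = 0; t < SLOTS; t++)
        printf("slot %d -> task %d\n", t, table[t]);
    return 0;
}
```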
It is too hot in here! A performance, energy and heat aware scheduler for asymmetric multiprocessing processors in embedded systems
Modern architectures in self-powered devices such as mobile phones and tablet computers use asymmetric processors that allow either energy-efficient or performant computation on the same SoC. The asymmetry resides in differences in CPU microarchitecture design and results in diverging raw computing capability. Other components, such as the processor memory subsystem, also differ, resulting in different memory transaction timing. Moreover, because cache coherency between processors is based on a bus-snoop protocol, memory latency varies with the processors' operating frequencies. All these differences lead to challenging decisions on both application schedulability and processor operating frequencies. In addition, because of their small form factor, such embedded devices generally cannot afford active cooling systems, so thermal mitigation must rely on dynamic software solutions. Current operating systems for embedded systems, such as Linux and Android, do not consider all these particularities, and as such often fail to satisfy user expectations of a powerful device with long battery life. To remedy this situation, this thesis proposes a unified approach to deliver high-performance and energy-efficient computation, considering the memory subsystem and all computation units available in the system. Performance is maximized even when the device operates under heavy thermal constraints. The proposed solution is based on accurate models of both performance and thermal behaviour and resides at the operating system kernel level to manage all running applications in a global manner. In particular, the performance model considers both the computation units and the memory subsystem of the symmetric or asymmetric processors present in embedded devices, while the thermal model relies on the accurate physical thermal properties of the device. Using these models, we extensively study application scheduling and processor frequency-scaling decisions that maximize either performance or energy efficiency within a thermal budget. To cover a large range of application behaviour, both models are built using a generative workload that exercises fine-grained details of the SoC's underlying microarchitecture; the approach can therefore be ported to other devices with little effort. Extensive evaluation on real-world benchmarks for high-performance and general computing, as well as common applications targeting the mobile and tablet market, shows the accuracy and completeness of the models and the ability of this unified approach to deliver high performance and energy efficiency under tight thermal constraints on embedded devices.
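As a toy rendering of the decision this thesis studies, the sketch below picks the highest operating point whose model-predicted steady-state temperature fits a thermal budget. The linear thermal model and operating-point table are invented stand-ins for the thesis's calibrated models.

```c
/* Hypothetical thermal-budget-aware operating-point selection for a
 * passively cooled device: choose the fastest point whose predicted
 * steady temperature stays under the budget. Numbers are invented. */
#include <stdio.h>

typedef struct { double freq_mhz; double power_w; } op_point;

static const op_point opps[] = {      /* sorted fastest first */
    { 2400, 3.5 }, { 1800, 2.1 }, { 1200, 1.1 }, { 600, 0.5 },
};
#define N_OPPS (sizeof opps / sizeof *opps)

/* Assumed steady-state model: T = T_ambient + R_thermal * P. */
static double predicted_temp(double ambient_c, double r_c_per_w, double p_w) {
    return ambient_c + r_c_per_w * p_w;
}

static const op_point *pick(double ambient_c, double r_c_per_w, double budget_c) {
    for (unsigned i = 0; i < N_OPPS; i++)
        if (predicted_temp(ambient_c, r_c_per_w, opps[i].power_w) <= budget_c)
            return &opps[i];          /* highest admissible frequency */
    return &opps[N_OPPS - 1];         /* fall back to the lowest point */
}

int main(void) {
    const op_point *p = pick(30.0, 12.0, 70.0);  /* no active cooling */
    printf("chosen: %.0f MHz (%.1f W)\n", p->freq_mhz, p->power_w);
    return 0;
}
```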
Scalable and fault-tolerant data stream processing on multi-core architectures
With increasing data volumes and velocity, many applications are shifting from the classical “process-after-store” paradigm to a stream processing model: data is produced and consumed as continuous streams. Stream processing captures latency-sensitive applications as diverse as credit card fraud detection and high-frequency trading. These applications are expressed as queries of algebraic operations (e.g., aggregation) over the most recent data using windows, i.e., finite evolving views over the input streams. To guarantee correct results, streaming applications require precise window semantics (e.g., temporal ordering) for operations that maintain state.
While high processing throughput and low latency are performance desiderata for stateful streaming applications, achieving both poses challenges. Computing the state of overlapping windows causes redundant aggregation operations: incremental execution (i.e., reusing previous results) reduces latency but prevents parallelization; at the same time, parallelizing window execution for stateful operations with precise semantics demands ordering guarantees and state access coordination. Finally, streams and state must be recovered to produce consistent and repeatable results in the event of failures.
Given the rise of shared-memory multi-core CPU architectures and high-speed networking, we argue that it is possible to address these challenges in a single node without compromising window semantics, performance, or fault-tolerance. In this thesis, we analyze, design, and implement stream processing engines (SPEs) that achieve high performance on multi-core architectures. To this end, we introduce new approaches for in-memory processing that address the previous challenges: (i) for overlapping windows, we provide a family of window aggregation techniques that enable computation sharing based on the algebraic properties of aggregation functions; (ii) for parallel window execution, we balance parallelism and incremental execution by developing abstractions for both and combining them to a novel design; and (iii) for reliable single-node execution, we enable strong fault-tolerance guarantees without sacrificing performance by reducing the required disk I/O bandwidth using a novel persistence model. We combine the above to implement an SPE that processes hundreds of millions of tuples per second with sub-second latencies. These results reveal the opportunity to reduce resource and maintenance footprint by replacing cluster-based SPEs with single-node deployments.
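To illustrate one member of such a family of window aggregation techniques, the sketch below pre-aggregates tuples into non-overlapping panes and has each sliding window combine pane partials instead of re-scanning raw tuples, relying only on associativity of the aggregation function. The sizes are invented, and this is a generic pane-based scheme, not the thesis's specific design.

```c
/* Computation sharing for overlapping windows via panes: each tuple
 * updates one pane partial in O(1), and every sliding window combines
 * a handful of partials rather than hundreds of raw tuples. */
#include <stdio.h>

#define PANES 10
#define PANE_SIZE 100          /* tuples per pane (assumed) */
#define WINDOW_PANES 4         /* window = 4 panes, slide = 1 pane */

static long pane_sum[PANES];   /* one partial aggregate per pane */

static void ingest(int pane, long value) {
    pane_sum[pane] += value;   /* shared by all windows covering this pane */
}

static long window_sum(int last_pane) {
    long s = 0;                /* combine 4 partials, not 400 tuples */
    for (int p = last_pane - WINDOW_PANES + 1; p <= last_pane; p++)
        if (p >= 0) s += pane_sum[p];
    return s;
}

int main(void) {
    for (int pane = 0; pane < PANES; pane++)
        for (int t = 0; t < PANE_SIZE; t++)
            ingest(pane, 1);   /* toy stream: every tuple contributes 1 */
    for (int w = WINDOW_PANES - 1; w < PANES; w++)
        printf("window ending at pane %d: sum=%ld\n", w, window_sum(w));
    return 0;
}
```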
Towards Fast, Adaptive, and Hardware-Assisted User-Space Scheduling
Modern datacenter applications are prone to high tail latencies since their
requests typically follow highly-dispersive distributions. Delivering fast
interrupts is essential to reducing tail latency. Prior work has proposed both
OS- and system-level solutions to reduce tail latencies for microsecond-scale
workloads through better scheduling. Unfortunately, existing approaches, such
as customized dataplane OSes, require significant OS changes, suffer from
scalability limitations, or do not reach the full performance capabilities the
hardware offers.
The emergence of new hardware features like UINTR exposed new opportunities
to rethink the design paradigms and abstractions of traditional scheduling
systems. We propose LibPreemptible, a preemptive user-level threading library
that is flexible, lightweight, and adaptive. LibPreemptible is built with a
set of optimizations: LibUtimer for scalability, a deadline-oriented API for
flexible policies, and a time-quantum controller for adaptiveness. Compared to
the prior state-of-the-art scheduling system Shinjuku, our system achieves
significant tail latency and throughput improvements for various workloads
without modifying the kernel. We also demonstrate the flexibility of
LibPreemptible across scheduling policies for real applications experiencing
varying load levels and characteristics.
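A portable analogue conveys the control structure of timer-driven user-level preemption: a periodic timer signal marks the running task for preemption at the next safe point. UINTR, which LibPreemptible builds on, delivers such interrupts directly in user space without the kernel signal path; the quantum length and handler below are illustrative only.

```c
/* Portable analogue of timer-driven user-level preemption: a periodic
 * SIGALRM sets a flag, and the running task yields at the next safe
 * point so a user-level scheduler could switch tasks. This conveys
 * only the control structure, not LibPreemptible's UINTR mechanism. */
#define _GNU_SOURCE
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile sig_atomic_t preempt_requested;

static void on_tick(int sig) {
    (void)sig;
    preempt_requested = 1;      /* time quantum expired */
}

int main(void) {
    signal(SIGALRM, on_tick);
    struct itimerval quantum = {
        .it_interval = { .tv_sec = 0, .tv_usec = 500 },  /* 500 us quantum */
        .it_value    = { .tv_sec = 0, .tv_usec = 500 },
    };
    setitimer(ITIMER_REAL, &quantum, NULL);

    int switches = 0;
    for (volatile long work = 0; switches < 4; work++) {
        if (preempt_requested) {            /* safe point: yield here */
            preempt_requested = 0;
            switches++;                     /* a scheduler would swap tasks */
            printf("preempt %d after %ld iterations\n", switches, (long)work);
            work = 0;
        }
    }
    return 0;
}
```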
Automatic generation of highly concurrent, hierarchical and heterogeneous cache coherence protocols from atomic specifications
Cache coherence protocols are often specified using only stable states and atomic transactions
for a single cache hierarchy level. Designing highly-concurrent, hierarchical and heterogeneous directory cache coherence protocols from these atomic specifications for modern
multicore architectures is a complicated task. To overcome these design challenges, we have
developed the novel *Gen algorithms (ProtoGen, HieraGen and HeteroGen).
Using the *Gen
algorithms, highly concurrent, hierarchical and heterogeneous cache coherence protocols can
be automatically generated for a wide range of atomic input stable state protocol (SSP) specifications, including the MOESI variants, as well as for protocols targeted towards
Total Store Order and Release Consistency. In addition, for each *Gen algorithm we have
developed and published an eponymous tool.
The ProtoGen tool takes as input a single SSP (i.e., with no concurrency) and generates the corresponding protocol for a multicore architecture with non-atomic transactions. The ProtoGen
algorithm automatically enforces the correct interleaving of conflicting coherence transactions
for a given atomic coherence protocol specification.
HieraGen is a tool for automatically generating hierarchical cache coherence protocols.
Its inputs are SSPs for each level of the hierarchy and its output is a highly concurrent
hierarchical protocol. HieraGen thus reduces the complexity that architects face by offloading
the challenging task of composing protocols and managing concurrency.
HeteroGen is a tool for automatically generating heterogeneous protocols that adhere to
precise consistency models. As input, HeteroGen takes SSPs of the per-cluster coherence
protocols, each of which satisfies its own per-cluster consistency model. The output is a
concurrent (i.e., with transient states) heterogeneous protocol that satisfies a precisely defined
consistency model that we refer to as a compound consistency model.
To validate the correctness of the *Gen algorithms, the generated output protocols were
verified for safety and deadlock freedom using a model checker. To verify the correctness
of protocols that need to adhere to a specific compound consistency model generated by
HeteroGen, novel litmus tests for multiple compound consistency models were developed.
The protocols automatically generated using the *Gen tools perform comparably to or better
than manually generated cache coherence protocols, often discovering opportunities to reduce stalls. Thus, the *Gen tools reduce the complexity that architects face by
offloading the challenging tasks of composing protocols and managing concurrency.
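Conceptually, an atomic SSP is just a table over stable states and events; the toy MSI cache-side fragment below shows the shape of such an input. The encoding is invented for illustration, and the *Gen tools consume their own specification format rather than C.

```c
/* Toy encoding of an atomic stable-state protocol (SSP): only stable
 * MSI states and atomic transactions, no transient states. The *Gen
 * tools conceptually take tables of this shape and generate the
 * concurrent protocol with transient states automatically. */
#include <stdio.h>

typedef enum { I, S, M, NSTATES } state_t;
typedef enum { LOAD, STORE, FWD_GETS, FWD_GETM, INV, NEVENTS } event_t;

/* next_state[s][e]; -1 marks an event impossible in that stable state. */
static const int next_state[NSTATES][NEVENTS] = {
    /*          LOAD STORE FWD_GETS FWD_GETM INV */
    /* I */   {  S,   M,    -1,      -1,     -1 },
    /* S */   {  S,   M,    -1,      -1,      I },
    /* M */   {  M,   M,     S,       I,     -1 },
};

int main(void) {
    static const char *sn[] = { "I", "S", "M" };
    static const char *en[] = { "Load", "Store", "Fwd-GetS", "Fwd-GetM", "Inv" };
    for (int s = 0; s < NSTATES; s++)
        for (int e = 0; e < NEVENTS; e++)
            if (next_state[s][e] >= 0)
                printf("%s --%s--> %s\n", sn[s], en[e], sn[next_state[s][e]]);
    return 0;
}
```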
A Survey on Transactional Stream Processing
Transactional stream processing (TSP) strives to create a cohesive model that
merges the advantages of both transactional and stream-oriented guarantees.
Over the past decade, numerous endeavors have contributed to the evolution of
TSP solutions, uncovering similarities and distinctions among them. Despite
these advances, a universally accepted standard approach for integrating
transactional functionality with stream processing remains to be established.
Existing TSP solutions predominantly concentrate on specific application
characteristics and involve complex design trade-offs. This survey intends to
introduce TSP and present our perspective on its future progression. Our
primary goals are twofold: to provide insights into the diverse TSP
requirements and methodologies, and to inspire the design and development of
groundbreaking TSP systems.