Workload Behavior Driven Memory Subsystem Design for Hyperscale
Hyperscalers run services across a large fleet of servers, serving billions
of users worldwide. These services, however, behave differently from commonly
available benchmark suites, resulting in server architectures that are not
optimized for cloud workloads. With datacenters becoming a primary server
processor market, optimizing server processors for cloud workloads by better
understanding their behavior has become crucial. To address this, we present
MemProf, a memory profiler that profiles the three major reasons for stalls
in cloud workloads: code fetch, memory bandwidth, and memory latency. We use
MemProf to understand the behavior of cloud workloads, and we propose and
evaluate micro-architectural and memory system design improvements that
improve cloud workloads' performance.
MemProf's code analysis shows that cloud workloads execute the same code
across CPU cores. Based on this observation, we propose shared
micro-architectural structures: a shared L2 I-TLB and a shared L2 cache.
Next, to address memory bandwidth stalls, we use the workloads' memory
bandwidth distribution and find that only a few pages contribute to most of
the system bandwidth. We use this finding to evaluate a new high-bandwidth,
small-capacity memory tier and show that it performs 1.46x better than the
current baseline configuration.
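The claim that a handful of hot pages dominates traffic is easy to
sanity-check given a per-page access histogram. Below is a minimal sketch
(not MemProf itself; the "page_id count" input format is an assumption) that
reports how many pages cover 80% of all accesses:

/* hotpages.c: given "page_id count" pairs on stdin, report how many
 * pages cover 80% of all accesses. Sketch only; the input format is
 * an assumption, not MemProf's actual trace format. */
#include <stdio.h>
#include <stdlib.h>

static int cmp_desc(const void *a, const void *b) {
    unsigned long x = *(const unsigned long *)a;
    unsigned long y = *(const unsigned long *)b;
    return (x < y) - (x > y);            /* sort descending */
}

int main(void) {
    unsigned long *counts = NULL, id, c, n = 0, cap = 0;
    unsigned long long total = 0, cum = 0;

    while (scanf("%lu %lu", &id, &c) == 2) {
        if (n == cap) {                  /* grow the histogram buffer */
            cap = cap ? cap * 2 : 1024;
            counts = realloc(counts, cap * sizeof *counts);
            if (!counts) return 1;
        }
        counts[n++] = c;
        total += c;
    }
    qsort(counts, n, sizeof *counts, cmp_desc);

    for (unsigned long i = 0; i < n; i++) {
        cum += counts[i];
        if (cum * 100 >= total * 80) {   /* first i+1 pages reach 80% */
            printf("%lu of %lu pages (%.2f%%) cover 80%% of accesses\n",
                   i + 1, n, 100.0 * (i + 1) / n);
            break;
        }
    }
    free(counts);
    return 0;
}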
Finally, we look into ways to improve memory latency for cloud workloads.
Profiling with MemProf reveals that L2 hardware prefetchers, a common
solution for reducing memory latency, have very low coverage and consume a
significant amount of memory bandwidth. To help improve hardware prefetcher
performance, we built a memory tracing tool to collect and validate
production memory access traces.
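Coverage here is the standard metric: the fraction of would-be demand misses
that prefetching eliminates. A worked sketch of the arithmetic (the counter
values are made-up placeholders, not measurements from the paper):

/* coverage.c: prefetcher coverage and accuracy from event counts.
 * The counts below are illustrative placeholders only. */
#include <stdio.h>

int main(void) {
    /* useful_pf: prefetched lines later hit by demand accesses
     * demand_misses: demand misses remaining despite prefetching
     * total_pf: all prefetch requests issued */
    double useful_pf = 1.0e6, demand_misses = 9.0e6, total_pf = 5.0e6;

    /* coverage: share of would-be misses that prefetching removed */
    double coverage = useful_pf / (useful_pf + demand_misses);
    /* accuracy: share of issued prefetches that were useful; low
     * accuracy means wasted memory bandwidth, as the paper observes */
    double accuracy = useful_pf / total_pf;

    printf("coverage = %.1f%%, accuracy = %.1f%%\n",
           coverage * 100, accuracy * 100);
    return 0;
}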
Snapshot: Fast, Userspace Crash Consistency for CXL and PM Using msync
Crash consistency using persistent memory programming libraries requires
programmers to use complex transactions and manual annotations. In contrast,
the failure-atomic msync() (FAMS) interface is much simpler, as it
transparently tracks updates and guarantees that modified data is atomically
durable on a call to the failure-atomic variant of msync(). However, FAMS
suffers from several drawbacks, such as the overhead of msync() and the
write amplification of page-level dirty-data tracking.
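For contrast with transactional PM libraries, the FAMS usage model is
essentially ordinary mmap() code, as in the minimal sketch below. The
MS_FAILURE_ATOMIC flag is a hypothetical stand-in, used only to mark where
the failure-atomic variant would differ from POSIX msync():

/* fams_sketch.c: the FAMS usage model in outline. MS_FAILURE_ATOMIC
 * is hypothetical; POSIX msync() offers no such atomicity guarantee. */
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MS_FAILURE_ATOMIC 0x100   /* hypothetical flag, for illustration */

int main(void) {
    int fd = open("data.db", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, 4096) < 0) return 1;

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    /* Updates need no transactions or annotations: just write memory. */
    strcpy(p, "balance=100");

    /* With FAMS, this call would make all updates since the previous
     * msync() durable atomically: after a crash, either all or none
     * are visible. Plain MS_SYNC gives durability but not atomicity. */
    msync(p, 4096, MS_SYNC /* | MS_FAILURE_ATOMIC */);

    munmap(p, 4096);
    close(fd);
    return 0;
}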
To address these drawbacks while preserving the advantages of FAMS, we
propose Snapshot, an efficient userspace implementation of FAMS.
Snapshot uses compiler-based annotation to transparently track updates in
userspace and syncs them with the backing byte-addressable storage copy on a
call to msync(). By keeping a copy of application data in DRAM, Snapshot
improves access latency. Moreover, by automatically tracking updates and
syncing them only on a call to msync(), Snapshot provides crash-consistency
guarantees that the POSIX msync() system call lacks.
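A minimal sketch of the idea follows: a logging hook stands in for
Snapshot's compiler-inserted instrumentation, recording each updated range
so that the msync()-time sync copies only dirty bytes rather than whole
pages. All names are illustrative, and a real implementation would also need
ordering and atomicity machinery:

/* snapshot_sketch.c: userspace dirty tracking in outline. A compiler
 * pass would insert log_store() before each store to the mapped
 * region; here we call it by hand. Names are illustrative. */
#include <stdio.h>
#include <string.h>

#define REGION_SIZE 4096
static char dram_copy[REGION_SIZE];      /* fast working copy in DRAM */
static char backing[REGION_SIZE];        /* stands in for PM/CXL storage */

struct dirty_range { size_t off, len; };
static struct dirty_range log_[64];      /* no overflow check (sketch) */
static size_t nlog;

/* Compiler-inserted hook: record the range about to be modified. */
static void log_store(size_t off, size_t len) {
    log_[nlog].off = off;
    log_[nlog].len = len;
    nlog++;
}

/* Snapshot-style msync(): copy only the dirty bytes to the backing
 * copy, avoiding page-granularity write amplification. */
static void snapshot_msync(void) {
    for (size_t i = 0; i < nlog; i++)
        memcpy(backing + log_[i].off, dram_copy + log_[i].off, log_[i].len);
    nlog = 0;
}

int main(void) {
    log_store(0, 5);                     /* hook fires before the store */
    memcpy(dram_copy, "hello", 5);

    snapshot_msync();                    /* syncs 5 bytes, not 4 KiB */
    printf("backing: %.5s\n", backing);
    return 0;
}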
For a KV-Store backed by Intel Optane running the YCSB benchmark, Snapshot
achieves at least a 1.2x speedup over PMDK while significantly
outperforming conventional (non-crash-consistent) msync(). On an emulated
CXL memory semantic SSD, Snapshot outperforms PMDK by up to 10.9x on all but
one YCSB workload, where PMDK is 1.2x faster than Snapshot. Further,
Kyoto Cabinet commits perform up to 8.0x faster with Snapshot than with its
built-in, msync()-based crash-consistency mechanism.

Comment: A shorter version of this paper appeared in the Proceedings of
ICCD 2023.