Workload Behavior Driven Memory Subsystem Design for Hyperscale
Hyperscalers run services across a large fleet of servers, serving billions
of users worldwide. These services, however, behave differently from commonly
available benchmark suites, resulting in server architectures that are not
optimized for cloud workloads. With datacenters becoming a primary market for
server processors, optimizing server processors for cloud workloads by better
understanding their behavior has become crucial. To address this, in this
paper we present MemProf, a memory profiler that profiles the three major
reasons for stalls in cloud workloads: code-fetch, memory bandwidth, and memory
latency. We use MemProf to understand the behavior of cloud workloads and to
propose and evaluate micro-architectural and memory-system design improvements
that benefit cloud workload performance.
MemProf's code analysis shows that cloud workloads execute the same code
across CPU cores. Based on this observation, we propose shared
micro-architectural structures: a shared L2 I-TLB and a shared L2 cache.
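The shared-structure proposal rests on the per-core code-footprint overlap that MemProf measures. As a rough illustration of how such overlap can be quantified, the sketch below computes pairwise overlap of instruction-page sets across cores; the one-line-per-sample trace format is hypothetical and is not MemProf's actual output.

```python
# Sketch: quantify how much instruction-page footprint cores share.
# Assumes a hypothetical trace with one "core_id,code_page_hex" line per
# sampled instruction fetch; this is NOT MemProf's actual trace format.
from collections import defaultdict
from itertools import combinations

def load_code_pages(path):
    """Return {core_id: set of instruction pages sampled on that core}."""
    pages = defaultdict(set)
    with open(path) as f:
        for line in f:
            core, page = line.strip().split(",")
            pages[int(core)].add(int(page, 16))
    return pages

def jaccard(a, b):
    """Fraction of pages common to both sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

if __name__ == "__main__":
    per_core = load_code_pages("ifetch_trace.csv")  # hypothetical input file
    for c1, c2 in combinations(sorted(per_core), 2):
        print(f"cores {c1}/{c2}: {jaccard(per_core[c1], per_core[c2]):.1%} "
              "of instruction pages shared")
```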
Next, to address memory bandwidth stalls, we analyze the workloads' memory
bandwidth distribution and find that only a few pages contribute most of the
system bandwidth. We use this finding to evaluate a new high-bandwidth,
small-capacity memory tier and show that it performs 1.46x better than the
current baseline configuration.
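The observation that a few pages carry most of the bandwidth boils down to a cumulative-distribution calculation over per-page access counts. The sketch below illustrates that calculation on toy data; the real counts would come from a sampled memory-access trace, and nothing here reflects MemProf's actual data format.

```python
# Sketch: what fraction of pages covers 90% of sampled memory traffic?
# `page_access_counts` maps page -> sampled access count; here it is a toy
# skewed distribution standing in for real profiling data.

def pages_for_traffic_share(page_access_counts, share=0.90):
    """Fraction of pages (hottest first) needed to cover `share` of accesses."""
    counts = sorted(page_access_counts.values(), reverse=True)
    total = sum(counts)
    covered, used = 0, 0
    for c in counts:
        covered += c
        used += 1
        if covered >= share * total:
            break
    return used / len(counts)

if __name__ == "__main__":
    toy = {page: (10_000 if page < 50 else 1) for page in range(10_000)}
    frac = pages_for_traffic_share(toy, share=0.90)
    print(f"{frac:.2%} of pages account for 90% of sampled accesses")
```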
Finally, we look into ways to improve memory latency for cloud workloads.
Profiling with MemProf reveals that L2 hardware prefetchers, a common solution
for reducing memory latency, have very low coverage and consume a significant
amount of memory bandwidth. To help improve hardware prefetcher performance, we
built a memory tracing tool to collect and validate production memory access
traces.
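Coverage and wasted bandwidth, the two prefetcher problems called out above, can both be expressed from a handful of event counts. The sketch below shows one common way to define them; the counts are placeholders rather than specific PMU events.

```python
# Sketch: prefetcher coverage and a wasted-bandwidth estimate from event counts.
# The numbers below are placeholders; in practice they would come from hardware
# performance counters (useful vs. issued prefetches, remaining demand misses).

def prefetch_coverage(useful_prefetches, demand_misses):
    """Fraction of would-be demand misses that prefetches eliminated."""
    return useful_prefetches / (useful_prefetches + demand_misses)

def wasted_prefetch_bytes(issued_prefetches, useful_prefetches, line_size=64):
    """Bytes brought in by prefetches that were never used before eviction."""
    return (issued_prefetches - useful_prefetches) * line_size

if __name__ == "__main__":
    issued, useful, demand_misses = 1_000_000, 150_000, 900_000  # toy counts
    print(f"coverage: {prefetch_coverage(useful, demand_misses):.1%}")
    print(f"wasted traffic: {wasted_prefetch_bytes(issued, useful) / 2**20:.1f} MiB")
```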
TPP: Transparent Page Placement for CXL-Enabled Tiered Memory
With increasing memory demands for datacenter applications and the emergence
of coherent interfaces like CXL that enable main memory expansion, we are about
to observe a wide adoption of tiered-memory subsystems in hyperscalers. In such
systems, main memory can comprise different memory technologies with varied
performance characteristics. In this paper, we characterize the memory usage of
a wide range of datacenter applications across the server fleet of a
hyperscaler (Meta) to get insights into an application's memory access patterns
and performance on a tiered memory system. Our characterizations show that
datacenter applications can benefit from tiered memory systems as there exist
opportunities for offloading colder pages to slower memory tiers. Without
efficient memory management, however, such systems can significantly degrade
performance.
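One way to approximate the kind of cold-page characterization described above, outside any vendor tooling, is Linux's idle page tracking interface. The sketch below marks a target process's resident pages idle and, after an observation window, reports how many were never touched; it assumes root privileges, a kernel built with CONFIG_IDLE_PAGE_TRACKING, and 4 KiB pages, and it is not the characterization tool used in the paper.

```python
# Sketch: estimate how many of a process's resident pages stay cold for a
# minute, using Linux idle page tracking. Assumptions: root privileges,
# CONFIG_IDLE_PAGE_TRACKING, 4 KiB pages; this is not the paper's tool.
import struct, sys, time

PAGE_SIZE = 4096

def resident_pfns(pid):
    """Walk /proc/<pid>/maps and pagemap; return PFNs of resident pages."""
    pfns = []
    with open(f"/proc/{pid}/maps") as maps, \
         open(f"/proc/{pid}/pagemap", "rb") as pagemap:
        for line in maps:
            if line.rstrip().endswith("[vsyscall]"):
                continue
            start, end = (int(x, 16) for x in line.split()[0].split("-"))
            for vaddr in range(start, end, PAGE_SIZE):
                pagemap.seek(vaddr // PAGE_SIZE * 8)
                data = pagemap.read(8)
                if len(data) < 8:
                    continue
                entry = struct.unpack("<Q", data)[0]
                if entry & (1 << 63):                     # page present in RAM
                    pfns.append(entry & ((1 << 55) - 1))  # bits 0-54: PFN
    return pfns

def mark_idle(pfns):
    """Set the idle bit for each PFN in /sys/kernel/mm/page_idle/bitmap."""
    with open("/sys/kernel/mm/page_idle/bitmap", "r+b") as f:
        for pfn in pfns:
            f.seek(pfn // 64 * 8)
            f.write(struct.pack("<Q", 1 << (pfn % 64)))

def still_idle(pfns):
    """Return PFNs whose idle bit survived, i.e. pages not touched since."""
    idle = []
    with open("/sys/kernel/mm/page_idle/bitmap", "rb") as f:
        for pfn in pfns:
            f.seek(pfn // 64 * 8)
            data = f.read(8)
            if len(data) < 8:
                continue
            word = struct.unpack("<Q", data)[0]
            if word & (1 << (pfn % 64)):
                idle.append(pfn)
    return idle

if __name__ == "__main__":
    pid = int(sys.argv[1])
    pfns = resident_pfns(pid)
    mark_idle(pfns)
    time.sleep(60)                                        # observation window
    cold = still_idle(pfns)
    print(f"{len(cold)}/{len(pfns)} resident pages untouched in 60 s")
```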
We propose a novel OS-level application-transparent page placement mechanism
(TPP) for efficient memory management. TPP employs a lightweight mechanism to
identify and place hot and cold pages to appropriate memory tiers. It enables
page allocation to work independently of page reclamation logic, which is
otherwise tightly coupled in today's Linux kernel. As a result, the local
memory tier has memory headroom for new allocations. At the same time, TPP can
promptly promote performance-critical hot pages trapped in the slow memory
tiers to the fast tier node. Both promotion and demotion mechanisms work
transparently without any prior knowledge of an application's memory access
behavior. We evaluate TPP with diverse workloads that consume significant
portions of DRAM on Meta's server fleet and are sensitive to memory subsystem
performance. TPP's efficient page placement improves Linux's performance by up
to 18%. TPP outperforms NUMA balancing and AutoTiering, state-of-the-art
solutions for tiered memory, by 10-17%.
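Mechanisms along these lines (reclaim-time demotion to a slow tier and tiering-aware promotion through NUMA balancing) have appeared in recent mainline Linux kernels. As a rough illustration, the sketch below toggles the corresponding knobs; the paths and semantics vary across kernel versions, so treat it as an assumption-laden example rather than a description of TPP's implementation.

```python
# Sketch: turn on a TPP-style demotion/promotion policy on a recent Linux
# kernel. Assumptions: root privileges, a kernel exposing these knobs, and a
# slower memory node (e.g. CXL-attached) configured as a lower tier. Knob
# paths and semantics vary across kernel versions; this is not TPP itself.

KNOBS = {
    # Allow reclaim to demote cold pages to the slow tier instead of evicting them.
    "/sys/kernel/mm/numa/demotion_enabled": "1",
    # NUMA balancing mode 2: promote hot pages from the slow tier to the fast tier.
    "/proc/sys/kernel/numa_balancing": "2",
}

def apply_knobs(knobs):
    for path, value in knobs.items():
        try:
            with open(path, "w") as f:
                f.write(value)
            print(f"set {path} = {value}")
        except OSError as err:
            print(f"could not set {path}: {err}")

if __name__ == "__main__":
    apply_knobs(KNOBS)
```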