Research on Latency-Tolerant Vector Processor Architectures
Tohoku University Ph.D. (Information Science) thesis
DAMOV: A New Methodology and Benchmark Suite for Evaluating Data Movement Bottlenecks
Data movement between the CPU and main memory is a first-order obstacle
against improving performance, scalability, and energy efficiency in modern
systems. Computer systems employ a range of techniques to reduce overheads tied
to data movement, spanning from traditional mechanisms (e.g., deep multi-level
cache hierarchies, aggressive hardware prefetchers) to emerging techniques such
as Near-Data Processing (NDP), where some computation is moved close to memory.
Our goal is to methodically identify potential sources of data movement over a
broad set of applications and to comprehensively compare traditional
compute-centric data movement mitigation techniques to more memory-centric
techniques, thereby developing a rigorous understanding of the best techniques
to mitigate each source of data movement.
With this goal in mind, we perform the first large-scale characterization of
a wide variety of applications, across a wide range of application domains, to
identify fundamental program properties that lead to data movement to/from main
memory. We develop the first systematic methodology to classify applications
based on the sources contributing to data movement bottlenecks. From our
large-scale characterization of 77K functions across 345 applications, we
select 144 functions to form the first open-source benchmark suite (DAMOV) for
main memory data movement studies. We select a diverse range of functions that
(1) represent different types of data movement bottlenecks, and (2) come from a
wide range of application domains. Using NDP as a case study, we identify new
insights about the different data movement bottlenecks and use these insights
to determine the most suitable data movement mitigation mechanism for a
particular application. We open-source DAMOV and the complete source code for
our new characterization methodology at https://github.com/CMU-SAFARI/DAMOV.
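To illustrate the kind of classification such a methodology performs, here is a minimal sketch that buckets a function by two common profiling metrics. The metric choices and thresholds are illustrative assumptions, not DAMOV's actual criteria:

```python
# Hypothetical sketch: classify a function's data-movement behavior from
# two common profiling metrics. The thresholds below are illustrative
# assumptions, not the actual classification criteria used by DAMOV.

def classify(arithmetic_intensity, llc_mpki):
    """arithmetic_intensity: operations per byte of main-memory traffic.
    llc_mpki: last-level-cache misses per kilo-instruction."""
    if llc_mpki > 10 and arithmetic_intensity < 0.25:
        return "memory-bound: candidate for near-data processing"
    if llc_mpki > 10:
        return "bandwidth-sensitive: may benefit from prefetching"
    return "compute-bound: keep on the CPU"

print(classify(0.1, 25))  # memory-bound: candidate for near-data processing
print(classify(4.0, 2))   # compute-bound: keep on the CPU
```

The point of such a scheme is that different bottleneck classes call for different mitigations (NDP, prefetching, deeper caches), which is the comparison the abstract describes.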
Complementing user-level coarse-grain parallelism with implicit speculative parallelism
Multi-core and many-core systems are the norm in contemporary processor technology
and are expected to remain so for the foreseeable future. Parallel programming
is thus here to stay, and programmers must embrace it if they are to exploit such
systems for their applications. Programs using parallel programming primitives like
PThreads or OpenMP often exploit coarse-grain parallelism, because it offers a good
trade-off between programming effort and performance gain. Some parallel applications
show limited or no scaling beyond a certain number of cores. Given the abundant
number of cores expected in future many-cores, several cores would remain idle in such
cases while execution performance stagnates. This thesis proposes using cores that do
not contribute to performance improvement for running implicit fine-grain speculative
threads. In particular, we present a many-core architecture and protocols that allow
applications with coarse-grain explicit parallelism to further exploit implicit speculative
parallelism within each thread. We show that complementing parallel programs
with implicit speculative mechanisms offers significant performance improvements for
a large and diverse set of parallel benchmarks. Implicit speculative parallelism frees
the programmer from the additional effort to explicitly partition the work into finer
and properly synchronized tasks. Our results show that, for a many-core comprising
128 cores supporting implicit speculative parallelism in clusters of 2 or 4 cores, performance
improves on top of the highest scalability point by 44% on average for the
4-core cluster and by 31% on average for the 2-core cluster. We also show that this
approach often leads to better performance and energy efficiency compared to existing
alternatives such as Core Fusion and Turbo Boost. Moreover, we present a dynamic
mechanism to choose the number of explicit and implicit threads, which performs
within 6% of the static oracle selection of threads.
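A rough sketch of the explicit/implicit thread split this organization implies, assuming one explicit thread per cluster with the remaining cluster cores running implicit speculative threads (an inference from the abstract, not a confirmed detail of the design):

```python
# Assumed organization: 128 cores grouped into clusters; each cluster runs
# one explicit (programmer-created) thread, and its remaining cores run
# implicit speculative threads extracted from that thread.
total_cores = 128
cluster_size = 4                              # 2 or 4 in the thesis
explicit_threads = total_cores // cluster_size
implicit_per_explicit = cluster_size - 1      # spare cores in each cluster

print(explicit_threads, implicit_per_explicit)  # 32 explicit, 3 implicit each
```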
To improve energy efficiency, processors support Dynamic Voltage and Frequency
Scaling (DVFS), which enables changing their performance and power consumption
on the fly. We evaluate the amenability of the proposed explicit-plus-implicit threads
scheme to traditional power management techniques for multithreaded applications
and identify room for improvement. We thus augment prior schemes and introduce
a novel multithreaded power management scheme that accounts for implicit threads
and aims to minimize the Energy Delay² product (ED²). Our scheme comprises two
components: a “local” component that tries to adapt to the different program phases
on a per explicit thread basis, taking into account implicit thread behavior, and a
“global” component that augments the local components with information regarding
inter-thread synchronization. Experimental results show an 8% reduction in ED²
compared to having no power management, with an average power reduction of
15% at a minimal performance loss of less than 3% on average.
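As a worked example of the ED² metric, the sketch below computes ED² across DVFS operating points under the textbook simplifications that dynamic power scales roughly as f³ and runtime as 1/f. These models and the numbers are assumptions for illustration, not the thesis's measurements:

```python
# ED^2 under idealized DVFS scaling: P ~ f^3 (since P ~ C*V^2*f with V ~ f)
# and runtime t ~ 1/f. Baseline numbers are assumed, not measured.

def ed2(power_w, delay_s):
    energy = power_w * delay_s     # E = P * t
    return energy * delay_s ** 2   # ED^2 = E * t^2

base_f, base_p, base_t = 2.0, 20.0, 1.0  # GHz, watts, seconds (assumed)

for f in (1.0, 1.5, 2.0):
    p = base_p * (f / base_f) ** 3   # power shrinks cubically as f drops
    t = base_t * (base_f / f)        # runtime grows linearly as f drops
    print(f"{f:.1f} GHz: ED^2 = {ed2(p, t):.2f}")
```

Under these idealized models ED² comes out the same at every frequency (E ~ f² while t² ~ f⁻²), which is precisely why ED² is a popular metric: improvements in it reflect genuine design gains rather than a DVFS operating-point choice.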
Putting checkpoints to work in thread level speculative execution
With the advent of Chip Multi-Processors (CMPs), improving performance relies on
programmers and compilers to expose thread-level parallelism to the underlying hardware.
Unfortunately, this is a difficult and error-prone process for programmers,
while state-of-the-art compiler techniques are unable to provide significant benefits
for many classes of applications. An interesting alternative is offered by systems that
support Thread Level Speculation (TLS), which relieve the programmer and compiler
from checking for thread dependencies and instead use the hardware to enforce them.
Unfortunately, data misspeculation incurs a high cost, since all intermediate
results must be discarded and threads must roll back to the beginning of the
speculative task. For this reason, intermediate checkpointing of the state of the TLS
threads has been proposed. When a violation does occur, execution rolls back
to a checkpoint before the violating instruction rather than to the start of the task. However,
previous work omits study of the microarchitectural details and implementation
issues that are essential for effective checkpointing. Further, checkpoints have only
been proposed and evaluated for a narrow class of benchmarks.
This thesis studies checkpoints on a state-of-the-art TLS system running a variety
of benchmarks. The mechanisms required for checkpointing and their associated
costs are described. Hardware modifications required to make checkpointed execution
efficient in time and power are proposed and evaluated. Further, the need for accurately
identifying suitable points for placing checkpoints is established. Various techniques
for identifying these points are analysed in terms of both effectiveness and viability.
This includes an extensive evaluation of data dependence prediction techniques. The
results show that checkpointing thread-level speculative execution yields consistent
power savings, and for many benchmarks leads to speedups as well.
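A toy model makes the benefit of intermediate checkpoints concrete: on a violation, the work discarded is the distance back to the nearest earlier checkpoint rather than to the task start. The task length and checkpoint positions below are illustrative assumptions:

```python
# Toy model of work discarded on a misspeculation, with and without
# intermediate checkpoints. Positions are in instructions; the task
# start (position 0) always acts as an implicit checkpoint.

def wasted_work(violation_pos, checkpoints):
    """Instructions that must be re-executed after a dependence
    violation at `violation_pos`, given checkpoint positions."""
    last = max(c for c in [0] + list(checkpoints) if c <= violation_pos)
    return violation_pos - last

# Violation at instruction 900 of a 1000-instruction speculative task:
print(wasted_work(900, []))               # 900 -- roll back to task start
print(wasted_work(900, [250, 500, 750]))  # 150 -- roll back to checkpoint 750
```

This also shows why checkpoint *placement* matters, as the thesis argues: a checkpoint just before a likely-violating instruction saves far more re-execution than one placed blindly.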
A Modern Primer on Processing in Memory
Modern computing systems are overwhelmingly designed to move data to
computation. This design choice goes directly against at least three key trends
in computing that cause performance, scalability and energy bottlenecks: (1)
data access is a key bottleneck as many important applications are increasingly
data-intensive, and memory bandwidth and energy do not scale well, (2) energy
consumption is a key limiter in almost all computing platforms, especially
server and mobile systems, (3) data movement, especially off-chip to on-chip,
is very expensive in terms of bandwidth, energy and latency, much more so than
computation. These trends are felt especially severely in the data-intensive
server and energy-constrained mobile systems of today. At the same time,
conventional memory technology is facing many technology scaling challenges in
terms of reliability, energy, and performance. As a result, memory system
architects are open to organizing memory in different ways and making it more
intelligent, at the expense of higher cost. The emergence of 3D-stacked memory
plus logic, the adoption of error correcting codes inside the latest DRAM
chips, proliferation of different main memory standards and chips, specialized
for different purposes (e.g., graphics, low-power, high bandwidth, low
latency), and the necessity of designing new solutions to serious reliability
and security issues, such as the RowHammer phenomenon, are evidence of this
trend. This chapter discusses recent research that aims to practically enable
computation close to data, an approach we call processing-in-memory (PIM). PIM
places computation mechanisms in or near where the data is stored (i.e., inside
the memory chips, in the logic layer of 3D-stacked memory, or in the memory
controllers), so that data movement between the computation units and memory is
reduced or eliminated.
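A back-of-envelope calculation illustrates why moving computation to data pays off. The per-operation energy figures below are rough ballpark values of the kind commonly cited in architecture literature, used here only as assumptions for illustration:

```python
# Illustrative energy budget (picojoules per 64-bit operation/access).
# These are rough, assumed ballpark figures, not measurements from
# this chapter; real values vary widely by technology node.
ENERGY_PJ = {
    "64-bit FP add":       20,    # on-chip arithmetic
    "64-bit SRAM access":  50,    # on-chip cache access
    "64-bit DRAM access":  2000,  # off-chip main-memory access
}

op = ENERGY_PJ["64-bit FP add"]
dram = ENERGY_PJ["64-bit DRAM access"]
print(f"One off-chip DRAM access costs ~{dram // op}x one FP add")
```

With ratios of this magnitude, an application that performs only a few operations per byte fetched spends most of its energy on data movement, which is the gap PIM targets.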