Parallel software caches
We investigate the construction and application of parallel software caches in shared memory multiprocessors. In contrast to maintaining a private cache for each thread, a parallel cache allows the re-use of results of lengthy computations by other threads. This is especially important in irregular applications where the re-use of intermediate results by scheduling is not possible. Example applications are the computation of intersections between a scanline and a polygon in computational geometry, and the computation of intersections between rays and objects in ray tracing. A parallel software cache is based on a readers/writers lock, i.e. as long as no thread alters the cache data structure, multiple threads may read simultaneously. If a thread wants to alter the cache because of a cache miss, it waits until all other threads have left the data structure, then it can update the contents of the cache. Other threads can access the cache only after the writer has finished its work. To increase utilization, the cache has a number of slots that can be locked separately. We investigate the tradeoff between slot size, search time in the cache, and the time to re-compute a cache entry. Another major difference between sequential and parallel software caches is the replacement strategy. We adapt classic replacement strategies such as LRU and random replacement for parallel caches. As execution platform, we use the SB-PRAM, but the concepts might be portable to machines such as NYU Ultracomputer, Tera MTA, and Stanford DASH
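As a rough illustration of the slotted readers/writers design described above (not the paper's SB-PRAM implementation), the C++ sketch below guards each cache slot with its own std::shared_mutex, so lookups in different slots never serialize, concurrent readers of one slot proceed in parallel, and a writer updating a slot excludes only that slot. The class and member names (SlottedCache, get_or_compute) are hypothetical, and eviction is simplified to insertion order rather than the LRU or random replacement studied in the paper.

#include <cstddef>
#include <functional>
#include <list>
#include <mutex>
#include <shared_mutex>
#include <unordered_map>
#include <vector>

template <typename Key, typename Value>
class SlottedCache {
public:
    SlottedCache(std::size_t num_slots, std::size_t capacity_per_slot)
        : slots_(num_slots), capacity_(capacity_per_slot) {}

    // Look the key up; on a miss, recompute the value and insert it.
    Value get_or_compute(const Key& key,
                         const std::function<Value(const Key&)>& compute) {
        Slot& slot = slots_[std::hash<Key>{}(key) % slots_.size()];
        {
            // Readers: any number of threads may search this slot at once.
            std::shared_lock<std::shared_mutex> read_lock(slot.mutex);
            auto it = slot.map.find(key);
            if (it != slot.map.end()) return it->second;
        }
        // Miss: do the lengthy computation outside any lock.
        Value value = compute(key);
        // Writer: exclusive access to this one slot only.
        std::unique_lock<std::shared_mutex> write_lock(slot.mutex);
        auto it = slot.map.find(key);
        if (it != slot.map.end()) return it->second;    // another thread beat us
        if (slot.map.size() >= capacity_ && !slot.order.empty()) {
            slot.map.erase(slot.order.back());          // evict oldest entry
            slot.order.pop_back();                      // (FIFO stand-in for LRU)
        }
        slot.order.push_front(key);
        slot.map.emplace(key, value);
        return value;
    }

private:
    struct Slot {
        std::shared_mutex mutex;
        std::unordered_map<Key, Value> map;
        std::list<Key> order;    // front = most recently inserted
    };
    std::vector<Slot> slots_;
    std::size_t capacity_;
};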
Well-Structured Futures and Cache Locality
In fork-join parallelism, a sequential program is split into a directed
acyclic graph of tasks linked by directed dependency edges, and the tasks are
executed, possibly in parallel, in an order consistent with their dependencies.
A popular and effective way to extend fork-join parallelism is to allow threads
to create futures. A thread creates a future to hold the results of a
computation, which may or may not be executed in parallel. That result is
returned when some thread touches that future, blocking if necessary until the
result is ready.
Recent research has shown that while futures can, of course, enhance
parallelism in a structured way, they can have a deleterious effect on cache
locality. In the worst case, futures can incur $\Omega(PT_\infty + tT_\infty)$ deviations, which implies
$\Omega(CPT_\infty + CtT_\infty)$ additional cache misses, where $C$ is the number of cache lines, $P$ is the
number of processors, $t$ is the number of touches, and $T_\infty$ is the
\emph{computation span}. Since cache locality has a large impact on software
performance on modern multicores, this result is troubling.
In this paper, however, we show that if futures are used in a simple,
disciplined way, then the situation is much better: if each future is touched
only once, either by the thread that created it, or by a thread to which the
future has been passed from the thread that created it, then parallel
executions with work stealing can incur at most $O(CPT_\infty^2)$ additional
cache misses, a substantial improvement. This structured use of futures is
characteristic of many (but not all) parallel applications
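A minimal sketch of the single-touch discipline described above, assuming plain std::async and std::thread rather than the work-stealing runtime the paper analyzes: each future's result is retrieved exactly once, either by the thread that created it or by the one thread the future is handed off to. The helper names are made up for the example.

#include <future>
#include <iostream>
#include <thread>
#include <utility>

// Stand-in for a lengthy computation whose result a future will hold.
long long expensive(long long n) {
    long long s = 0;
    for (long long i = 0; i < n; ++i) s += i * i;
    return s;
}

int main() {
    // Case 1: the creating thread performs the single touch itself.
    std::future<long long> f1 = std::async(std::launch::async, expensive, 1000000);
    long long a = f1.get();                      // the only touch of f1

    // Case 2: the future is handed off to exactly one other thread,
    // which performs the single touch.
    std::future<long long> f2 = std::async(std::launch::async, expensive, 2000000);
    std::thread consumer([f = std::move(f2)]() mutable {
        std::cout << "consumer touch: " << f.get() << '\n';   // the only touch of f2
    });
    consumer.join();

    std::cout << "creator touch: " << a << '\n';
    return 0;
}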
Using fast and accurate simulation to explore hardware/software trade-offs in the multi-core era
Writing well-performing parallel programs is challenging in the multi-core processor era. In addition to achieving good per-thread performance, which in itself is a balancing act between instruction-level parallelism, pipeline effects and good memory performance, multi-threaded programs complicate matters even further. These programs require synchronization, and are affected by the interactions between threads through sharing of both processor resources and the cache hierarchy.
At the Intel Exascience Lab, we are developing an architectural simulator called Sniper for simulating future exascale-era multi-core processors. Its goal is twofold: Sniper should assist hardware designers to make design decisions, while simultaneously providing software designers with a tool to gain insight into the behavior of their algorithms and allow for optimization. By taking architectural features into account, our simulator can provide more insight into parallel programs than what can be obtained from existing performance analysis tools. This unique combination of hardware simulator and software performance analysis tool makes Sniper a useful tool for a simultaneous exploration of the hardware and software design space for future high-performance multi-core systems
Empirical Evaluation of the Parallel Distribution Sweeping Framework on Multicore Architectures
In this paper, we perform an empirical evaluation of the Parallel External
Memory (PEM) model in the context of geometric problems. In particular, we
implement the parallel distribution sweeping framework of Ajwani, Sitchinava
and Zeh to solve the batched 1-dimensional stabbing max problem. While modern
processors consist of sophisticated memory systems (multiple levels of caches,
set associativity, TLB, prefetching), we empirically show that algorithms
designed in simple models that focus on minimizing the I/O transfers between
shared memory and single level cache, can lead to efficient software on current
multicore architectures. Our implementation exhibits significantly fewer
accesses to slow DRAM and, therefore, outperforms traditional approaches based
on plane sweep and two-way divide and conquer.
Comment: Longer version of ESA'13 paper
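For reference, the sequential C++ sketch below states the batched 1-dimensional stabbing max problem as a plane sweep over interval endpoints and query points: for each query, report the maximum weight among the intervals that contain it. It is only meant to pin down the problem; it is not the cache-efficient parallel distribution-sweeping algorithm of Ajwani, Sitchinava and Zeh that the paper implements, and the names (stabbing_max, Interval) are invented for the example.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <limits>
#include <set>
#include <vector>

struct Interval { double lo, hi, weight; };

// For each query point, return the maximum weight over all intervals
// [lo, hi] that contain it (or -infinity if none does).
std::vector<double> stabbing_max(const std::vector<Interval>& intervals,
                                 const std::vector<double>& queries) {
    enum Type { OPEN = 0, QUERY = 1, CLOSE = 2 };   // tie-break order at equal x
    struct Event { double x; int type; std::size_t idx; };

    std::vector<Event> events;
    for (std::size_t i = 0; i < intervals.size(); ++i) {
        events.push_back({intervals[i].lo, OPEN, i});
        events.push_back({intervals[i].hi, CLOSE, i});
    }
    for (std::size_t q = 0; q < queries.size(); ++q)
        events.push_back({queries[q], QUERY, q});
    std::sort(events.begin(), events.end(),
              [](const Event& a, const Event& b) {
                  return a.x != b.x ? a.x < b.x : a.type < b.type;
              });

    std::multiset<double> active;   // weights of currently open intervals
    std::vector<double> answer(queries.size(),
                               -std::numeric_limits<double>::infinity());
    for (const Event& e : events) {
        if (e.type == OPEN)        active.insert(intervals[e.idx].weight);
        else if (e.type == CLOSE)  active.erase(active.find(intervals[e.idx].weight));
        else if (!active.empty())  answer[e.idx] = *active.rbegin();
    }
    return answer;
}

int main() {
    std::vector<Interval> ivs = {{0.0, 5.0, 1.0}, {2.0, 8.0, 3.0}, {6.0, 9.0, 2.0}};
    for (double ans : stabbing_max(ivs, {1.0, 4.0, 7.0}))
        std::cout << ans << '\n';   // prints 1, 3, 3
    return 0;
}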
Jurisdiction Under the Sherman Act: The “Interstate Commerce” Element and the Activities of Local Real Estate Boards and Brokers
To increase performance, modern processors employ complex techniques such as out-of-order pipelines and deep cache hierarchies. While the increasing complexity has paid off in performance, it has become harder to accurately predict the effects of hardware/software optimizations in such systems. Traditional microarchitectural simulators typically execute code 10 000×–100 000× slower than native execution, which leads to three problems: First, high simulation overhead makes it hard to use microarchitectural simulators for tasks such as software optimizations where rapid turn-around is required. Second, when multiple cores share the memory system, the resulting performance is sensitive to how memory accesses from the different cores interleave. This requires that applications are simulated multiple times with different interleavings to estimate their performance distribution, which is rarely feasible with today's simulators. Third, the high overhead limits the size of the applications that can be studied. This is usually solved by only simulating a relatively small number of instructions near the start of an application, with the risk of reporting unrepresentative results. In this thesis we demonstrate three strategies to accurately model multicore processors without the overhead of traditional simulation. First, we show how microarchitecture-independent memory access profiles can be used to drive automatic cache optimizations and to qualitatively classify an application's last-level cache behavior. Second, we demonstrate how high-level performance profiles, which can be measured on existing hardware, can be used to model the behavior of a shared cache. Unlike previous models, we predict the effective amount of cache available to each application and the resulting performance distribution due to different interleavings without requiring a processor model. Third, in order to model future systems, we build an efficient sampling simulator. By using native execution to fast-forward between samples, we reach new samples much faster than a single sample can be simulated. This enables us to simulate multiple samples in parallel, resulting in almost linear scalability and a maximum simulation rate close to native execution.
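As a toy illustration of one kind of microarchitecture-independent memory access profile mentioned above (and not the thesis's actual tooling), the sketch below computes an LRU stack-distance histogram from a trace of cache-line addresses; the miss ratio of a fully associative LRU cache of any size can then be read off the histogram. All names are made up for the example.

#include <cstddef>
#include <cstdint>
#include <iostream>
#include <iterator>
#include <list>
#include <unordered_map>
#include <vector>

// Histogram of LRU stack distances over a trace of cache-line addresses.
// hist[d] counts reuses at stack distance d; first touches are counted in *cold.
std::vector<std::size_t> stack_distances(const std::vector<std::uint64_t>& lines,
                                         std::size_t* cold) {
    std::list<std::uint64_t> stack;   // LRU stack, front = most recently used
    std::unordered_map<std::uint64_t, std::list<std::uint64_t>::iterator> pos;
    std::vector<std::size_t> hist;
    *cold = 0;
    for (std::uint64_t line : lines) {
        auto it = pos.find(line);
        if (it == pos.end()) {
            ++*cold;                  // first touch: cold miss
        } else {
            std::size_t d = std::distance(stack.begin(), it->second);
            if (d >= hist.size()) hist.resize(d + 1, 0);
            ++hist[d];
            stack.erase(it->second);
        }
        stack.push_front(line);
        pos[line] = stack.begin();
    }
    return hist;
}

int main() {
    std::vector<std::uint64_t> trace = {1, 2, 3, 1, 2, 3, 4, 1};
    std::size_t cold = 0;
    std::vector<std::size_t> hist = stack_distances(trace, &cold);
    // A fully associative LRU cache with C lines misses on every cold access
    // and on every reuse whose stack distance is at least C.
    for (std::size_t C = 1; C <= 4; ++C) {
        std::size_t misses = cold;
        for (std::size_t d = C; d < hist.size(); ++d) misses += hist[d];
        std::cout << "C=" << C << "  miss ratio = "
                  << static_cast<double>(misses) / trace.size() << '\n';
    }
    return 0;
}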
Performance Modeling of Inline Compression With Software Caching for Reducing the Memory Footprint in pySDC
Modern HPC applications compute and analyze massive amounts of data. The data volume is growing faster than memory capabilities and storage improvements, leading to performance bottlenecks. An example of this is pySDC, a framework for solving collocation problems iteratively using parallel-in-time methods. These methods require storing and exchanging 3D volume data for each parallel point in time. If a simulation consists of M parallel-in-time stages, where the full spatial problem has to be stored for the next iteration, the memory demand for a single state variable is M × Nx × Ny × Nz per time step. For an application simulation with many state variables or stages, the memory requirement is considerable. Data compression helps alleviate the overhead in memory by reducing the size of data and keeping it in compressed format. Inline compression compresses and decompresses the application’s working set as it moves in and out of main memory. Thus, it provides the system with the appearance of more main memory. Naive compressed arrays require a compression or decompression operation for each store or load and therefore hurt the performance of the application. By incorporating a software cache and storing decompressed values of the array, we limit the number of compression and decompression operations for the stores and loads, thereby improving performance overall. In this thesis, we build a compression manager and software cache manager for the pySDC framework to reduce the memory requirements and computational overhead. The compression manager wraps around LibPressio, a C++ compression library that abstracts all compressors. We utilize blosc, a lossless compressor, for our compression manager, and build a software cache manager with various cache configurations and cache policies to work in cohesion with the compression manager. We build a performance model which evaluates the compression manager and cache manager’s performance on different metrics such as compression ratio and compression/decompression time. We test our framework on two different pySDC applications: Allen-Cahn and heat diffusion. Results show that incorporating compression and increasing the cache size for our applications inflates the total compressed size in bytes for the arrays and therefore reduces the compression ratio, contrary to our expectations. However, incorporating the cache and a greater cache size reduces the number of compression/decompression calls to LibPressio as well as cache evictions, significantly reducing the computational overhead for pySDC. Thus, overall, our compression and cache manager help reduce the memory footprint in pySDC. Future work involves improving the compression ratio and using lossy compression to achieve a significant reduction in memory footprint.
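The C++ sketch below illustrates the general idea of a software cache in front of a compressed store, as described above: decompressed blocks are cached so that repeated loads and stores do not each pay a compression or decompression, and blocks are recompressed only when evicted. The Compressor here is a do-nothing placeholder and every name is hypothetical; the thesis itself works in Python on pySDC and wraps real compressors (blosc via LibPressio).

#include <cstddef>
#include <cstring>
#include <list>
#include <unordered_map>
#include <vector>

using Block = std::vector<double>;

// Do-nothing placeholder compressor; a real one (e.g. blosc) would shrink data.
struct Compressor {
    std::vector<char> compress(const Block& b) {
        std::vector<char> c(b.size() * sizeof(double));
        std::memcpy(c.data(), b.data(), c.size());
        return c;
    }
    Block decompress(const std::vector<char>& c) {
        Block b(c.size() / sizeof(double));
        std::memcpy(b.data(), c.data(), c.size());
        return b;
    }
};

// Keeps a bounded number of decompressed blocks; everything else lives in the
// compressed backing store. Cached blocks are recompressed only when evicted.
class CachedCompressedStore {
public:
    explicit CachedCompressedStore(std::size_t cache_blocks)
        : capacity_(cache_blocks) {}

    // Load a block for reading or in-place modification.
    Block& load(std::size_t id) {
        auto it = cache_.find(id);
        if (it != cache_.end()) { touch(id); return it->second; }
        Block b = comp_.decompress(store_.at(id));   // one decompression per miss
        evict_if_needed();
        lru_.push_front(id);
        return cache_.emplace(id, std::move(b)).first->second;
    }

    // Store a new or replacement block; it stays decompressed until evicted.
    void put(std::size_t id, Block b) {
        if (cache_.count(id)) touch(id);
        else { evict_if_needed(); lru_.push_front(id); }
        cache_[id] = std::move(b);
    }

private:
    void touch(std::size_t id) { lru_.remove(id); lru_.push_front(id); }

    void evict_if_needed() {
        while (cache_.size() >= capacity_ && !lru_.empty()) {
            std::size_t victim = lru_.back();
            lru_.pop_back();
            store_[victim] = comp_.compress(cache_[victim]);  // write back compressed
            cache_.erase(victim);
        }
    }

    std::size_t capacity_;
    Compressor comp_;
    std::unordered_map<std::size_t, std::vector<char>> store_;  // compressed blocks
    std::unordered_map<std::size_t, Block> cache_;              // decompressed cache
    std::list<std::size_t> lru_;                                // front = most recent
};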
CacheZoom: How SGX Amplifies The Power of Cache Attacks
In modern computing environments, hardware resources are commonly shared, and
parallel computation is widely used. Parallel tasks can cause privacy and
security problems if proper isolation is not enforced. Intel proposed SGX to
create a trusted execution environment within the processor. SGX relies on the
hardware, and claims runtime protection even if the OS and other software
components are malicious. However, SGX disregards side-channel attacks. We
introduce a powerful cache side-channel attack that provides system adversaries
a high resolution channel. Our attack tool named CacheZoom is able to virtually
track all memory accesses of SGX enclaves with high spatial and temporal
precision. As proof of concept, we demonstrate AES key recovery attacks on
commonly used implementations including those that were believed to be
resistant in previous scenarios. Our results show that SGX cannot protect
critical data sensitive computations, and efficient AES key recovery is
possible in a practical environment. In contrast to previous works which
require hundreds of measurements, this is the first cache side-channel attack
on a real system that can recover AES keys with a minimal number of
measurements. We can successfully recover AES keys from T-Table based
implementations with as few as ten measurements.
Comment: Accepted at Conference on Cryptographic Hardware and Embedded Systems (CHES '17)
Caching and Writeback Policies in Parallel File Systems
Improvements in the processing speed of multiprocessors are outpacing improvements in the speed of disk hardware. Parallel disk I/O subsystems have been proposed as one way to close the gap between processor and disk speeds. Such parallel disk systems require parallel file system software to avoid performance-limiting bottlenecks. We discuss cache management techniques that can be used in a parallel file system implementation. We examine several writeback policies, and give results of experiments that test their performance
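A minimal single-node C++ sketch of the policy space discussed above, contrasting write-through (every write goes straight to disk) with write-back (writes only mark blocks dirty, and dirty blocks are flushed on eviction or an explicit sync). The Disk type and all names are hypothetical stand-ins; a real parallel file system would additionally have to coordinate caches across clients and I/O nodes.

#include <cstddef>
#include <string>
#include <unordered_map>

struct Disk {                              // stand-in for the parallel I/O subsystem
    std::unordered_map<std::size_t, std::string> blocks;
    void write(std::size_t id, const std::string& data) { blocks[id] = data; }
    std::string read(std::size_t id) { return blocks[id]; }
};

class BlockCache {
public:
    enum class Policy { WriteThrough, WriteBack };

    BlockCache(Disk& disk, std::size_t capacity, Policy policy)
        : disk_(disk), capacity_(capacity), policy_(policy) {}

    void write(std::size_t id, const std::string& data) {
        insert(id, data, /*dirty=*/policy_ == Policy::WriteBack);
        if (policy_ == Policy::WriteThrough) disk_.write(id, data);
    }

    std::string read(std::size_t id) {
        auto it = cache_.find(id);
        if (it != cache_.end()) return it->second.data;
        std::string data = disk_.read(id);
        insert(id, data, /*dirty=*/false);
        return data;
    }

    void sync() {                          // flush all dirty blocks (write-back policy)
        for (auto& [id, entry] : cache_)
            if (entry.dirty) { disk_.write(id, entry.data); entry.dirty = false; }
    }

private:
    struct Entry { std::string data; bool dirty; };

    void insert(std::size_t id, const std::string& data, bool dirty) {
        if (!cache_.count(id) && cache_.size() >= capacity_ && !cache_.empty())
            evict_one();
        cache_[id] = Entry{data, dirty};
    }
    void evict_one() {
        // For brevity evict an arbitrary block; a real cache would use LRU.
        auto victim = cache_.begin();
        if (victim->second.dirty) disk_.write(victim->first, victim->second.data);
        cache_.erase(victim);
    }

    Disk& disk_;
    std::size_t capacity_;
    Policy policy_;
    std::unordered_map<std::size_t, Entry> cache_;
};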