Programming Abstractions for Data Locality
The goal of the workshop and this report is to identify common themes and standardize concepts for locality-preserving abstractions for exascale programming models. Current software tools are built on the premise that computing is the most expensive component, but we are rapidly moving to an era in which computing is cheap and massively parallel while data movement dominates energy and performance costs. To prepare for exascale systems (the next generation of high-performance computing systems), the scientific computing community needs to refactor its applications to align with the emerging data-centric paradigm: applications must evolve to express information about data locality. Unfortunately, current programming environments offer few ways to do so; they ignore the cost of communication and simply rely on hardware cache coherency to virtualize data movement. With the increasing importance of task-level parallelism on future systems, task models have to support constructs that express data locality and affinity. At the system level, communication libraries implicitly assume that all processing elements are equidistant from each other. To take advantage of emerging technologies, application developers need a set of programming abstractions that describe data locality for the new computing ecosystem. The new programming paradigm should be more data-centric and allow developers to describe how to decompose data and how to lay it out in memory. Fortunately, many concepts for managing data locality are emerging, such as constructs for tiling, data layout, array views, task and thread affinity, and topology-aware communication libraries. There is an opportunity to identify commonalities among these strategies and combine the best of these concepts into a comprehensive approach to expressing and managing data locality in exascale programming systems. Such programming-model abstractions can expose crucial information about data locality to the compiler and runtime system, enabling performance-portable code. The research question is to identify the right level of abstraction for this goal, with candidate techniques ranging from template libraries all the way to completely new languages.
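To make the tiling construct named above concrete, the following is a minimal C++ sketch (an illustration under my own assumptions, not code from the report) contrasting a stride-heavy matrix transpose with a tiled one. The abstractions the report surveys aim to let the blocked version be expressed declaratively, so a compiler or runtime can pick the tile size for the target memory hierarchy.

    // Minimal sketch (illustrative, not from the report): expressing locality by
    // tiling. The naive transpose writes with stride N and defeats the cache;
    // the tiled version keeps each TILE x TILE block cache-resident.
    #include <cstddef>
    #include <vector>

    constexpr std::size_t N = 1024, TILE = 64;   // assumed sizes for illustration

    // Naive transpose: the write pattern strides by N elements.
    void transpose_naive(const std::vector<double>& a, std::vector<double>& b) {
        for (std::size_t i = 0; i < N; ++i)
            for (std::size_t j = 0; j < N; ++j)
                b[j * N + i] = a[i * N + j];
    }

    // Tiled transpose: iterate over TILE x TILE blocks so both the reads and
    // the writes stay within a small, cache-sized working set.
    void transpose_tiled(const std::vector<double>& a, std::vector<double>& b) {
        for (std::size_t ii = 0; ii < N; ii += TILE)
            for (std::size_t jj = 0; jj < N; jj += TILE)
                for (std::size_t i = ii; i < ii + TILE; ++i)
                    for (std::size_t j = jj; j < jj + TILE; ++j)
                        b[j * N + i] = a[i * N + j];
    }

A locality abstraction would let the programmer state only the access pattern and the blocking intent; the tile size and loop order would then be chosen per platform rather than hand-written as above.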
Programming models for speculative and optimistic parallelism based on algorithmic properties
Today's hardware is becoming more and more parallel. While embarrassingly parallel codes, such as high-performance computing ones, can readily take advantage of this increased number of cores, most other types of code cannot easily scale using traditional data and/or task parallelism; cores are therefore left idling, and opportunities to improve performance are lost. The opportunistic computing paradigm, on which this thesis rests, is the idea that computations should dynamically adapt to and exploit the opportunities that arise from idling resources to enhance their performance or quality.
In this thesis, I propose to utilize algorithmic properties to develop programming models that leverage this idea, thereby increasing and improving the parallelism that can be exploited. I exploit three distinct algorithmic properties: i) algorithmic diversity, ii) the semantic content of data structures, and iii) the variable nature of results in certain applications.
This thesis presents three main contributions: i) the N-way model, which leverages algorithmic diversity to speed up hitherto sequential code (see the sketch below), ii) an extension to the N-way model which opportunistically improves the quality of computations, and iii) a framework allowing the programmer to specify the semantics of data structures to improve the performance of optimistic parallelism.
PhD thesis. Committee Chair: Santosh Pande; Committee Members: Hyesoon Kim, Umakishore Ramachandran, Karsten Schwan, Sudhakar Yalamanchili.
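As a concrete illustration of how the N-way model exploits algorithmic diversity, here is a minimal C++ sketch (my own reading, not the thesis' implementation): several variants that compute the same answer are launched concurrently, and the first to finish wins.

    // Minimal sketch of the N-way idea (my reading, not the thesis'
    // implementation): run several algorithmic variants of the same
    // computation concurrently and keep whichever finishes first.
    #include <atomic>
    #include <functional>
    #include <future>
    #include <iostream>
    #include <thread>
    #include <vector>

    int solve_variant_a(int n) { return n * 2; }   // e.g. an iterative algorithm
    int solve_variant_b(int n) { return n + n; }   // e.g. a divide-and-conquer one

    int n_way(int input, const std::vector<std::function<int(int)>>& variants) {
        std::promise<int> first;          // receives the winning result
        std::future<int> result = first.get_future();
        std::atomic<bool> done{false};    // ensures exactly one variant "wins"
        std::vector<std::thread> workers;
        for (const auto& v : variants)
            workers.emplace_back([&, v] {
                int r = v(input);          // every variant computes the same answer
                if (!done.exchange(true))  // the first finisher publishes it
                    first.set_value(r);
            });
        int r = result.get();             // returns as soon as any variant is done
        for (auto& w : workers) w.join(); // a real runtime would cancel the losers
        return r;
    }

    int main() { std::cout << n_way(21, {solve_variant_a, solve_variant_b}) << "\n"; }

Note that this sketch lets the losing variants run to completion; an actual opportunistic runtime would reclaim those cores as soon as the winner commits.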
Statistically Analyzing Execution Variance for Soft Real-Time Applications
Certain high-performance applications like multimedia and gaming have performance requirements beyond reducing program execution time. These applications have repetitive components whose desired performance characteristics are more naturally expressed using soft real-time theory with its probabilistic guarantees. However, for large complex gaming and multimedia applications, programmers typically avoid real-time constructs as they significantly constrain how the programmer can express functionality. Instead, such applications are developed as monolithic programs using conventional languages like C/C++. Here the soft real-time behavior of the application becomes an emergent quality rather than being enforced by design. Programmers must then tweak parameters/algorithms until the application’s soft real-time behavior becomes acceptable. There are currently no analysis techniques that directly extract the soft real-time execution characteristics of monolithic applications written without the use of real-time constructs. We introduce a domain-agnostic profiling methodology that identifies program execution contexts whose variant behavior most significantly affects the soft real-time characteristics of the application.
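A minimal C++ sketch of the underlying idea (my own illustration; the paper's actual methodology is richer): time each execution context across repetitions of the soft real-time loop, then rank contexts by variance, since high-variance contexts are the ones most likely to cause sporadic deadline misses.

    // Minimal sketch (illustrative): collect per-context execution times and
    // rank contexts by variance to flag sources of soft real-time jitter.
    #include <chrono>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    std::map<std::string, std::vector<double>> profile;   // context -> timings (ms)

    // Wrap a region of interest; a real profiler would derive the context
    // from the call stack rather than from a hand-written label.
    template <class F>
    void timed(const std::string& context, F&& body) {
        auto t0 = std::chrono::steady_clock::now();
        body();
        auto t1 = std::chrono::steady_clock::now();
        profile[context].push_back(
            std::chrono::duration<double, std::milli>(t1 - t0).count());
    }

    double variance(const std::vector<double>& xs) {
        double mean = 0, var = 0;
        for (double x : xs) mean += x;
        mean /= xs.size();
        for (double x : xs) var += (x - mean) * (x - mean);
        return var / xs.size();
    }

    int main() {
        for (int frame = 0; frame < 100; ++frame)
            timed("physics_step", [] { /* per-frame work with varying cost */ });
        for (const auto& [ctx, ms] : profile)   // high variance flags the contexts
            std::cout << ctx << ": " << variance(ms) << " ms^2\n";
    }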
Runnemede: An Architecture for Ubiquitous High-Performance Computing
DARPA’s Ubiquitous High-Performance Computing (UHPC) program asked researchers to develop computing systems capable of achieving energy efficiencies of 50 GOPS/Watt, assuming 2018-era fabrication technologies. This paper describes Runnemede, the research architecture developed by the Intel-led UHPC team. Runnemede is being developed through a co-design process that considers the hardware, the runtime/OS, and applications simultaneously. Near-threshold voltage operation, fine-grained power and clock management, and separate execution units for runtime and application code are used to reduce energy consumption. Memory energy is minimized through application-managed on-chip memory and direct physical addressing. A hierarchical on-chip network reduces communication energy, and a codelet-based execution model supports extreme parallelism and fine-grained tasks. We present an initial evaluation of Runnemede that shows the design process for our on-chip network, demonstrates 2-4x improvements in memory energy from explicit control of on-chip memory, and illustrates the impact of hardware-software co-design on the energy consumption of a synthetic aperture radar algorithm on our architecture.
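As a rough illustration of the codelet idea (my own sketch, not Runnemede's runtime): a codelet is a small, run-to-completion task that becomes ready only once all of its data dependencies are satisfied, which lets a runtime schedule extreme numbers of fine-grained tasks without preemption.

    // Rough sketch of a codelet-style runtime (my reading of the execution
    // model, not Runnemede's implementation).
    #include <atomic>
    #include <functional>
    #include <iostream>
    #include <queue>

    struct Codelet {
        std::atomic<int> deps;        // outstanding input dependencies
        std::function<void()> body;   // runs to completion, never blocks
    };

    std::queue<Codelet*> ready;       // a real runtime would use per-core queues

    // Called when a producer satisfies one input of `c`.
    void signal(Codelet& c) {
        if (c.deps.fetch_sub(1) == 1) // the last dependency just arrived
            ready.push(&c);
    }

    int main() {
        Codelet consumer{{2}, [] { std::cout << "both inputs ready\n"; }};
        signal(consumer);             // producer 1 finished
        signal(consumer);             // producer 2 finished -> codelet becomes ready
        while (!ready.empty()) { ready.front()->body(); ready.pop(); }
    }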