Informed microarchitecture design space exploration using workload dynamics
Program runtime characteristics exhibit significant variation. As microprocessor architectures become more complex, their efficiency depends on the capability of adapting to workload dynamics. Moreover, with the approaching billion-transistor microprocessor era, it is not always economical or feasible to design processors with thermal cooling and reliability redundancy capabilities that target an application's worst-case scenario. Therefore, analyzing complex workload dynamics early, at the microarchitecture design stage, is crucial to forecast workload runtime behavior across architecture design alternatives and to evaluate the efficiency of workload-scenario-based architecture optimizations. Existing methods focus exclusively on predicting aggregated workload behavior. In this paper, we propose accurate and efficient techniques and models to reason about workload dynamics across the microarchitecture design space without using detailed cycle-level simulations. Our proposed techniques employ wavelet-based multiresolution decomposition and neural-network-based non-linear regression modeling. We extensively evaluate the efficiency of our predictive models in forecasting the performance-, power-, and reliability-domain workload dynamics that the SPEC CPU 2000 benchmarks manifest on high-performance microprocessors with a microarchitecture design space that consists of 9 key parameters. Our results show that the models achieve high accuracy in revealing workload dynamic behavior across a large microarchitecture design space. We also demonstrate that the proposed techniques can be used to efficiently explore workload-scenario-driven architecture optimizations.
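A minimal sketch of the general modeling flow the abstract describes, not the paper's implementation: wavelet multiresolution decomposition compresses a per-interval workload trace, and a neural-network regressor maps design parameters to the wavelet coefficients so dynamics can be forecast without cycle-level simulation. The design-space sampling, trace shape, and network size below are all illustrative assumptions (Python with PyWavelets and scikit-learn):

```python
import numpy as np
import pywt                                    # PyWavelets
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical design space: 64 sampled configurations of 9 normalized
# microarchitecture parameters (issue width, ROB size, cache sizes, ...).
configs = rng.uniform(0.0, 1.0, size=(64, 9))

# Stand-in for cycle-level simulation: each configuration yields a
# 128-interval trace (e.g., per-interval IPC) shaped by its parameters.
t = np.linspace(0.0, 4.0 * np.pi, 128)
traces = np.array([np.sin((1.0 + c[0]) * t) + 0.1 * c[1] for c in configs])

# Wavelet multiresolution decomposition of every trace; flatten the
# coefficients into fixed-length vectors to use as regression targets.
coeff_lists = [pywt.wavedec(tr, "db2", level=3) for tr in traces]
_, layout = pywt.coeffs_to_array(coeff_lists[0])
targets = np.array([pywt.coeffs_to_array(c)[0] for c in coeff_lists])

# Non-linear regression from design parameters to wavelet coefficients.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0).fit(configs, targets)

# Forecast the dynamics of an unseen configuration, then invert the
# transform to recover the predicted time-varying behavior.
unseen = rng.uniform(0.0, 1.0, size=(1, 9))
pred = pywt.array_to_coeffs(model.predict(unseen)[0], layout,
                            output_format="wavedec")
predicted_trace = pywt.waverec(pred, "db2")
```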
Cross-Layer Approaches for an Aging-Aware Design of Nanoscale Microprocessors
Thanks to aggressive scaling of transistor dimensions, computers have revolutionized our lives. However, the increasing unreliability of devices fabricated in nanoscale technologies has emerged as a major threat to the future success of computers. In particular, accelerated transistor aging is of great importance, as it reduces the lifetime of digital systems. This thesis addresses this challenge by proposing new methods to model, analyze, and mitigate aging at the microarchitecture level and above.
Instruction history management for high-performance microprocessors
History-driven dynamic optimization is an important factor in improving instruction throughput in future high-performance microprocessors. History-based techniques have the ability to improve instruction-level parallelism by breaking program dependencies, eliminating long-latency microarchitecture operations, and improving prioritization within the microarchitecture. However, a combination of factors, such as wider issue widths, smaller transistors, larger die area, and increasing clock frequency, has led to microprocessors that are sensitive to both wire delays and energy consumption. In this environment, the global structures and long-distance communications that characterize current history data management are limiting instruction throughput.

This dissertation proposes the ScatterFlow Framework for Instruction History Management. Execution history management tasks, such as history data storage, access, distribution, collection, and modification, are partitioned and dispersed throughout the instruction execution pipeline. History data packets are then associated with active instructions and flow with the instructions as they execute, encountering the history management tasks along the way. Between dynamic instances of the instructions, the history data packets reside in trace-based history storage that is synchronized with the instruction trace cache. Compared to traditional history data management, this ScatterFlow method improves instruction coverage, increases history data access bandwidth, shortens communication distances, improves history data accuracy in many cases, and decreases the effective history data access time.

A comparison of general history management effectiveness between the ScatterFlow Framework and traditional hardware tables shows that the ScatterFlow Framework provides superior history maturity and instruction coverage. The unique properties that arise due to trace-based history storage and partitioned history management are analyzed, and novel design enhancements are presented to increase the usefulness of instruction history data within the ScatterFlow Framework.

To demonstrate the potential of the proposed framework, specific dynamic optimization techniques are implemented using the ScatterFlow Framework. These illustrative examples combine the history capture advantages with the access latency improvements while exhibiting desirable dynamic energy consumption properties. Compared to a traditional table-based predictor, performing ScatterFlow value prediction improves execution time and reduces dynamic energy consumption. In other detailed examples, ScatterFlow-enabled cluster assignment demonstrates improved execution time over previous cluster assignment schemes, and ScatterFlow instruction-level profiling detects more useful execution traits than traditional fixed-size and infinite-size hardware tables.
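A toy illustration of the core ScatterFlow idea, not the dissertation's actual hardware design: history data travels with each instruction as a packet instead of being looked up in a global table, and between dynamic instances the packet parks in trace-synchronized storage. Every field name and stage below is an illustrative assumption:

```python
from dataclasses import dataclass, field

@dataclass
class HistoryPacket:
    last_value: int = 0     # e.g., last result, for value prediction
    confidence: int = 0     # saturating confidence counter (0..3)
    exec_count: int = 0     # profiling-style usage statistic

@dataclass
class Instruction:
    pc: int
    packet: HistoryPacket = field(default_factory=HistoryPacket)

# Trace-based history storage, conceptually synchronized with the
# instruction trace cache; packets persist between dynamic instances.
trace_history: dict[int, HistoryPacket] = {}

def fetch(pc: int) -> Instruction:
    # Attach the stored packet so it flows down the pipeline with
    # the instruction -- no long-distance table access later.
    return Instruction(pc, trace_history.get(pc, HistoryPacket()))

def execute(inst: Instruction, result: int) -> None:
    # Each stage updates the packet locally as the instruction passes.
    pkt = inst.packet
    pkt.confidence = (min(pkt.confidence + 1, 3) if result == pkt.last_value
                      else max(pkt.confidence - 1, 0))
    pkt.last_value = result
    pkt.exec_count += 1

def retire(inst: Instruction) -> None:
    # Write the packet back alongside the instruction's trace entry.
    trace_history[inst.pc] = inst.packet

# Four dynamic instances of one static instruction at PC 0x400.
for value in (7, 7, 7, 9):
    inst = fetch(0x400)
    execute(inst, value)
    retire(inst)
print(trace_history[0x400])   # confidence built up, then partly decayed
```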
Thermal/performance trade-off in network-on-chip architectures
Multi-core architectures are a promising paradigm to exploit the huge integration density reached by high-performance systems. However, integration density and technology scaling are causing undesirable operating temperatures, with a net impact of reduced reliability and increased cooling costs. Dynamic Thermal Management (DTM) approaches have been proposed in the literature to control the temperature profile at run-time, while design-time approaches generally provide floorplan-driven solutions to cope with temperature constraints. Nevertheless, a suitable approach to jointly collect performance, thermal, and reliability metrics has not been proposed yet. This work presents a novel methodology to jointly optimize the temperature/performance trade-off in reliable high-performance parallel architectures with security constraints, achieved by physical isolation of workloads on each core. The proposed methodology is based on a linear formal model relating temperature and duty cycle on one side, and performance and duty cycle on the other side. Extensive experimental results on real-world use-case scenarios show the soundness of the proposed model, which is suitable for design-time system-wide optimization used in conjunction with DTM techniques.
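A minimal sketch of how such a linear model can drive design-time optimization, assuming illustrative affine relations T(d) = T_amb + k_T·d for temperature and Perf(d) = k_P·d for throughput, with duty cycle d in [0, 1]; none of the constants below come from the paper:

```python
# Illustrative linear temperature/duty-cycle and performance/duty-cycle
# model; all coefficients are assumed placeholders, not the paper's data.
T_AMB = 45.0    # baseline core temperature at d = 0, deg C (assumed)
K_T   = 40.0    # temperature rise per unit duty cycle, deg C (assumed)
K_P   = 1.0e9   # throughput at full duty cycle, instr/s (assumed)
T_MAX = 75.0    # thermal constraint from the package, deg C (assumed)

# Design-time optimization: the largest duty cycle that respects the
# thermal constraint maximizes performance under the linear model.
d_opt = min(1.0, (T_MAX - T_AMB) / K_T)
temp  = T_AMB + K_T * d_opt
perf  = K_P * d_opt
print(f"duty cycle {d_opt:.2f} -> {temp:.1f} deg C, {perf:.2e} instr/s")
```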
Exceeding Conservative Limits: A Consolidated Analysis on Modern Hardware Margins
Modern large-scale computing systems (data centers, supercomputers, cloud and edge setups, and high-end cyber-physical systems) employ heterogeneous architectures that consist of multicore CPUs, general-purpose many-core GPUs, and programmable FPGAs. The effective utilization of these architectures poses several challenges, among which a primary one is power consumption. Voltage reduction is one of the most efficient methods to reduce the power consumption of a chip. With the rapid adoption of hardware accelerators (i.e., GPUs and FPGAs) in large data centers and other large-scale computing infrastructures, a comprehensive evaluation of the safe voltage-reduction level of each individual chip enables efficient reduction of the total power. We present a survey of recent studies on voltage-margin reduction at the system level for modern CPUs, GPUs, and FPGAs. The pessimistic voltage guardbands inserted by the silicon vendors can be exploited in all devices for significant power savings. On average, voltage reduction can reach 12% in multicore CPUs, 20% in manycore GPUs, and 39% in FPGAs.

Comment: Accepted for publication in IEEE Transactions on Device and Materials Reliability
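A back-of-the-envelope sketch of why those margins matter, assuming the textbook dynamic-power relation P_dyn ≈ C·V²·f with switched capacitance and frequency held constant; only the survey's average margin figures come from the source, the rest is illustrative:

```python
# Translate the survey's average voltage-margin reductions into rough
# dynamic-power savings, assuming P_dyn ~ C * V^2 * f with C and f fixed.
for device, margin in (("multicore CPU", 0.12),
                       ("manycore GPU", 0.20),
                       ("FPGA", 0.39)):
    v_ratio = 1.0 - margin          # V_new / V_nominal
    p_ratio = v_ratio ** 2          # quadratic dependence on voltage
    print(f"{device}: {margin:.0%} lower voltage -> "
          f"~{1.0 - p_ratio:.0%} lower dynamic power")
```

Run as written, this prints roughly 23%, 36%, and 63% dynamic-power reductions, which is why even modest guardband exploitation pays off.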