10 research outputs found
Chapter One – An Overview of Architecture-Level Power- and Energy-Efficient Design Techniques
Power dissipation and energy consumption have become the primary design constraints for almost all computer systems over the last 15 years. Computer architects and circuit designers alike strive to reduce power and energy (without degrading performance) at all design levels, as power is currently the main obstacle to further scaling in line with Moore's law. The aim of this survey is to provide a comprehensive overview of state-of-the-art power- and energy-efficient techniques. We classify techniques by the component to which they apply, which is the most natural classification from a designer's point of view. We further divide the techniques by the component of power/energy they optimize (static or dynamic), thereby covering the complete low-power design flow at the architectural level. We conclude that only a holistic approach, combining optimizations at all design levels, can lead to significant savings.
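For orientation, the static/dynamic split used in this classification corresponds to the standard CMOS power decomposition (a textbook identity added here for context, not a formula taken from the survey itself):

```latex
P_{\text{total}} \;=\;
\underbrace{\alpha\, C\, V_{dd}^{2}\, f}_{P_{\text{dynamic}}}
\;+\;
\underbrace{V_{dd}\, I_{\text{leak}}}_{P_{\text{static}}}
```

where \(\alpha\) is the switching activity factor, \(C\) the switched capacitance, \(V_{dd}\) the supply voltage, \(f\) the clock frequency, and \(I_{\text{leak}}\) the leakage current; dynamic techniques attack the first term, static techniques the second.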
Bit Based Approximation for Approx-NoC: A Data Approximation Framework for Network-On-Chip Architectures
The dawn of the big data era has led to the inception of new and creative compute paradigms that exploit heterogeneity, specialization, processing-in-memory, and approximation to meet the high demand for memory bandwidth and power. Relaxing application constraints has made approximate computing a feasible approach to high-performance computation. Emerging workloads such as machine learning, video/image processing, data analytics, and neural networks heighten the appeal of approximate computing, as these data-intensive applications tolerate imprecise outputs within a specific error range.
This work presents Bit-Based Approx-NoC, a hardware data approximation framework with a lightweight bit-based approximation technique for high-performance NoCs. Bit-Based Approx-NoC facilitates approximate matching of data patterns, within a controllable error range, to compress them, thereby reducing data movement across the chip. The proposed work exploits the entropy between data words to increase their inherent compressibility. Evaluations show, on average, a 5% latency reduction and a 14% throughput improvement compared to state-of-the-art NoC compression mechanisms.
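To make the bit-based matching idea concrete, here is a minimal sketch in Python; the mask width, table contents, and relative-error bound are illustrative assumptions, not parameters from the paper:

```python
# Sketch: approximate dictionary matching by ignoring low-order bits.
# A word whose high-order bits match a table entry is encoded as that
# entry's index, trading a bounded value error for compression.

MASK_BITS = 8                                   # assumed: low bits ignored
MASK = ~((1 << MASK_BITS) - 1) & 0xFFFFFFFF     # keep the high 24 bits

def approx_match(word, table, max_rel_error=0.05):
    """Index of an approximately matching table entry, or None."""
    for idx, entry in enumerate(table):
        if (word & MASK) == (entry & MASK):
            # Keep the introduced value error within the allowed range.
            if abs(word - entry) / max(abs(entry), 1) <= max_rel_error:
                return idx
    return None

def compress(words, table):
    """Encode each word as ('ref', idx) on an approximate hit,
    otherwise keep it verbatim as ('raw', word)."""
    return [('ref', i) if (i := approx_match(w, table)) is not None
            else ('raw', w) for w in words]

# Frequent data patterns (hypothetical); nearby values compress to an index.
table = [0x3F800000, 0x00000000]
print(compress([0x3F800010, 0x12345678], table))
# -> [('ref', 0), ('raw', 305419896)]
```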
Dependable Computing on Inexact Hardware through Anomaly Detection.
The reliability of transistors is declining as they continue to shrink in size, and aggressive voltage scaling makes the problem even worse. Scaled-down transistors are more susceptible to transient faults as well as permanent in-field hardware failures. To continue to reap the benefits of technology scaling, it has become imperative to tackle the challenges arising from the decreasing reliability of devices in the mainstream commodity market. Alongside worsening reliability, scaling delivers diminishing marginal returns in energy efficiency and performance. More than at any other time in its history, the semiconductor industry faces the twin challenges of unreliability and the need to improve energy efficiency.
These challenges of technology scaling can be tackled by dividing target applications into two categories: traditional applications, which have relatively strict correctness requirements on their outputs, and an emerging class of soft applications, from domains such as multimedia, machine learning, and computer vision, that are inherently tolerant of some inaccuracy. Traditional applications can be protected against hardware failures by low-cost detection and protection methods, while soft applications can trade output quality for better performance or energy efficiency.
For traditional applications, I propose an efficient, software-only application analysis and transformation solution to detect data-flow and control-flow transient faults. The intelligence of the data-flow solution lies in its use of dynamic application information such as control flow, memory, and value profiling. The control-flow protection technique achieves its efficiency by simplifying signature calculations in each basic block and by performing checks at a coarse grain. For soft applications, I develop a quality control technique that employs continuous, lightweight checkers to ensure that the approximation is controlled and the application output is acceptable. Overall, I show that the use of low-cost checkers to produce dependable results on commodity systems, constructed from inexact hardware components, is efficient and practical.
PhD thesis, Computer Science and Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/113341/1/dskhudia_1.pd
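The control-flow part can be pictured with a small sketch of classic signature checking (a simplified, single-predecessor variant in the spirit of CFCSS; the encoding is illustrative, not the thesis's actual scheme):

```python
# Sketch: each basic block i gets a compile-time signature s[i] and a
# difference d[i] = s[i] ^ s[pred(i)]. A runtime register G is XOR-updated
# on every block entry; an illegal jump makes G disagree with s[i].

s = {'A': 0b0001, 'B': 0b0010, 'C': 0b0100}    # compile-time signatures
pred = {'B': 'A', 'C': 'B'}                    # legal predecessor per block
d = {b: s[b] ^ s[p] for b, p in pred.items()}  # signature differences

def run(path):
    G = s[path[0]]                  # runtime signature register
    for block in path[1:]:
        G ^= d[block]
        if G != s[block]:           # coarse-grain check point
            raise RuntimeError(f"control-flow fault entering {block}")
    return "ok"

print(run(['A', 'B', 'C']))         # legal path -> "ok"
print(run(['A', 'C']))              # illegal jump A->C -> fault detected
```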
Application Centric Networks-On-Chip Design Solutions for Future Multicore Systems
With advances in technology, future multicore systems scaled to hundreds or thousands of cores and accelerators are being touted as an effective way to extract large performance gains through parallel programming. However, with the end of Dennard scaling, not all components on a chip can run simultaneously without violating power and thermal constraints, leading to strict chip power envelopes. The growing number of on-chip components has also driven the adoption of Network-on-Chip (NoC) interconnect designs such as the 2D mesh. The contribution of the NoC to total on-chip power and overall performance has been increasing steadily, making high-performance, power-efficient NoC designs crucial.
Future multicore paradigms can be broadly classified, based on the applications they are tailored to, into traditional chip multiprocessor (CMP) systems, characterized by low core and NoC utilization, and emerging big-data systems, characterized by large amounts of data movement and hence high throughput requirements. To this end, we propose NoC design solutions for power savings in future CMPs tailored to traditional applications and for higher effective throughput in multicore systems tailored to bandwidth-intensive applications. First, we propose Fly-over, a lightweight distributed mechanism for power-gating routers attached to switched-off cores, reducing NoC power consumption in low-load CMP environments. Second, we utilize a promising next-generation memory technology, Spin-Transfer Torque Magnetic RAM (STT-MRAM), to achieve higher NoC performance for the throughput demands of emerging bandwidth-intensive applications while simultaneously reducing power consumption. Third, we present a hardware data approximation framework for NoCs, APPROX-NoC, with an online data error control mechanism, which leverages the approximate computing paradigm in emerging data-intensive big-data applications to attain higher performance per watt.
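The gating decision behind the first proposal can be pictured roughly as follows; the threshold, window interface, and wake-up rule are assumptions for illustration, and the actual Fly-over mechanism additionally provides bypass paths so traffic can fly over gated routers:

```python
# Sketch: gate a router when its local core is off and recent traffic
# through it is low; wake it when traffic returns. The threshold value
# and the bypass hand-off to neighbors are illustrative assumptions.

IDLE_THRESHOLD = 4       # flits/window below which gating pays off (assumed)

class RouterPM:
    def __init__(self, core_on):
        self.core_on = core_on
        self.gated = False
        self.flits_this_window = 0

    def end_of_window(self):
        low_traffic = self.flits_this_window < IDLE_THRESHOLD
        if not self.core_on and low_traffic and not self.gated:
            self.gated = True        # power-gate; neighbors use bypass links
        elif self.gated and not low_traffic:
            self.gated = False       # wake up on renewed demand
        self.flits_this_window = 0

r = RouterPM(core_on=False)
r.flits_this_window = 1
r.end_of_window()
print(r.gated)                       # True: core is off and traffic is low
```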
Dynamic Orchestration of Massively Data Parallel Execution.
Graphics processing units (GPUs) are specialized hardware accelerators capable of rendering graphics much faster than conventional general-purpose processors. They are widely used in personal computers, tablets, mobile phones, and game consoles. Modern GPUs are not only efficient at manipulating computer graphics but are also more effective than CPUs for algorithms where large blocks of data can be processed in parallel, mainly due to their highly parallel architecture.
While GPUs provide low-cost and efficient platforms for accelerating massively parallel applications, tedious performance tuning is required to maximize application execution efficiency. Achieving high performance requires programmers to manually manage the amount of on-chip memory used per thread, the total number of threads per multiprocessor, the pattern of off-chip memory accesses, and so on.
In addition to a complex programming model, there is a lack of performance portability across systems with different runtime properties. Programmers usually make assumptions about runtime properties when they write code and optimize it based on those assumptions. However, if any of these properties changes during execution, the optimized code performs poorly. To alleviate these limitations, several implementations of the application are needed to maximize performance under different runtime properties. However, it is not practical for the programmer to write several versions of the same code, each optimized for an individual runtime condition.
In this thesis, we propose a static and dynamic compiler framework that takes the burden of fine-tuning different implementations of the same code off the programmer. This framework enables the programmer to write the program once, letting a static compiler generate different versions of a data-parallel application with several tuning parameters. The runtime system then selects the best version and fine-tunes its parameters based on runtime properties such as device configuration, input size, dependency, and data values.
PhD thesis, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/108805/1/mehrzads_1.pd
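The select-and-tune loop can be pictured with a small sketch; the variant table, the single input-size feature, and the timing-based selection are hypothetical illustrations of multi-versioning, not the thesis's actual interface:

```python
import time

# Sketch: the static compiler emits several variants of one kernel; the
# runtime picks, and then caches, the fastest variant for the observed
# runtime properties, reduced here to a single input-size feature.

def kernel_low_parallelism(data):      # variant tuned for small inputs
    return [x * x for x in data]

def kernel_high_parallelism(data):     # variant tuned for large inputs
    return list(map(lambda x: x * x, data))

VERSIONS = [kernel_low_parallelism, kernel_high_parallelism]
_best = {}                             # feature bucket -> chosen variant

def run_kernel(data):
    bucket = 'small' if len(data) < 1024 else 'large'   # runtime property
    if bucket not in _best:            # one-time online selection pass
        timings = []
        for k in VERSIONS:
            t0 = time.perf_counter()
            k(data)
            timings.append((time.perf_counter() - t0, k))
        _best[bucket] = min(timings, key=lambda t: t[0])[1]
    return _best[bucket](data)

print(run_kernel(list(range(10)))[:3])  # [0, 1, 4]; variant now cached
```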
Efficient runtime placement management for high performance and reliability in COTS FPGAs
Designing high-performance, fault-tolerant multisensory electronic systems for hostile environments such as nuclear plants and outer space, within the constraints of cost, power, and flexibility, is challenging. Issues such as ionizing radiation, extreme temperature, and ageing can lead to faults in the electronics of these systems. In addition, the remote nature of these environments demands a level of flexibility and autonomy in their operation. The standard practice of using specially hardened electronic devices for such systems is not only very expensive but also offers limited flexibility.
This thesis proposes novel techniques that promote the use of Commercial Off-The-Shelf (COTS) reconfigurable devices to meet the challenges of high-performance systems for hostile environments. Reconfigurable hardware such as Field Programmable Gate Arrays (FPGAs) offers a unique combination of flexibility and high performance. The flexibility offered through features such as dynamic partial reconfiguration (DPR) can be harnessed not only to achieve cost-effective designs, as a smaller area can be used to execute multiple tasks, but also to improve the reliability of a system, as a circuit on one portion of the device can be physically relocated to another portion when a fault occurs. However, harnessing this potential for high performance and reliability in a cost-effective manner requires novel runtime management tools. Most runtime support tools for reconfigurable devices are based on idealized models that do not adequately consider the limitations of realistic FPGAs, in particular modern FPGAs, which are increasingly heterogeneous. Specifically, these tools lack efficient mechanisms for reliably ensuring high utilization of FPGA resources, including the FPGA area, the configuration port, and clocking resources.
To ensure high utilization of the reconfigurable device area, placement management is a key aspect of these tools. This thesis presents novel techniques for managing hardware task placement on COTS reconfigurable devices for high performance and reliability. To this end, it addresses design-time issues that affect efficient hardware task placement, with a focus on reliability. It also presents techniques to maximize the utilization of the FPGA area at runtime, including techniques to minimize fragmentation, which creates unusable areas due to the dynamic placement of tasks and the heterogeneity of on-chip resources.
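As a toy illustration of the placement bookkeeping involved, the sketch below places tasks first-fit along a one-dimensional strip of columns and reports a simple fragmentation metric; real FPGA placement must additionally respect heterogeneous resource columns and two-dimensional regions:

```python
# Sketch: first-fit placement of tasks along a 1-D strip of columns plus a
# simple fragmentation metric: the share of free columns lying outside the
# largest contiguous free gap. Sizes and the 1-D model are illustrative.

WIDTH = 32
occupied = [False] * WIDTH

def place(width):
    """First-fit: mark a free gap of `width` columns, return its start."""
    run = 0
    for col in range(WIDTH):
        run = run + 1 if not occupied[col] else 0
        if run == width:
            start = col - width + 1
            for c in range(start, col + 1):
                occupied[c] = True
            return start
    return None                        # no gap is large enough

def fragmentation():
    free = occupied.count(False)
    if free == 0:
        return 0.0
    largest = run = 0
    for busy in occupied:
        run = run + 1 if not busy else 0
        largest = max(largest, run)
    return 1.0 - largest / free

a, b, c = place(10), place(6), place(10)
for col in range(b, b + 6):            # the middle task finishes; free it
    occupied[col] = False
print(fragmentation())                 # 0.5: free area split into two gaps
```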
Moreover, this thesis presents an efficient task reuse mechanism to improve the availability of the FPGA's internal configuration infrastructure for critical responsibilities such as error mitigation. Unlike previous approaches, the task reuse scheme also improves the utilization of the chip area by offering defragmentation.
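The reuse idea can be pictured as a small cache check in front of the configuration port; the region/task naming, the idle-tracking simplification, and the cost constant below are assumptions, not the thesis's interface:

```python
# Sketch: skip reconfiguration when a region already holds the requested
# task, keeping the configuration port free for error mitigation traffic.

regions = {'R0': None, 'R1': None}    # region -> task currently configured
CONFIG_COST_MS = 12                   # assumed per-configuration port cost

def schedule(task):
    # 1) Reuse hit: a region already configured with this task
    #    (assumed idle in this simplified sketch).
    for region, held in regions.items():
        if held == task:
            return region, 0          # no configuration port traffic at all
    # 2) Miss: configure the task into any empty region.
    for region, held in regions.items():
        if held is None:
            regions[region] = task
            return region, CONFIG_COST_MS
    raise RuntimeError("no free region; eviction is omitted in this sketch")

print(schedule('fft'))                # ('R0', 12): first use pays the cost
print(schedule('fft'))                # ('R0', 0): reuse, port stays free
```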
Task relocation, which involves changing the physical location of circuits, is a technique for error mitigation and high performance. Hence, this thesis also provides a functionality-based relocation mechanism that increases the number of locations to which tasks can be relocated on heterogeneous FPGAs. As tasks are relocated, clock networks need to be routed to them; a reliability-aware technique for routing clock networks to tasks after placement is therefore also proposed.
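Relocation candidates on a heterogeneous fabric can be pictured as windows whose column resource sequences match the task's footprint; the sketch below uses exact matching over an invented column map, a simplification of the functionality-based matching the thesis proposes:

```python
# Sketch: a task compiled for a window of columns can be relocated to any
# window whose resource sequence matches the original. The column map
# below is an illustrative layout, not a real device description.

FABRIC = ['CLB', 'CLB', 'BRAM', 'CLB', 'DSP',
          'CLB', 'CLB', 'BRAM', 'CLB', 'DSP']   # per-column resource types

def compatible_sites(task_footprint):
    """Return every start column where the fabric matches the footprint."""
    n = len(task_footprint)
    return [i for i in range(len(FABRIC) - n + 1)
            if FABRIC[i:i + n] == task_footprint]

footprint = ['CLB', 'BRAM', 'CLB']    # resources the task's window needs
print(compatible_sites(footprint))    # [1, 6]: two candidate locations
```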
Finally, this thesis offers a prototype implementation and characterization of a placement management system (PMS) that integrates the aforementioned techniques. The performance of most of the proposed techniques is evaluated using data processing tasks from a NASA JPL spectrometer application. The results show that the proposed techniques have the potential to improve the reliability and performance of applications in hostile environments compared to state-of-the-art techniques. The task optimization technique presented leads to a better capacity to circumvent permanent faults on COTS FPGAs than state-of-the-art approaches (48.6% more errors were circumvented for the JPL spectrometer application). The proposed task reuse scheme saves approximately 29% of the configuration time, freeing the internal configuration interface for more error mitigation operations. In addition, the proposed PMS has a worst-case latency of less than 50% of that of state-of-the-art runtime placement systems, while maintaining the same level of placement quality and resource overhead.