Automatic Design of Efficient Application-centric Architectures.
As the market for embedded devices continues to grow, the demand for high
performance, low cost, and low power computation grows as well. Many embedded
applications perform computationally intensive tasks such as processing streaming
video or audio, wireless communication, or speech recognition and must be
implemented within tight power budgets. Typically, general-purpose
processors are not able to meet these performance and power requirements.
Custom hardware in the form of loop accelerators is often used to execute the
compute-intensive portions of these applications because they can achieve significantly
higher levels of performance and power efficiency.
Automated hardware synthesis from high-level specifications is a key technology
used in designing these accelerators, because the resulting hardware is correct by
construction, easing verification and greatly decreasing time-to-market in the quickly
evolving embedded domain. In this dissertation, a compiler-directed approach is used
to design a loop accelerator from a C specification and a throughput requirement. The
compiler analyzes the loop and generates a virtual architecture containing sufficient
resources to sustain the required throughput. Next, a software pipelining scheduler
maps the operations in the loop to the virtual architecture. Finally, the accelerator
datapath is derived from the resulting schedule.
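To make this flow concrete, the following minimal Python sketch (invented for illustration; not code from the dissertation) performs the first step: it sizes the virtual architecture by allocating, for each operation type, just enough functional units to issue all of that type's operations within the required initiation interval (II), the number of cycles between the starts of successive loop iterations.

```python
import math

def virtual_architecture(op_counts, initiation_interval):
    """Allocate, per operation type, the minimum number of functional
    units needed to issue all instances within one initiation interval.
    op_counts: hypothetical dict mapping op type to its count in the loop."""
    return {op: math.ceil(n / initiation_interval)
            for op, n in op_counts.items()}

# Hypothetical loop body: 8 adds, 4 multiplies, 2 loads per iteration,
# with a throughput target of one iteration every 2 cycles (II = 2).
print(virtual_architecture({'add': 8, 'mul': 4, 'load': 2}, 2))
# -> {'add': 4, 'mul': 2, 'load': 1}
```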
In this dissertation, synthesis of different types of loop accelerators is investigated.
First, the system for synthesizing single loop accelerators is detailed. In particular, a
scheduler is presented that is aware of the effects of its decisions on the resulting hardware,
and attempts to minimize hardware cost. Second, synthesis of multifunction
loop accelerators, or accelerators capable of executing multiple loops, is presented.
Such accelerators exploit coarse-grained hardware sharing across loops in order to reduce
overall cost. Finally, synthesis of post-programmable accelerators is presented,
allowing changes to be made to the software after an accelerator has been created.
The tradeoffs between the flexibility, cost, and energy efficiency of these different
types of accelerators are investigated. Automatically synthesized loop accelerators
are capable of achieving order-of-magnitude gains in performance, area efficiency,
and power efficiency over processors, and programmable accelerators allow software
changes while maintaining highly efficient levels of computation.
Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/61644/1/fank_1.pd
SPICE²: A Spatial, Parallel Architecture for Accelerating the Spice Circuit Simulator
Spatial processing of sparse, irregular floating-point computation using a single FPGA enables up to an order of magnitude speedup (mean 2.8X) over a conventional microprocessor for the SPICE circuit simulator. We deliver this speedup using a hybrid parallel architecture that spatially implements the heterogeneous forms of parallelism available in SPICE. We decompose SPICE into its three constituent phases: Model-Evaluation, Sparse Matrix-Solve, and Iteration Control, and parallelize each phase independently. We exploit data-parallel device evaluations in the Model-Evaluation phase and sparse dataflow parallelism in the Sparse Matrix-Solve phase, and compose the complete design in streaming fashion. We name our parallel architecture SPICE²: Spatial Processors Interconnected for Concurrent Execution. We program the parallel architecture with a high-level, domain-specific framework that identifies, exposes, and exploits the parallelism available in the SPICE circuit simulator. The design is optimized with an auto-tuner that can scale it to use larger FPGA capacities without expert intervention and can even target other parallel architectures with the assistance of automated code generation. This FPGA architecture is able to outperform conventional processors due to a combination of factors: high utilization of statically scheduled resources, low-overhead dataflow scheduling of fine-grained tasks, and overlapped processing of the control algorithms.
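As a rough, self-contained illustration of this three-phase structure (an invented toy example, not code from the thesis; all component values are made up), the Python sketch below solves the operating point of a one-node resistor-diode circuit by Newton iteration, with each SPICE phase marked:

```python
import math

# Toy circuit: voltage source VS through resistor R into a diode to ground.
VS, R = 5.0, 1e3            # source voltage (V) and series resistance (ohms)
IS, VT = 1e-14, 0.02585     # diode saturation current and thermal voltage

def model_evaluation(v):
    """Phase 1: evaluate the nonlinear device model (data-parallel across
    devices in SPICE). Returns the diode's linearized conductance g and
    current i at bias v."""
    i = IS * (math.exp(v / VT) - 1.0)
    g = IS / VT * math.exp(v / VT)
    return g, i

def matrix_solve(g, i, v):
    """Phase 2: solve the linearized system (a 1x1 'matrix' here; a sparse
    LU factorization in real SPICE). KCL at the node:
    (VS - v_new)/R = i + g*(v_new - v)."""
    return (VS / R + g * v - i) / (g + 1.0 / R)

v = 0.6                      # initial guess for the diode voltage
for it in range(100):
    g, i = model_evaluation(v)
    v_new = matrix_solve(g, i, v)
    # Phase 3: Iteration Control: the convergence check steers the loop.
    if abs(v_new - v) < 1e-9:
        break
    v = v_new
print(f"converged to diode voltage {v:.4f} V after {it + 1} iterations")
```

In SPICE proper, Model-Evaluation runs independently for every device (hence data-parallel), Matrix-Solve is a sparse solve over the full circuit matrix, and Iteration Control is the convergence logic that drives the outer loop.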
We demonstrate that we can independently accelerate Model-Evaluation by a mean factor of 6.5X (1.4–23X) across a range of non-linear device models and Matrix-Solve by 2.4X (0.6–13X) across various benchmark matrices, while delivering a mean combined speedup of 2.8X (0.2–11X) for the two together when comparing a Xilinx Virtex-6 LX760 (40nm) with an Intel Core i7 965 (45nm). With our high-level framework, we can also accelerate Single-Precision Model-Evaluation on NVIDIA GPUs, ATI GPUs, IBM Cell, and Sun Niagara 2 architectures.
We expect approaches based on exploiting spatial parallelism to become important as frequency scaling slows down and modern processing architectures turn to parallelism (e.g., multi-core, GPUs) due to constraints of power consumption. This thesis shows how to express, exploit, and optimize spatial parallelism for an important class of problems that are challenging to parallelize.
Cooperative Data and Computation Partitioning for Decentralized Architectures.
Scalability of future wide-issue processor designs is severely hampered by the
use of centralized resources such as register files, memories and interconnect
networks. While centralized resources ease both hardware design and
compiler code generation, they can become performance bottlenecks as
access latencies increase with larger designs. The natural solution to this
problem is to adapt the architecture to use smaller, decentralized resources.
Decentralized architectures use smaller, faster components and exploit
distributed instruction-level parallelism across the resources. A multicluster
architecture is an example of such a decentralized processor, where subsets of
smaller register files, functional units, and memories are grouped together in a
tightly coupled unit, forming a cluster. These clusters can then be replicated
and connected together to form a scalable, high-performance architecture.
The main difficulty with decentralized architectures resides in compiler code
generation. In a centralized Very Long Instruction Word (VLIW) processor, the
compiler must statically schedule each operation to both a functional unit and a
time slot for execution. In contrast, for a decentralized multicluster VLIW,
the compiler must consider the additional effects of cluster assignment,
recognizing that communication between clusters will result in a delay penalty.
In addition, if the multicluster processor also has partitioned data memories,
the compiler has the additional task of assigning data objects to their
respective memories. The decisions of cluster, functional unit, memory, and
time slot are highly interrelated, and each can have dramatic effects on the
best choice for every other.
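The toy cost model below (invented here, not the dissertation's algorithm) illustrates this interdependence: an operation's earliest start time depends on which cluster produced each of its operands, since moving an operand between clusters costs extra cycles, so the best cluster choice shifts with every prior placement.

```python
# Toy model of cluster assignment in a multicluster VLIW.
XFER_DELAY = 1   # assumed inter-cluster communication penalty, in cycles

def earliest_start(preds, cluster, finish_time, placement):
    """preds: data-flow predecessors of the operation being placed.
    finish_time[p]: cycle when p's result is ready on its own cluster.
    placement[p]: cluster that computes p."""
    start = 0
    for p in preds:
        ready = finish_time[p]
        if placement[p] != cluster:
            ready += XFER_DELAY          # operand must cross clusters
        start = max(start, ready)
    return start

# Producers 'a' and 'b' sit on different clusters, so the best cluster for
# a consumer of both depends on which operand is cheaper to move.
finish, place = {'a': 2, 'b': 3}, {'a': 0, 'b': 1}
for c in (0, 1):
    print(f"cluster {c}: earliest start = "
          f"{earliest_start(['a', 'b'], c, finish, place)}")
# cluster 0 -> 4 (waits for 'b' to cross); cluster 1 -> 3 ('a' crosses earlier)
```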
This dissertation addresses the issues of extracting and exploiting inherent
parallelism across decentralized resources through compiler analysis and code
generation techniques. First, a static analysis technique to partition data
objects is presented, which maps data objects to decentralized scratchpad
memories. Second, an alternative profile-guided technique for memory
partitioning is presented which can effectively map data access operations onto
distributed caches. Finally, a detailed, resource-aware partitioning algorithm
is presented which can effectively split computation operations of an
application across a set of processing elements. These partitioners work in
tandem to create a high-performance partition assignment of both memory and
computation operations for decentralized multicluster or multicore processors.
Ph.D. Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57649/2/mchu_1.pd
Efficient design-space exploration of custom instruction-set extensions
Customization of processors with instruction set extensions (ISEs) is a technique
that improves performance through parallelization with a reasonable area overhead,
in exchange for additional design effort. This thesis presents a collection of
novel techniques that reduce the design effort and cost of generating ISEs by advancing
automation and reconfigurability. In addition, these techniques maximize
the performance gained as a function of the additional committed resources.
Including ISEs into a processor design implies development at many levels.
Most prior works on ISEs solve separate stages of the design: identification,
selection, and implementation. However, the interactions between these stages
also hold important design trade-offs. In particular, this thesis addresses the lack
of interaction between the hardware implementation stage and the two previous
stages. Interaction with the implementation stage has been mostly limited to
accurately measuring the area and timing requirements of the implementation
of each ISE candidate as a separate hardware module. However, the need to
independently generate a hardware datapath for each ISE limits the flexibility
of the design and the performance gains. Hence, resource sharing is essential in
order to create a customized unit with multi-function capabilities.
Previously proposed resource-sharing techniques aggressively share resources
amongst the ISEs, thus minimizing the area of the solution at any cost. However,
it is shown that aggressively sharing resources leads to large ISE datapath latency.
Thus, this thesis presents an original heuristic that can be parameterized
in order to control the degree of resource sharing amongst a given set of ISEs,
thereby permitting the exploration of the existing implementation trade-offs between
instruction latency and area savings. In addition, this thesis introduces an
innovative predictive model that is able to quickly expose the optimal trade-offs of this design space. Compared to an exhaustive exploration of the design space,
the predictive model is shown to reduce by two orders of magnitude the number
of executions of the resource-sharing algorithm that are required in order to find
the optimal trade-offs.
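The sketch below (a simplified model invented here, not the thesis's heuristic; all names and numbers are made up) shows the flavor of such a parameterized exploration: identical operators from two ISE datapaths are merged, largest first, only while the estimated latency growth from the steering multiplexers stays under a tunable threshold, so sweeping the threshold traces out the area/latency trade-off curve.

```python
def share_resources(isa_ops, isb_ops, op_area, mux_delay, max_latency_growth):
    """isa_ops, isb_ops: dicts giving operator counts in two ISE datapaths.
    Merge common operators until the latency budget is spent; returns the
    (area saved, latency added) point reached for this budget."""
    saved_area, added_latency = 0.0, 0.0
    # Merge the largest operators first: best area return per merge.
    for op in sorted(set(isa_ops) & set(isb_ops),
                     key=lambda o: op_area[o], reverse=True):
        for _ in range(min(isa_ops[op], isb_ops[op])):
            if added_latency + mux_delay > max_latency_growth:
                return saved_area, added_latency
            saved_area += op_area[op]      # one operator serves both ISEs
            added_latency += mux_delay     # steering muxes slow the path

    return saved_area, added_latency

# Hypothetical operator areas; sweeping the budget exposes the trade-off.
area = {'mul': 100.0, 'add': 10.0}
for budget in (0.0, 0.5, 2.0):
    print(budget, share_resources({'mul': 2, 'add': 4},
                                  {'mul': 1, 'add': 2}, area, 0.5, budget))
# budget 0.0 -> no sharing; 0.5 -> one multiplier shared; 2.0 -> all shared
```

Sweeping `max_latency_growth` from zero upward reproduces, in miniature, the exploration that the predictive model is designed to shortcut.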
This thesis presents the first technique to combine the design
spaces of ISE selection and resource sharing in ISE datapath synthesis, in order
to offer the designer solutions that achieve maximum speedup and maximum
resource utilization using the available area. Optimal trade-offs in the design
space are found by guiding the selection process to favour ISE combinations that
are likely to share resources with low speedup losses. Experimental results show
that this combined approach unveils new trade-offs between speedup and area
that are not identified by previous selection techniques; speedups of up to 238%
over previous selection techniques were obtained.
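A hypothetical sketch of such sharing-aware selection (names and numbers invented; this is not the thesis's algorithm) is shown below: candidates are ranked by speedup per unit of effective area, where the effective area already discounts resources the candidate can share with the ISEs selected so far.

```python
def select_ises(candidates, shared_area, budget):
    """candidates: list of (name, speedup_in_cycles, standalone_area).
    shared_area(name, chosen): area the candidate shares with the chosen set.
    Greedy knapsack-style selection under an area budget."""
    chosen, used = [], 0.0
    while True:
        best, best_ratio, best_area = None, 0.0, 0.0
        for name, gain, area in candidates:
            if name in chosen:
                continue
            eff = area - shared_area(name, chosen)   # area actually added
            if used + eff <= budget and gain / max(eff, 1e-9) > best_ratio:
                best, best_ratio, best_area = name, gain / max(eff, 1e-9), eff
        if best is None:
            return chosen
        chosen.append(best)
        used += best_area

# ise_b shares 30 area units with ise_a, so it fits the budget only once
# ise_a is chosen: a combination that sharing-blind selection would miss.
cands = [('ise_a', 40, 50.0), ('ise_b', 35, 60.0), ('ise_c', 10, 10.0)]
overlap = lambda name, chosen: 30.0 if (name == 'ise_b' and 'ise_a' in chosen) else 0.0
print(select_ises(cands, overlap, budget=90.0))   # -> ['ise_c', 'ise_a', 'ise_b']
```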
Finally, multi-cycle ISEs can be pipelined in order to increase their throughput.
However, it is shown that traditional ISE identification techniques do not
allow this optimization due to control flow overhead. In order to obtain the benefits
of overlapping loop executions, this thesis proposes to carefully insert loop
control flow statements into the ISEs, thus allowing the ISE to control the iterations
of the loop. The proposed ISEs broaden the scope of instruction-level
parallelism and obtain higher speedups compared to traditional ISEs, primarily
through pipelining, the exploitation of spatial parallelism, and reducing the
overhead of control flow statements and branches. A detailed case study of a
real application shows that the proposed method achieves 91% higher speedups
than the state-of-the-art, with an area overhead of less than 8% in the hardware
implementation.
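As a conceptual sketch of the difference (an invented example, not from the thesis), the Python fragment below contrasts a traditional ISE, which returns to software after every invocation so back-to-back executions cannot overlap, with a loop-controlled ISE that keeps the iteration inside the custom unit, where consecutive iterations can overlap in the hardware pipeline:

```python
# Invented example: the 'custom operation' is a fused multiply-add-xor.
def traditional_ise(x):
    return (x * 3 + 1) ^ 0x5A

def run_traditional(data):
    out = []
    for x in data:                       # software loop: branch and issue
        out.append(traditional_ise(x))   # overhead on every element, and the
    return out                           # pipeline drains between calls

def loop_controlled_ise(data):
    # The loop control lives inside the ISE: in hardware, the multiply, add,
    # and xor stages of consecutive iterations overlap, streaming one
    # element into the pipeline per cycle.
    return [(x * 3 + 1) ^ 0x5A for x in data]

assert run_traditional(range(8)) == loop_controlled_ise(range(8))
```

The two forms compute identical results; the gain claimed in the thesis is throughput, from overlapping loop iterations inside the pipelined ISE rather than draining it between software-issued invocations.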
Compilation techniques for short-vector instructions
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. Includes bibliographical references (p. 127-133).
Multimedia extensions are nearly ubiquitous in today's general-purpose processors. These extensions consist primarily of a set of short-vector instructions that apply the same opcode to a vector of operands. This design introduces a data-parallel component to processors that exploit instruction-level parallelism, and presents an opportunity for increased performance. In fact, ignoring a processor's vector opcodes can leave a significant portion of the available resources unused. In order for software developers to find short-vector instructions generally useful, the compiler must target these extensions with complete transparency and consistent performance. This thesis develops compiler techniques to target short-vector instructions automatically and efficiently.
One important aspect of compilation is the effective management of memory alignment. As with scalar loads and stores, vector references are typically more efficient when accessing aligned regions. In many cases, the compiler can glean no alignment information and must emit conservative code sequences. In response, I introduce a range of compiler techniques for detecting and enforcing aligned references. In my benchmark suite, the most practical method ensures alignment for roughly 75% of dynamic memory references.
This thesis also introduces selective vectorization, a technique for balancing computation across a processor's scalar and vector resources. Current approaches for targeting short-vector instructions directly adopt vectorizing technology first developed for supercomputers. Traditional vectorization, however, can lead to a performance degradation since it fails to account for a processor's scalar execution resources. I formulate selective vectorization in the context of software pipelining. My approach creates software pipelines with shorter initiation intervals, and therefore, higher performance. In contrast to conventional methods, selective vectorization operates on a low-level intermediate representation. This technique allows the algorithm to accurately measure the performance trade-offs of code selection alternatives. A key aspect of selective vectorization is its ability to manage communication of operands between vector and scalar instructions. Even when operand transfer is expensive, the technique is sufficiently sophisticated to achieve significant performance gains. I evaluate selective vectorization on a set of SPEC FP benchmarks. On a realistic VLIW processor model, the approach achieves whole-program speedups of up to 1.35x over existing approaches. For individual loops, it provides speedups of up to 1.75x.
by Samuel Larsen. Ph.D.
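The intuition behind selective vectorization can be shown with a simple resource model (invented here; this is not the thesis's formulation): in a software pipeline the initiation interval (II) is bounded by the busiest resource, so splitting the loop body between scalar and vector units can beat either pure strategy.

```python
import math

def res_ii(n_scalar_ops, n_vector_ops, scalar_units, vector_units, vl):
    """Resource-constrained II for one vl-iteration chunk of the loop.
    vl: vector length, so one vector op covers vl scalar iterations while
    each op left scalar must repeat vl times."""
    scalar_load = n_scalar_ops * vl
    return max(math.ceil(scalar_load / scalar_units),
               math.ceil(n_vector_ops / vector_units))

# Hypothetical machine: 6 ops per iteration, 4 scalar units, 1 four-wide
# vector unit. Vectorizing k of the 6 ops gives II = max(6 - k, k) here.
OPS, VL, S_UNITS, V_UNITS = 6, 4, 4, 1
best = min(range(OPS + 1),
           key=lambda k: res_ii(OPS - k, k, S_UNITS, V_UNITS, VL))
for k in (0, best, OPS):
    print(f"{k} ops vectorized: II = {res_ii(OPS - k, k, S_UNITS, V_UNITS, VL)}")
# 0 ops vectorized: II = 6; 3 ops: II = 3; 6 ops: II = 6
```

In this made-up configuration, both pure-scalar and pure-vector code reach II = 6, while vectorizing half the operations balances the two resource classes and halves the II, which is the effect selective vectorization exploits.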