625 research outputs found
Design methodologies for instruction-set extensible processors
Ph.D. (Doctor of Philosophy)
Automated application-specific instruction set generation
Master's (Master of Engineering)
Instruction-set customization for multi-tasking embedded systems
Ph.D. (Doctor of Philosophy)
Efficient design-space exploration of custom instruction-set extensions
Customization of processors with instruction set extensions (ISEs) is a technique
that improves performance through parallelization with a reasonable area overhead,
in exchange for additional design effort. This thesis presents a collection of
novel techniques that reduce the design effort and cost of generating ISEs by advancing
automation and reconfigurability. In addition, these techniques maximize
the performance gained as a function of the additional committed resources.
Including ISEs into a processor design implies development at many levels.
Most prior works on ISEs solve separate stages of the design: identification,
selection, and implementation. However, the interactions between these stages
also hold important design trade-offs. In particular, this thesis addresses the lack
of interaction between the hardware implementation stage and the two previous
stages. Interaction with the implementation stage has been mostly limited to
accurately measuring the area and timing requirements of the implementation
of each ISE candidate as a separate hardware module. However, the need to
independently generate a hardware datapath for each ISE limits the flexibility
of the design and the performance gains. Hence, resource sharing is essential in
order to create a customized unit with multi-function capabilities.
Previously proposed resource-sharing techniques aggressively share resources
amongst the ISEs, thus minimizing the area of the solution at any cost. However,
it is shown that aggressively sharing resources leads to large ISE datapath latency.
Thus, this thesis presents an original heuristic that can be parameterized
in order to control the degree of resource sharing amongst a given set of ISEs,
thereby permitting the exploration of the existing implementation trade-offs between
instruction latency and area savings. In addition, this thesis introduces an
innovative predictive model that is able to quickly expose the optimal trade-offs of this design space. Compared to an exhaustive exploration of the design space,
the predictive model is shown to reduce by two orders of magnitude the number
of executions of the resource-sharing algorithm that are required in order to find
the optimal trade-offs.
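The latency/area trade-off controlled by the sharing parameter can be illustrated with a short sketch. The cost model below is a hypothetical toy (the thesis relies on actual synthesis estimates and its predictive model, not this function); only the Pareto-front extraction reflects how optimal trade-offs are identified in such an exploration.

```python
# Sketch: exploring latency/area trade-offs of ISE datapath merging.
# merged_datapath_cost is a HYPOTHETICAL toy model, not the thesis's
# cost function; it only mimics the qualitative trend described above.

def pareto_front(points):
    """Return the points not dominated in (latency, area)."""
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return sorted(set(front))

def merged_datapath_cost(theta, base_area=100.0, base_latency=10.0):
    """Toy model: more sharing (theta -> 1) saves area but adds
    multiplexer delay to the merged datapath's critical path."""
    area = base_area * (1.0 - 0.5 * theta)        # shared resources shrink area
    latency = base_latency * (1.0 + 0.8 * theta)  # mux overhead grows latency
    return (latency, area)

# Sweep the sharing parameter instead of exhaustively merging datapaths.
candidates = [merged_datapath_cost(t / 10.0) for t in range(11)]
print(pareto_front(candidates))
```

With a monotone model like this one, every sweep point is Pareto-optimal; the point of the predictive model in the thesis is to find such fronts without running the resource-sharing algorithm at every parameter value.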
This thesis presents the first technique to combine the design spaces of
ISE selection and resource sharing in ISE datapath synthesis, in order
to offer the designer solutions that achieve maximum speedup and maximum
resource utilization using the available area. Optimal trade-offs in the design
space are found by guiding the selection process to favour ISE combinations that
are likely to share resources with low speedup losses. Experimental results show
that this combined approach unveils new trade-offs between speedup and area
that are not identified by previous selection techniques; speedups of up to 238%
over previous selection techniques were obtained.
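A greedy sketch of sharing-aware selection under an area budget conveys the idea: candidates whose datapaths are likely to share resources become cheaper once a compatible ISE is already selected. All numbers and the pairwise sharing table below are hypothetical illustrations, not values from the thesis.

```python
# Sketch: area-budgeted ISE selection that favours combinations likely
# to share resources. Candidate figures and SHARING are HYPOTHETICAL.

CANDIDATES = {          # name: (speedup_cycles_saved, standalone_area)
    "ise_a": (120, 40),
    "ise_b": (90, 35),
    "ise_c": (60, 30),
}
# Fraction of area saved when the ISE is merged into a datapath that
# already contains the other ISE of the pair (symmetric).
SHARING = {("ise_a", "ise_b"): 0.4, ("ise_a", "ise_c"): 0.1,
           ("ise_b", "ise_c"): 0.3}

def incremental_area(name, selected):
    """Area added by `name`, discounted by the best sharing opportunity
    with an already-selected ISE."""
    area = CANDIDATES[name][1]
    best = max((SHARING.get(tuple(sorted((name, s))), 0.0) for s in selected),
               default=0.0)
    return area * (1.0 - best)

def select(budget):
    selected, used = [], 0.0
    remaining = set(CANDIDATES)
    while remaining:
        # Pick the candidate with the best speedup per incremental area.
        best = max(remaining,
                   key=lambda n: CANDIDATES[n][0] / incremental_area(n, selected))
        cost = incremental_area(best, selected)
        if used + cost > budget:
            break
        selected.append(best)
        used += cost
        remaining.remove(best)
    return selected, used

print(select(budget=70.0))
```

With these toy numbers, `ise_b` fits only because merging with `ise_a` discounts its area, which is exactly the kind of combination a sharing-oblivious selector would miss.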
Finally, multi-cycle ISEs can be pipelined in order to increase their throughput.
However, it is shown that traditional ISE identification techniques do not
allow this optimization due to control flow overhead. In order to obtain the benefits
of overlapping loop executions, this thesis proposes to carefully insert loop
control flow statements into the ISEs, thus allowing the ISE to control the iterations
of the loop. The proposed ISEs broaden the scope of instruction-level
parallelism and obtain higher speedups compared to traditional ISEs, primarily
through pipelining, the exploitation of spatial parallelism, and the reduction of the
overhead of control flow statements and branches. A detailed case study of a
real application shows that the proposed method achieves 91% higher speedups
than the state-of-the-art, with an area overhead of less than 8% in hardware
implementation.
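A first-order cycle count shows why moving loop control into a pipelined ISE pays off: the software branch overhead disappears, and successive iterations overlap in the ISE pipeline. The cycle counts below are hypothetical illustrations, not measurements from the thesis.

```python
# Sketch: why a loop-controlling pipelined ISE beats per-iteration
# invocation. All cycle counts are HYPOTHETICAL illustrations.

def cycles_traditional(iterations, ise_latency=4, loop_overhead=2):
    """Per iteration: issue the multi-cycle ISE, then update the index
    and branch in software. Successive invocations cannot overlap."""
    return iterations * (ise_latency + loop_overhead)

def cycles_loop_ise(iterations, ise_latency=4, initiation_interval=1):
    """The ISE owns the loop: after the pipeline fills (ise_latency
    cycles), one iteration completes every initiation_interval cycles."""
    return ise_latency + (iterations - 1) * initiation_interval

n = 1000
print(cycles_traditional(n), cycles_loop_ise(n))  # prints: 6000 1003
```

In this toy model the asymptotic speedup approaches `(ise_latency + loop_overhead) / initiation_interval`, which is why both pipelining and the removal of control-flow overhead matter.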
Accelerated V2X provisioning with Extensible Processor Platform
With the burgeoning Vehicle-to-Everything (V2X) communication, security and privacy concerns are paramount. Such concerns are usually mitigated by combining cryptographic mechanisms with a suitable key management architecture. However, cryptographic operations may be quite resource-intensive, placing a considerable burden on the vehicle's V2X computing unit. To assuage this issue, it is reasonable to use hardware acceleration for common cryptographic primitives, such as block ciphers, digital signature schemes, and key exchange protocols. In this scenario, custom extension instructions can be a plausible option, since they achieve fine-tuned hardware acceleration with a low to moderate logic overhead, while also reducing code size. In this article, we apply this method along with dual-data memory banks for the hardware acceleration of the PRESENT block cipher, as well as of the finite field arithmetic employed in cryptographic primitives based on Curve25519 (e.g., EdDSA and X25519). As a result, when compared with a state-of-the-art software-optimized implementation, the performance of PRESENT is improved by a factor of 17 to 34 and code size is reduced by 70%, with only a 4.37% increase in FPGA logic overhead. In addition, we improve the performance of operations over Curve25519 by a factor of ~2.5 when compared to an Assembly implementation on a comparable processor, with moderate logic overhead (namely, 9.1%). Finally, we achieve significant performance gains in the V2X provisioning process by leveraging our hardware-accelerated cryptographic primitives.
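For context, the two PRESENT round layers that are natural targets for such custom instructions can be written as a plain software reference. This is a minimal sketch of the published PRESENT specification (S-box and bit permutation), not the article's accelerated implementation.

```python
# Software reference for two PRESENT round layers (per the cipher's
# specification), NOT the article's custom-instruction implementation.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT 4-bit S-box

def sbox_layer(state):
    """Apply the S-box to each of the 16 nibbles of the 64-bit state."""
    out = 0
    for i in range(16):
        out |= SBOX[(state >> (4 * i)) & 0xF] << (4 * i)
    return out

def p_layer(state):
    """Bit i of the state moves to position 16*i mod 63; bit 63 is fixed."""
    out = 0
    for i in range(64):
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out
```

Both layers are bit-level shuffles that are awkward on a general-purpose datapath but collapse to wiring in hardware, which is why a single custom instruction per layer (or per round) yields the large speedups reported above.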
Research and development of accounting system in grid environment
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The Grid has been recognised as the next-generation distributed computing paradigm, seamlessly integrating heterogeneous resources across administrative domains into a single virtual system. There is an increasing number of scientific and business projects that employ Grid computing technologies for large-scale resource sharing and collaboration. Early adopters of Grid computing technologies implemented custom middleware to bridge gaps between heterogeneous computing backbones. These custom solutions form the basis of the emerging Open Grid Services Architecture (OGSA), which aims at addressing common concerns of Grid systems by defining a set of interoperable and reusable Grid services. One of the common concerns defined in OGSA is the Grid accounting service. Its main objective is to ensure that resources are shared within a Grid environment in an accountable manner, by metering and logging accurate resource usage information. This thesis discusses the origins and fundamentals of Grid computing and the accounting service in the context of the OGSA profile. A prototype was developed and evaluated, based on OGSA accounting-related standards, that enables the sharing of accounting data in a multi-Grid environment, the World-wide Large Hadron Collider Grid (WLCG). Based on this prototype and the lessons learned, a generic middleware solution was also implemented as a toolkit that eases the migration of existing accounting systems towards standards compliance. Engineering and Physical Sciences Research Council (EPSRC), Stanford University.
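The metering-and-logging core of such an accounting service can be sketched as follows. The field names loosely echo concepts from the OGF Usage Record format (job identity, wall/CPU duration), but both the field set and the line-oriented JSON storage are illustrative assumptions, not the thesis's middleware.

```python
# Sketch: metering and logging per-job resource usage. Field names are
# loosely modelled on OGF Usage Record concepts; the exact schema and
# storage format here are HYPOTHETICAL illustrations.

import json
import time

def make_usage_record(job_id, user, machine, wall_seconds, cpu_seconds):
    """Build one accounting record for a finished job."""
    return {
        "JobIdentity": job_id,
        "UserIdentity": user,
        "MachineName": machine,
        "WallDuration": wall_seconds,
        "CpuDuration": cpu_seconds,
        "EndTime": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

def log_record(record, path="usage.log"):
    """Append one record per line so independent sites can merge logs."""
    with open(path, "a") as fh:
        fh.write(json.dumps(record) + "\n")

rec = make_usage_record("job-42", "alice", "worker01.example.org", 3600, 3400)
```

One record per line keeps site logs trivially mergeable, which matters in a multi-Grid setting like WLCG where accounting data is aggregated across administrative domains.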
- …