12 research outputs found

    Towards a quantitative model for hardware/software partitioning

    No full text
    Heterogeneous system development requires hardware/software partitioning early in the development process, which in turn requires early predictions of hardware resource usage and delay. In this thesis, a quantitative model is presented that makes such early predictions to support the partitioning process. The model is based on software complexity metrics, which capture important aspects of functions such as control intensity, data intensity, and code size. To remedy the interdependence of the software metrics, a principal component analysis was performed. The hardware characteristics were determined by automatically generating VHDL from C using the DWARV C-to-VHDL compiler. Using the results of the principal component analysis, the quantitative model was generated using linear regression. The error of the model differs per hardware characteristic; for flip-flops, we show that the mean prediction error is 69%. In conclusion, our quantitative model can make fast and sufficiently accurate area predictions to support hardware/software partitioning. In the future, the model can be extended by introducing additional software metrics, using more advanced modeling techniques, and using a larger collection of functions.
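    The regression step described in the abstract can be illustrated with a minimal sketch: fitting a linear model from a single software complexity metric to a hardware measure (flip-flop count) via ordinary least squares. The metric values and counts below are invented for illustration only; the thesis uses many metrics, decorrelated by PCA before regression.

    ```python
    # Minimal ordinary-least-squares sketch of metric -> area prediction.
    # Data is hypothetical; the actual model regresses PCA-reduced metrics.

    def fit_ols(xs, ys):
        """Return (intercept, slope) minimizing squared error."""
        n = len(xs)
        mx = sum(xs) / n
        my = sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sxy / sxx
        return my - slope * mx, slope

    def predict(model, x):
        """Predicted hardware measure for metric value x."""
        intercept, slope = model
        return intercept + slope * x

    # Hypothetical training data: complexity metric vs. flip-flop count.
    metric = [3, 5, 8, 12, 20]
    flipflops = [120, 180, 260, 390, 640]

    model = fit_ols(metric, flipflops)
    est = predict(model, 10)  # estimate for an unseen function
    ```

    With several metrics, the same idea generalizes to multiple linear regression; PCA is applied first because the metrics are strongly interdependent.
    
    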

    Quantitative Prediction for Early Design Space Exploration in Delft WorkBench: An Outlook

    No full text
    Abstract — In this paper, we discuss Quipu, our multidimensional quantitative prediction model for hardware/software partitioning. The proposed model is based on linear regression between software metrics determined on a dataset of 127 kernels and measures from their corresponding hardware designs. These software metrics capture the complexity of the C language description. The hardware designs are generated using the DWARV C-to-VHDL translator [1]. Currently, Quipu exhibits a relatively large error compared to lower-level approaches; however, the Quipu model can make fast and early predictions and is applicable to a wide variety of applications. For the moment, we have only considered prediction of area measures, such as the number of slices or flip-flops. The main steps to improve Quipu are the following: 1) re-evaluation of the selected software metrics; 2) use of a lower-level representation of the C code; 3) extension of the set of kernels; 4) extension of the modeled hardware parameters. In other words, a consolidated model can provide more, and more accurate, information. We conclude that fast and early prediction of hardware characteristics is important, but our approach was not accurate enough in the past. While a somewhat larger error is acceptable in the early stages of design, we need to improve our Quipu model. Furthermore, for Quipu to be applicable, it must predict additional hardware measures for a wider range of application domains. Keywords — Reconfigurable architectures, Modeling, Estimation, Statistics, Software metrics, System analysis and design
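    The model quality discussed above is naturally expressed as a mean relative error between predicted and synthesized area figures over the kernel set. The sketch below shows that measure; the slice counts are invented for illustration and are not Quipu's actual data.

    ```python
    # Mean relative error of predicted vs. actual area over a kernel set.
    # All numbers below are hypothetical illustration values.

    def mean_relative_error(predicted, actual):
        """Average of |pred - act| / act across kernels (as a fraction)."""
        errs = [abs(p - a) / a for p, a in zip(predicted, actual)]
        return sum(errs) / len(errs)

    # Hypothetical slice counts for four kernels.
    predicted_slices = [410, 95, 1200, 300]
    actual_slices = [350, 120, 1000, 280]

    mre = mean_relative_error(predicted_slices, actual_slices)
    ```

    A fraction of 0.69 for flip-flops would correspond to the 69% mean error reported for the earlier model; a consolidated model aims to drive this figure down.
    
    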

    Automated HDL generation: Comparative evaluation

    No full text
    Abstract — Reconfigurable computing (RC) systems, coupling general-purpose processors with reconfigurable components, offer many advantages, combining the flexibility of software execution with the computational speed of application-specific hardware. Nevertheless, a designer currently needs both in-depth software and hardware design knowledge to develop applications for such platforms. Automated hardware generation addresses this problem; however, the success of such tools remains marginal. This paper discusses the reasons for this lack of success. It presents a quantitative and qualitative comparison of three hardware generators using the following criteria: quality of the hardware model, the supported HLL constructs, and the level of automation.

    DRuiD: Designing reconfigurable architectures with decision-making support

    No full text
    Application development for heterogeneous platforms requires coding and mapping functionalities onto a set of different computing elements. As a consequence, the development process needs a clear understanding of both application requirements and heterogeneous computing technologies. To support the development process, we propose a framework called DRuiD, capable of learning the application characteristics that make functionalities suitable for certain computing elements. The framework includes an expert system that supports the designer in the mapping decision and gives hints on possible code modifications to make a functionality more suitable for a computing element. The experimental results target a heterogeneous and reconfigurable platform (the Xilinx ML510) with two computational elements, a Virtex-5 FPGA and a PowerPC. The expert system identifies, in 88.9% of the cases, the functionalities that are efficiently accelerated by the FPGA, without requiring the kernel to be ported. Additionally, we present two case studies demonstrating the potential of the framework to give hints on high-level code modifications for an efficient kernel mapping on the FPGA.
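    The kind of mapping decision described above can be sketched as a simple rule over kernel characteristics. The feature names and thresholds below are entirely hypothetical; DRuiD learns such rules from training data rather than hard-coding them.

    ```python
    # Hypothetical sketch of an FPGA-vs-processor mapping recommendation.
    # Features and thresholds are invented; DRuiD learns these from data.

    def recommend_target(features):
        """Return 'FPGA' or 'PowerPC' from a dict of kernel characteristics."""
        parallel = features.get("data_parallelism", 0.0)
        irregular = features.get("pointer_chasing", False)
        # Highly data-parallel kernels without irregular memory access
        # tend to map well onto the reconfigurable fabric.
        if parallel > 0.6 and not irregular:
            return "FPGA"
        return "PowerPC"

    kernel = {"data_parallelism": 0.8, "pointer_chasing": False}
    target = recommend_target(kernel)  # -> "FPGA"
    ```

    The value of the learned approach is that such rules, and the code-modification hints, are derived automatically instead of being written by an expert.
    
    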

    Profiling, Compilation, and HDL Generation within the hArtes Project

    No full text
    The hArtes project addresses the optimal and rapid design of embedded systems from high-level descriptions, targeting a combination of embedded processors, digital signal processing, and reconfigurable hardware. In this paper, we present three tools from the hArtes toolchain, namely the profiling, compilation, and HDL generation tools, that facilitate the HW/SW partitioning, co-design, co-verification, and co-execution of demanding embedded applications. The described tools are provided by the DelftWorkBench framework.