9 research outputs found

    Model-Driven Engineering and Optimizing Compilers: A bridge too far?

    Get PDF
    A primary goal of Model Driven Engineering (MDE) is to reduce the cost and effort of developing complex software systems using techniques for transforming abstract views of software to concrete implementations. The rich set of tools that have been developed, especially the growing maturity of model transformation technologies, opens the possibility of applying MDE technologies to transformation-based problems in other domains. In this paper, we present our experience with using MDE technologies to build and evolve compiler infrastructures in the optimizing compiler domain. We illustrate, through our two ongoing research compiler projects for C and a functional language, the challenging aspects of optimizing compiler research and show how mature MDE technologies can be used to address them. We also identify some of the pitfalls that arise from unrealistic expectations of what can be accomplished using MDE and discuss how they can lead to unsuccessful and frustrating application of MDE technologies.
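    To make the transformation-based view concrete, the sketch below treats a tiny expression IR as a model and applies a constant-folding rewrite to it, in the spirit of a model-to-model transformation. This is a minimal Python illustration only, not the paper's MDE toolchain; every class and function name here is invented.

    # Hypothetical sketch: a compiler IR treated as a "model" and an optimisation
    # expressed as a rewrite over that model. Names are illustrative only.
    from dataclasses import dataclass
    from typing import Union

    @dataclass
    class Const:
        value: int

    @dataclass
    class Var:
        name: str

    @dataclass
    class BinOp:
        op: str                      # '+', '*', ...
        left: "Expr"
        right: "Expr"

    Expr = Union[Const, Var, BinOp]

    def fold_constants(node: Expr) -> Expr:
        """Rewrite rule: BinOp(Const, Const) -> Const, applied bottom-up."""
        if isinstance(node, BinOp):
            left = fold_constants(node.left)
            right = fold_constants(node.right)
            if isinstance(left, Const) and isinstance(right, Const):
                if node.op == '+':
                    return Const(left.value + right.value)
                if node.op == '*':
                    return Const(left.value * right.value)
            return BinOp(node.op, left, right)
        return node

    # (2 + 3) * x  ->  5 * x
    tree = BinOp('*', BinOp('+', Const(2), Const(3)), Var('x'))
    print(fold_constants(tree))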

    The co-design methodologies on click router application system

    Full text link
    Thesis digitized by the Direction des bibliothèques de l'Université de Montréal

    RISPP: A Run-time Adaptive Reconfigurable Embedded Processor

    Get PDF
    This Ph.D. thesis describes a new approach for adaptive processors using a reconfigurable fabric (embedded FPGA) to implement application-specific accelerators. A novel modular Special Instruction composition is presented along with a run-time system that exploits the provided adaptivity. The approach was simulated and prototyped using an FPGA. Comparisons with state-of-the-art application-specific and reconfigurable processors demonstrate significant improvements in performance and efficiency.
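    As a rough illustration of the kind of decision such a run-time system must make, the sketch below greedily chooses, under a fixed area budget, one hardware variant per Special Instruction to maximise expected benefit. The data, names, and the greedy policy are assumptions for illustration; this is not the RISPP algorithm itself.

    # Toy sketch of a run-time trade-off in an adaptive reconfigurable processor:
    # pick one hardware variant per Special Instruction within an area budget.
    from dataclasses import dataclass

    @dataclass
    class Variant:
        name: str
        area: int        # fabric resources consumed (abstract units)
        speedup: float   # cycles saved per invocation vs. software emulation

    def select_variants(candidates: dict, expected_calls: dict, area_budget: int) -> dict:
        """Greedy: repeatedly upgrade the SI with the best benefit/area ratio."""
        chosen = {si: None for si in candidates}          # None = execute in software
        used = 0
        while True:
            best = None
            for si, variants in candidates.items():
                current_gain = chosen[si].speedup if chosen[si] else 0.0
                current_area = chosen[si].area if chosen[si] else 0
                for v in variants:
                    extra_area = v.area - current_area
                    extra_gain = (v.speedup - current_gain) * expected_calls[si]
                    if extra_area <= 0 or used + extra_area > area_budget:
                        continue
                    ratio = extra_gain / extra_area
                    if extra_gain > 0 and (best is None or ratio > best[0]):
                        best = (ratio, si, v, extra_area)
            if best is None:
                break
            _, si, v, extra_area = best
            chosen[si], used = v, used + extra_area
        return chosen

    # Invented workload: two Special Instructions, each with two hardware variants.
    candidates = {
        "SAD16": [Variant("small", 2, 4.0), Variant("wide", 5, 9.0)],
        "DCT8":  [Variant("serial", 3, 6.0), Variant("parallel", 8, 20.0)],
    }
    print(select_variants(candidates, {"SAD16": 1000, "DCT8": 400}, area_budget=10))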

    Increasing the efficacy of automated instruction set extension

    Get PDF
    The use of Instruction Set Extension (ISE) in customising embedded processors for a specific application has been studied extensively in recent years. The addition of a set of complex arithmetic instructions to a baseline core has proven to be a cost-effective means of meeting design performance requirements. This thesis proposes and evaluates a reconfigurable ISE implementation called “Configurable Flow Accelerators” (CFAs), a number of refinements to an existing Automated ISE (AISE) algorithm called “ISEGEN”, and the effects of source form on AISE. The CFA is demonstrated repeatedly to be a cost-effective design for ISE implementation. A temporal partitioning algorithm called “staggering” is proposed and demonstrated on average to reduce the area of CFA implementation by 37% for only an 8% reduction in acceleration. This thesis then turns to concerns within the ISEGEN AISE algorithm. A methodology for finding a good static heuristic weighting vector for ISEGEN is proposed and demonstrated. Up to 100% of merit is shown to be lost or gained through the choice of vector. ISEGEN early-termination is introduced and shown to improve the runtime of the algorithm by up to 7.26x, and 5.82x on average. An extension to the ISEGEN heuristic to account for pipelining is proposed and evaluated, increasing acceleration by up to an additional 1.5x. An energy-aware heuristic is added to ISEGEN, which reduces the energy used by a CFA implementation of a set of ISEs by an average of 1.6x, up to 3.6x. This result directly contradicts the frequently espoused notion that “bigger is better” in ISE. The last stretch of work in this thesis is concerned with source-level transformation: the effect of changing the representation of the application on the quality of the combined hardware-software solution. A methodology for combined exploration of source transformation and ISE is presented, and demonstrated to improve the acceleration of the result by an average of 35% versus ISE alone. Floating point is demonstrated to perform worse than fixed point, for all design concerns and applications studied here, regardless of ISEs employed.
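    The idea of a static heuristic weighting vector can be pictured with a toy merit function that ranks candidate ISEs by weighted cycle savings minus area and operand-count penalties. This is an invented sketch, not ISEGEN; every field name, weight, and figure below is assumed.

    # Illustrative scoring of candidate instruction-set extensions with a
    # static weighting vector. Not the ISEGEN algorithm; data are invented.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        sw_cycles: int      # cycles of the software the ISE would replace
        hw_cycles: int      # estimated cycles as a custom instruction
        area: float         # estimated datapath area (normalised)
        operands: int       # register-file inputs + outputs needed

    def merit(c: Candidate, w: tuple) -> float:
        """Weighted heuristic: reward cycle savings, penalise area and I/O ports."""
        w_speed, w_area, w_io = w
        return w_speed * (c.sw_cycles - c.hw_cycles) - w_area * c.area - w_io * c.operands

    candidates = [
        Candidate("mac4",   sw_cycles=12, hw_cycles=2, area=1.0, operands=3),
        Candidate("sbox",   sw_cycles=20, hw_cycles=1, area=4.5, operands=2),
        Candidate("addsub", sw_cycles=4,  hw_cycles=1, area=0.3, operands=4),
    ]

    weights = (1.0, 0.8, 0.5)   # a "static heuristic weighting vector"
    for c in sorted(candidates, key=lambda c: merit(c, weights), reverse=True):
        print(f"{c.name:7s} merit={merit(c, weights):6.2f}")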

    Reconfigurable Instruction Cell Architecture Reconfiguration and Interconnects

    Get PDF

    Object-Oriented Embedded System Development Based on Synthesis and Reuse of OO-ASIPs

    No full text
    We present an embedded-system design flow, discuss its details, and demonstrate its advantages. We adopt the object-oriented methodology for the system-level model because software dominates hardware in embedded systems and the object-oriented methodology is already established for software design and reuse. As the building-block of system implementation, we synthesise application-specific processors that are reusable, through programming, for several related applications. This addresses the high cost and risk of manufacturing specialised hardware tailored to only a single application. Both the processor and its software are generated from the model of the system by the synthesis and compilation procedures provided. We observe that the key point in object-oriented methodology is the class library, and hence, we implement methods of the class library as the instruction-set of the processor. This allows the processor to be synthesised just once, but, by programming, to be reused several times and specialised to new applications that use the same class library. An important point here is that the processor allows its instructions to be selectively overridden by software routines; this not only allows augmentation of processor capabilities in software, but also enables a structured approach to make software patches to faulty or outdated hardware. A case study illustrates application of the methodology to various applications modelled on top of a common base class library, and moreover, demonstrates new application-specific opportunities in power management (and area-management for FPGA implementations) resulting from the structure of the processor based on de-activation of unused features.
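    A minimal sketch of the selectively overridable instruction idea, assuming a simple dispatch table: each class-library method resolves to a hardware instruction unless a software routine has been installed to override it. The dispatcher, method names, and lambdas are illustrative only, not the OO-ASIP synthesis output.

    # Conceptual sketch: class-library methods dispatched to hardware instructions,
    # with optional software overrides (e.g. to patch faulty or outdated hardware).
    class OOASIPDispatcher:
        def __init__(self, hw_instructions):
            self.hw = dict(hw_instructions)   # method name -> hardware implementation
            self.overrides = {}               # method name -> software routine

        def override(self, method, routine):
            """Install a software patch that takes precedence over the hardware."""
            self.overrides[method] = routine

        def call(self, method, *args):
            impl = self.overrides.get(method, self.hw.get(method))
            if impl is None:
                raise NotImplementedError(method)
            return impl(*args)

    # Python lambdas stand in for synthesised functional units.
    dispatcher = OOASIPDispatcher({
        "Fifo.push": lambda fifo, x: fifo.append(x),
        "Fifo.pop":  lambda fifo: fifo.pop(0),
    })

    # Later, a software routine overrides the (hypothetically faulty) pop unit.
    dispatcher.override("Fifo.pop", lambda fifo: fifo.pop(0) if fifo else None)

    q = []
    dispatcher.call("Fifo.push", q, 42)
    print(dispatcher.call("Fifo.pop", q))   # 42, via the software override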

    Preface

    Get PDF