231 research outputs found

    Reliability models for dataflow computer systems

    The demands for concurrent operation within a computer system and the representation of parallelism in programming languages have yielded a new form of program representation known as data flow (DENN 74, DENN 75, TREL 82a). A new model based on data flow principles for parallel computations and parallel computer systems is presented. Necessary conditions for liveness and deadlock freedom in data flow graphs are derived. The data flow graph is used as a model to represent asynchronous concurrent computer architectures, including data flow computers.
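    As a rough illustration of the data flow execution model the abstract describes, the sketch below implements a token-based firing rule: a node fires once every input arc carries a token, consumes those tokens, and forwards its result along its output arcs. The Node class, the three-operation example graph, and the scheduling loop are illustrative assumptions, not the paper's model.

```python
# Minimal sketch (not from the paper): token-based firing in a dataflow graph.
from collections import deque

class Node:
    def __init__(self, name, op, n_inputs):
        self.name = name
        self.op = op
        self.inputs = [deque() for _ in range(n_inputs)]  # one token queue per input arc
        self.successors = []                              # (target node, target input port)

    def ready(self):
        # Firing rule: a node may fire only when every input arc holds a token.
        return all(q for q in self.inputs)

    def fire(self):
        args = [q.popleft() for q in self.inputs]         # consume one token per arc
        result = self.op(*args)
        for node, port in self.successors:                # forward the result token
            node.inputs[port].append(result)
        return result

# Example graph computing (a + b) * (a - b).
add = Node("add", lambda x, y: x + y, 2)
sub = Node("sub", lambda x, y: x - y, 2)
mul = Node("mul", lambda x, y: x * y, 2)
add.successors.append((mul, 0))
sub.successors.append((mul, 1))

# Inject the input tokens a = 7, b = 3 on the source arcs.
for node in (add, sub):
    node.inputs[0].append(7)
    node.inputs[1].append(3)

# Data-driven scheduling loop: repeatedly fire any ready node.
worklist = [add, sub, mul]
results = {}
while any(n.ready() for n in worklist):
    for n in worklist:
        if n.ready():
            results[n.name] = n.fire()

print(results["mul"])  # (7 + 3) * (7 - 3) = 40
```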

    COMPARISON OF INSTRUCTION SCHEDULING AND REGISTER ALLOCATION FOR MIPS AND HPL-PD ARCHITECTURE FOR EXPLOITATION OF INSTRUCTION LEVEL PARALLELISM

    Integrated approaches to instruction scheduling and register allocation have been a promising area of research for code generation and compiler optimization. In this paper we propose an integrated algorithm for instruction scheduling and register allocation and implement it, via a machine description in the Trimaran infrastructure, for exploitation of instruction-level parallelism. Our implementation in the Trimaran infrastructure shows that our scheduler reduces the number of active live ranges dealt with by the linear scan allocator. As a result, only a few spills were needed and the quality of the generated code improved. For our experiments we used 20 benchmarks available with the Trimaran infrastructure for the HPL-PD architecture, and we compare some of these results with those obtained by Haijing Tang et al. (2013) using the LLVM compiler on the MIPS architecture. For our experimental work we added a machine description (MDES) targeted to the HPL-PD architecture. The implemented algorithm is based on subgraph isomorphism. The input program is represented as a directed acyclic graph (DAG): the vertices of the DAG represent the instructions and the input and output operands of the program, while the edges represent dependencies among the instructions.
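    To make the DAG representation concrete, the sketch below builds a small dependency DAG of instructions, derives a topological schedule, and counts the peak number of simultaneously live values as a rough stand-in for the "active live ranges" seen by a linear scan allocator. The toy instructions and the pick-any-ready-node heuristic are assumptions for illustration; the paper's own algorithm is based on subgraph isomorphism.

```python
# Illustrative sketch only: dependency DAG, topological list schedule,
# and a rough register-pressure estimate for the resulting order.
from collections import defaultdict

# instruction -> the instructions whose results it reads (its DAG predecessors)
uses = {
    "t1 = load a": [],
    "t2 = load b": [],
    "t3 = t1 + t2": ["t1 = load a", "t2 = load b"],
    "t4 = t1 * 2": ["t1 = load a"],
    "t5 = t3 - t4": ["t3 = t1 + t2", "t4 = t1 * 2"],
}

def topological_schedule(uses):
    indegree = {i: len(preds) for i, preds in uses.items()}
    succs = defaultdict(list)
    for inst, preds in uses.items():
        for p in preds:
            succs[p].append(inst)
    ready = [i for i, d in indegree.items() if d == 0]
    order = []
    while ready:
        inst = ready.pop()          # a real scheduler would pick by priority
        order.append(inst)
        for s in succs[inst]:
            indegree[s] -= 1
            if indegree[s] == 0:
                ready.append(s)
    return order

def peak_live(order, uses):
    # A value is live from its defining instruction until its last use.
    last_use = {}
    for idx, inst in enumerate(order):
        for p in uses[inst]:
            last_use[p] = idx
    return max(
        sum(1 for d, end in last_use.items() if order.index(d) <= idx <= end)
        for idx in range(len(order))
    )

schedule = topological_schedule(uses)
print(schedule)
print("peak simultaneously live values:", peak_live(schedule, uses))
```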

    Survey on Instruction Selection: An Extensive and Modern Literature Review

    Instruction selection is one of the three optimisation problems involved in the code generation backend of a compiler. The instruction selector is responsible for transforming an input program from its target-independent representation into a target-specific form by making the best use of the available machine instructions. Hence instruction selection is a crucial part of efficient code generation. Despite ongoing research since the late 1960s, the last comprehensive survey of the field was written more than 30 years ago. As new approaches and techniques have appeared since its publication, this brings forth a need for a new, up-to-date review of the current body of literature. This report addresses that need by performing an extensive review and categorisation of existing research. The report therefore supersedes and extends the previous surveys, and also attempts to identify where future research should be directed.
    Comment: Major changes: merged the simulation chapter with the macro expansion chapter; addressed misunderstandings of several approaches; completely rewrote many parts of the chapters and strengthened the discussion of many approaches; revised the drawing of all trees and graphs to put the root at the top instead of at the bottom; added an appendix listing the approaches in a table. See doc for more info.
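    As a minimal example of what an instruction selector does, the sketch below uses macro-expansion-style selection, the simplest family of approaches a survey of this field covers: each IR tree node is expanded bottom-up into one instruction of a toy target. The IR node kinds, temporaries, and mnemonics (LI, LD, ADD, MUL) are assumptions for illustration, not drawn from the survey.

```python
# Illustrative sketch only: macro-expansion-style instruction selection.
from dataclasses import dataclass, field

@dataclass
class IRNode:
    op: str                      # "const", "var", "add", "mul"
    value: object = None
    children: list = field(default_factory=list)

_temp = 0
def new_temp():
    global _temp
    _temp += 1
    return f"t{_temp}"

def select(node, out):
    """Expand each IR node into one toy target instruction, bottom-up."""
    if node.op == "const":
        dst = new_temp()
        out.append(f"LI   {dst}, {node.value}")
        return dst
    if node.op == "var":
        dst = new_temp()
        out.append(f"LD   {dst}, {node.value}")
        return dst
    lhs = select(node.children[0], out)
    rhs = select(node.children[1], out)
    dst = new_temp()
    mnemonic = {"add": "ADD", "mul": "MUL"}[node.op]
    out.append(f"{mnemonic}  {dst}, {lhs}, {rhs}")
    return dst

# Select instructions for the expression (x + 4) * y.
tree = IRNode("mul", children=[
    IRNode("add", children=[IRNode("var", "x"), IRNode("const", 4)]),
    IRNode("var", "y"),
])
code = []
select(tree, code)
print("\n".join(code))
```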

    Customizing the Computation Capabilities of Microprocessors.

    Designers of microprocessor-based systems must constantly improve performance and increase computational efficiency in their designs to create value. To this end, it is increasingly common to see computation accelerators in general-purpose processor designs. Computation accelerators collapse portions of an application's dataflow graph, reducing the critical path of computations, easing the burden on processor resources, and reducing energy consumption in systems. There are many problems associated with adding accelerators to microprocessors, though. Design of accelerators, architectural integration, and software support all present major challenges. This dissertation tackles these challenges in the context of accelerators targeting acyclic and cyclic patterns of computation. First, a technique to identify critical computation subgraphs within an application set is presented. This technique is hardware-cognizant and effectively generates a set of instruction set extensions given a domain of target applications. Next, several general-purpose accelerator structures are quantitatively designed using critical subgraph analysis for a broad application set. The next challenge is architectural integration of accelerators. Traditionally, software invokes accelerators by statically encoding new instructions into the application binary. This is incredibly costly, though, requiring many portions of hardware and software to be redesigned. This dissertation develops strategies to utilize accelerators without changing the instruction set. In the proposed approach, the microarchitecture translates applications at run-time, replacing computation subgraphs with microcode to utilize accelerators. We explore the tradeoffs in performing difficult aspects of the translation at compile-time, while retaining run-time replacement. This culminates in a simple microarchitectural interface that supports a plug-and-play model for integrating accelerators into a pre-designed microprocessor. Software support is the last challenge in dealing with computation accelerators. The primary issue is difficulty in generating high-quality code utilizing accelerators. Hand-written assembly code is standard in industry, and if compiler support does exist, simple greedy algorithms are common. In this work, we investigate more thorough techniques for compiling for computation accelerators. Where greedy heuristics only explore one possible solution, the techniques in this dissertation explore the entire design space, when possible. Intelligent pruning methods ensure that compilation is both tractable and scalable.
    Ph.D. in Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/57633/2/ntclark_1.pd
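    The sketch below illustrates the general idea behind identifying critical computation subgraphs: enumerate small connected subgraphs of an application dataflow graph and rank them by an estimate of the latency an accelerator could save by collapsing them. The example graph, latencies, and single-cycle-accelerator scoring heuristic are assumptions for illustration and do not reproduce the dissertation's hardware-cognizant technique.

```python
# Rough sketch (not the dissertation's algorithm): rank candidate subgraphs
# of a dataflow graph by estimated latency savings if collapsed into one op.
from itertools import combinations

edges = {("ld", "add"), ("add", "shl"), ("shl", "xor"), ("ld2", "xor")}
latency = {"ld": 3, "ld2": 3, "add": 1, "shl": 1, "xor": 1}
nodes = set(latency)

def connected(subset):
    # Breadth-first reachability, treating edges as undirected.
    remaining = set(subset)
    frontier = {next(iter(subset))}
    remaining -= frontier
    while frontier:
        nxt = {b for a, b in edges if a in frontier and b in remaining} | \
              {a for a, b in edges if b in frontier and a in remaining}
        remaining -= nxt
        frontier = nxt
    return not remaining

def saving(subset):
    # Assume the accelerator executes the whole subgraph in a single cycle.
    return sum(latency[n] for n in subset) - 1

candidates = []
for size in (2, 3):
    for subset in combinations(nodes, size):
        if connected(subset):
            candidates.append((saving(subset), subset))

# Print candidate subgraphs, best estimated saving first.
for score, subset in sorted(candidates, reverse=True):
    print(score, subset)
```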

    Suppressing quantum circuit errors due to system variability

    We present a post-compilation quantum circuit optimization technique that takes into account the variability in error rates that is inherent across present-day noisy quantum computing platforms. This method consists of computing subgraphs isomorphic to the input circuit and scoring each using heuristic cost functions derived from system calibration data. Using standard algorithmic test circuits, we show that it is possible to recover on average nearly 40% of missing fidelity through better qubit selection via efficient-to-compute cost functions. We demonstrate additional performance gains by considering qubit placement over multiple quantum processors. The overhead from these tools is minimal with respect to other compilation steps, such as qubit routing, as the number of qubits increases. As such, our method can be used to find qubit mappings for problems at the scale of quantum advantage and beyond.
    Comment: 8 pages, 6 figures
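    The sketch below illustrates the kind of calibration-driven scoring the abstract describes: candidate qubit layouts for a small circuit are ranked by a heuristic cost built from per-qubit readout errors and per-edge two-qubit gate errors. The calibration numbers and the log-fidelity cost are assumptions for illustration, not the paper's exact cost functions.

```python
# Illustrative sketch only: pick the best qubit layout using calibration data.
import math

# Assumed calibration data: per-qubit readout error, per-edge two-qubit gate error.
readout_err = {0: 0.02, 1: 0.05, 2: 0.01, 3: 0.03, 4: 0.02}
cx_err = {(0, 1): 0.011, (1, 2): 0.025, (2, 3): 0.008, (3, 4): 0.019}

def edge_error(a, b):
    return cx_err.get((a, b), cx_err.get((b, a)))

def layout_score(layout, circuit_edges):
    """Log of the estimated total fidelity of a layout (bigger is better)."""
    log_fid = 0.0
    for u, v in circuit_edges:          # logical two-qubit gates
        err = edge_error(layout[u], layout[v])
        if err is None:                 # physical qubits not directly coupled
            return float("-inf")
        log_fid += math.log(1.0 - err)
    for phys in layout.values():        # readout fidelity of the chosen qubits
        log_fid += math.log(1.0 - readout_err[phys])
    return log_fid

# A 3-qubit circuit with CX gates on logical pairs (0,1) and (1,2),
# and three isomorphic placements of that line onto the device.
circuit_edges = [(0, 1), (1, 2)]
candidates = [
    {0: 0, 1: 1, 2: 2},
    {0: 1, 1: 2, 2: 3},
    {0: 2, 1: 3, 2: 4},
]
best = max(candidates, key=lambda m: layout_score(m, circuit_edges))
print("best layout:", best, "score:", layout_score(best, circuit_edges))
```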

    CHIPS: Custom Hardware Instruction Processor Synthesis


    Techniques for Crafting Customizable MPSoCs

    Ph.D. (Doctor of Philosophy)