    Verification of systolic arrays: a stream functional approach

    Journal Article: We illustrate that the verification of systolic architectures can be carried out using techniques developed in the context of program verification. This is achieved by decomposing the original problem into separately proving the correctness of the data representation and of the individual processing elements in the systolic architecture. By expressing a processing element as a function on a stream of data, we are able to use standard proof techniques from programming language theory. This decomposition leads to relatively straightforward proofs of the properties of the systolic architecture. We illustrate the techniques via a substantial example: the proof of correctness of a linear-time systolic architecture for computing the gcd of polynomials. Although this architecture was designed several years ago, a formal proof of its correctness has not hitherto appeared in the literature.
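
    As a rough sketch of this stream-functional view (using a simple FIR-filter cell rather than the paper's polynomial-gcd cell, and Python generators rather than the paper's notation), a processing element can be written as a function from an input stream to an output stream, and the array as a composition of such functions:

        from itertools import chain, islice, repeat

        def pe(w, xy_in):
            # One processing element as a function on a stream of (x, y) pairs:
            # x is passed on with a one-step delay, y accumulates w * x.
            x_prev = 0
            for x, y in xy_in:
                yield x_prev, y + w * x
                x_prev = x

        def systolic_array(weights, xs):
            # Compose the cells by chaining their streams; the output stream is the
            # convolution y_n = sum_k weights[k] * x_{n-k}.  Correctness can then be
            # argued cell by cell, mirroring the per-PE decomposition described above.
            stream = zip(xs, repeat(0))          # pair each input with an initial y = 0
            for w in weights:
                stream = pe(w, stream)
            return (y for _, y in stream)

        # The impulse response of the three-cell array equals its weights: [1, 2, 3, 0].
        impulse = chain([1], repeat(0))
        print(list(islice(systolic_array([1, 2, 3], impulse), 4)))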

    TRACIS: transformations on Ada for circuit synthesis a report on the methodology for a silicon compiler

    Technical report: This report describes in detail the ongoing design and implementation of a transformation system for compiling specifications of integrated circuits into silicon. There are many levels in this process; the area we focus on produces target specifications of asynchronous and synchronous control units and the associated data paths. This target is compatible with the ASSASSIN system [1], which generates layouts from specifications of control units. The input to our system is an Ada program (restricted to a single Procedure Body) which specifies a certain computation. The Procedure Body is itself assumed to contain no package or task declarations or instantiations and no Entry call statements. The result of the transformations performed by the system is a program consisting of the original specification, with the target description appended to it.

    Systolic array synthesis by static analysis of program dependencies

    Journal Article: We present a technique for mapping recurrence equations to systolic arrays. While this problem has been studied in considerable detail, the recurrence equations analysed here are a generalization of those studied previously. In an earlier paper [14] we showed how systolic arrays can be synthesized from such generalized recurrence equations by a combination of affine transformations and explicit pipelining. This paper extends those results in two directions. Firstly, a multistage pipelining technique is proposed, which permits the synthesis of systolic arrays with irregular data flow. Secondly, we develop analysis techniques for the synthesis of systolic arrays whose computation is governed by control signals, in a systematic manner that is amenable to mechanization. The full paper also discusses how these techniques can be applied to the mapping problem for more general architectures.
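
    To make the mapping step concrete (the recurrence and the particular affine functions below are illustrative, not taken from the paper), the following Python sketch applies the classical schedule t = i + j + k and allocation p = (i, j) to the matrix-product recurrence C[i,j,k] = C[i,j,k-1] + A[i,k]*B[k,j], then checks that the mapping respects the dependence and never assigns two computations to the same processor at the same time:

        import numpy as np

        N = 3
        # Uniform recurrence for matrix product: C[i,j,k] = C[i,j,k-1] + A[i,k] * B[k,j];
        # its only dependence vector is the accumulation along k.
        deps = [np.array([0, 0, 1])]

        # Affine space-time mapping: time t = i + j + k, processor p = (i, j).
        schedule   = np.array([1, 1, 1])
        allocation = np.array([[1, 0, 0],
                               [0, 1, 0]])

        # Causality: every dependence must take at least one time step.
        assert all(schedule @ d >= 1 for d in deps)

        # No two index points scheduled at the same time may share a processor.
        seen = set()
        for i in range(N):
            for j in range(N):
                for k in range(N):
                    z = np.array([i, j, k])
                    key = (int(schedule @ z), tuple(allocation @ z))
                    assert key not in seen, "two computations on one PE at one step"
                    seen.add(key)

        print(f"valid mapping onto a {N}x{N} array, makespan {max(t for t, _ in seen)} steps")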

    On synthesizing systolic arrays from recurrence equations with linear dependencies

    Journal Article: We present a technique for synthesizing systolic architectures from recurrence equations. A class of such equations (Recurrence Equations with Linear Dependencies) is defined, and the problem of mapping such equations onto a two-dimensional architecture is studied. We show that such a mapping is provided by means of a linear allocation and timing function. An important result is that under such a mapping the dependencies remain linear. After obtaining a two-dimensional architecture by applying such a mapping, a systolic array can be derived if the communication can be spatially and temporally localized. We show that a simple test, consisting of finding the zeroes of a matrix, is sufficient to determine whether this localization can be achieved by pipelining, and we give a construction that generates the array when such pipelining is possible. The technique is illustrated by automatically deriving a well-known systolic array for factoring a band matrix into lower and upper triangular factors.
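
    The observation that dependencies remain linear under a linear mapping can be illustrated with a small Python example (the dependence matrix D, the mapping T, and the conjugation formulation below are an illustrative gloss, not the paper's construction): if the value at index point z depends on the value at D z, then after an invertible linear change of index space z -> T z the dependence is again given by a matrix, namely T D T^-1:

        import numpy as np

        # A recurrence with a linear dependence: the value at z depends on the value at D @ z.
        D = np.array([[1, 0],
                      [1, 1]])

        # An invertible (here unimodular) linear mapping of the index space, z -> T @ z.
        T = np.array([[1, 1],
                      [0, 1]])

        # In the transformed space the dependence is again linear, with matrix T D T^-1:
        # the image point T @ z depends on T @ (D @ z) = (T @ D @ inv(T)) @ (T @ z).
        D_mapped = T @ D @ np.linalg.inv(T)

        z = np.array([3, 2])
        assert np.allclose(D_mapped @ (T @ z), T @ (D @ z))
        print("dependence matrix in the transformed index space:\n", D_mapped)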

    BPPart: RNA-RNA Interaction Partition Function in the Absence of Entropy


    Allocating memory arrays for polyhedra

    We have been investigating problems which arise in compiling single-assignment languages (in which memory is not explicitly allocated) into parallel code. As in standard parallelizing compilers, different index-space transformations are performed on variables declared over convex polyhedral regions. Polyhedra can be transformed in such a way as to reduce the volume of their bounding box, which we use to reduce the amount of memory allocated to a variable. Allocating memory to variables defined over finite convex polyhedral regions requires a tradeoff between the complexity of the memory addressing function and the amount of memory used. We present a tradeoff in which the memory address function is limited to an affine function of the indices (thus memory is allocated to a rectangular parallelepiped region). Given this constraint, we seek a unimodular transformation which minimizes the volume of the bounding box of the polyhedron. This is a non-linear programming problem. We present a method in which the volume of the bounding box is minimized one dimension at a time by a succession of skewing transformations, each of which is a linear programming problem.
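
    A toy Python illustration of the dimension-at-a-time idea (the polyhedron's vertices are made up, and a brute-force search over small integer skew factors stands in for the linear program the abstract describes):

        import numpy as np

        # Vertices of a convex polyhedral region over which a variable is defined.
        verts = np.array([[0, 0], [8, 8], [10, 8]])

        def bbox_extents(points):
            # Edge lengths of the axis-aligned bounding box of a point set.
            return points.max(axis=0) - points.min(axis=0)

        def best_skew(points, dim=0, other=1, srange=range(-4, 5)):
            # Shrink the bounding box along `dim` by the unimodular skew
            # x[dim] += s * x[other], keeping the best integer s in srange.
            best = None
            for s in srange:
                T = np.eye(2, dtype=int)
                T[dim, other] = s
                extent = bbox_extents(points @ T.T)[dim]
                if best is None or extent < best[1]:
                    best = (T, extent)
            return best

        T, extent = best_skew(verts)
        print("original extents:", bbox_extents(verts))        # [10  8]
        print("skewed extents:  ", bbox_extents(verts @ T.T))  # [2 8]: bounding-box volume drops from 80 to 16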