A Complexity Theory for VLSI

Abstract

The established methodologies for studying computational complexity can be applied to the new problems posed by very large-scale integrated (VLSI) circuits. This thesis develops a "VLSI model of computation" and derives upper and lower bounds on the silicon area and time required to solve the problems of sorting and discrete Fourier transformation. In particular, the area A and time T taken by any VLSI chip using any algorithm to perform an N-point Fourier transform must satisfy AT^2 ≥ c N^2 log^2 N, for some fixed c > 0. A more general result for both sorting and Fourier transformation is that AT^(2x) = Ω(N^(1+x) log^(2x) N) for any x in the range 0 ≤ x ≤ 1; for example, taking x = 1/2 gives AT = Ω(N^(3/2) log N). The tightness of these bounds is demonstrated by the existence of nearly optimal circuits for both sorting and Fourier transformation. The circuits based on the shuffle-exchange interconnection pattern are fast but large: T = O(log^2 N) for Fourier transformation, T = O(log^3 N) for sorting; both have area A of at most O(N^2 / log^(1/2) N). The circuits based on the mesh interconnection pattern are slow but small: T = O(N^(1/2) log log N), A = O(N log^2 N).
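As a quick check of the "nearly optimal" claim (a worked calculation, not part of the original abstract), one can substitute the stated circuit parameters into the AT^2 lower bound for the Fourier transform:

```latex
\begin{align*}
\text{Shuffle-exchange:}\quad
  A\,T^2 &= O\!\left(\tfrac{N^2}{\log^{1/2} N}\cdot(\log^2 N)^2\right)
          = O\!\left(N^2 \log^{7/2} N\right),\\
\text{Mesh:}\quad
  A\,T^2 &= O\!\left(N \log^2 N \cdot \left(N^{1/2}\log\log N\right)^2\right)
          = O\!\left(N^2 \log^2 N\,(\log\log N)^2\right).
\end{align*}
```

Both products are within small factors, O(log^(3/2) N) and O((log log N)^2) respectively, of the Ω(N^2 log^2 N) lower bound.

For readers unfamiliar with the shuffle-exchange interconnection pattern mentioned above, the following minimal sketch (an illustration of the standard graph definition, not the circuit layout from the thesis) builds its edge set on N = 2^k nodes: each node has an "exchange" neighbour (lowest bit of its label flipped) and a "shuffle" neighbour (cyclic left rotation of its k-bit label).

```python
def shuffle_exchange_edges(k):
    """Edge set of the shuffle-exchange graph on N = 2**k nodes.

    Exchange edge: i <-> i ^ 1              (flip the lowest bit)
    Shuffle edge:  i <-> cyclic left rotation of i's k-bit label
    Illustration only -- not the VLSI layout described in the thesis.
    """
    n = 1 << k
    mask = n - 1
    edges = set()
    for i in range(n):
        # Exchange edge (stored undirected, smaller endpoint first).
        edges.add((min(i, i ^ 1), max(i, i ^ 1)))
        # Shuffle edge: rotate the k-bit label left by one position.
        j = ((i << 1) | (i >> (k - 1))) & mask
        edges.add((min(i, j), max(i, j)))
    return edges

if __name__ == "__main__":
    for edge in sorted(shuffle_exchange_edges(3)):  # N = 8 nodes
        print(edge)
```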
