
    High-Speed Clocking Deskewing Architecture

    As CMOS technology continues to scale into the deep sub-micron regime, the demand for higher frequencies and higher levels of integration poses a significant challenge for the clock generation and distribution design of microprocessors. Hence, skew optimization schemes are necessary to limit clock inaccuracies to a small fraction of the clock period. In this thesis, a crude deskew buffer (CDB) is designed to facilitate an adaptive deskewing scheme that reduces the clock skew in an ASIC clock network under manufacturing process, supply voltage, and temperature (PVT) variations. The crude deskew buffer adopts a DLL structure and operates at a nominal clock frequency of 1 GHz, with an operating range of 800 MHz to 1.2 GHz. An approximate phase resolution of 91.6 ps is achieved under all simulation conditions, including various process corners and temperature variations. When the crude deskew buffer is applied to seven ASIC clock networks, each under various PVT variations, a maximum reduction of 67.1% in absolute maximum clock skew is achieved. Furthermore, the maximum phase difference among all the clock signals in the seven networks has been reduced from 957.1 ps to 311.9 ps, a reduction of 67.4%. Overall, the CDB serves two important purposes in the proposed deskewing methodology: reducing the absolute maximum clock skew and synchronizing all the clock signals to within a certain limit for the fine deskewing scheme. By generating various clock phases, the CDB can also be useful in high-speed debugging and testing, where the clock duty cycle can be adjusted accordingly. Various positive and negative duty-cycle values can be generated depending on the phase resolution and the number of clock phases being “hot swapped”. Starting from a nominal 500 ps duty cycle, the following values can be achieved for both the positive and negative duty cycle: 224 ps, 316 ps, 408 ps, 592 ps, 684 ps, and 776 ps.
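
    As a rough illustration (not from the thesis), the sketch below shows how the quoted duty-cycle values can be derived from the 500 ps nominal pulse width and the roughly 91.6 ps phase resolution; the assumption that each “hot swap” shifts one clock edge by a single phase step, and all names in the code, are ours.

        # Illustrative sketch, not from the thesis: relate the reported
        # positive/negative duty-cycle values to the CDB's ~91.6 ps phase
        # resolution. Assumes a 1 GHz clock (500 ps nominal pulse width) and
        # that "hot swapping" k clock phases moves one edge by k phase steps.
        NOMINAL_PULSE_PS = 500.0   # 50% duty cycle at 1 GHz
        PHASE_STEP_PS = 91.6       # reported CDB phase resolution

        def pulse_widths(max_phases=3):
            """Pulse widths reachable by shifting an edge by +/- k phase steps."""
            widths = [NOMINAL_PULSE_PS + k * PHASE_STEP_PS
                      for k in range(-max_phases, max_phases + 1) if k != 0]
            return sorted(widths)

        # Prints [225.2, 316.8, 408.4, 591.6, 683.2, 774.8], within a couple of
        # picoseconds of the 224/316/408/592/684/776 ps values in the abstract.
        print([round(w, 1) for w in pulse_widths()])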

    LOW OVERHEAD DELAY TESTING OF ASICS

    Delay testing has become increasingly essential as chip geometries shrink [1,2,3]. A low-overhead or cost-effective delay test methodology is successful when it results in a minimal number of effective tests and eases the demands on an already burdened IC design and test staff. This paper describes one successful method in use for IBM ASICs that resulted in a modest increase in total test patterns, generally ranging between 10% and 90%. Example ICs showed a pattern increase of as little as 14% over the stuck-at fault baseline, with a transition fault coverage of 89%. In an ASIC business, a large number of ICs are processed, which does not allow personnel to understand how to test each individual IC design in detail. Instead, timing- and testability-aware design automation software ensures effective and efficient tests. The resulting tests detect random spot timing-delay defects. These defects are time-zero failures, not reliability wear-out mechanisms.