
    Transition Faults and Transition Path Delay Faults: Test Generation, Path Selection, and Built-In Generation of Functional Broadside Tests

    As the clock frequency and complexity of digital integrated circuits increase rapidly, delay testing is indispensable to guarantee the correct timing behavior of the circuits. In this dissertation, we describe methods developed for three aspects of delay testing in scan-based circuits: test generation, path selection, and built-in test generation. We first describe a deterministic broadside test generation procedure for a path delay fault model named the transition path delay fault model, which captures both large and small delay defects. Under this fault model, a path delay fault is detected only if all the individual transition faults along the path are detected by the same test. To reduce the complexity of test generation, sub-procedures of low complexity are applied before a complete branch-and-bound procedure. Next, we describe a method based on static timing analysis for selecting critical paths for test generation. Logic conditions that are necessary for detecting a path delay fault are considered to refine the accuracy of static timing analysis, using input necessary assignments. Input necessary assignments are input values that must be assigned in order to detect a fault. The method calculates more accurate path delays, selects paths that are critical during test application, and identifies undetectable path delay faults. These two methods are applicable to off-line test generation. For large circuits with high complexity and frequency, built-in test generation is a cost-effective method for delay testing. For a circuit that is embedded in a larger design, we developed a method for built-in generation of functional broadside tests that avoids excessive power dissipation during test application and the overtesting of delay faults by taking into consideration the functional constraints on the primary input sequences of the circuit.
Functional broadside tests are scan-based two-pattern tests for delay faults that create functional operation conditions during test application. To avoid the potential fault coverage loss due to the exclusive use of functional broadside tests, we also developed an optional DFT method based on state holding to improve fault coverage. The developed method achieves high delay fault coverage for benchmark circuits using simple hardware.
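The detection rule of the transition path delay fault model stated above (a path fault counts as detected only when a single test detects every transition fault along the path) can be sketched as a simple set check. The fault and test names below are illustrative placeholders, not from the dissertation:

```python
# Hypothetical sketch of the transition path delay fault detection rule:
# a path delay fault is detected only if ONE test detects every
# individual transition fault along the path. Names are made up.

def path_fault_detected(path_faults, tests):
    """Return the first test that detects all transition faults on the
    path, or None if no single test covers them all."""
    needed = set(path_faults)
    for name, detected in tests.items():
        if needed <= detected:  # every fault on the path hit by this test
            return name
    return None

# Two hypothetical two-pattern tests and the transition faults each detects.
tests = {
    "t1": {"g1_rise", "g2_fall", "g3_rise"},
    "t2": {"g1_rise", "g3_rise"},
}
print(path_fault_detected(["g1_rise", "g2_fall", "g3_rise"], tests))  # t1
print(path_fault_detected(["g1_rise", "g4_fall"], tests))             # None
```

Note that detecting each transition fault on the path under *different* tests is not enough; the conjunction must hold for one and the same test.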

    PRITEXT: Processor Reliability Improvement Through Exercise Technique

    With continuous improvements in CMOS technology, transistor sizes are shrinking aggressively every year. Unfortunately, such deep submicron process technologies are severely degraded by several wearout mechanisms, which lead to prolonged operational stress and failure. Negative Bias Temperature Instability (NBTI) is a prominent failure mechanism that degrades the reliability of current semiconductor devices. Ensuring a long operational lifetime requires improving processor reliability, which in turn necessitates mitigating the physical wearout mechanisms. NBTI severely degrades the performance of PMOS transistors in a circuit when they are negatively biased, increasing the threshold voltage and leading to critical timing failures over the operational lifetime. A lack of activity among the PMOS transistors for long durations leads to a steady increase in the threshold voltage Vth. Interestingly, NBTI stress can be recovered by removing the negative bias using appropriate input vectors. Exercising the dormant critical components in the processor has been shown to reduce NBTI stress. We use a novel methodology to generate a minimal set of deterministic input vectors that we show to be effective in reducing NBTI wearout in a superscalar processor core. We then propose and evaluate a new technique, PRITEXT, which uses these input vectors in an exercise mode to effectively reduce NBTI stress and improve the operational lifetime of superscalar processors. PRITEXT, which uses Input Vector Control, leads to a 4.5x lifetime improvement of the superscalar processor on average, with a maximum lifetime improvement of 12.7x.
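The stress/recovery dynamic described above can be illustrated with a toy first-order model: Vth drifts upward while a PMOS device is negatively biased, and partially recovers when an exercise vector removes the bias. This is an illustration of the general NBTI behavior, not the PRITEXT model, and all constants are arbitrary placeholders:

```python
# Toy first-order NBTI model (illustrative only, not from PRITEXT).
# 'S' cycles apply negative bias (stress); 'R' cycles remove it via an
# exercise input vector (partial recovery). Rates are arbitrary.

def vth_shift(schedule, stress_rate=0.002, recovery_rate=0.001):
    """schedule: string of 'S' (stress) / 'R' (recovery) cycles.
    Returns the cumulative threshold-voltage shift (arbitrary units)."""
    dv = 0.0
    for phase in schedule:
        if phase == "S":
            dv += stress_rate          # negative bias: Vth drifts up
        else:
            dv = max(0.0, dv - recovery_rate)  # exercise vector: recovery
    return dv

always_stressed = vth_shift("S" * 1000)
exercised = vth_shift(("S" * 9 + "R") * 100)  # periodic exercise mode
print(always_stressed, exercised)
```

Even infrequent exercise cycles yield a smaller cumulative Vth shift than continuous stress, which is the intuition behind exercising dormant critical components.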

    Scalable Hash Tables

    The term scalability with regard to this dissertation has two meanings: it means taking the best possible advantage of the provided resources (both computational and memory resources), and it also means scaling data structures in the literal sense, i.e., growing their capacity by “rescaling” the table. Scaling well to computational resources implies constructing the fastest, best-performing algorithms and data structures. On today’s many-core machines the best performance is immediately associated with parallelism. Since CPU frequencies stopped growing about 10-15 years ago, parallelism is the only way to take advantage of growing computational resources. But for data structures in general, and hash tables in particular, performance is not only linked to faster computations. Most execution time is actually spent waiting for memory. Thus, optimizing data structures to reduce the number of memory accesses, or to take better advantage of the memory hierarchy, especially through predictable access patterns and prefetching, is just as important. In terms of scaling the size of hash tables, we have identified three domains where scaling hash-based data structures has previously been lacking: space-efficient growing, concurrent hash tables, and Approximate Membership Query data structures (AMQ filters). Throughout this dissertation, we describe the problems in these areas and develop efficient solutions. We highlight three different libraries that we have developed over the course of this dissertation, each containing multiple implementations that have proven throughout our testing to be among the best implementations in their respective domains. In this composition they offer a comprehensive toolbox that can be used to solve many kinds of hashing-related problems or to develop individual solutions for further ones. DySECT is a library for space-efficient hash tables, specifically growing space-efficient hash tables that scale with their input size.
It contains the namesake DySECT data structure in addition to a number of different probing- and cuckoo-based implementations. Growt is a library for highly efficient concurrent hash tables. It contains a very fast base table and a number of extensions to adapt this table to any purpose. All extensions can be combined to create a variety of different interfaces. In our extensive experimental evaluation, each adaptation has been shown to be among the best hash tables for its specific purpose. Lpqfilter is a library for concurrent approximate membership query (AMQ) data structures. It contains some original data structures, such as the linear probing quotient filter, as well as some novel approaches to dynamically sized quotient filters.
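Two of the themes above, predictable access patterns and growing a table by rescaling, can be sketched in a minimal linear-probing hash table that doubles and rehashes when a load-factor threshold is passed. This is a generic sketch of the growth idea, not the DySECT, growt, or lpqfilter implementations:

```python
# Minimal linear-probing hash table that grows by "rescaling" (double
# the slot array and rehash) once the load factor exceeds a threshold.
# A generic sketch, not any of the dissertation's library implementations.

class LPTable:
    def __init__(self, capacity=8, max_load=0.6):
        self.slots = [None] * capacity
        self.size = 0
        self.max_load = max_load

    def _probe(self, key, slots):
        """Linear scan from the hash position: a predictable,
        cache-friendly access pattern."""
        i = hash(key) % len(slots)
        while slots[i] is not None and slots[i][0] != key:
            i = (i + 1) % len(slots)
        return i

    def put(self, key, value):
        if (self.size + 1) / len(self.slots) > self.max_load:
            self._grow()
        i = self._probe(key, self.slots)
        if self.slots[i] is None:
            self.size += 1
        self.slots[i] = (key, value)

    def get(self, key):
        entry = self.slots[self._probe(key, self.slots)]
        return entry[1] if entry else None

    def _grow(self):
        old = self.slots
        self.slots = [None] * (2 * len(old))  # rescale: double and rehash
        for entry in old:
            if entry is not None:
                self.slots[self._probe(entry[0], self.slots)] = entry

t = LPTable()
for n in range(100):
    t.put(n, n * n)
print(t.get(7), len(t.slots))  # 49, and capacity has grown well past 8
```

Rehashing every element on growth is the simple baseline that space-efficient growing schemes improve on: it temporarily holds both the old and new arrays and touches every entry, which is exactly the memory-traffic cost the dissertation targets.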

    The Effects of Vertebral Variation on the Mechanical Outcomes of Vertebroplasty

    Osteoporotic vertebral compression fractures are a commonly encountered clinical problem that causes a reduced quality of life for a large proportion of those affected. One of the treatments for this type of fracture is vertebroplasty, in which bone cement is injected into the vertebral body to stabilise the vertebra and relieve pain. Despite being a frequently used treatment, a number of studies and randomised clinical trials have questioned the efficacy of the procedure, suggesting that it is no more effective than a placebo in terms of pain relief. Finite Element (FE) models allow an investigation into the structural and geometric variations that affect the response to augmentation. However, current specimen-specific FE models are limited by poor reproduction of cement augmentation behaviour. The aims of this thesis were to develop new methods of modelling the vertebral body in an augmented state and to use these models as input to a statistical shape and appearance model (SSAM). Methods were developed for experimental testing, cement augmentation, and modelling through a specimen-specific modelling approach to create and solve FE models. These methods were initially used with bovine tail vertebrae and then refined for use with human lumbar vertebrae. These latter models formed the input set for the creation of an SSAM, in which vertebral and augmentation variables were examined. Models of augmentation in human lumbar vertebrae achieved good agreement with their experimental counterparts through the development of novel modelling techniques. A new SSAM method has been developed for human lumbar vertebrae and applied to evaluate the mechanical performance of vertebroplasty. The tools developed can now be applied to examine wider patient cohorts and other clinical therapies.

    Design study for LANDSAT-D attitude control system

    The gimballed Ku-band antenna system for communication with TDRS was studied. An error analysis demonstrated that the antenna cannot be pointed open-loop at TDRS by an onboard programmer, and that an autotrack system is therefore required. After some tradeoffs, a two-axis, azimuth-elevation gimbal configuration was recommended for the antenna. It is shown that gimbal lock occurs only when LANDSAT-D is over water, where a temporary loss of the communication link to TDRS is of no consequence. A preliminary gimbal control system design is also presented. A digital computer program was written that computes antenna gimbal angle profiles, assesses the percentage of antenna beam interference with the solar array, and determines whether the spacecraft is over land or water, over a lighted or a dark earth, and whether it is in eclipse.
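The core geometry behind an azimuth-elevation gimbal angle profile is the conversion of a line-of-sight vector (spacecraft to TDRS) into two gimbal angles. The sketch below illustrates only that basic conversion under an assumed axis convention; it is not the LANDSAT-D program, and the frame definition is a placeholder:

```python
# Hedged sketch: line-of-sight vector -> azimuth/elevation gimbal angles.
# Assumed convention (NOT from the study): x forward, y right, z down
# in the antenna base frame.

import math

def az_el(los):
    """los: (x, y, z) line-of-sight vector in the antenna base frame.
    Returns (azimuth, elevation) in degrees."""
    x, y, z = los
    az = math.degrees(math.atan2(y, x))               # rotation about down axis
    el = math.degrees(math.atan2(-z, math.hypot(x, y)))  # angle above x-y plane
    return az, el

print(az_el((1.0, 1.0, 0.0)))   # (45.0, 0.0)
print(az_el((0.0, 0.0, -1.0)))  # zenith: elevation 90, azimuth ill-defined
```

The second case shows where gimbal lock comes from in an azimuth-elevation mount: with the line of sight along the azimuth axis, azimuth is ill-defined and small target motions demand unbounded azimuth rates.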

    Enhancement of the Illinois Scan Architecture for Multiple Scan Inputs and Transition Faults

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. Semiconductor Research Corporation / SRC 99-TJ-717