
    Analysis of Hardware Descriptions

    The design process for integrated circuits requires extensive analysis of circuit descriptions. An important class of analyses determines how easily a physical component can be checked for manufacturing errors. As circuit complexities grow rapidly, the problem of testing circuits also becomes increasingly difficult. This thesis explores the potential for analysing a recent high-level hardware description language called Ruby. In particular, we are interested in performing testability analyses of Ruby circuit descriptions. Ruby is amenable to algebraic manipulation, so we have sought transformations that improve testability while preserving behaviour. The analysis of Ruby descriptions is performed by adapting a technique called abstract interpretation, which has been used successfully to analyse functional programs. This technique is most applicable where the analysis to be captured operates over structures isomorphic to the structure of the circuit. Many digital systems analysis tools require the circuit description to be given in some special form. This can lead to inconsistency between representations, and involves additional work converting between them. We propose using the original description medium, in this case Ruby, for performing analyses. A related technique, called non-standard interpretation, is shown to be very useful for capturing many circuit analyses. An implementation of a system that performs non-standard interpretation forms the central part of the work. This allows Ruby descriptions to be analysed under alternative interpretations such as test pattern generation and circuit layout. This system follows an approach similar to Boute's system semantics work and O'Donnell's work on Hydra, but captures a larger class of interpretations and offers a richer description language. 
The implementation presented here is constructed to allow a large degree of code sharing between different analyses. Several analyses have been implemented, including simulation, test pattern generation and circuit layout. Non-standard interpretation provides a good framework for implementing these analyses. A general model for making non-standard interpretations is presented. Combining forms, which combine two interpretations to produce a new one, are also introduced. This allows complex circuit analyses to be decomposed in a modular manner into smaller analyses which can be built independently.
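The core idea can be sketched in a few lines of Python (illustrative only; the thesis works in Ruby, and the names `interpret`, `SIM`, and `COUNT` here are invented): a circuit description is a structure of primitives and combining forms, and each interpretation supplies its own meaning for every name, so the same description can be simulated or analysed without rewriting it.

```python
# Illustrative sketch (Python, not the thesis's Ruby): a circuit is a
# structure of primitive names and combining forms, and an
# "interpretation" is a dictionary giving each name a meaning.
def interpret(circ, interp):
    tag, *rest = circ
    if tag == "prim":                       # a primitive gate
        return interp[rest[0]]
    args = [interpret(c, interp) for c in rest]
    return interp[tag](*args)               # a combining form

# One description of a half adder: an xor and an and-gate in parallel.
half_adder = ("par", ("prim", "xor"), ("prim", "and"))

# Standard interpretation: boolean simulation.
SIM = {
    "xor": lambda a, b: a ^ b,
    "and": lambda a, b: a & b,
    "par": lambda f, g: lambda a, b: (f(a, b), g(a, b)),
}

# Non-standard interpretation of the same description: gate counting,
# a stand-in for richer analyses such as test generation or layout.
COUNT = {
    "xor": 1,
    "and": 1,
    "par": lambda m, n: m + n,
}

simulate = interpret(half_adder, SIM)       # (sum, carry) function
gates = interpret(half_adder, COUNT)        # total gate count
```

The combining forms mentioned above would correspond here to operations that merge two such dictionaries so a single traversal yields both results; the point of the sketch is only that the description is written once and reinterpreted many times.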

    Improved Path Recovery in Pseudo Functional Path Delay Test Using Extended Value Algebra

    Scan-based delay test achieves high fault coverage due to its improved controllability and observability. This is particularly important for our K Longest Paths Per Gate (KLPG) test approach, which has additional necessary assignments on the paths. At the same time, some percentage of the flip-flops in the circuit will not scan, increasing the difficulty of test generation. In particular, there is no direct control on the outputs of those non-scan cells. All the non-scan cells that cannot be initialized are considered “uncontrollable” in the test generation process. They behave like “black boxes” and, thus, may block a potential path propagation, resulting in path delay test coverage loss. It is common for the timing critical paths in a circuit to pass through nodes influenced by the non-scan cells. In our work, we have extended the traditional Boolean algebra by including the “uncontrolled” state as a legal logic state, so that we can improve path coverage. Many path pruning decisions can be taken much earlier, and many of the paths lost to uncontrollable non-scan cells can be recovered, increasing path coverage and potentially reducing average CPU time per path. We have extended the existing traditional algebra to an 11-value algebra: Zero (stable), One (stable), Unknown, Uncontrollable, Rise, Fall, Zero/Uncontrollable, One/Uncontrollable, Unknown/Uncontrollable, Rise/Uncontrollable, and Fall/Uncontrollable. The logic descriptions for the NOT, AND, NAND, OR, NOR, XOR, XNOR, PI, Buff, Mux, TSL, TSH, TSLI, TSHI, TIE1 and TIE0 cells in the ISCAS89 benchmark circuits have been extended to the 11-value truth table. With 10% non-scan flip-flops, improved path delay fault coverage has been observed compared with the traditional algebra. The greater the number of long paths we want to test, the greater the path recovery advantage we achieve using our algebra. 
Along with improved path recovery, we have been able to test a greater number of transition fault sites. In most cases, the average CPU time per path is also lower when using the 11-value algebra. The number of tested paths increased by an average of 1.9x for robust tests, and 2.2x for non-robust tests, for K=5 (the five longest rising and five longest falling transition paths through each line in the circuit), using the eleven-value algebra in contrast to the traditional algebra. The transition fault coverage increased by an average of 70%. The improvement increased with higher K values. The CPU time using the extended algebra increased by an average of 20%, so the CPU time per path decreased by an average of 40%. In future work, the extended algebra could achieve better test coverage for memory-intensive circuits, circuits with logic black boxes, third-party IPs, and analog units.
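How combined states such as Zero/Uncontrollable compose can be sketched in Python (illustrative only: the value encoding and gate rules here are plausible reconstructions, not the actual 11-value truth tables). Each extended value is treated as a set of basic states, and a gate applies its basic rule elementwise:

```python
from itertools import product

# Extended values as sets of basic states: "0", "1", unknown "X",
# uncontrollable "U"; e.g. {"0", "U"} models Zero/Uncontrollable.
ZERO, ONE, UNC = frozenset("0"), frozenset("1"), frozenset("U")
ZERO_UNC = frozenset({"0", "U"})

def and_basic(a, b):
    # Hypothetical basic AND rules: 0 dominates, 1 is the identity,
    # and otherwise Unknown wins over Uncontrollable.
    if "0" in (a, b):
        return "0"
    if a == "1":
        return b
    if b == "1":
        return a
    return "X" if "X" in (a, b) else "U"

def and_ext(a, b):
    # Gate rule applied elementwise over all possible basic states.
    return frozenset(and_basic(x, y) for x, y in product(a, b))
```

Under these rules `and_ext(ZERO, UNC)` collapses to stable Zero: a controlling input dominates an uncontrollable one, which is the kind of early pruning decision that recovers paths otherwise lost to non-scan cells.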

    Reduced-Order Reference Models for Adaptive Control of Space Structures

    In addition to serving as a brief overview of aspects relevant to reduced-order modeling (in particular balanced-state and modal techniques) as applied to structural finite element models, this work produced tools for visualizing the relationship between the modes of a model and the states of its balanced representation. Specifically, error contour and mean error plots were developed that provide a designer with frequency response information absent from a typical analysis of a balanced model via its Hankel singular values. The plots were then used to analyze the controllability and observability aspects of finite element models of an illustrative system from a modal perspective -- this aided in the identification of computational artifacts in the models and helped predict points at which to halt the truncation of balanced states. Balanced reduced-order reference models of the illustrative system were implemented as part of a direct adaptive control algorithm to observe the effectiveness of the models. It was learned that the truncation point selected by observing the mean error plot produced the most satisfactory results overall -- the model closely approximated the dominant modes of the system and eliminated the computational artifacts. The problem of improving the performance of the system was also considered. The truncated balanced model was recast in modal form so that its damping could be increased, and the settling time decreased by about eighty percent.
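The balanced-reduction pipeline behind this analysis can be sketched numerically (a toy two-mode system, not a structural finite element model; `lyap` is a hand-rolled Lyapunov solve used only for illustration):

```python
# Toy balanced-model sketch with NumPy: Hankel singular values come
# from the controllability and observability Gramians, and small
# values mark states that are weakly controllable AND weakly
# observable, i.e. candidates for truncation.
import numpy as np

def lyap(A, Q):
    # Solve A P + P A^T + Q = 0 via the Kronecker-product identity.
    n = A.shape[0]
    M = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

# Toy stable system x' = Ax + Bu, y = Cx with one slow, one fast mode.
A = np.array([[-1.0, 0.0], [0.0, -100.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

P = lyap(A, B @ B.T)       # controllability Gramian
Q = lyap(A.T, C.T @ C)     # observability Gramian

# Hankel singular values: square roots of the eigenvalues of P Q.
hsv = np.sort(np.sqrt(np.linalg.eigvals(P @ Q).real))[::-1]
r = int(np.sum(hsv > 0.05 * hsv[0]))   # states kept after truncation
```

Here the slow, well-coupled mode dominates the Hankel spectrum, so truncation keeps a single state; the contribution of the work above is choosing that truncation point from error contour and mean error plots rather than from the singular values alone.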

    Study of Single Event Transient Error Mitigation

    Single Event Transient (SET) errors in ground-level electronic devices are a growing concern in the radiation hardening field. However, effective SET mitigation technologies that satisfy ground-level demands such as generality, flexibility, efficiency, and speed are limited. The classic Triple Modular Redundancy (TMR) method is the most well-known and popular technique in space and nuclear environments, but it incurs more than 200% area and power overhead, which is too costly for ground-level applications. Meanwhile, coding techniques are extensively used to suppress upset errors in storage cells, but the irregularity of combinational logic limits their use in SET mitigation. Therefore, SET mitigation techniques suitable for ground-level applications need to be addressed. Aware of these demands, this thesis proposes two novel approaches based on redundant wires and approximate logic. Redundant wire insertion is a SET mitigation technique: by selectively adding redundant wire connections, it can stop targeted transient faults from propagating on the fly. This thesis proposes a set of signature-based evaluation equations to efficiently estimate the protection provided by each redundant wire candidate. Based on the estimated results, a greedy algorithm repeatedly inserts the best candidate. Simulation results substantiate that the evaluation equations achieve up to 98% accuracy on average. Regarding protection, the technique can mask 18.4% of the faults with a 4.3% area, 4.4% power, and 5.4% delay overhead on average. Overall, the quality of the protection obtained is 2.8 times better than in previous work. Additionally, the impact of synthesis constraints and signature length is discussed. Approximate logic is a partial TMR technique offering a trade-off between fault coverage and area overhead. 
The approximate logic consists of an under-approximate logic and an over-approximate logic: the under-approximate logic is a subset of the original min-terms and the over-approximate logic is a subset of the original max-terms. This thesis proposes a new algorithm for generating the two approximate logics. During generation, the algorithm considers the intrinsic failure probability of each gate and uses a confidence interval estimate to minimize the required computation. The technique is applied to two fault models, stuck-at and SET, and the results are compared and discussed. The results show that the technique can reduce the error by 75% with an area penalty of 46% on some circuits. The delay overhead of this technique is always two additional levels of logic. The two proposed SET mitigation techniques are both applicable to generic combinational logic and are highly flexible. The simulations show promising SET mitigation ability. The proposed techniques give designers more choices when developing reliable combinational logic for ground-level applications.
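The bracketing relation that makes approximate logic work can be sketched in Python (a toy example; the functions `u`, `v`, and the voter wiring are illustrative, not the generation algorithm described above). The under-approximation u implies f, f implies the over-approximation v, and a simple voter masks a transient on f wherever the two agree:

```python
from itertools import product

# Protected function and hypothetical approximations with u <= f <= v:
# u keeps a subset of f's min-terms, v is implied by f.
f = lambda a, b, c: a & b
u = lambda a, b, c: a & b & c
v = lambda a, b, c: a

def voter(f_out, u_out, v_out):
    # u = v = 1 forces 1 (u = 1 implies f = 1); u = v = 0 forces 0
    # (v = 0 implies f = 0); only where they disagree is the possibly
    # faulty f output trusted.
    return u_out | (f_out & v_out)

# Number of input combinations where a single transient on f is masked.
masked = sum(u(*x) == v(*x) for x in product((0, 1), repeat=3))
```

When f is fault-free the voter output equals f on every input, and on the 5 of 8 input combinations where u and v agree, even a flipped f output is corrected; coverage grows as the approximations are tightened, at the cost of area.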

    Design of wide-area damping control systems for power system low-frequency inter-area oscillations

    The recently developed robust control theories and wide-area measurement technologies make wide-area real-time feedback control potentially promising. The objective of this research is to develop a systematic procedure for designing a centralized damping control system for power grid inter-area oscillations by applying wide-area measurement and robust control techniques while placing emphasis on several practical considerations. The first consideration is the selection of stabilizing signals. Geometric measures of controllability/observability are used to select the most effective stabilizing signals and control sites. Line power flows and currents are found to be the most effective input signals. The second consideration is the effect of time delays in the communication of input/output signals. Time delays reduce the efficiency of the damping control system; in some cases, large delays can destabilize the system. Time delays should be modeled in the controller design procedure so that the resulting controller can handle a range of delays. In this work, time delays are modeled by Padé approximations and the delay uncertainty is described by Linear Fractional Transformations (LFT). The third consideration is controller robustness. The synthesis of the controller is defined as a problem of mixed H2/H∞ output-feedback control with regional pole placement and is solved by the Linear Matrix Inequality (LMI) approach. The controller designed by robust control techniques has satisfactory performance over a wide range of operating points. The fourth consideration is the efficiency of a controller designed by linear techniques in realistic nonlinear discrete environments. A tuning process and nonlinear simulations are used to modify the controller parameters to ensure the performance and robustness of the controller designed with linear techniques. The last consideration is the selection of PMU data reporting rates. 
The performance of controllers designed in the s-domain is tested in digital environments, and proper PMU data reporting rates are selected with consideration of the effects of time delay. The design procedure of wide-area damping systems is illustrated by three study systems. The first study system is a two-area four-machine system. The second is the New England 39-bus 10-machine system. The last is a 29-generator 179-bus study system, which is a reduced-order model of the Western Electricity Coordinating Council (WECC) system.
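The delay-modeling step can be illustrated numerically (a sketch with an assumed 100 ms delay; the actual delay values and approximation orders used in the design may differ). A first-order Padé approximation of e^{-sT} is all-pass and tracks the true phase lag well in the low-frequency band where inter-area modes live:

```python
import numpy as np

T = 0.1                         # assumed 100 ms wide-area communication delay
w = np.logspace(-1, 2, 400)     # frequency grid, rad/s
s = 1j * w

pade = (1 - s * T / 2) / (1 + s * T / 2)   # first-order Pade of exp(-sT)
exact = np.exp(-s * T)

# The approximation is exactly all-pass (unit magnitude everywhere)...
mag_err = np.max(np.abs(np.abs(pade) - 1.0))

# ...and its phase tracks the true delay closely below ~10 rad/s, the
# range containing 0.1-1 Hz inter-area oscillations.
low = w < 10.0
phase_err = np.max(np.abs(np.angle(pade[low]) - np.angle(exact[low])))
```

At higher frequencies the approximation's phase diverges from the true delay, which is one reason the residual delay uncertainty is wrapped into an LFT description for the robust synthesis.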

    Deterministic and Probabilistic Test Generation for Binary and Ternary Quantum Circuits

    It is believed that quantum computing will begin to have an impact around the year 2010. Much work has been done on the physical realization and synthesis of quantum circuits, but nothing so far on the problem of generating tests and localizing faults for such circuits. Even fault models for quantum circuits have not yet been formulated. We propose an approach to test generation for a wide category of fault models covering single and multiple faults. It uses deterministic and probabilistic tests to detect faults. A fault table is created that includes probabilistic information. Where possible, deterministic tests are selected first while covering faults, in order to shorten the total test sequence. The method is applicable to both binary and ternary quantum circuits. The system generates test sequences and adaptive trees for fault localization for small binary and ternary quantum circuits.
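The fault-table-driven selection can be sketched as a greedy cover (the table entries below are hypothetical detection probabilities, and treating any nonzero probability as coverage is a simplification of the probabilistic-test handling):

```python
# Hypothetical fault table: detection probability of each fault under
# each test (1.0 = deterministic detection).
fault_table = {
    "t1": {"f1": 1.0, "f2": 0.0, "f3": 0.5},
    "t2": {"f1": 0.0, "f2": 1.0, "f3": 0.0},
    "t3": {"f1": 0.0, "f2": 0.0, "f3": 0.5},
}

def greedy_tests(table, faults):
    chosen, uncovered = [], set(faults)
    while uncovered:
        # Prefer deterministic detections, then the largest total
        # detection probability over still-uncovered faults.
        best = max(table, key=lambda t: (
            sum(table[t].get(f, 0.0) == 1.0 for f in uncovered),
            sum(table[t].get(f, 0.0) for f in uncovered),
        ))
        chosen.append(best)
        uncovered -= {f for f in uncovered if table[best].get(f, 0.0) > 0.0}
    return chosen

sequence = greedy_tests(fault_table, ["f1", "f2", "f3"])
```

On this table the deterministic test t1 is picked before the purely probabilistic t3, shortening the expected sequence; in practice a probabilistic detection would require repeated test application rather than single-shot coverage.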