
    Deductive Fault Simulation Technique for Asynchronous Circuits

    A fault simulator for asynchronous sequential circuits (ASCs) needs to deal with hazards, oscillations, and races. The simplest algorithm for simulating faults is serial fault simulation, which has been used successfully for ASCs. Faster fault simulation techniques, such as deductive fault simulation, were previously used only for combinational and synchronous sequential circuits. In this paper a deductive fault simulator for the stuck-at faults of speed-independent (SI) ASCs is presented. An algorithm for the propagation of fault lists is proposed which can deal with the complex gates of ASCs. The implemented deductive fault simulator was tested using SI benchmark circuits. The experimental results show a significant reduction in computation time and a negligible increase in memory requirements in comparison with serial fault simulation.
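
    As a concrete illustration of the fault-list propagation at the heart of the technique, the sketch below shows the classic deductive rule for a single n-input AND gate in Python. The gate, fault naming, and set representation are illustrative assumptions; the paper's handling of complex gates, hazards, and oscillations is not modeled here.

        def and_gate_fault_list(values, lists, out_name):
            """Deductive fault-list propagation for an n-input AND gate.

            values: good-simulation input values (0/1)
            lists:  lists[i] is the set of faults that flip input i
            Returns (good_output, set of faults observable at the gate output).
            """
            controlling = [L for v, L in zip(values, lists) if v == 0]
            if not controlling:                     # all inputs 1 -> output 1
                out_val = 1
                # any fault that flips some input flips the output
                out_list = set().union(*lists) if lists else set()
                out_list.add((out_name, 0))         # output stuck-at-0 is visible
            else:                                   # some input 0 -> output 0
                out_val = 0
                # a fault propagates only if it flips every controlling input
                # while leaving every non-controlling input at 1
                flips_all = set.intersection(*controlling)
                non_ctrl = [L for v, L in zip(values, lists) if v == 1]
                holds = set().union(*non_ctrl) if non_ctrl else set()
                out_list = flips_all - holds
                out_list.add((out_name, 1))         # output stuck-at-1 is visible
            return out_val, out_list

        # a = 1 with fault list {a stuck-at-0}, b = 0 with fault list {b stuck-at-1}:
        # only b stuck-at-1 flips the controlling input, so it reaches the output
        print(and_gate_fault_list([1, 0], [{('a', 0)}, {('b', 1)}], 'g1'))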

    An Integrated Test Plan for an Advanced Very Large Scale Integrated Circuit Design Group

    VLSI testing poses a number of problems, which include the selection of test techniques, the determination of acceptable fault coverage levels, and test vector generation. Available device test techniques are examined and compared. Design rules should be employed to ensure that the design is testable. Logic simulation systems and available test utilities are compared. The various methods of test vector generation are also examined. The selection criteria for test techniques are identified. A table of proposed design rules is included. Testability measurement utilities can be used to statistically predict the test generation effort. Field reject rates and fault coverage are statistically related. Acceptable field reject rates can be achieved with less than full test vector fault coverage. The methods and techniques which are examined form the basis of the recommended integrated test plan. The methods of automatic test vector generation are relatively primitive but are improving.
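
    The claim that field reject rates and fault coverage are statistically related can be made concrete with the widely cited Williams-Brown defect-level model; the abstract does not name a specific model, and the yield and coverage figures below are illustrative rather than taken from the paper.

        # Williams-Brown model: defect level DL = 1 - Y**(1 - T),
        # where Y is process yield and T is fault coverage
        def defect_level(process_yield, fault_coverage):
            return 1.0 - process_yield ** (1.0 - fault_coverage)

        # Even at 50% yield, 99% coverage bounds field rejects near 0.7%,
        # illustrating why less-than-full coverage can still be acceptable.
        print(defect_level(0.5, 0.99))   # ~0.0069, i.e. ~6,900 defective ppm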

    Understanding and Evaluating Assurance Cases

    Assurance cases are a method for providing assurance for a system by giving an argument to justify a claim about the system, based on evidence about its design, development, and tested behavior. In comparison with assurance based on guidelines or standards (which essentially specify only the evidence to be produced), the chief novelty in assurance cases is the provision of an explicit argument. In principle, this can allow assurance cases to be more finely tuned to the specific circumstances of the system, and more agile than guidelines in adapting to new techniques and applications. The first part of this report (Sections 1-4) provides an introduction to assurance cases. Although this material should be accessible to all those with an interest in these topics, the examples focus on software for airborne systems, traditionally assured using the DO-178C guidelines and their predecessors. A brief survey of some existing assurance cases is provided in Section 5. The second part (Section 6) considers the criteria, methods, and tools that may be used to evaluate whether an assurance case provides sufficient confidence that a particular system or service is fit for its intended use. An assurance case cannot provide unequivocal "proof" for its claim, so much of the discussion focuses on the interpretation of such less-than-definitive arguments, and on methods to counteract confirmation bias and other fallibilities in human reasoning.

    A Study of Logic Simulation and Hardware Description Languages

    Kyoto University doctoral thesis (Doctor of Engineering, thesis doctorate; Otsu No. 7496; Ron-Ko-Haku No. 2471; call number Shinsei||Ko||842, University Library; UT51-91-E273). Examination committee: Prof. Shuzo Yajima (chief examiner), Prof. Takao Tsuda, Prof. Keikichi Tamaru. Conferred under Article 5, Paragraph 2 of the Degree Regulations.

    Fault-Tolerant Computing: An Overview

    Coordinated Science Laboratory was formerly known as Control Systems Laboratory. NASA / NAG-1-613; Semiconductor Research Corporation / 90-DP-109; Joint Services Electronics Program / N00014-90-J-127.

    Analysis of Hardware Descriptions

    The design process for integrated circuits requires a lot of analysis of circuit descriptions. An important class of analyses determines how easy it will be to determine whether a physical component suffers from any manufacturing errors. As circuit complexities grow rapidly, the problem of testing circuits also becomes increasingly difficult. This thesis explores the potential for analysing a recent high-level hardware description language called Ruby. In particular, we are interested in performing testability analyses of Ruby circuit descriptions. Ruby is amenable to algebraic manipulation, so we have sought transformations that improve testability while preserving behaviour. The analysis of Ruby descriptions is performed by adapting a technique called abstract interpretation. This has been used successfully to analyse functional programs. This technique is most applicable where the analysis to be captured operates over structures isomorphic to the structure of the circuit. Many digital systems analysis tools require the circuit description to be given in some special form. This can lead to inconsistency between representations, and involves additional work converting between representations. We propose using the original description medium, in this case Ruby, for performing analyses. A related technique, called non-standard interpretation, is shown to be very useful for capturing many circuit analyses. An implementation of a system that performs non-standard interpretation forms the central part of the work. This allows Ruby descriptions to be analysed under alternative interpretations, such as test pattern generation and circuit layout. This system follows a similar approach to Boute's system semantics work and O'Donnell's work on Hydra. However, we have allowed a larger class of interpretations to be captured and offer a richer description language. The implementation presented here is constructed to allow a large degree of code sharing between different analyses. Several analyses have been implemented, including simulation, test pattern generation, and circuit layout. Non-standard interpretation provides a good framework for implementing these analyses. A general model for making non-standard interpretations is presented. Combining forms that combine two interpretations to produce a new interpretation are also introduced. This allows complex circuit analyses to be decomposed in a modular manner into smaller circuit analyses which can be built independently.
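
    The following Python sketch illustrates the core idea of non-standard interpretation: one circuit description acquires several meanings by swapping the interpretation given to its primitives. The combinators and interpretations are invented for illustration and are far simpler than Ruby's relational ones.

        class Interp:
            """An interpretation supplies meanings for the primitive gates."""
            def __init__(self, and2, inv):
                self.and2, self.inv = and2, inv

        def nand(interp, a, b):
            # one shared description: NAND built from AND and INV primitives
            return interp.inv(interp.and2(a, b))

        # standard interpretation: boolean simulation
        sim = Interp(and2=lambda a, b: a & b, inv=lambda a: 1 - a)

        # non-standard interpretation: gate counting (each wire carries a cost)
        cost = Interp(and2=lambda a, b: a + b + 1, inv=lambda a: a + 1)

        print(nand(sim, 1, 1))   # 0 -> behaviour of the description
        print(nand(cost, 0, 0))  # 2 -> gate count of the same description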

    Multi-objective optimisation of safety-critical hierarchical systems

    Achieving high reliability, particularly in safety-critical systems, is an important and often mandatory requirement. At the same time, costs should be kept as low as possible. Finding an optimum balance between maximising a system's reliability and minimising its cost is a hard combinatorial problem. As the size and complexity of a system increase, so does the scale of the problem faced by the designers. To address these difficulties, meta-heuristics such as Genetic Algorithms and Tabu Search have been applied in the past for automatically determining the optimal allocation of redundancies in a system as a mechanism for optimising the reliability and cost characteristics of that system. In all cases, simple reliability block diagrams with restrictive assumptions, such as failure independence and limited 2-state failure modes, were used for evaluating the reliability of the candidate designs produced by the various algorithms. This thesis argues that a departure from this restrictive evaluation model is possible by using a new model-based reliability evaluation technique called Hierarchically Performed Hazard Origin and Propagation Studies (HiP-HOPS). HiP-HOPS can overcome the limitations imposed by reliability block diagrams by providing automatic analysis of complex engineering models with multiple failure modes. The thesis demonstrates that, used as the fitness-evaluating component of a multi-objective Genetic Algorithm, HiP-HOPS can solve the problem of redundancy allocation effectively and with relative efficiency. Furthermore, the ability of HiP-HOPS to model and automatically analyse complex engineering models with multiple failure modes allows the Genetic Algorithm to potentially optimise systems using more flexible strategies, not just series-parallel ones. The results of this thesis show the feasibility of the approach and point to a number of directions for future work to consider.
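
    To make the optimisation problem concrete, the sketch below evaluates the two objectives a redundancy-allocation GA trades off, using the simple series-parallel reliability-block evaluation that the thesis argues HiP-HOPS goes beyond; the subsystems and figures are illustrative.

        def evaluate(design, rel, cost):
            """design: redundancy level k for each subsystem in a series of
            parallel stages; rel[i]/cost[i] describe one component of stage i.
            Returns (system_reliability, system_cost)."""
            system_rel, system_cost = 1.0, 0.0
            for i, k in enumerate(design):
                stage_rel = 1.0 - (1.0 - rel[i]) ** k  # parallel: fails only if all k fail
                system_rel *= stage_rel                # series: every stage must work
                system_cost += k * cost[i]
            return system_rel, system_cost

        # two candidate allocations over three subsystems show the trade-off:
        print(evaluate([1, 2, 1], rel=[0.9, 0.8, 0.95], cost=[4, 2, 6]))  # ~(0.82, 14)
        print(evaluate([2, 3, 2], rel=[0.9, 0.8, 0.95], cost=[4, 2, 6]))  # ~(0.98, 26)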

    Doctor of Philosophy

    The design of integrated circuits (ICs) requires exhaustive verification and a thorough test mechanism to ensure the functionality and robustness of the circuit. This dissertation employs the theory of relative timing, which enables designers to create designs with significant power and performance advantages over traditional clocked designs. Research has been carried out to enable the relative timing approach to be supported by commercial electronic design automation (EDA) tools. This allows asynchronous and sequential designs to be created using commercial CAD tools. However, two very significant holes in the flow exist: the lack of support for timing verification and for manufacturing test. Relative timing (RT) uses circuit delay to enforce and measure event sequencing in a circuit design. Asynchronous circuits can optimize the power-performance product by adjusting circuit timing. A thorough analysis of the timing characteristics of each and every timing path is required to ensure the robustness and correctness of RT designs. All timing paths have to conform to the circuit timing constraints. This dissertation addresses back-end design robustness by validating full cyclical path timing verification with static timing analysis and by implementing design for testability (DFT). Circuit reliability and correctness are necessary for the technology to become commercially ready. In this study, scan-chain insertion, a standard commercial DFT technique, is applied to burst-mode RT designs. In addition, a novel testing approach is developed alongside scan-chain insertion that achieves over 90% fault coverage on two fault models: the stuck-at fault model and the delay fault model. This work evaluates the cost of DFT and its coverage trade-off, and then determines the best implementation. Designs such as a 64-point fast Fourier transform (FFT) design, an I2C design, and a mixed-signal design are built to demonstrate the power, area, and performance advantages of the relative timing methodology, and they serve as a platform for developing the back-end robustness. Results are verified by performing post-silicon timing validation and test. This work strengthens the overall relative-timed circuit flow, its reliability, and its testability.
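
    As background for the fault-coverage figures mentioned above, the sketch below measures stuck-at coverage by serial fault simulation on a toy combinational netlist; the netlist, fault list, and tests are illustrative and unrelated to the dissertation's relative-timed designs.

        def simulate(a, b, c, fault=None):
            """y = (a AND b) OR c, with an optional stuck-at fault on a named net."""
            nets = {'a': a, 'b': b, 'c': c}
            if fault and fault[0] in nets:
                nets[fault[0]] = fault[1]           # stuck-at on a primary input
            n1 = nets['a'] & nets['b']
            if fault and fault[0] == 'n1':
                n1 = fault[1]                       # stuck-at on the internal net
            return n1 | nets['c']

        faults = [(net, v) for net in ('a', 'b', 'c', 'n1') for v in (0, 1)]
        tests = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]

        # a fault is detected when some test's faulty response differs from the good one
        detected = {f for f in faults
                    for t in tests if simulate(*t, fault=f) != simulate(*t)}
        print(f"stuck-at coverage: {len(detected)}/{len(faults)}")   # 8/8 here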

    Multi-level simulation of nano-electronic digital circuits on GPUs

    Simulation of circuits and faults is an essential part of design and test validation tasks for contemporary nano-electronic digital integrated CMOS circuits. Shrinking technology processes with smaller feature sizes and strict performance and reliability requirements demand not only detailed validation of the functional properties of a design, but also accurate validation of non-functional aspects including timing behavior. However, due to the rising complexity of circuit behavior and the steady growth of designs in transistor count, timing-accurate simulation of current designs requires a lot of computational effort, which can only be handled by proper abstraction and a high degree of parallelization. This work presents a simulation model for scalable and accurate timing simulation of digital circuits on data-parallel graphics processing unit (GPU) accelerators. By providing compact modeling and data structures, and by exploiting multiple dimensions of parallelism, the simulation model enables not only fast and timing-accurate simulation at logic level, but also massively parallel simulation with switch-level accuracy. The model facilitates extensions for fast and efficient fault simulation of small-delay faults at logic level, as well as of first-order parametric and parasitic faults at switch level. With the parallelization on GPUs, detailed and scalable simulation is enabled that is applicable even to multi-million-gate designs. This way, comprehensive analyses of realistic timing-related faults in the presence of process and parameter variations are enabled for the first time. Additional simulation efficiency is achieved by merging the presented methods into a unified simulation model that combines the unique advantages of the different levels of abstraction in a mixed-abstraction multi-level simulation flow to reach even higher speedups. Experimental results show that the implemented parallel approach achieves unprecedented simulation throughput as well as high speedup compared to conventional timing simulators. The underlying model scales to multi-million-gate designs and gives detailed insights into the timing behavior of digital CMOS circuits, thereby enabling large-scale applications that aid even highly complex design and test validation tasks.
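
    The data-parallel structure the dissertation exploits can be illustrated without a GPU: evaluate each gate once across a whole batch of input patterns, which is exactly what a GPU kernel does across threads. In this sketch numpy stands in for the GPU, and the timing and switch-level detail of the actual work is not modeled.

        import numpy as np

        rng = np.random.default_rng(0)
        n_patterns = 1_000_000
        a, b, c = (rng.integers(0, 2, n_patterns, dtype=np.uint8) for _ in range(3))

        # netlist evaluated in topological order; each line acts like one
        # kernel launch applied to all patterns at once
        n1 = a & b
        n2 = n1 | c
        y = n2 ^ 1                       # output inverter

        print(y[:8], y.mean())           # sample responses and signal probability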

    Hazard elimination using backwards reachability techniques in discrete and hybrid models

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, February 2002. Includes bibliographical references (leaves 173-181).

    One of the most important steps in hazard analysis is determining whether a particular design can reach a hazardous state and, if it could, how to change the design to ensure that it does not. In most cases, this is done through testing or simulation, or even less rigorous processes, none of which provides much confidence for complex systems. Because state spaces for software can be enormous (which is why testing is not an effective way to accomplish the goal), the innovative Hazard Automaton Reduction Algorithm (HARA) involves starting at a hypothetical unsafe state and using backwards reachability techniques to obtain enough information to determine how to design so that the state cannot be reached. State machine models are very powerful, but they also present greater challenges in terms of reachability, including the backwards reachability needed to implement the Hazard Automaton Reduction Algorithm. The key to solving the backwards reachability problem lies in converting the state machine model into a controls state-space formulation and creating a state transition matrix. Each successive step backward from the hazardous state then involves only one n-by-n matrix manipulation. Therefore, only a finite number of matrix manipulations is necessary to determine whether or not a state is reachable from another state, providing the same information that could be obtained from a complete backwards reachability graph of the state machine model. Unlike model checking, the computational cost does not increase as greatly with the number of backward states that need to be visited to obtain the information necessary to ensure that the design is safe, or to redesign it to be safe. The functionality and optimality of this approach are proved in both the discrete and hybrid cases. The new approach of the Hazard Automaton Reduction Algorithm combined with backwards reachability controls techniques was demonstrated on a black-box model of a real aircraft altitude switch. The algorithm is being implemented in a commercial specification language (SpecTRM-RL), and SpecTRM-RL is formally extended to include continuous and hybrid models. An analysis is performed of the safety of a medium-term conflict detection algorithm (MTCD) for aircraft that is being developed and tested by Eurocontrol for use in European Air Traffic Control. Validating such conflict detection algorithms currently challenges researchers worldwide. Model checking is unsatisfactory in general for this problem because backwards reachability using model checking lacks a termination guarantee. The new state-space controls approach does not encounter this problem. By Natasha Anita Neogi. Ph.D.
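
    The matrix formulation described above can be sketched directly: one n-by-n matrix-vector product per backward step collects the predecessors of the current frontier, terminating once no new states appear. The four-state machine and hazard state below are invented for illustration.

        import numpy as np

        # T[i, j] = 1 iff the machine can step from state i to state j
        T = np.array([[0, 1, 0, 0],     # 0 -> 1
                      [1, 0, 0, 1],     # 1 -> 0, 1 -> 3
                      [0, 0, 1, 0],     # 2 -> 2 (isolated from the hazard)
                      [0, 0, 0, 1]],    # 3 -> 3 (hazard, absorbing)
                     dtype=bool)

        def backwards_reachable(T, hazard):
            """All states from which `hazard` is reachable, one backward step at a time."""
            reach = np.zeros(T.shape[0], dtype=bool)
            frontier = np.zeros_like(reach)
            frontier[hazard] = True
            while frontier.any():
                reach |= frontier
                # predecessors of the frontier: one n-by-n matrix-vector step
                preds = (T.astype(int) @ frontier.astype(int)) > 0
                frontier = preds & ~reach
            return reach

        print(np.flatnonzero(backwards_reachable(T, hazard=3)))  # [0 1 3]; state 2 is safe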