
    High Quality Compact Delay Test Generation

    Delay testing is used to detect timing defects and ensure that a circuit meets its timing specifications. The growing need for delay testing is a result of the advances in deep submicron (DSM) semiconductor technology and the increase in clock frequency. Small delay defects that previously were benign now produce delay faults, due to reduced timing margins. This research focuses on the development of new test methods for small delay defects, within the limits of affordable test generation cost and pattern count. First, a new dynamic compaction algorithm has been proposed to generate compacted test sets for K longest paths per gate (KLPG) in combinational circuits or scan-based sequential circuits. This algorithm uses a greedy approach to compact paths with non-conflicting necessary assignments together during test generation. Second, to make this dynamic compaction approach practical for industrial use, a recursive learning algorithm has been implemented to identify more necessary assignments for each path, so that the path-to-test-pattern matching using necessary assignments is more accurate. Third, a realistic low cost fault coverage metric targeting both global and local delay faults has been developed. The metric suggests the test strategy of generating a different number of longest paths for each line in the circuit while maintaining high fault coverage. The number of paths and the type of test depend on the timing slack of the paths under this metric. Experimental results for ISCAS89 benchmark circuits and three industry circuits show that the pattern count of KLPG can be significantly reduced using the proposed methods. The pattern count is comparable to that of transition fault test, while achieving higher test quality. Finally, the proposed ATPG methodology has been applied to an industrial quad-core microprocessor. FMAX testing has been done on many devices and the silicon data has shown the benefit of KLPG tests.
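
    As a rough sketch of the greedy compaction idea described above, the Python fragment below merges paths whose necessary assignments do not conflict into a single test pattern. The signal-to-value representation and function names are hypothetical illustrations, not the thesis' actual KLPG engine.

        # Minimal sketch of greedy test compaction: paths whose necessary
        # assignments (signal -> 0/1) do not conflict are merged into one test.
        # Hypothetical data layout; not the actual KLPG implementation.

        def compatible(assign_a, assign_b):
            """Two assignment maps conflict if a shared signal needs opposite values."""
            return all(assign_b.get(sig, val) == val for sig, val in assign_a.items())

        def greedy_compact(paths):
            """paths: list of dicts mapping signal name -> required value (0/1).
            Returns a list of merged assignment maps, one per test pattern."""
            tests = []
            for path in paths:
                for test in tests:
                    if compatible(test, path):
                        test.update(path)        # merge the path into an existing test
                        break
                else:
                    tests.append(dict(path))     # open a new test for this path
            return tests

        if __name__ == "__main__":
            paths = [{"a": 1, "b": 0}, {"b": 0, "c": 1}, {"a": 0}, {"c": 1, "d": 1}]
            print(greedy_compact(paths))   # [{'a': 1, 'b': 0, 'c': 1, 'd': 1}, {'a': 0}]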

    Strengthening Model Checking Techniques with Inductive Invariants

    This paper describes optimized techniques to efficiently compute and reap benefits from inductive invariants within SAT-based model checking. We address sequential circuit verification, and we consider both equivalences and implications between pairs of nodes in the logic networks. First, we present a very efficient dynamic procedure, based on equivalence classes and incremental SAT, specifically oriented to reduce the set of checked invariants. Then, we show how to effectively integrate the computation of inductive invariants within state-of-the-art SAT-based model checking procedures. Experiments (on more than 600 designs) show the robustness of our approach on verification instances on which stand-alone techniques fail.
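
    A common way to seed such equivalence classes before any SAT call (not necessarily the exact procedure of this paper) is to group nodes by random-simulation signatures, so that only nodes agreeing on every simulated pattern remain candidate equivalences. The sketch below shows that seeding step only, with toy nodes modelled as Python callables; the incremental-SAT induction check is omitted.

        # Prune candidate node equivalences by simulation signatures: nodes in
        # the same class agree on every simulated input pattern. Illustrative
        # seeding step only; no SAT-based induction check is performed here.
        import random
        from collections import defaultdict

        def signature(node_fn, vectors):
            """node_fn maps an input tuple of 0/1 values to a single 0/1 output."""
            return tuple(node_fn(vec) for vec in vectors)

        def candidate_equivalence_classes(nodes, num_inputs, patterns=64, seed=0):
            rng = random.Random(seed)
            vectors = [tuple(rng.randint(0, 1) for _ in range(num_inputs))
                       for _ in range(patterns)]
            classes = defaultdict(list)
            for name, fn in nodes.items():
                classes[signature(fn, vectors)].append(name)   # same signature -> same class
            return [grp for grp in classes.values() if len(grp) > 1]

        if __name__ == "__main__":
            nodes = {
                "g1": lambda v: v[0] & v[1],
                "g2": lambda v: v[1] & v[0],     # functionally identical to g1
                "g3": lambda v: v[0] | v[1],
            }
            print(candidate_equivalence_classes(nodes, num_inputs=2))   # [['g1', 'g2']]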

    Trading locality for time: certifiable randomness from low-depth circuits

    The generation of certifiable randomness is the most fundamental information-theoretic task that meaningfully separates quantum devices from their classical counterparts. We propose a protocol for exponential certified randomness expansion using a single quantum device. The protocol calls for the device to implement a simple quantum circuit of constant depth on a 2D lattice of qubits. The output of the circuit can be verified classically in linear time, and is guaranteed to contain a polynomial number of certified random bits assuming that the device used to generate the output operated using a (classical or quantum) circuit of sub-logarithmic depth. This assumption contrasts with the locality assumption used for randomness certification based on Bell inequality violation or computational assumptions. To demonstrate randomness generation it is sufficient for a device to sample from the ideal output distribution within constant statistical distance. Our procedure is inspired by recent work of Bravyi et al. (Science 2018), who introduced a relational problem that can be solved by a constant-depth quantum circuit, but provably cannot be solved by any classical circuit of sub-logarithmic depth. We develop the discovery of Bravyi et al. into a framework for robust randomness expansion. Our proposal does not rest on any complexity-theoretic conjectures, but relies on the physical assumption that the adversarial device being tested implements a circuit of sub-logarithmic depth. Success on our task can be easily verified in classical linear time. Finally, our task is more noise-tolerant than most other existing proposals, which can only tolerate multiplicative error or require additional conjectures from complexity theory; in contrast, we are able to allow a small constant additive error in total variation distance between the sampled and ideal distributions. Comment: 36 pages, 2 figures.
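
    The soundness guarantee above is phrased in terms of a constant additive error in total variation (statistical) distance. Purely as an illustration of that notion, and with made-up distributions and threshold rather than anything from the paper, a check on empirical samples could look like this:

        # Total variation distance between an empirical sample distribution and
        # an ideal one; the threshold below is purely illustrative.
        from collections import Counter

        def total_variation_distance(samples, ideal):
            """samples: list of outcomes; ideal: dict mapping outcome -> probability."""
            counts = Counter(samples)
            n = len(samples)
            support = set(ideal) | set(counts)
            return 0.5 * sum(abs(counts.get(x, 0) / n - ideal.get(x, 0.0)) for x in support)

        if __name__ == "__main__":
            ideal = {"00": 0.5, "11": 0.5}                        # toy ideal distribution
            samples = ["00"] * 480 + ["11"] * 500 + ["01"] * 20   # toy device output
            print(total_variation_distance(samples, ideal) <= 0.05)   # accept/reject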

    Synthesis and Optimization of Reversible Circuits - A Survey

    Reversible logic circuits have been historically motivated by theoretical research in low-power electronics as well as practical improvement of bit-manipulation transforms in cryptography and computer graphics. Recently, reversible circuits have attracted interest as components of quantum algorithms, as well as in photonic and nano-computing technologies where some switching devices offer no signal gain. Research in generating reversible logic distinguishes between circuit synthesis, post-synthesis optimization, and technology mapping. In this survey, we review algorithmic paradigms --- search-based, cycle-based, transformation-based, and BDD-based --- as well as specific algorithms for reversible synthesis, both exact and heuristic. We conclude the survey by outlining key open challenges in the synthesis of reversible and quantum logic, as well as the most common misconceptions. Comment: 34 pages, 15 figures, 2 tables.
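
    As a tiny, generic illustration of the reversibility property these synthesis methods target (not code from the survey), the check below confirms that the Toffoli gate, the standard primitive of reversible netlists, is a self-inverse permutation of the eight 3-bit states.

        # The Toffoli (CCNOT) gate permutes the 3-bit states and is its own
        # inverse, so cascades of such gates are reversible. Illustrative only.
        from itertools import product

        def toffoli(a, b, c):
            """The target bit c is inverted iff both control bits a and b are 1."""
            return a, b, c ^ (a & b)

        if __name__ == "__main__":
            states = list(product((0, 1), repeat=3))
            images = [toffoli(*s) for s in states]
            assert sorted(images) == states                            # bijective
            assert all(toffoli(*toffoli(*s)) == s for s in states)     # self-inverse
            print("Toffoli is a reversible, self-inverse gate on 3 bits")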

    Immunotronics - novel finite-state-machine architectures with built-in self-test using self-nonself differentiation

    A novel approach to hardware fault tolerance is demonstrated that takes inspiration from the human immune system as a method of fault detection. The human immune system is a remarkable system of interacting cells and organs that protects the body from invasion and maintains reliable operation even in the presence of invading bacteria or viruses. This paper addresses the field of electronic hardware fault tolerance from an immunological perspective, with the aim of showing how novel methods based upon the operation of the immune system can both complement and create new approaches to the development of fault detection mechanisms for reliable hardware systems. In particular, it is shown that by use of partial matching, as prevalent in biological systems, high fault coverage can be achieved with the added advantage of reducing memory requirements. The development of a generic finite-state-machine immunization procedure is discussed that allows any system that can be represented in such a manner to be "immunized" against the occurrence of faulty operation. This is demonstrated by the creation of an immunized decade counter that can detect the presence of faults in real time.
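
    A minimal sketch of the self/nonself idea with partial matching follows, under the assumption that valid finite-state-machine transitions are encoded as short bit strings; the r-contiguous matching rule, the negative-selection loop and all constants are illustrative choices, not the paper's exact scheme. Partial matching is what lets a small detector set flag many invalid transitions, which is the memory saving referred to above.

        # Self/nonself discrimination with partial matching: detectors are random
        # strings that match no "self" string (valid transitions); a monitored
        # string matched by any detector is flagged as a fault. Illustrative only.
        import random

        def matches(detector, string, r):
            """Partial match: the two strings agree on r contiguous positions."""
            return any(detector[i:i + r] == string[i:i + r]
                       for i in range(len(string) - r + 1))

        def generate_detectors(self_set, length, r, count, seed=1):
            rng = random.Random(seed)
            detectors = set()
            while len(detectors) < count:
                cand = "".join(rng.choice("01") for _ in range(length))
                if not any(matches(cand, s, r) for s in self_set):   # negative selection
                    detectors.add(cand)
            return detectors

        if __name__ == "__main__":
            self_set = {"0000", "0011", "1100"}          # encoded valid transitions
            detectors = generate_detectors(self_set, length=4, r=3, count=6)
            faulty = "0110"                              # an invalid transition
            print(any(matches(d, faulty, r=3) for d in detectors))   # True: fault flagged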

    Overview of Hydra: a concurrent language for synchronous digital circuit design

    Hydra is a computer hardware description language that integrates several kinds of software tool (simulation, netlist generation and timing analysis) within a single circuit specification. The design language is inherently concurrent, and it offers black box abstraction and general design patterns that simplify the design of circuits with regular structure. Hydra specifications are concise, allowing the complete design of a computer system as a digital circuit within a few pages. This paper discusses the motivations behind Hydra, and illustrates the system with a significant portion of the design of a basic RISC processor.
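
    Hydra code itself is not reproduced in the abstract; as a loose Python illustration of the "several tools from one specification" idea (the interface and names are invented, and this is not Hydra syntax), the same half-adder description below is interpreted once as a simulator and once as a netlist generator.

        # One circuit specification, two interpretations: the half adder is
        # written against an abstract gate interface, then run as a simulator
        # and as a netlist generator. Loose illustration only, not Hydra.

        def half_adder(gates, a, b):
            """Circuit specification written only in terms of an abstract gate set."""
            return gates.xor(a, b), gates.and_(a, b)      # (sum, carry)

        class Simulate:
            def xor(self, a, b):  return a ^ b
            def and_(self, a, b): return a & b

        class Netlist:
            def __init__(self):
                self.wires = []
            def _gate(self, kind, a, b):
                name = f"w{len(self.wires)}"
                self.wires.append((name, kind, a, b))     # record the gate instance
                return name
            def xor(self, a, b):  return self._gate("XOR", a, b)
            def and_(self, a, b): return self._gate("AND", a, b)

        if __name__ == "__main__":
            print(half_adder(Simulate(), 1, 1))           # (0, 1)
            net = Netlist()
            print(half_adder(net, "a", "b"), net.wires)   # output wires plus gate list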

    Pedestrian Attribute Recognition: A Survey

    Recognizing pedestrian attributes is an important task in the computer vision community because it plays a central role in video surveillance. Many algorithms have been proposed to handle this task. The goal of this paper is to review existing works, whether based on traditional methods or on deep learning networks. Firstly, we introduce the background of pedestrian attribute recognition (PAR, for short), including the fundamental concepts of pedestrian attributes and the corresponding challenges. Secondly, we introduce existing benchmarks, including popular datasets and evaluation criteria. Thirdly, we analyse the concepts of multi-task learning and multi-label learning, and explain the relations between these two learning paradigms and pedestrian attribute recognition. We also review some popular network architectures that have been widely applied in the deep learning community. Fourthly, we analyse popular solutions for this task, such as attribute grouping and part-based models. Fifthly, we show some applications that take pedestrian attributes into consideration and achieve better performance. Finally, we summarize the paper and give several possible research directions for pedestrian attribute recognition. The project page of this paper can be found at https://sites.google.com/view/ahu-pedestrianattributes/. Comment: Check our project page for a high-resolution version of this survey: https://sites.google.com/view/ahu-pedestrianattributes
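
    As a generic illustration of the multi-label formulation mentioned above (not taken from any surveyed method), each attribute gets its own sigmoid output and a binary cross-entropy term:

        # Multi-label attribute classification: one independent sigmoid per
        # attribute, scored with a mean binary cross-entropy. Toy numbers only.
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def multi_label_bce(logits, targets, eps=1e-7):
            """logits, targets: arrays of shape (num_samples, num_attributes)."""
            p = np.clip(sigmoid(logits), eps, 1.0 - eps)
            return float(np.mean(-(targets * np.log(p) + (1 - targets) * np.log(1 - p))))

        if __name__ == "__main__":
            # Two pedestrians, three hypothetical attributes, e.g. [male, backpack, long_hair].
            logits  = np.array([[ 2.0, -1.0, 0.5],
                                [-1.5,  3.0, 0.0]])
            targets = np.array([[ 1.0,  0.0, 1.0],
                                [ 0.0,  1.0, 0.0]])
            print(multi_label_bce(logits, targets))        # lower is better
            print((sigmoid(logits) > 0.5).astype(int))     # per-attribute predictions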

    Optimising Simulation Data Structures for the Xeon Phi

    In this paper, we propose a lock-free architecture to accelerate logic gate circuit simulation using SIMD multi-core machines. We evaluate its performance on different test circuits simulated on the Intel Xeon Phi and two other machines. Comparisons are presented of this software/hardware combination with reported performances of GPU and other multi-core simulation platforms. Comparisons are also given between the lock-free architecture and a leading commercial simulator running on the same Intel hardware.
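
    As a loose, single-threaded illustration of why word-level bitwise operations speed up gate simulation (this is not the paper's lock-free multi-core architecture), the sketch below packs many input patterns into one integer per signal, so each gate is evaluated for all patterns with a single bitwise operation.

        # Bit-parallel gate evaluation: up to 64 input patterns are packed into
        # one integer per signal. Single-threaded toy, not the paper's design.
        MASK = (1 << 64) - 1

        def pack(bits):
            """Pack a list of 0/1 pattern values for one signal into an integer."""
            word = 0
            for i, b in enumerate(bits):
                word |= (b & 1) << i
            return word

        def simulate(netlist, values):
            """netlist: list of (output, op, in1, in2); values: signal -> packed word."""
            ops = {"AND":  lambda a, b: a & b,
                   "OR":   lambda a, b: a | b,
                   "XOR":  lambda a, b: a ^ b,
                   "NAND": lambda a, b: ~(a & b) & MASK}
            for out, op, a, b in netlist:                 # assumed topological order
                values[out] = ops[op](values[a], values[b])
            return values

        if __name__ == "__main__":
            netlist = [("n1", "AND", "a", "b"), ("y", "XOR", "n1", "c")]
            vals = {"a": pack([1, 1, 0, 0]), "b": pack([1, 0, 1, 0]), "c": pack([1, 1, 1, 1])}
            out = simulate(netlist, vals)
            print([(out["y"] >> i) & 1 for i in range(4)])   # y for each of 4 patterns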