
    Software-based self-testing for a RISC processor

    Software-based self-testing (SBST) has been touted as an effective way to test processors, offering reasonable test coverage and at-speed testing without the area and power penalties of dedicated test hardware. Previous work combined SBST with partial scan logic insertion at the Register Transfer Level (RTL) for a 16-bit RISC processor design. This project focuses on improving test coverage without the use of scan logic.
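    To make the idea concrete, here is a minimal sketch of SBST, assuming a toy ALU model and a rotate-xor compactor standing in for a MISR-style signature register; in real SBST the loop is native code running at speed on the processor itself, and none of these names come from the paper.

```python
# Illustrative SBST sketch: a toy ALU model and a software test routine
# that compacts responses into a signature. Hypothetical example, not
# the paper's 16-bit RISC design or test set.

def alu(op, a, b):
    """Toy 16-bit ALU under test."""
    ops = {"add": lambda: (a + b) & 0xFFFF,
           "xor": lambda: a ^ b,
           "and": lambda: a & b}
    return ops[op]()

def test_signature(patterns):
    """Run functional patterns and compact the responses.
    A rotate-xor stands in for a MISR-style signature register."""
    sig = 0
    for op, a, b in patterns:
        sig = ((sig << 1) | (sig >> 15)) & 0xFFFF  # 16-bit rotate left
        sig ^= alu(op, a, b)
    return sig

patterns = [("add", 0xFFFF, 0x0001),   # exercise the carry chain
            ("xor", 0xAAAA, 0x5555),   # alternating bit patterns
            ("and", 0xF0F0, 0x0FF0)]
golden = test_signature(patterns)      # from a known-good simulation
assert test_signature(patterns) == golden
```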

    Plug & Test at System Level via Testable TLM Primitives

    With the evolution of Electronic System Level (ESL) design methodologies, we are experiencing an extensive use of Transaction-Level Modeling (TLM). TLM is a high-level approach to modeling digital systems in which the details of communication among modules are separated from those of the implementation of functional units. This paper represents a first step toward the automatic insertion of testing capabilities at the transaction level through the definition of testable TLM primitives. The use of testable TLM primitives should help designers easily obtain testable transaction-level descriptions, implementing what we call a "Plug & Test" design methodology. The proposed approach is intended to work with both hardware and software implementations. In particular, in this paper we focus on the design of a testable FIFO communication channel to show how designers are given the freedom of trading off complexity, testability levels, and cost.
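    As a rough sketch of what a "testable primitive" could look like, the toy FIFO below (plain Python rather than SystemC/TLM; all names hypothetical) exposes its normal transaction-level interface plus a built-in self-test entry point, so a test manager can exercise the channel without touching client code.

```python
# Hypothetical sketch of a "testable" FIFO communication primitive:
# normal traffic plus a built-in self-test hook. Plain Python stands
# in for a SystemC/TLM implementation; names are illustrative only.

from collections import deque

class TestableFifo:
    def __init__(self, depth):
        self.depth = depth
        self._q = deque()

    # --- functional (transaction-level) interface ---
    def put(self, item):
        if len(self._q) >= self.depth:
            raise OverflowError("FIFO full")
        self._q.append(item)

    def get(self):
        return self._q.popleft()

    # --- test interface provided by the primitive itself ---
    def self_test(self):
        """Fill to capacity, drain, and check ordering; restores the
        channel's previous contents before returning."""
        saved = list(self._q)
        self._q.clear()
        stimulus = list(range(self.depth))
        for v in stimulus:
            self.put(v)
        ok = [self.get() for _ in stimulus] == stimulus and not self._q
        self._q.extend(saved)
        return ok

fifo = TestableFifo(depth=8)
assert fifo.self_test()   # "Plug & Test": no client changes needed
```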

    Rutgers CAM2000 chip architecture

    This report describes the architecture and instruction set of the Rutgers CAM2000 memory chip. The CAM2000 combines features of Associative Processing (AP), Content Addressable Memory (CAM), and Dynamic Random Access Memory (DRAM) in a single chip package that is not only DRAM compatible but also capable of applying simple massively parallel operations to memory. This document reflects the current status of the CAM2000 architecture and instruction set and is continually updated as they evolve.
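    The associative-processing idea, one simple operation applied to every memory word at once, can be sketched as follows; NumPy broadcasting stands in for the chip's word-parallel logic, and this is illustrative only, not the CAM2000 instruction set.

```python
# Sketch of CAM-style associative processing: compare a key against
# all memory words in parallel, then update every match in one step.
import numpy as np

memory = np.array([0x12, 0x99, 0x12, 0x7F], dtype=np.uint8)

# Associative search: every word compared against the key at once.
key = 0x12
match_lines = memory == key        # [True, False, True, False]

# Masked parallel update: add 1 to each matching word simultaneously.
memory[match_lines] += 1
```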

    Power Minimisation Techniques for Testing Low Power VLSI Circuits (PhD Dissertation)

    Testing low power very large scale integrated (VLSI) circuits has recently become an area of concern due to yield and reliability problems. This dissertation focuses on minimising power dissipation during test application at the logic and register-transfer levels (RTL) of abstraction of the VLSI design flow.

    The first part of this dissertation addresses power minimisation techniques in scan sequential circuits at the logic level of abstraction. A new best primary input change (BPIC) technique, based on a novel test application strategy, is proposed. The technique increases the correlation between successive states during the shifting in of test vectors and the shifting out of test responses by changing the primary inputs such that the smallest number of transitions is achieved. The technique is test-set dependent and is applicable to small to medium-sized full and partial scan sequential circuits. Since the proposed test application strategy depends only on controlling the primary input change time, power is minimised with no penalty in test area, performance, test efficiency, test application time, or volume of test data. Furthermore, it is shown that partial scan provides not only the commonly known benefits, such as lower test area overhead and test application time, but also lower power dissipation during test application when compared to full scan. To achieve power savings in large scan sequential circuits, a new test-set-independent multiple scan chain-based technique, which employs a new design for test (DFT) architecture and a novel test application strategy, is presented. The technique has been validated using benchmark examples, and it has been shown that power is minimised with low computational time, low overhead in test area and volume of test data, and no penalty in test application time, test efficiency, or performance.

    The second part of this dissertation addresses power minimisation techniques for testing low power VLSI circuits using built-in self-test (BIST) at RTL. First, it is important to overcome the shortcomings associated with traditional BIST methodologies. It is shown how a new BIST methodology for RTL data paths, using a novel concept called test compatibility classes (TCC), overcomes high test application time, BIST area overhead, performance degradation, volume of test data, fault-escape probability, and the complexity of testable design space exploration. Second, power minimisation in BIST RTL data paths is achieved by analysing the effect of test synthesis and test scheduling on power dissipation during test application and by employing new power-conscious test synthesis and test scheduling algorithms. Third, the new BIST methodology has been validated using benchmark examples. Further, it is shown that when the proposed power-conscious test synthesis and test scheduling are combined with the novel test compatibility classes, a simultaneous reduction in test application time and power dissipation is achieved with low computational overhead.
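    As a rough illustration of the BPIC idea, the sketch below (hypothetical Python; the bit-level model is an assumption, not the dissertation's cost function) tries every candidate primary-input change time during scan shifting and keeps the one that causes the fewest input transitions.

```python
# Sketch of best-primary-input-change (BPIC): while a new vector is
# shifted into the scan chain, the primary inputs may switch from the
# old vector to the new one at any shift cycle; pick the change time
# minimising total input transitions. Illustrative model only.

def transitions(a, b):
    """Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

def best_pi_change_time(chain_states, pi_old, pi_new):
    """chain_states: scan-chain contents at each shift cycle.
    Returns the primary-input change cycle with fewest transitions."""
    best = None
    for t in range(len(chain_states) + 1):
        total, prev = 0, None
        for cycle, chain in enumerate(chain_states):
            pins = pi_new if cycle >= t else pi_old
            state = chain + pins          # inputs seen by the logic
            if prev is not None:
                total += transitions(prev, state)
            prev = state
        if best is None or total < best[0]:
            best = (total, t)
    return best[1]

chain = ["1010", "0101", "0010", "1001"]  # contents during shifting
print(best_pi_change_time(chain, pi_old="11", pi_new="00"))
```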

    MIDAS, prototype Multivariate Interactive Digital Analysis System for large area earth resources surveys. Volume 1: System description

    A third-generation, fast, low-cost, multispectral recognition system (MIDAS), able to keep pace with the large quantity and high rates of data acquisition from large regions with present and projected sensors, is described. The program can process a complete ERTS frame in forty seconds and provide a color map of sixteen constituent categories in a few minutes. A principal objective of the MIDAS program is to provide a system well interfaced with the human operator, and thus to obtain large overall reductions in turn-around time and significant gains in throughput. The hardware and software generated in the overall program are described. The system contains a midi-computer to control the various high-speed processing elements in the data path, a preprocessor to condition data, and a classifier which implements an all-digital prototype multivariate Gaussian maximum likelihood or Bayesian decision algorithm. Sufficient software was developed to perform signature extraction, control the preprocessor, compute classifier coefficients, control the classifier operation, operate the color display and printer, and diagnose operation.
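    The multivariate Gaussian maximum-likelihood decision MIDAS implements in hardware can be sketched in NumPy for a single pixel; the signatures (means and covariances) below are made-up numbers, not ERTS data.

```python
# Multivariate Gaussian maximum-likelihood classification of one
# multispectral pixel, assuming equal class priors.
import numpy as np

def ml_class(x, means, covs):
    """Assign x to the class k maximising the Gaussian log-likelihood
    -0.5 * [ln|C_k| + (x - m_k)^T C_k^{-1} (x - m_k)]."""
    scores = []
    for m, C in zip(means, covs):
        d = x - m
        scores.append(-0.5 * (np.log(np.linalg.det(C))
                              + d @ np.linalg.solve(C, d)))
    return int(np.argmax(scores))

means = [np.array([10.0, 20.0]), np.array([30.0, 5.0])]
covs = [np.eye(2), 2.0 * np.eye(2)]
print(ml_class(np.array([28.0, 6.0]), means, covs))  # -> 1
```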

    Concurrent Data Structures Linked in Time

    Arguments about the correctness of a concurrent data structure are typically carried out by using the notion of linearizability and specifying the linearization points of the data structure's procedures. Such arguments are often cumbersome as the position of a linearization point in time can be dynamic (depending on the interference, run-time values, and events from the past or even the future), non-local (appearing in procedures other than the one considered), and determinable only after the considered procedure has already terminated. In this paper we propose a new method, based on a separation-style logic, for reasoning about concurrent objects with such linearization points. We embrace the dynamic nature of linearization points and encode it as part of the data structure's auxiliary state, so that it can be dynamically modified in place by auxiliary code, as needed when some appropriate run-time event occurs. We name the idea linking-in-time, because it reduces temporal reasoning to spatial reasoning. For example, modifying the temporal position of a linearization point can be modeled similarly to a pointer update in separation logic. Furthermore, the auxiliary state provides a convenient way to concisely express the properties essential for reasoning about clients of such concurrent objects. We illustrate the method by verifying (mechanically in Coq) an intricate optimal snapshot algorithm due to Jayanti, as well as some clients.
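    The "pointer update on the time line" analogy can be made concrete with a toy model. The sketch below (hypothetical Python, far removed from the paper's Coq and separation-logic formalization) keeps the linearization order as explicit auxiliary state that auxiliary code mutates in place when a run-time event occurs.

```python
# Toy illustration of "linking in time": the linearization order is
# explicit auxiliary state, and auxiliary code may move an operation's
# position in place, much like a pointer update. Not the paper's model.

aux_order = []                      # auxiliary state: abstract time line

def record(op_id):
    """Tentatively linearize op_id now (append at the end)."""
    aux_order.append(op_id)

def relink(op_id, before_id):
    """Auxiliary update fired by a run-time event: move op_id so it
    linearizes before before_id."""
    aux_order.remove(op_id)
    aux_order.insert(aux_order.index(before_id), op_id)

record("scan")                      # scan tentatively linearized first
record("write_x")
relink("write_x", "scan")           # event shows the write came first
print(aux_order)                    # ['write_x', 'scan']
```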

    Tex2Shape: Detailed Full Human Body Geometry From a Single Image

    We present a simple yet effective method to infer detailed full human body shape from only a single photograph. Our model can infer full-body shape, including face, hair, and clothing with wrinkles, at interactive frame rates. Results feature details even on parts that are occluded in the input image. Our main idea is to turn shape regression into an aligned image-to-image translation problem. The input to our method is a partial texture map of the visible region obtained from off-the-shelf methods. From a partial texture, we estimate detailed normal and vector displacement maps, which can be applied to a low-resolution smooth body model to add detail and clothing. Despite being trained purely with synthetic data, our model generalizes well to real-world photographs. Numerous results demonstrate the versatility and robustness of our method.
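    The final step described, applying an estimated vector displacement map to a smooth body model, might look like the sketch below; the nearest-neighbor UV lookup, map contents, and array shapes are illustrative assumptions, not Tex2Shape's actual pipeline.

```python
# Displace a smooth low-resolution body model by sampling a vector
# displacement map at each vertex's UV coordinate (hypothetical data).
import numpy as np

def apply_displacement(verts, uvs, disp_map):
    """verts: (N,3) smooth body vertices; uvs: (N,2) in [0,1);
    disp_map: (H,W,3) vector displacement map in model space."""
    h, w, _ = disp_map.shape
    px = (uvs[:, 0] * (w - 1)).astype(int)   # nearest-neighbor lookup
    py = (uvs[:, 1] * (h - 1)).astype(int)
    return verts + disp_map[py, px]          # add per-vertex offsets

verts = np.zeros((4, 3))                     # stand-in body vertices
uvs = np.array([[0.1, 0.2], [0.5, 0.5], [0.9, 0.1], [0.3, 0.8]])
disp = np.random.randn(256, 256, 3) * 0.01   # fake map, shapes only
detailed = apply_displacement(verts, uvs, disp)
```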

    Timing Measurement Platform for Arbitrary Black-Box Circuits Based on Transition Probability


    Automatic 3D facial model and texture reconstruction from range scans

    This paper presents a fully automatic approach to fitting a generic facial model to detailed range scans of human faces to reconstruct 3D facial models and textures with no manual intervention (such as specifying landmarks). A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registrations between the generic model and range scans of different sizes. Then a new template-fitting method, formulated in an optimization framework of minimizing the physically based elastic energy derived from thin shells, faithfully reconstructs the surfaces and textures from the range scans and yields dense point correspondences across the reconstructed facial models. Finally, we demonstrate a facial expression transfer method to clone facial expressions from the generic model onto the reconstructed facial models by using the deformation transfer technique.
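    One SICP iteration can be sketched as closest-point matching followed by a closed-form similarity fit (scale, rotation, translation) in the style of the Umeyama method; the code below is a simplified stand-in for the paper's registration, with all names hypothetical.

```python
# One Scaling ICP step: nearest-neighbor correspondences, then the
# closed-form similarity transform (Umeyama-style). Illustrative only.
import numpy as np

def sicp_step(src, dst):
    """One rigid-plus-scale alignment step of src (N,3) toward dst (M,3)."""
    # Brute-force closest point in dst for each src point.
    idx = np.argmin(((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
    nn = dst[idx]
    mu_s, mu_d = src.mean(0), nn.mean(0)
    S, D = src - mu_s, nn - mu_d                 # centred point sets
    U, sig, Vt = np.linalg.svd(D.T @ S)          # cross-covariance SVD
    d = np.ones(3)
    if np.linalg.det(U @ Vt) < 0:                # avoid reflections
        d[-1] = -1.0
    R = U @ np.diag(d) @ Vt                      # optimal rotation
    s = (sig * d).sum() / (S ** 2).sum()         # optimal uniform scale
    t = mu_d - s * (R @ mu_s)                    # optimal translation
    return s, R, t                               # apply as: s * p @ R.T + t
```

    A full SICP loop would alternate this step with applying the recovered transform to the source points until the alignment error stops decreasing.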