
    Improved Path Recovery in Pseudo Functional Path Delay Test Using Extended Value Algebra

    Scan-based delay test achieves high fault coverage due to its improved controllability and observability. This is particularly important for our K Longest Paths Per Gate (KLPG) test approach, which places additional necessary assignments on the paths. At the same time, some percentage of the flip-flops in the circuit are not scanned, which increases the difficulty of test generation. In particular, there is no direct control over the outputs of those non-scan cells. All the non-scan cells that cannot be initialized are considered “uncontrollable” in the test generation process. They behave like “black boxes” and may therefore block the propagation of a potential path, resulting in a loss of path delay test coverage. It is common for the timing-critical paths in a circuit to pass through nodes influenced by non-scan cells. In our work, we have extended the traditional Boolean algebra by including the “uncontrolled” state as a legal logic state, so that we can improve path coverage. Many path-pruning decisions can be made much earlier, and many of the paths lost due to uncontrollable non-scan cells can be recovered, increasing path coverage and potentially reducing the average CPU time per path. We have extended the traditional algebra to an 11-value algebra: Zero (stable), One (stable), Unknown, Uncontrollable, Rise, Fall, Zero/Uncontrollable, One/Uncontrollable, Unknown/Uncontrollable, Rise/Uncontrollable, and Fall/Uncontrollable. The logic descriptions of the NOT, AND, NAND, OR, NOR, XOR, XNOR, PI, Buff, Mux, TSL, TSH, TSLI, TSHI, TIE1 and TIE0 cells in the ISCAS89 benchmark circuits have been extended to 11-value truth tables. With 10% non-scan flip-flops, improved path delay fault coverage has been observed in comparison to the traditional algebra. The greater the number of long paths we want to test, the greater the path recovery advantage we achieve with our algebra. Along with improved path recovery, we have been able to test a greater number of transition fault sites. In most cases, the average CPU time per path is also lower with the 11-value algebra. The number of tested paths increased by an average of 1.9x for robust tests and 2.2x for non-robust tests for K=5 (the five longest rising and five longest falling transition paths through each line in the circuit), using the 11-value algebra in contrast to the traditional algebra. The transition fault coverage increased by an average of 70%. The improvement grew with higher K values. The CPU time using the extended algebra increased by an average of 20%, so the CPU time per path decreased by an average of 40%. In future work, the extended algebra could be applied to achieve better test coverage for memory-intensive circuits, circuits with logic black boxes, third-party IPs, and analog units.
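
    The abstract names the eleven values but not the gate truth tables, so the following is only a minimal sketch, in Python, of one plausible way to encode such an algebra: each value is a basic delay-test value (0, 1, X, R, F), its "/Uncontrollable" composite, or the pure uncontrollable state U, and the AND operator uses a conventional transition-algebra table for the controllable part while letting a controlled stable 0 dominate an uncontrollable input. The Val class, the _AND5 table and the and_op function are hypothetical names for illustration; the exact truth tables defined in the work may differ.

        # Minimal sketch (not the authors' exact truth tables) of an 11-value algebra
        # in which "uncontrollable" (U) is a first-class logic state: 5 basic values,
        # their 5 "/Uncontrollable" composites, and U itself.
        from dataclasses import dataclass
        from typing import Optional

        BASES = ("0", "1", "X", "R", "F")   # stable 0, stable 1, unknown, rise, fall

        @dataclass(frozen=True)
        class Val:
            base: Optional[str]   # one of BASES, or None for the pure uncontrollable state
            maybe_u: bool         # True for the "/Uncontrollable" composites and for U itself

            def __str__(self):
                return "U" if self.base is None else self.base + ("/U" if self.maybe_u else "")

        U = Val(None, True)

        # Illustrative 5-value AND for the controllable part (conventional transition
        # algebra); keys are stored with the pair sorted so the table stays symmetric.
        _AND5 = {
            ("0", "0"): "0", ("0", "1"): "0", ("0", "F"): "0", ("0", "R"): "0", ("0", "X"): "0",
            ("1", "1"): "1", ("1", "F"): "F", ("1", "R"): "R", ("1", "X"): "X",
            ("F", "F"): "F", ("F", "R"): "0", ("F", "X"): "X",
            ("R", "R"): "R", ("R", "X"): "X",
            ("X", "X"): "X",
        }

        def and_op(a: Val, b: Val) -> Val:
            """AND with the uncontrollable state treated as a black box: a controlled
            stable 0 still forces the output, otherwise uncontrollability taints it."""
            if a.base == "0" and not a.maybe_u:
                return a
            if b.base == "0" and not b.maybe_u:
                return b
            if a.base is None or b.base is None:
                return U
            key = tuple(sorted((a.base, b.base)))
            return Val(_AND5[key], a.maybe_u or b.maybe_u)

        print(and_op(Val("R", False), Val("1", False)))   # R   : rising transition propagates
        print(and_op(Val("R", False), U))                 # U   : blocked by a non-scan cell
        print(and_op(Val("0", False), U))                 # 0   : the controlling 0 still decides
        print(and_op(Val("1", True),  Val("F", False)))   # F/U : falls only if the 1/U input really is 1

    Distinguishing “uncontrollable” (and its composites) from a plain unknown is what lets the path generator make pruning decisions earlier and recover paths that would otherwise be lost to uncontrollable non-scan cells.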

    Real-time and fault tolerance in distributed control software

    Closed-loop control systems typically contain a multitude of spatially distributed sensors and actuators operated simultaneously, so these systems are parallel and distributed in their essence. However, mapping this parallelism onto a given distributed hardware architecture brings in additional requirements: safe multithreading, optimal process allocation, and real-time scheduling of bus and network resources. Nowadays, fault tolerance methods and fast, even online, reconfiguration are becoming increasingly important. All these often conflicting requirements make the design and implementation of real-time distributed control systems an extremely difficult task that requires substantial knowledge in several areas of control and computer science. Although many design methods have been proposed so far, none of them has succeeded in covering all important aspects of the problem at hand [1]. The continuous increase of production in the embedded market makes a simple and natural design methodology for real-time systems needed more than ever.

    Power-efficient high-speed interface circuit techniques

    Inter- and intra-chip connections have become the new challenge in scaling computing systems, ranging from mobile devices to high-end servers. Demand for aggregate I/O bandwidth has been driven by applications including high-speed Ethernet, backplane micro-servers, memory, graphics, chip-to-chip links and network-on-chip. I/O circuitry is becoming the major power consumer in SoC processors and memories, as the increasing bandwidth demands a larger per-pin data rate or a larger I/O pin count per component. The aggregate I/O bandwidth has approximately doubled every three to four years across a diverse range of standards in different applications. However, in order to keep pace with these standards, enabled in part by process-technology scaling, we will require more than just device scaling in the near future. New energy-efficient circuit techniques must be proposed to enable the next generations of handheld and high-performance computers, given the thermal and system-power limits they are starting to face. In this work, we propose circuit architectures that improve energy efficiency without decreasing speed performance for the most power-hungry circuits in high-speed interfaces. By introducing a new kind of logic operator in CMOS, called implication operators, we implemented a new family of high-speed frequency dividers/prescalers with reduced footprint and power consumption. New techniques and circuits for clock distribution, pre-emphasis and the driver at the transmitter side of the I/O circuitry have been proposed and implemented. At the receiver side, a new DFE architecture and CDR have been proposed and proven experimentally.
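
    As an illustration of the receiver-side equalization mentioned above, the short Python sketch below (with an assumed single tap value and the made-up function name dfe) shows what a decision-feedback equalizer does behaviourally: it subtracts the inter-symbol interference estimated from previously decided bits before slicing each sample. This is textbook DFE behaviour, not the specific low-power architecture proposed in this work.

        # Generic behavioural model of a decision-feedback equalizer (DFE); the tap
        # values and channel below are assumed for illustration only.
        def dfe(samples, taps):
            """Slice each received sample after subtracting the post-cursor
            inter-symbol interference estimated from previous decisions."""
            decisions = []
            for x in samples:
                # ISI contributed by the last len(taps) decided symbols,
                # most recent decision aligned with the first tap.
                isi = sum(t * d for t, d in zip(taps, reversed(decisions[-len(taps):])))
                bit = 1 if (x - isi) > 0 else -1   # binary slicer on +1/-1 symbols
                decisions.append(bit)
            return decisions

        # Example: a channel whose previous symbol leaks 30% into the current sample.
        tx = [1, -1, -1, 1, 1, -1]
        rx = [tx[i] + 0.3 * (tx[i - 1] if i else 0) for i in range(len(tx))]
        print(dfe(rx, taps=[0.3]))   # recovers the transmitted sequence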

    An Experimental Investigation of TCP Performance in High Bandwidth-Delay Product Paths.

    The performance of the Internet is determined not only by the network and hardware technologies that underlie it, but also by the software protocols that govern its use. In particular, the TCP transport protocol is responsible for carrying the great majority of traffic in the current Internet, including web traffic, email, file transfers, and music and video downloads. TCP provides two main functions. First, it provides functionality to detect and retransmit packets lost during a transfer, thereby providing a reliable transport service to higher-layer applications. Second, it enforces congestion control. That is, it seeks to match the rate at which packets are injected into the network to the available network capacity. A particular aim here is to avoid so-called congestion collapse, prevalent in the late 1980s prior to the inclusion of congestion control functionality in TCP. Over the last decade or so, the link speeds within networks have increased by several orders of magnitude. While the TCP congestion control algorithm has proved remarkably successful, it is now recognised that its performance is poor on paths with a high bandwidth-delay product, e.g. see [13, 8, 14, 26, 12] and references therein. With the increasing prevalence of high-speed links, this issue is becoming of widespread concern. This is reflected, for example, in the fact that the Linux operating system now employs an experimental algorithm called BIC-TCP [26], while Microsoft are actively studying new algorithms such as Compound-TCP [25]. While a number of proposals have been made to modify the TCP congestion control algorithm, all of these are still experimental and pending evaluation, as they change the congestion control in new and significant ways and their effects on the network are not well understood. In fact, the basic properties of networks employing these algorithms may be very different to networks of standard TCP flows. The aim of this thesis is to address, in part, this basic observation.
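
    To make the scaling problem concrete, the following back-of-the-envelope calculation in Python (using an assumed 10 Gbit/s path, 100 ms round-trip time and 1500-byte packets; these figures are not taken from the thesis) shows why standard TCP's additive-increase/multiplicative-decrease rule recovers so slowly after a single loss on a high bandwidth-delay product path.

        # Rough illustration of AIMD recovery time on a high bandwidth-delay
        # product path; all figures below are assumptions for the example.
        link_rate_bps = 10e9        # assumed 10 Gbit/s path
        rtt_s         = 0.1         # assumed 100 ms round-trip time
        packet_bits   = 1500 * 8    # standard 1500-byte packets

        # Bandwidth-delay product: the congestion window (in packets) needed to
        # keep the pipe full.
        bdp_packets = link_rate_bps * rtt_s / packet_bits

        # After one loss, standard TCP halves its window and then grows it by
        # roughly one packet per RTT, so it needs bdp/2 RTTs to refill the pipe.
        recovery_rtts = bdp_packets / 2
        recovery_s    = recovery_rtts * rtt_s

        print(f"BDP          : {bdp_packets:,.0f} packets")
        print(f"Recovery time: {recovery_s:,.0f} s (~{recovery_s / 60:.0f} minutes)")
        # -> roughly 83,000 packets and about 69 minutes to recover from a single
        #    loss, which is the motivation for modified window-growth rules such
        #    as BIC-TCP and Compound-TCP on these paths.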

    The 1992 4th NASA SERC Symposium on VLSI Design

    Papers from the fourth annual NASA Symposium on VLSI Design, co-sponsored by the IEEE, are presented. Each year this symposium is organized by the NASA Space Engineering Research Center (SERC) at the University of Idaho and is held in conjunction with a quarterly meeting of the NASA Data System Technology Working Group (DSTWG). One task of the DSTWG is to develop new electronic technologies that will meet next-generation electronic data system needs. The symposium provides insights into developments in VLSI and digital systems which can be used to increase data systems performance. The NASA SERC is proud to offer, at its fourth Symposium on VLSI Design, presentations by an outstanding set of individuals from national laboratories, the electronics industry, and universities. These speakers share insights into next-generation advances that will serve as a basis for future VLSI design.