
    Scan Test Cost and Power Reduction Through Systematic Scan Reconfiguration

    This paper presents segmented addressable scan (SAS), a test architecture that addresses test data volume, test application time, test power consumption, and tester channel requirements with a hardware overhead of a few gates per scan chain. Using SAS, the paper also presents systematic scan reconfiguration, a test data compression algorithm that achieves 10x to 40x compression ratios without requiring any information from the automatic test pattern generation tool about the unspecified bits. The architecture and the algorithm were applied to both single stuck-at and transition fault test sets.
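
    The abstract reports 10x to 40x compression but does not spell out the encoding here. As a rough illustration of why individually addressable scan segments can cut tester data, the toy model below assumes that segments requiring identical data within a pattern can be written with a single addressed load; it is a hypothetical sketch, not the paper's algorithm, and it ignores segment-address overhead.

        # Toy model: compare flat serial scan loading against "one write per
        # distinct segment value" loading. Hypothetical sketch only; the SAS
        # encoding in the paper is not reproduced, and address bits are ignored.

        def compression_ratio(patterns, seg_len):
            shipped = 0   # bits the tester actually transfers
            flat = 0      # bits a conventional serial load would transfer
            for pat in patterns:
                segs = [pat[i:i + seg_len] for i in range(0, len(pat), seg_len)]
                shipped += len(set(segs)) * seg_len   # each distinct value once
                flat += len(segs) * seg_len           # every segment in full
            return flat / shipped

        # Regular, care-bit-sparse patterns compress well under this model.
        pats = ["0000000011110000", "1111000000000000", "0000111100001111"]
        print(f"~{compression_ratio(pats, 4):.1f}x")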

    Restoring Reliability in Fault Tolerant Reconfigurable Systems

    The new generations of SRAM-based FPGA devices, built on nanometer technology, are the preferred choice for the implementation of reconfigurable computing platforms. However, smaller technological scales increase their vulnerability to manufacturing imperfections and hence to the occurrence of electromigration. Moreover, the large internal RAM (for configuration purposes or as embedded memory blocks) makes them more prone to soft errors. The incorporation of self-reconfiguration capabilities in recent FPGAs, together with the use of soft and hard microprocessor cores, helps to offset these vulnerabilities by enabling the development of self-restoring fault-tolerant reconfigurable systems. In the methodology presented in this paper, the embedded microprocessor is also responsible for the implementation of online self-test-and-repair strategies based on modular redundancy and on self-reconfiguration. The detection of faults, caused by soft or hard errors, may be followed by repair actions, depending on the fault type. This approach leads to smoother system degradation, extending the system's lifetime and improving its reliability.
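
    As an illustration of the self-test-and-repair loop outlined above, the sketch below votes over three redundant replicas and then decides between reconfiguration (soft error) and relocation (persistent, hard fault). The function names and the persistence test are hypothetical stand-ins, not the paper's implementation.

        # Hypothetical sketch of a vote-and-repair cycle the embedded
        # microprocessor might run; not code from the paper.

        def vote(a, b, c):
            """Majority vote; also report which replica (if any) disagrees."""
            if a == b == c:
                return a, None
            if a == b:
                return a, 2
            if a == c:
                return a, 1
            return b, 0

        def service(outputs, reconfigure, relocate, persistent):
            """Scrub a disagreeing replica; relocate it if the fault persists."""
            value, bad = vote(*outputs)
            if bad is not None:
                reconfigure(bad)          # soft error: rewrite configuration
                if persistent(bad):
                    relocate(bad)         # hard fault: move to spare logic
            return value

        # Toy run: replica 1 returns a corrupted value.
        out = service((7, 9, 7),
                      reconfigure=lambda i: print(f"scrub replica {i}"),
                      relocate=lambda i: print(f"relocate replica {i}"),
                      persistent=lambda i: False)
        print("voted output:", out)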

    Science and Applications Space Platform (SASP) End-to-End Data System Study

    The study assessed the capability of present technology and the Tracking and Data Relay Satellite System (TDRSS) to accommodate Science and Applications Space Platform (SASP) payload users' requirements, the maximization of service to the user through optimization of the SASP Onboard Command and Data Management System, and the availability of new technology to accommodate the evolution of SASP payloads. Key technology items identified to accommodate payloads on a SASP were onboard storage devices, multiplexers, and onboard data processors. The primary driver is the limited access to TDRSS single-access channels, which are shared with all low Earth orbit spacecraft plus the Shuttle. Advantages of onboard data processing include long-term storage of processed data until TDRSS is accessible (reducing data loss), elimination of large data processing tasks at the ground stations, and more timely access to the data.
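
    The buffering argument above is easy to quantify. The sketch below uses hypothetical placeholder numbers (not figures from the study) to show how payload data rate and the gap between TDRSS contacts set the onboard storage requirement, and how onboard processing shrinks it.

        # Back-of-the-envelope sizing; the rate and contact gap are assumptions,
        # not values from the study.

        payload_rate_mbps = 50      # aggregate payload output (assumed)
        contact_gap_min = 45        # worst-case wait for a TDRSS channel (assumed)

        bits_buffered = payload_rate_mbps * 1e6 * contact_gap_min * 60
        print(f"onboard storage needed: {bits_buffered / 8 / 1e9:.1f} GB")

        # A 10x reduction from onboard processing shrinks both the buffer and
        # the eventual downlink burden by the same factor.
        print(f"with 10x onboard reduction: {bits_buffered / 10 / 8 / 1e9:.2f} GB")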

    Application of advanced technology to space automation

    Automated operations in space provide the key to optimized mission design and data acquisition at minimum cost for the future. The results of this study strongly support this statement and should provide further incentive for immediate development of the specific automation technology defined herein. Essential automation technology requirements were identified for future programs. The study was undertaken to address the future role of automation in the space program, the potential benefits to be derived, and the technology efforts that should be directed toward obtaining these benefits.

    Deployable antenna demonstration project

    Test program options are described for large, lightweight deployable antennas for space communications, radar, and radiometry systems.

    Hardware for digitally controlled scanned probe microscopes

    The design and implementation of a flexible and modular digital control and data acquisition system for scanned probe microscopes (SPMs) is presented. The measured performance of the system shows it to be capable of 14-bit data acquisition at a 100-kHz rate and a full 18-bit output resolution, resulting in less than 0.02-Å rms position noise while maintaining a scan range in excess of 1 µm in both the X and Y dimensions. This level of performance achieves the goal of making the noise of the microscope control system an insignificant factor for most experiments. The adaptation of the system to various types of SPM experiments is discussed. Advances in audio electronics and digital signal processors have made the construction of such high-performance systems possible at low cost.
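
    The quoted figures are easy to cross-check: an ideal 18-bit output spanning a 1 µm scan already has a sub-angstrom step, so position noise below 0.02 Å rms means the control electronics sit below a single output step. The arithmetic, using only the values quoted above, is shown below.

        # Step size of an 18-bit output spanning a 1 um scan range.
        scan_range_m = 1e-6
        bits = 18
        lsb_m = scan_range_m / 2**bits
        print(f"LSB step: {lsb_m * 1e10:.3f} angstrom")   # ~0.038 A per step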

    Network-on-Chip

    Addresses the Challenges Associated with System-on-Chip Integration

    Network-on-Chip: The Next Generation of System-on-Chip Integration examines the current issues restricting chip-on-chip communication efficiency, and explores network-on-chip (NoC), a promising alternative that equips designers with the capability to produce a scalable, reusable, and high-performance communication backbone by allowing for the integration of a large number of cores on a single system-on-chip (SoC).

    This book provides a basic overview of topics associated with NoC-based design: communication infrastructure design, communication methodology, evaluation framework, and mapping of applications onto NoC. It details the design and evaluation of different proposed NoC structures, low-power techniques, signal integrity and reliability issues, application mapping, testing, and future trends. Utilizing examples of chips that have been implemented in industry and academia, this text presents the full architectural design of components verified through implementation in industrial CAD tools. It describes NoC research and developments, incorporates theoretical proofs strengthening the analysis procedures, and includes algorithms used in NoC design and synthesis. In addition, it considers other upcoming NoC issues, such as low-power NoC design, signal integrity issues, NoC testing, reconfiguration, synthesis, and 3-D NoC design.

    This text comprises 12 chapters and covers:
    - The evolution of NoC from SoC and its research and developmental challenges
    - NoC protocols, elaborating flow control, available network topologies, routing mechanisms, fault tolerance, quality-of-service support, and the design of network interfaces
    - The router design strategies followed in NoCs
    - The evaluation mechanism of NoC architectures
    - The application mapping strategies followed in NoCs
    - Low-power design techniques specifically followed in NoCs
    - The signal integrity and reliability issues of NoC
    - The details of NoC testing strategies reported so far
    - The problem of synthesizing application-specific NoCs
    - Reconfigurable NoC design issues
    - Direction of future research and development in the field of NoC

    Network-on-Chip: The Next Generation of System-on-Chip Integration covers the basic topics, technology, and future trends relevant to NoC-based design, and can be used by engineers, students, researchers, and other industry professionals interested in computer architecture, embedded systems, and parallel/distributed systems.
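
    As a concrete taste of the routing mechanisms a text like this covers, the sketch below implements plain dimension-ordered (XY) routing on a 2D mesh. It is a generic textbook example, not code or an architecture taken from the book.

        # Dimension-ordered (XY) routing on a 2D mesh NoC: route along X first,
        # then along Y. Banning Y-to-X turns makes the scheme deadlock-free.

        def xy_route(src, dst):
            """Return the (x, y) routers a packet visits from src to dst."""
            x, y = src
            path = [(x, y)]
            while x != dst[0]:
                x += 1 if dst[0] > x else -1
                path.append((x, y))
            while y != dst[1]:
                y += 1 if dst[1] > y else -1
                path.append((x, y))
            return path

        print(xy_route((0, 0), (3, 2)))
        # [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]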

    Testability and redundancy techniques for improved yield and reliability of CMOS VLSI circuits

    The research presented in this thesis is concerned with the design of fault-tolerant integrated circuits as a contribution to the design of fault-tolerant systems. The economical manufacture of very large area ICs will necessitate the incorporation of fault-tolerance features which are routinely employed in current high-density dynamic random access memories. Furthermore, the growing use of ICs in safety-critical applications and/or hostile environments, in addition to the prospect of single-chip systems, will mandate the use of fault-tolerance for improved reliability.

    A fault-tolerant IC must be able to detect and correct all possible faults that may affect its operation. The ability of a chip to detect its own faults is not only necessary for fault-tolerance, but is also regarded as the ultimate solution to the problem of testing. Off-line periodic testing is selected for this research because it achieves better coverage of physical faults and requires less extra hardware than on-line error detection techniques. Tests for CMOS stuck-open faults are shown to detect all other faults. Simple test sequence generation procedures for the detection of all faults are derived. The test sequences generated by these procedures produce a trivial output, thereby greatly simplifying the task of test response analysis. A further advantage of the proposed test generation procedures is that they do not require the enumeration of faults. The implementation of built-in self-test is considered, and it is shown that the hardware overhead is comparable to that associated with pseudo-random and pseudo-exhaustive techniques while achieving a much higher fault coverage through the use of the proposed test generation procedures.

    Consideration of the problem of testing the test circuitry led to the conclusion that complete test coverage may be achieved if separate chips cooperate in testing each other's untested parts. An alternative approach towards complete test coverage would be to design the test circuitry so that it is as distributed as possible and is tested as it performs its function.

    Fault correction relies on the provision of spare units and a means of reconfiguring the circuit so that the faulty units are discarded. This raises the question of the optimum size of a unit. A mathematical model linking yield and reliability is therefore developed to answer this question and to study the effects of parameters such as the amount of redundancy, the size of the additional circuitry required for testing and reconfiguration, and the effect of periodic testing on reliability. The stringent requirement on the size of the reconfiguration logic is illustrated by applying the model to a typical example. Another important result concerns the effect of periodic testing on reliability: it is shown that periodic off-line testing can achieve approximately the same level of reliability as on-line testing, even when the time between tests is many hundreds of hours.
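
    The abstract refers to a mathematical model linking yield and reliability to the amount of redundancy and the size of the test and reconfiguration circuitry. The snippet below sketches a generic model of that kind (Poisson per-module yield, binomial survival with spares, non-redundant overhead) under assumed parameter values; it is a textbook-style illustration, not the thesis's own formulation.

        # Generic redundancy/yield model: each module is good with probability
        # exp(-A*D); the chip works if at least n_needed of n_needed + n_spares
        # modules are good AND the (non-redundant) overhead logic is defect-free.

        from math import comb, exp

        def chip_yield(n_needed, n_spares, module_area_cm2, defect_density_cm2,
                       overhead_area_cm2=0.0):
            y_mod = exp(-module_area_cm2 * defect_density_cm2)
            total = n_needed + n_spares
            y_core = sum(comb(total, k) * y_mod**k * (1 - y_mod)**(total - k)
                         for k in range(n_needed, total + 1))
            return y_core * exp(-overhead_area_cm2 * defect_density_cm2)

        # Assumed example values: 16 units of 0.05 cm^2 each, 1 defect/cm^2.
        print(f"no spares: {chip_yield(16, 0, 0.05, 1.0):.2f}")
        print(f"2 spares : {chip_yield(16, 2, 0.05, 1.0, overhead_area_cm2=0.02):.2f}")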

    The 30/20 GHz flight experiment system, phase 2. Volume 2: Experiment system description

    A detailed technical description of the 30/20 GHz flight experiment system is presented. The overall communication system is described, along with performance analyses, communication operations, and experiment plans. Hardware descriptions of the payload are given, together with the tradeoff studies that led to the final design. The spacecraft bus which carries the payload is discussed, and its interface with the launch vehicle system is described. Finally, the hardware and operations of the terrestrial segment are presented.