907 research outputs found

    A study of pseudorandom test for VLSI


    Design of On-Chip Self-Testing Signature Register

    Over the last few years, scan test has become too expensive to implement for industry-standard designs due to increasing test data volume and test time. The test cost of a chip is governed mainly by the resource utilization of Automatic Test Equipment (ATE). It also depends directly on test time, which includes the time required to load the test program, apply test vectors, and analyze the chip's generated test responses. These test-time and data-volume issues increasingly push designers to use on-chip test data compactors, on the input side, the output side, or both. Such techniques address the former issues significantly but have little effect on the growing number of inputs and outputs used in test mode. Further, the number of test pins on the DUT is increasing with each generation, so the scan channels on the test floor are falling short for such ICs. To address these issues, we introduce an on-chip self-testing signature register. It comprises a response compactor and a comparator: the compactor compacts a large chunk of response data into a small test signature, and the comparator compares this signature with the desired one. The overall test result for the design is produced on a single output pin. Because no storage of test responses is required, a considerable reduction in ATE memory can be achieved. Also, with only a single pin to monitor for the test result, the number of tester channels and compare edges on the ATE side is significantly reduced. This cuts the maintenance and usage cost of the test floor and increases its lifetime. Furthermore, the reduction in test pins gives DFT engineers scope to increase the number of scan chains and thus further reduce test time.
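The compact-and-compare scheme described in the abstract can be sketched in software. The sketch below folds a stream of response words into a signature with a MISR (multiple-input signature register) and then compares it against a golden signature to yield a single pass/fail bit; the register width, tap positions, and data values are illustrative assumptions, not the authors' actual design.

```python
# Sketch of MISR-based response compaction plus on-chip comparison.
# Width, taps, and the sample data are illustrative assumptions.

def misr_compact(responses, width=8, taps=(7, 5, 4, 3)):
    """Fold a stream of test-response words into one signature."""
    state = 0
    mask = (1 << width) - 1
    for word in responses:
        feedback = 0
        for t in taps:                      # XOR of tap bits = LFSR feedback
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & mask
        state ^= word & mask                # inject response word (MISR step)
    return state

def self_test(responses, golden_signature):
    """Comparator: a single pass/fail bit, as on the single output pin."""
    return misr_compact(responses) == golden_signature

# The golden signature would come from fault-free simulation at design
# time; here it is derived from the same data purely for illustration.
fault_free = [0x3A, 0x5C, 0x1F, 0xE2]
golden = misr_compact(fault_free)
print(self_test(fault_free, golden))               # fault-free response passes
print(self_test([0x3A, 0x5C, 0x1F, 0xE3], golden)) # corrupted response fails
```

Since only the final signature is compared, the ATE never stores or inspects the raw response stream, which is the source of the memory and channel savings the abstract claims.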

    Development of a predictive electrical motor and pump maintenance system

    Thesis
    This dissertation covers the development and implementation of a predictive maintenance monitoring programme for the Water Supply Directorate of the Department of Water Affairs, Namibia. The maintenance policy in the Directorate was based on a combination of breakdown maintenance and preventative maintenance: maintenance was carried out when a specific piece of equipment was forced out of production. As the cost of replacing and repairing equipment increased substantially, a condition-based maintenance system was investigated and implemented. The purpose of condition-monitoring maintenance is to find a convenient time for maintenance to be carried out. Different condition-monitoring technologies exist; after they had been investigated, vibration-based predictive maintenance was chosen. The project includes results from a number of field case studies and shows that vibration analysis can be used to determine the mechanical condition of electrical motors and pumps. The monitoring programme covers a total of 80 pump sets, comprising mainly electrical motors and pumps ranging from 45 to 2 400 kilowatts. In general, the programme is based on determining suitable monitoring parameters by measuring the vibration characteristics of a machine at regular intervals. The generalised approach to vibration analysis in a predictive maintenance programme requires a sound understanding of the fundamental theory of machine-element dynamics and of the dynamic forces and instabilities that excite vibration in electric motors and centrifugal pumps, together with the ability to plan concise experiments to obtain practical data on the cause of failure. Machine faults cause a change in the shape of the vibration frequency spectrum.
The cause of a fault can be diagnosed by determining which frequency components have increased and matching them with the characteristic vibration signatures of different faults. All machines vibrate at a characteristic level that depends on the machine's design and operation. As a machine ages and deteriorates, vibration increases sporadically or gradually, and each machine, regardless of its mechanical design, produces its own unique vibration. A vibration problem can be analysed by reviewing its component frequencies and determining at which frequency the vibration occurs. Using a vibration analyser, it is possible to measure the frequency and corresponding amplitude of each component. It was found that the greatest vibration normally occurs at the running speed of the machine, from which it can be concluded that unbalance is a major cause. Misalignment was normally identified at two or three times running speed. Rolling-element bearings produce their own high-frequency, low-amplitude vibration, so defects in them can be separated from the vibration produced by other mechanical components. On sleeve bearings, excessive clearances were found to be the main cause of vibration, producing many harmonically related frequencies. Another possible problem is mechanical looseness, whose amplitude normally depends on the amount of looseness and the mechanical design of the machine; this was characterised by vibration at twice the running speed with higher-than-usual harmonics. Resonance can also cause excessive vibration: each part of a machine, as well as the machine itself, has a natural frequency, and this frequency relative to the machine's running speed is of great importance, since no machine should be operated in a resonant condition. By utilising a predictive maintenance programme such as vibration monitoring, the condition of vital machinery can be determined effectively.
This monitoring system can give early warning of impending failures, determine the cause of a fault, and be used to schedule repairs. Such a system can therefore prevent catastrophic failure, lengthen the life of machinery, and reduce maintenance costs. Since installation of the programme, the number of unexpected failures on monitored machines has been greatly reduced, and the savings in maintenance costs paid back the investment within 18 months of installation.
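The diagnostic rule described above (dominant frequency at 1x running speed suggests unbalance, at 2x suggests misalignment or looseness) can be sketched as a small spectrum analysis. The plain-Python DFT, the synthetic signal, and the tolerance values below are illustrative assumptions; a real programme would use a vibration analyser or an FFT over measured data.

```python
# Sketch of dominant-frequency fault diagnosis from a vibration signal.
# The signal, thresholds, and fault labels are illustrative assumptions.
import math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) of the largest DFT bin, excluding DC."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):              # one-sided spectrum, skip DC
        re = sum(samples[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(samples[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

def diagnose(freq, running_speed_hz, tol=0.1):
    """Map the dominant frequency to the fault classes named in the text."""
    ratio = freq / running_speed_hz
    if abs(ratio - 1) < tol:
        return "unbalance (1x running speed)"
    if abs(ratio - 2) < tol:
        return "misalignment or looseness (2x running speed)"
    return "other (e.g. bearing defect at high frequency)"

# Synthetic machine: 25 Hz running speed with a strong 2x component.
rate, n, speed = 1000, 200, 25.0
signal = [0.3 * math.sin(2 * math.pi * speed * i / rate)
          + 1.0 * math.sin(2 * math.pi * 2 * speed * i / rate)
          for i in range(n)]
freq = dominant_frequency(signal, rate)
print(freq, "->", diagnose(freq, speed))
```

The same comparison against running speed generalises to the other signatures mentioned: harmonically related peaks for sleeve-bearing clearance, and isolated high-frequency, low-amplitude peaks for rolling-element bearing defects.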

    A Hardware Security Solution against Scan-Based Attacks

    Scan-based Design for Test (DfT) schemes have been widely used to achieve high fault coverage for integrated circuits. The scan technique provides full access to the internal nodes of the device under test, allowing them to be controlled and their responses to input test vectors observed. While such comprehensive access is highly desirable for testing, it is not acceptable for secure chips, as it can be exploited by various attacks. In this work, new methods are presented to protect critical information against scan-based attacks. In the proposed methods, access via the scan chain to the circuit containing secret information is severely limited in order to reduce the risk of a security breach. To preserve the testability of the circuit, a built-in self-test (BIST) that uses an LFSR as the test pattern generator (TPG) is proposed. The proposed schemes can serve as a countermeasure against side-channel attacks with low area overhead compared to existing solutions in the literature.
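An LFSR test-pattern generator of the kind the abstract proposes can be sketched in a few lines. The 4-bit Fibonacci LFSR below, with feedback polynomial x^4 + x^3 + 1, is an illustrative choice of width and taps, not the paper's actual configuration; any primitive polynomial gives the same maximal-length property.

```python
# Sketch of an LFSR used as a BIST test-pattern generator.
# Width 4 and taps (3, 2) (polynomial x^4 + x^3 + 1) are illustrative.

def lfsr_patterns(seed, width=4, taps=(3, 2)):
    """Yield the pseudorandom pattern sequence of a Fibonacci LFSR."""
    state = seed
    mask = (1 << width) - 1
    for _ in range((1 << width) - 1):       # maximal-length: 2^n - 1 states
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR the tap bits
        state = ((state << 1) | fb) & mask  # shift feedback bit in

patterns = list(lfsr_patterns(seed=0b1001))
print(len(patterns), len(set(patterns)))    # → 15 15
```

Because the generator is on-chip and deterministic, the tester never needs to drive the scan chain with external patterns, which is what removes the attack surface while keeping fault coverage testable.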

    Conceptual roles of data in program: analyses and applications

    Program comprehension is a prerequisite for many software evolution and maintenance tasks. Current research falls short in addressing how to build tools that use domain-specific knowledge to extract valuable information for program comprehension. Such capabilities are critical for large and complex programs, where comprehension is often impossible without the help of domain-specific knowledge. Our research advances the state of the art in program analysis techniques based on domain-specific knowledge. Program artifacts, including variables and methods, are carriers of domain concepts that provide the key to understanding programs. Our program analysis is directed by domain knowledge stored as domain-specific rules; it is iterative and interactive, based on flexible inference rules and interchangeable, extensible information storage. We designed and developed a comprehensive software environment, SeeCORE, based on our knowledge-centric analysis methodology. The SeeCORE tool provides multiple views and abstractions to assist in understanding complex programs. Case studies demonstrate the effectiveness of our method, and we demonstrate the flexibility of our approach by analyzing two legacy programs in distinct domains.
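The idea of rule-directed concept extraction can be illustrated with a toy sketch: domain rules map identifier patterns to the domain concepts they carry. The rule format, concept names, and identifiers below are assumptions for illustration only; they are not SeeCORE's actual representation.

```python
# Toy sketch of rule-directed identifier analysis. Rules and names are
# hypothetical; real systems use richer, extensible rule stores.
import re

# Each rule pairs a regex over identifiers with the domain concept it suggests.
RULES = [
    (re.compile(r"acct|account", re.I), "Account"),
    (re.compile(r"bal|balance", re.I), "Balance"),
    (re.compile(r"txn|transaction", re.I), "Transaction"),
]

def infer_concepts(identifiers):
    """Map program identifiers to domain concepts via the rule base."""
    mapping = {}
    for name in identifiers:
        concepts = [c for rx, c in RULES if rx.search(name)]
        if concepts:
            mapping[name] = concepts
    return mapping

print(infer_concepts(["acctBal", "txnLog", "loopIndex"]))
```

An iterative, interactive analysis would then let the user confirm or refine these mappings, feeding the confirmed concepts back into further rule applications.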

    A statistical theory of digital circuit testability


    Cooperative relaying using USRP and GNU radio

    Title from PDF of title page, viewed on October 22, 2013. Thesis advisor: Cory Beard. Vita. Includes bibliographic references (pages 89-91). Thesis (M.S.)--School of Computing and Engineering. University of Missouri--Kansas City, 2013.
    Wireless communication systems have developed tremendously in recent years, with new technologies emerging constantly. With today's technology, users can communicate with each other from any corner of the world. But wireless links are prone to effects such as multipath fading, interference, low signal strength, and reduced spectrum efficiency, which make the system less reliable. For this reason, researchers continuously work to improve wireless system performance. Cooperative communications is one of the fastest-growing research areas that can enable efficient spectrum usage and create a reliable network. In traditional networks, the physical layer is responsible only for communication between two nodes, which are more exposed to the impairments of the channel. Cooperative communication creates an extra path with the help of a relay between the terminals, which enhances signal quality. We implement this strategy using GNU Radio and three radios (USRP: Universal Software Radio Peripheral) acting as a transmitter, a receiver, and a relay. Our main goal is to verify communication between the two radios (a direct link) and then implement cooperative communication by introducing a relay between them. The relay operates in amplify-and-forward and decode-and-forward scenarios. Characteristics such as packet error rate (PER), bit error rate (BER), and character error rate are studied for each scenario, and the overall BER of the system is calculated.
Performance is then compared across scenarios involving obstructions, transmit and receive gains, and relaying approaches, with the goal of determining which approaches are best in which scenarios.
Introduction -- Background -- Design -- Results and analysis -- Conclusion and future work -- Appendix
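The amplify-and-forward vs decode-and-forward comparison the thesis carries out in hardware can be illustrated with a toy BPSK simulation over two additive-Gaussian-noise hops. The noise level, bit count, and unit relay gain below are illustrative assumptions, not the thesis's measured channel conditions.

```python
# Toy two-hop BPSK relay simulation comparing AF and DF relaying.
# Noise level and gains are illustrative assumptions.
import random

def bpsk_ber(relay, n_bits=5000, noise=0.6, seed=1):
    """Estimate end-to-end BER for an 'AF' or 'DF' relay."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        tx = 1.0 if bit else -1.0
        r = tx + rng.gauss(0, noise)        # source -> relay hop
        if relay == "AF":
            fwd = r                         # amplify-and-forward: noise is passed on
        else:
            fwd = 1.0 if r > 0 else -1.0    # decode-and-forward: clean re-modulation
        y = fwd + rng.gauss(0, noise)       # relay -> destination hop
        errors += (y > 0) != bool(bit)      # hard-decision at destination
    return errors / n_bits

print("AF BER:", bpsk_ber("AF"))
print("DF BER:", bpsk_ber("DF"))
```

At this noise level DF outperforms AF, since AF forwards the first hop's noise into the second hop while DF regenerates the symbol; in practice DF can do worse when the relay itself decodes incorrectly, which is part of what the hardware comparison examines.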