A comparison of classical scheduling approaches in power-constrained block-test scheduling
Classical scheduling approaches are applied here to overcome the problem of unequal-length block-test scheduling under power dissipation constraints. List-scheduling-like approaches are proposed first as greedy algorithms to tackle the aforementioned problem. Then, distribution-graph-based approaches are described in order to achieve balanced test concurrency and test power dissipation. An extended tree growing technique is also used in combination with these classical approaches in order to improve test concurrency within the assigned power dissipation limits. A comparison between the results of the test scheduling experiments highlights the advantages and disadvantages of applying different classical scheduling algorithms to the power-constrained test scheduling problem.
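As an illustration of the greedy, list-scheduling-style idea described above (not the paper's exact algorithm), the following sketch packs unequal-length block tests into test sessions under a power budget; the test data and power limit are hypothetical.

```python
# Minimal sketch of greedy, power-constrained list scheduling for
# unequal-length block tests (illustrative, not the paper's algorithm).

def list_schedule(tests, power_limit):
    """tests: list of (name, length, power); returns a list of test sessions."""
    # Priority: longest tests first, a common list-scheduling heuristic.
    ready = sorted(tests, key=lambda t: t[1], reverse=True)
    sessions = []
    while ready:
        session, used_power, remaining = [], 0.0, []
        for name, length, power in ready:
            if used_power + power <= power_limit:
                session.append((name, length, power))
                used_power += power
            else:
                remaining.append((name, length, power))
        sessions.append(session)
        ready = remaining
    return sessions

if __name__ == "__main__":
    # Hypothetical block tests: (name, test length in cycles, power units).
    tests = [("RAM", 120, 4.0), ("ALU", 80, 3.0), ("MUL", 200, 5.0), ("ROM", 60, 2.0)]
    for i, s in enumerate(list_schedule(tests, power_limit=8.0)):
        print(f"session {i}: {[t[0] for t in s]}")
```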
Distribution-graph based approach and extended tree growing technique in power-constrained block-test scheduling
A distribution-graph based scheduling algorithm is proposed together with an extended tree growing technique to deal with the problem of unequal-length block-test scheduling under power dissipation constraints. The extended tree growing technique is used in combination with the classical scheduling approach in order to improve test concurrency within the assigned power dissipation limits. Its goal is to achieve balanced test power dissipation by employing a least mean square error function, which serves as a distribution-graph based global priority function. Finally, test scheduling examples and experiments highlight the efficiency of this approach towards a system-level test scheduling algorithm.
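The sketch below illustrates the distribution-graph idea in its simplest form: tentatively place a test at each candidate start time, update the per-slot power distribution, and score the placement by the mean-square deviation from the average power. The data and time horizon are made up, and this is only a rough stand-in for the paper's priority function.

```python
# Illustrative distribution-graph priority: pick the start time that keeps the
# per-slot power profile as flat as possible (least mean square error).

def mse_of_distribution(dist):
    mean = sum(dist) / len(dist)
    return sum((p - mean) ** 2 for p in dist) / len(dist)

def best_start_time(dist, length, power, horizon):
    """dist: current per-slot power; returns the start slot minimizing the LMS error."""
    best, best_err = None, float("inf")
    for start in range(horizon - length + 1):
        trial = list(dist)
        for slot in range(start, start + length):
            trial[slot] += power
        err = mse_of_distribution(trial)
        if err < best_err:
            best, best_err = start, err
    return best

if __name__ == "__main__":
    horizon = 10
    dist = [0.0] * horizon  # empty power distribution graph
    for name, length, power in [("MUL", 4, 5.0), ("RAM", 3, 4.0), ("ALU", 2, 3.0)]:
        start = best_start_time(dist, length, power, horizon)
        for slot in range(start, start + length):
            dist[slot] += power
        print(name, "->", start, dist)
```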
Design a Low Power Built in Self-Test (BIST) Architecture for Fast Multiplier and Optimize in Terms of Real Time Functionality
Aiming at low power during testing, I present a methodology for deriving a BIST architecture for fast multipliers. In my proposed research, several design rules are given for designing the Wallace tree so that it is fully testable under the cell fault model. The proposed low-power BIST architecture for the derived multipliers is achieved by: (i) introducing test pattern generators (TPGs), (ii) properly assigning the TPG outputs to the multiplier inputs, and (iii) significantly reducing the test vector length. In this work, I have implemented a 4-bit x 4-bit multiplier with several alternative test pattern generators (TPGs); one BIST TPG architecture uses a 6-bit counter. Operation speed, time delay, area, and power consumption are calculated for the design. The reduction in power dissipation is achieved by properly assigning the TPG outputs to the multiplier inputs, significantly reducing the test set length, and using a suitable TPG built from a 6-bit counter.
DOI: 10.17762/ijritcc2321-8169.150317
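A minimal sketch of the counter-based TPG idea from the abstract above: a 6-bit counter supplies all test patterns, and its outputs are fanned out to the eight inputs of a 4-bit x 4-bit multiplier. The bit-to-input assignment shown here is hypothetical, not the assignment used in the work.

```python
# Counter-based TPG sketch for a 4x4 multiplier (illustrative bit assignment).

def counter_tpg(width=6):
    """Yield all 2**width patterns of a simple binary counter, LSB first."""
    for value in range(2 ** width):
        yield [(value >> bit) & 1 for bit in range(width)]

# Hypothetical assignment of counter bits to multiplier inputs a[3:0], b[3:0];
# two counter bits are shared between operands to keep the counter at 6 bits.
ASSIGN_A = [0, 1, 2, 3]
ASSIGN_B = [4, 5, 0, 1]

def apply_pattern(bits):
    a = sum(bits[ASSIGN_A[i]] << i for i in range(4))
    b = sum(bits[ASSIGN_B[i]] << i for i in range(4))
    return a, b, a * b  # expected product, for comparison against the DUT

if __name__ == "__main__":
    vectors = [apply_pattern(bits) for bits in counter_tpg()]
    print(len(vectors), "test vectors, e.g.", vectors[:3])
```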
Thermal-Safe Test Scheduling for Core-Based System-on-a-Chip Integrated Circuits
Overheating has been acknowledged as a major problem during the testing of complex system-on-chip (SOC) integrated circuits. Several power-constrained test scheduling solutions have been recently proposed to tackle this problem during system integration. However, we show that these approaches cannot guarantee hot-spot-free test schedules because they do not take into account the non-uniform distribution of heat dissipation across the die and the physical adjacency of simultaneously active cores. This paper proposes a new test scheduling approach that is able to produce short test schedules and guarantee thermal-safety at the same time. Two thermal-safe test scheduling algorithms are proposed. The first algorithm computes an exact (shortest) test schedule that is guaranteed to satisfy a given maximum temperature constraint. The second algorithm is a heuristic intended for complex systems with a large number of embedded cores, for which the exact thermal-safe test scheduling algorithm may not be feasible. Based on a low-complexity test session thermal cost model, this algorithm produces near-optimal length test schedules with significantly less computational effort compared to the optimal algorithm.
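The following sketch illustrates the flavour of the heuristic described above: build test sessions greedily, but reject any combination of cores whose session thermal cost exceeds a ceiling. The adjacency-weighted cost model and all numbers are stand-ins, not the paper's actual thermal cost model or algorithm.

```python
# Greedy, thermal-cost-limited session building (illustrative only).

def session_thermal_cost(session, power, adjacency):
    # Base cost: sum of core powers; penalty for simultaneously active neighbours.
    cost = sum(power[c] for c in session)
    for i, a in enumerate(session):
        for b in session[i + 1:]:
            cost += adjacency.get((a, b), adjacency.get((b, a), 0.0))
    return cost

def schedule(cores, power, adjacency, max_cost):
    remaining, sessions = sorted(cores, key=power.get, reverse=True), []
    while remaining:
        session, rest = [], []
        for core in remaining:
            if session_thermal_cost(session + [core], power, adjacency) <= max_cost:
                session.append(core)
            else:
                rest.append(core)
        sessions.append(session)
        remaining = rest
    return sessions

if __name__ == "__main__":
    power = {"cpu": 5.0, "dsp": 4.0, "mem": 2.0, "io": 1.0}
    adjacency = {("cpu", "dsp"): 3.0, ("dsp", "mem"): 1.0}  # hypothetical neighbour penalties
    print(schedule(list(power), power, adjacency, max_cost=8.0))
```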
Integration of a Digital Built-in Self-Test for On-Chip Memories
The ability to test on-chip circuitry is essential to ASIC implementations today. However, providing functional tests and verification for on-chip (embedded) memories poses many challenges to the designer. Therefore, an automated built-in self-test block co-existing with the Design Under Test (DUT) is crucial for comprehensive, efficient and robust testing. The target DUT of this thesis project is the state-of-the-art Ultra Low Power (ULP) dual-port SRAMs designed in the ASIC group of the EIT department at Lund University. The thesis starts from system RTL modeling and verification from an earlier project, and then goes through the ASIC design phase in 28 nm FD-SOI technology from ST-Microelectronics. All scripts used during the ASIC design phase are developed in TCL. The design is implemented with multiple power domains (using the CPF approach and introducing level-shifters at crossing points between domains) and multiple clock sources, making it possible to perform various measurements with high reliability on different flavours of a dual-port SRAM. The design dramatically reduces the complexity of verifying and measuring integrated memories. This digital integrated circuit (IC) is developed as an application-specific IC (ASIC) chip for functional verification of integrated memories and for measuring them in different aspects, such as power consumption. The design is automated and can easily be reconfigured in terms of the actions and data required for testing on-chip memories. In other words, it automates and optimizes the generation of what data is stored at which memory location, as well as how that data is treated and interpreted later on. For instance, it refreshes and delivers different operation modes and working patterns to the entire test system in order to fully exercise the integrated memories; this automation is directed by the stimuli supplied to the chip. In addition, the pattern generation for the stimuli is automated in MATLAB. Due to constant advancements in chip manufacturing technology, more devices are squeezed into the same silicon area. This means that monitoring the additional internal signals introduced by the increased circuit complexity requires more dedicated input/output ports (the physical interface between the chip-internal signals and the outside world), which makes future chip bonding and testing difficult and time-consuming. Additionally, memories usually have more pins than other circuit blocks, so the method of dealing with so many pins must also be taken into account. Thus, a few techniques are adopted in this system to help designers deal with all of these issues. Once the ASIC chip has been fabricated and bonded, the on-chip memories can be tested directly on a printed circuit board in a simple and flexible way: once the test instruction input is loaded into the chip, the system updates the system settings and then generates the internal configurations (parameters) so that all operations, modes and instructions related to memory testing are processed automatically.
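The abstract does not name a specific memory test algorithm; as an illustration of the kind of address/operation sequence a memory BIST controller or a MATLAB stimulus script might emit, here is a minimal March C- style generator. The word depth and output format are made up for the example.

```python
# Minimal March C- sequence generator for a word-oriented memory (illustrative).

def march_c_minus(depth):
    """Yield (address, operation, data) tuples for a March C- run over `depth` words."""
    for addr in range(depth):            # M0: up, w0
        yield addr, "w", 0
    for addr in range(depth):            # M1: up, r0 w1
        yield addr, "r", 0
        yield addr, "w", 1
    for addr in range(depth):            # M2: up, r1 w0
        yield addr, "r", 1
        yield addr, "w", 0
    for addr in reversed(range(depth)):  # M3: down, r0 w1
        yield addr, "r", 0
        yield addr, "w", 1
    for addr in reversed(range(depth)):  # M4: down, r1 w0
        yield addr, "r", 1
        yield addr, "w", 0
    for addr in range(depth):            # M5: up, r0
        yield addr, "r", 0

if __name__ == "__main__":
    ops = list(march_c_minus(depth=4))
    print(len(ops), "operations, first few:", ops[:5])
```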
Design-for-delay-testability techniques for high-speed digital circuits
The importance of delay faults is increased by the ever-rising clock rates and decreasing geometry sizes of today's circuits. This thesis focuses on the development of Design-for-Delay-Testability (DfDT) techniques for high-speed circuits and embedded cores. The rising costs of IC testing, and in particular the costs of Automatic Test Equipment, are major concerns for the semiconductor industry. To reverse the trend of rising testing costs, DfDT is becoming more and more important.
A novel scan segmentation design method for avoiding shift timing failure in scan testing
ITC 2011: IEEE International Test Conference, 20-22 Sep. 2011, Anaheim, CA, USA
High power consumption in scan testing can cause undue yield loss, which has increasingly become a serious problem for deep-submicron VLSI circuits. Growing evidence attributes this problem to shift timing failures, which are primarily caused by excessive switching activity in the proximity of clock paths that tends to introduce severe clock skew due to IR-drop-induced delay increase. This paper is the first of its kind to address this critical issue with a novel layout-aware scheme based on scan segmentation design, called LCTI-SS (Low-Clock-Tree-Impact Scan Segmentation). An optimal combination of scan segments is identified for simultaneous clocking so that the switching activity in the proximity of clock trees is reduced while maintaining the average power reduction effect of conventional scan segmentation. Experimental results on benchmark and industrial circuits demonstrate the advantage of the LCTI-SS scheme.
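A rough sketch of the segment-grouping idea behind scan segmentation: given a per-segment estimate of switching activity near clock-tree cells, group the segments that shift together so that the worst group's clock-proximity activity stays low. The greedy bin-balancing below and its inputs are illustrative; they are not the LCTI-SS selection procedure.

```python
# Greedy balancing of scan segments across simultaneously clocked groups (sketch).
import heapq

def group_segments(proximity_toggles, num_groups):
    """proximity_toggles: {segment: weighted toggle count near clock paths}."""
    # Always place the next-heaviest segment in the currently lightest group.
    heap = [(0.0, g, []) for g in range(num_groups)]
    heapq.heapify(heap)
    for seg, w in sorted(proximity_toggles.items(), key=lambda kv: -kv[1]):
        load, g, members = heapq.heappop(heap)
        heapq.heappush(heap, (load + w, g, members + [seg]))
    return sorted(heap)

if __name__ == "__main__":
    toggles = {"seg0": 9.0, "seg1": 7.5, "seg2": 4.0, "seg3": 3.5, "seg4": 1.0}
    for load, g, members in group_segments(toggles, num_groups=2):
        print(f"group {g}: {members} (proximity activity ~ {load})")
```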
Probabilistic Balancing of Fault Coverage and Test Cost in Combined Built-In Self-Test/Automated Test Equipment Testing Environment
As the design and test complexities of SoCs continue to intensify, the balanced utilization of combined built-in self-test (BIST) and automated test equipment (ATE) testing becomes desirable to meet the required minimum fault coverage while maintaining an acceptable cost overhead. The cost associated with combined BIST/ATE testing of such systems mainly consists of 1) the cost induced by the BIST area overhead and 2) the cost induced by the overall testing time. In general, BIST is significantly faster than ATE, but it can provide only limited fault coverage, and deriving higher fault coverage from BIST means additional area cost overhead. On the other hand, higher fault coverage can easily be achieved with ATE, but excessive use of ATE results in additional test time. This paper proposes a novel probabilistic method to balance the fault coverage and the test overhead costs in a combined BIST/ATE test environment. The proposed technique is then applied to two BIST/ATE test scenarios to find the optimal fault-coverage/cost combinations.
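A toy illustration of the trade-off described above, with made-up cost functions rather than the paper's probabilistic model: BIST area cost grows with the coverage asked of the on-chip logic, ATE time cost grows with the residual coverage the tester must supply, and the sweep picks the split minimizing the total.

```python
# Toy BIST/ATE cost trade-off (hypothetical cost functions, illustrative only).

def total_cost(bist_cov, target_cov, area_cost_per_cov=2.0, ate_cost_per_cov=5.0):
    # BIST area cost grows super-linearly with the coverage assigned to BIST;
    # ATE time cost grows with the residual coverage the tester must supply.
    bist_cost = area_cost_per_cov * bist_cov ** 2
    residual = max(target_cov - bist_cov, 0.0)
    ate_cost = ate_cost_per_cov * residual
    return bist_cost + ate_cost

if __name__ == "__main__":
    target = 0.98
    candidates = [i / 100 for i in range(0, 99)]
    best = min(candidates, key=lambda c: total_cost(c, target))
    print(f"BIST coverage {best:.2f}, ATE covers {target - best:.2f},"
          f" total cost {total_cost(best, target):.2f}")
```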