232 research outputs found

    Memory built-in self-repair and correction for improving yield: a review

    Nanometer memories are highly prone to defects due to their dense structure, making memory built-in self-repair a must-have feature for improving yield. Today's system-on-chips contain memories occupying as much as 90% of the chip area. Shrinking technology imposes stricter design rules on memories, making them more prone to manufacturing defects. Further, 3D-stacked memories expose the system to newer defects such as those arising from through-silicon vias (TSVs) and micro-bumps. Growing memory size also increases the rate of soft errors during system operation. Multiple memory repair techniques based on redundancy and correction codes have been presented to recover from such defects and prevent system failures. This paper reviews recently published memory repair methodologies, including various built-in self-repair (BISR) architectures, repair analysis algorithms, in-system repair, and soft-error handling using error correcting codes (ECC). It classifies these techniques by method and usage. Finally, it reviews the evaluation methods used to determine the effectiveness of the repair algorithms. The paper aims to survey these methodologies and prepare a platform for developing repair methods for upcoming generations of memories.
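
    Most redundancy-based repair flows of the kind surveyed begin with a redundancy-analysis (RA) step. As a hedged illustration of that step, here is a minimal sketch of the textbook "must-repair" preprocessing rule (a generic construction, not an algorithm from any specific surveyed paper):

```python
# Minimal sketch of the classic "must-repair" rule used as a preprocessing
# step by many redundancy-analysis algorithms (illustrative only).
from collections import Counter

def must_repair(faults, spare_rows, spare_cols):
    """faults: set of (row, col) coordinates of faulty cells.
    Returns (rows_used, cols_used, remaining_faults), or None if the
    must-repair demand alone already exceeds the available redundancy."""
    faults = set(faults)
    rows_used, cols_used = [], []
    changed = True
    while changed:
        changed = False
        row_cnt = Counter(r for r, _ in faults)
        for r, n in row_cnt.items():
            # A row with more faults than the spare columns left can only
            # be repaired by a spare row.
            if n > spare_cols - len(cols_used):
                if len(rows_used) == spare_rows:
                    return None
                rows_used.append(r)
                faults = {f for f in faults if f[0] != r}
                changed = True
                break
        if changed:
            continue
        col_cnt = Counter(c for _, c in faults)
        for c, n in col_cnt.items():
            # Symmetrically, such a column can only take a spare column.
            if n > spare_rows - len(rows_used):
                if len(cols_used) == spare_cols:
                    return None
                cols_used.append(c)
                faults = {f for f in faults if f[1] != c}
                changed = True
                break
    return rows_used, cols_used, faults
```

    Faults that survive this step still need a search-based allocation, which is where the surveyed RA algorithms chiefly differ.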

    Voltage and Retention Storage Allocation Problems for Static RAM and Power-Gated Circuits

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, August 2021. Advisor: Taewhan Kim.
    Low-power operation of a chip is an important issue, and its importance grows as process technology advances. This dissertation addresses low-power operation methodologies for each of the SRAM and the logic constituting a chip. First, we propose a methodology to infer, from measurements of a monitoring circuit, the minimum operating voltage at which no SRAM failure occurs in any SRAM block of a chip operating in the near-threshold voltage (NTV) regime. Operating a chip in the NTV regime is one of the most effective ways to increase energy efficiency, but for SRAM it is difficult to lower the operating voltage because of SRAM failures. However, since each chip is affected by process variation differently, the minimum operating voltage also differs from chip to chip; if it can be inferred through monitoring, energy efficiency can be increased by applying a different voltage to the SRAM of each chip. Specifically, (1) we propose to infer the minimum operating voltage of SRAM in the design-infrastructure development phase and to assign the voltage using measurements of an SRAM monitor in the silicon production phase; (2) we define an SRAM monitor, and the quantities it monitors, that captures the process variation of SRAM blocks including the SRAM bitcells and peripheral circuits; (3) we infer, from the SRAM monitor measurements, the minimum operating voltage at which no read, write, or access failures occur in any SRAM block of the chip under a target confidence level. Experiments with benchmark circuits confirm that applying per-chip voltages to SRAM blocks, as inferred by the proposed methodology, reduces the power consumption of the SRAM bitcell arrays compared with applying the same voltage to all chips, while maintaining the same yield. Second, we propose a methodology that resolves a problem of conventional retention storage allocation methods for power-gated circuits and thereby further reduces leakage power consumption. Conventional allocation methods cannot fully exploit the advantage of multi-bit retention storage because they unavoidably allocate retention storage to every flip-flop with a mux-feedback loop. Specifically, (1) we find a condition under which a mux-feedback loop can be disregarded during retention storage allocation; (2) using this condition, we minimize the retention storage of circuits containing many flip-flops with mux-feedback loops; (3) we find a condition for removing some of the retention storage already allocated to flip-flops, and use it to reduce the retention storage further. Experiments with benchmark circuits confirm that the proposed methodology allocates less retention storage than state-of-the-art methods, occupying less cell area and consuming less power.
    Table of contents:
    1 Introduction
      1.1 Low Voltage SRAM Monitoring Methodology
      1.2 Retention Storage Allocation on Power Gated Circuit
      1.3 Contributions of this Dissertation
    2 SRAM On-Chip Monitoring Methodology for High Yield and Energy Efficient Memory Operation at Near Threshold Voltage
      2.1 SRAM Failures
        2.1.1 Read Failure
        2.1.2 Write Failure
        2.1.3 Access Failure
        2.1.4 Hold Failure
      2.2 SRAM On-chip Monitoring Methodology: Bitcell Variation
        2.2.1 Overall Flow
        2.2.2 SRAM Monitor and Monitoring Target
        2.2.3 Vfail to Vddmin Inference
      2.3 SRAM On-chip Monitoring Methodology: Peripheral Circuit IR Drop and Variation
        2.3.1 Consideration of IR Drop
        2.3.2 Consideration of Peripheral Circuit Variation
        2.3.3 Vddmin Prediction including Access Failure Prohibition
      2.4 Experimental Results
        2.4.1 Vddmin Considering Read and Write Failures
        2.4.2 Vddmin Considering Read/Write and Access Failures
        2.4.3 Observation for Practical Use
    3 Allocation of Always-On State Retention Storage for Power Gated Circuits - Steady State Driven Approach
      3.1 Motivations and Analysis
        3.1.1 Impact of Self-loop on Power Gating
        3.1.2 Circuit Behavior Before Sleeping
        3.1.3 Wakeup Latency vs. Retention Storage
      3.2 Steady State Driven Retention Storage Allocation
        3.2.1 Extracting Steady State Self-loop FFs
        3.2.2 Allocating State Retention Storage
        3.2.3 Designing and Optimizing Steady State Monitoring Logic
        3.2.4 Analysis of the Impact of Steady State Monitoring Time on the Standby Power
      3.3 Retention Storage Refinement Utilizing Steadiness
        3.3.1 Extracting Flip-flops for Retention Storage Refinement
        3.3.2 Designing State Monitoring Logic and Control Signals
      3.4 Experimental Results
        3.4.1 Comparison of State Retention Storage
        3.4.2 Comparison of Power Consumption
        3.4.3 Impact on Circuit Performance
        3.4.4 Support for Immediate Power Gating
    4 Conclusions
      4.1 Chapter 2
      4.2 Chapter 3
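
    As a rough illustration of inference step (3), here is a minimal statistical sketch (my own construction under a simple normal-model assumption, not the dissertation's actual inference procedure): fit a distribution to per-cell failure voltages estimated from the SRAM monitor, then choose the lowest supply at which every bitcell on the chip passes with the target confidence.

```python
# Hedged sketch: per-chip Vddmin from monitor-derived failure-voltage
# samples, assuming per-cell failure voltage Vfail ~ Normal(mu, sigma)
# and independent cells (both are simplifying assumptions).
from math import erf, sqrt
from statistics import mean, stdev

def norm_ppf(p, lo=-10.0, hi=10.0):
    """Inverse standard-normal CDF by bisection (avoids external deps)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if 0.5 * (1 + erf(mid / sqrt(2))) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def infer_vddmin(vfail_samples, n_cells, confidence=0.99):
    mu, sigma = mean(vfail_samples), stdev(vfail_samples)
    # Require P(all n_cells pass) >= confidence, i.e. a per-cell pass
    # probability of confidence ** (1 / n_cells).
    p_cell = confidence ** (1.0 / n_cells)
    return mu + norm_ppf(p_cell) * sigma

# e.g. infer_vddmin([0.52, 0.55, 0.50, 0.57], n_cells=2**20)
```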

    Avoiding core's DUE & SDC via acoustic wave detectors and tailored error containment and recovery

    The trend of transistor downsizing and operating-voltage scaling has made processor chips more sensitive to radiation, making soft errors an important challenge. New reliability techniques for handling soft errors in the logic and memories that allow meeting the desired failures-in-time (FIT) target are key to keep harnessing the benefits of Moore's law. The failure to scale the soft-error rate caused by particle strikes may soon limit the total number of cores that can run at the same time. This paper proposes a lightweight and scalable architecture to eliminate silent data corruption errors (SDC) and detected unrecoverable errors (DUE) of a core. The architecture uses acoustic wave detectors for error detection. We propose to recover by confining errors within the cache hierarchy, which lets us deal with the relatively long detection latencies. Our results show that the proposed mechanism protects the whole core (logic, latches, and memory arrays) while incurring a performance overhead as low as 0.60%. © 2014 IEEE. Peer reviewed. Postprint (author's final draft).
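
    The containment idea can be sketched in a few lines (my reading of the mechanism, with invented names and numbers): a dirty line may leave the cache hierarchy only after the detectors' worst-case detection latency has elapsed, so a late-reported strike can still be contained and recovered.

```python
# Hedged sketch of error containment in the cache hierarchy (illustrative;
# DETECTION_LATENCY and all names are assumptions, not the paper's).
DETECTION_LATENCY = 10_000  # cycles; stand-in for worst-case detector latency

class ContainedCache:
    def __init__(self):
        self.lines = {}  # addr -> (data, cycle_of_last_write)

    def write(self, addr, data, cycle):
        self.lines[addr] = (data, cycle)

    def can_write_back(self, addr, cycle):
        # Hold dirty data inside the hierarchy while an acoustic detector
        # could still report a strike that corrupted it.
        _, written = self.lines[addr]
        return cycle - written >= DETECTION_LATENCY
```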

    Reduced Galloping Column Algorithm For Memory Testing

    Memory testing is increasingly important nowadays, especially in SoC design, due to the rapid growth of memory density and design complexity within smaller chip area and low-power budgets. Test time in memory testing is thus a key challenge for accelerating time to market, achieving high yield, and keeping test cost low in high-volume manufacturing. Test-time reduction matters in industry because test cost is directly related to each product's validation time on the tester. Many algorithms are used for memory testing, including the galloping column (GalCol) algorithm, which is important for detecting unique coupling and transition faults. However, the existing GalCol algorithm takes a huge amount of test time due to its complexity. To overcome this, reduced GalCol algorithms with a solid data background are proposed. The reduced GalCol algorithms behave like the original GalCol algorithm, the major difference being the number of galloped target cells: galloping is reduced to the first and last 8, 16, or 32 cells for every base cell. The project progressed in two stages: software development using Intel software and a Synopsys tool, and test implementation in the Intel production flow. The algorithms were verified on 15 units of 64KB L2 SRAM memory. Test-time reduction with consistent pass/fail results was achieved in the reduced GalCol tests: the GalCol X8 algorithm obtains the highest test-time reduction of about 79.5% at 600MHz and 75.7% at 1.6GHz, with pass/fail results consistent with the original GalCol algorithm in the HVM test flow.
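
    As a hedged sketch of the algorithm family (my reconstruction from the description above, not the production implementation), a galloping-column pass ping-pongs reads between a base cell and each target cell in the same column, and the reduced variants simply truncate the target list to the first and last k cells:

```python
# Illustrative galloping-column pass over a dict-based cell model.
# reduced_k=None gives the full behavior for one base cell;
# reduced_k=8/16/32 gives the reduced variants described above.
def galcol_pass(mem, rows, col, base_row, reduced_k=None):
    targets = [r for r in range(rows) if r != base_row]
    if reduced_k is not None:
        targets = targets[:reduced_k] + targets[-reduced_k:]
    mem[(base_row, col)] = 1                 # base cell holds the test value
    for r in targets:
        mem[(r, col)] = 0                    # write background to target
        if mem[(base_row, col)] != 1:        # read base (ping)
            return False                     # coupling fault flipped the base
        if mem[(r, col)] != 0:               # read target (pong)
            return False
    return True
```

    Cutting the gallop from all rows to 2k rows per base cell is where the reported test-time reductions come from.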

    FPGA ARCHITECTURE AND VERIFICATION OF BUILT IN SELF-TEST (BIST) FOR 32-BIT ADDER/SUBTRACTER USING DE0-NANO FPGA AND ANALOG DISCOVERY 2 HARDWARE

    The integrated circuit (IC) is an integral part of everyday modern technology, and its application is very attractive to hardware and software design engineers because of its versatility, integration, power consumption, cost, and board-area reduction. ICs come in many types, such as Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), System on Chip (SoC) architectures, Digital Signal Processors (DSPs), microcontrollers (μC), and more. With technology demand focused on faster, lower-power, more efficient IC applications, design engineers face tremendous challenges in developing and testing integrated circuits that guarantee functionality, high fault coverage, and reliability, as transistor technology shrinks to the point where manufacturing defects affect yield and thereby increase part cost. The competitive IC market pressures manufacturers to develop and market ICs with a relatively quick turnaround, which in turn requires design and verification engineers to develop integrated self-test structures that ensure a fault-free, quality product is delivered to market. 70-80% of IC design effort is spent on verification and testing to ensure high quality and reliability for the end user. To test complex and sophisticated IC designs, verification engineers must produce laborious and costly test fixtures, which affects the cost of the part on the competitive market. To avoid passing on part-cost increases due to yield and test time, and to keep up with the competitive market, many IC design engineers are moving away from complex external test fixtures and are focusing on integrating Built-in Self-Test (BIST) or Design for Test (DFT) techniques into ICs, which reduces time to market while still guaranteeing high coverage. The BIST architecture, as well as the application of the IC, must be understood before development. The architecture of FPGAs is elaborated in this paper, followed by several BIST techniques and applications of BIST to FPGAs, SoCs, and analog-to-digital (ADC) or digital-to-analog (DAC) converters integrated on ICs. The paper concludes with verification of BIST for the 32-bit adder/subtracter designed in Quartus II software, using the Analog Discovery 2 module as stimulus and the DE0-NANO FPGA board for verification.
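
    The BIST structure described here can be sketched at a behavioral level (a hedged illustration with assumed details, not the paper's exact Quartus design): an LFSR supplies pseudo-random operands to the adder/subtracter, and a signature register compacts the responses into a single pass/fail compare.

```python
# Behavioral sketch of LFSR-stimulus BIST for a 32-bit adder/subtracter.
def lfsr32(state):
    # 32-bit Fibonacci LFSR with taps 32, 22, 2, 1 (a known maximal-length
    # tap set); state must be nonzero.
    bit = ((state >> 31) ^ (state >> 21) ^ (state >> 1) ^ state) & 1
    return ((state << 1) | bit) & 0xFFFFFFFF

def addsub32(a, b, sub):
    return ((a - b) if sub else (a + b)) & 0xFFFFFFFF

def bist(dut=addsub32, n_patterns=1000, seed=0xACE1):
    state, signature = seed, 1
    for i in range(n_patterns):
        state = lfsr32(state); a = state
        state = lfsr32(state); b = state
        response = dut(a, b, i & 1)               # alternate add and subtract
        signature = lfsr32(signature ^ response)  # MISR-style compaction
    return signature  # compare against a golden signature from a known-good run
```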

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 × 10⁻⁶, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10⁻⁶. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing; it is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and to produce more accurate results.
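
    The yield figures invite a quick sanity check. Below is a back-of-the-envelope model (my sketch; the research develops far more detailed yield and cost models): a block of n devices is defect-free with probability (1 - p)^n, and replicating the block lets any one good copy suffice.

```python
# Back-of-the-envelope yield model with independent device failures
# (illustrative numbers; not the dissertation's CAM-cache model).
def block_yield(n_devices, p_fail):
    return (1.0 - p_fail) ** n_devices

def replicated_yield(n_devices, p_fail, copies):
    # At least one of the copies is completely defect-free.
    return 1.0 - (1.0 - block_yield(n_devices, p_fail)) ** copies

# e.g. a hypothetical 100,000-device block at p_fail = 3e-6:
#   block_yield(100_000, 3e-6)                -> ~0.74
#   replicated_yield(100_000, 3e-6, copies=3) -> ~0.98
```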

    A Totally Self-Checking Comparator Implementable on FPGAs Using BIST Technology

    An integrated circuit's (IC) manufacturing tests may be made easier to administer with the use of design for testability (DFT). Embedded memory tests in integrated circuits make use of the totally self-checking (TSC) approach. In this article we present the TSC method and several algorithms used in TSC for testing embedded memory. An address generator, controller, comparator, and memory are the four main components of this kind of memory TSC technology. The paper details three memory TSC controller implementation techniques. The memory TSC controller is modelled in Verilog HDL, and its correctness is checked using the RTL compiler before synthesis. We provide a way to build TSC comparators for TSC systems implemented on FPGAs that can be used online. By directly observing the output of each lookup table (LUT), this approach can perform comprehensive online diagnosis of all LUTs; it entails mapping the basic components of the comparator so they can be exercised with a limited number of test patterns. With our technique, exhaustive diagnosis is achieved with a small number of test patterns, on the order of O(n) (where n is the number of inputs to the comparator), with 100% coverage, even when only the LUT's specification, and not its exact structure, is known. FPGAs are thus a good fit for systems that need very high reliability. Our experiments also included single-event upsets (SEUs) induced by neutron radiation to validate the soft-error rate (SER) in an SRAM-based FPGA.
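
    For context, TSC comparators are commonly assembled from the classical two-rail checker cell sketched below (a standard construction shown for illustration; the paper's exact comparator may differ). Each signal travels as a complementary pair, and the output pair stays complementary only if every input pair does.

```python
# Classical two-rail checker cell: valid code words on each rail pair are
# (0, 1) and (1, 0); any invalid pair propagates to the output.
def two_rail_cell(x, y):
    x0, x1 = x
    y0, y1 = y
    z0 = (x0 & y0) | (x1 & y1)
    z1 = (x0 & y1) | (x1 & y0)
    return (z0, z1)

def two_rail_tree(pairs):
    """Fold many rail pairs into one; the result is a valid pair iff all
    inputs are valid (assuming the checker itself is fault-free)."""
    out = pairs[0]
    for p in pairs[1:]:
        out = two_rail_cell(out, p)
    return out

# e.g. two_rail_tree([(0, 1), (1, 0), (0, 1)]) -> (0, 1), a valid code word
```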

    Fault-Tolerant FPGA for Mission-Critical Applications

    Field Programmable Gate Arrays (FPGAs) play a great role in electronic circuit design, specifically in safety-critical applications, because of their high performance, reconfigurability, and low development cost. FPGAs are used in many applications such as data processing, networking, automotive, space, and industrial applications. Moving to smaller feature sizes in the latest FPGA architectures negatively impacts the reliability of such applications, increasing the need for fault-tolerant techniques that improve reliability and extend the system lifetime of FPGA-based applications. In this thesis, two fault-tolerant techniques for FPGA-based applications are proposed, each with a built-in fault-detection region. A low-cost fault-detection scheme is proposed for detecting faults using the fault-detection region shared by both schemes. The scheme primarily detects open faults in the programmable interconnect resources of the FPGA; in addition, stuck-at faults and single-event upsets (SEUs) can be detected. For fault recovery, each scheme has its own approach. The first uses a spare module and a 2-to-1 multiplexer to recover from any detected fault. The second recovers from any detected fault using the FPGA's Partial Reconfiguration (PR) capability: it identifies a Partially Reconfigurable block (P_b) in the FPGA that is used in the recovery process once the first faulty module is identified in the system, so only one location is needed to recover from faults in any of the FPGA's modules or interconnects. Simulation results show that both techniques can detect and recover from open faults, and can also detect stuck-at faults and SEUs. Finally, both techniques require low area overhead.
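
    The first recovery approach lends itself to a compact behavioral sketch (the names and the duplicate-and-compare detector below are my assumptions; the thesis uses a dedicated fault-detection region): once a fault is latched, a 2-to-1 multiplexer permanently steers the output to the spare module.

```python
# Hedged behavioral sketch of spare-module recovery through a 2-to-1 mux.
class SpareMuxRecovery:
    def __init__(self, primary, spare):
        self.primary, self.spare = primary, spare
        self.fault_latched = False       # drives the mux select line

    def run(self, x):
        p_out, s_out = self.primary(x), self.spare(x)
        # Stand-in detector: duplicate and compare. The thesis instead uses
        # a built-in detection region targeting open faults, stuck-at
        # faults, and SEUs.
        if not self.fault_latched and p_out != s_out:
            self.fault_latched = True
        return s_out if self.fault_latched else p_out

# usage: mod = SpareMuxRecovery(lambda x: x ^ 1, lambda x: x ^ 1); mod.run(3)
```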

    Techniques for Improving Security and Trustworthiness of Integrated Circuits

    The integrated circuit (IC) development process is becoming increasingly vulnerable to malicious activities because untrusted parties can be involved in the IC development flow. Four typical problems impact the security and trustworthiness of ICs used in military, financial, transportation, and other critical systems: (i) Malicious inclusions and alterations, known as hardware Trojans, can be inserted into a design by modifying it during GDSII development and fabrication. Hardware Trojans in ICs may cause malfunctions, lower reliability, leak confidential information to adversaries, or even destroy the system under specifically designed conditions. (ii) The number of circuit-related counterfeiting incidents reported by component manufacturers has increased significantly over the past few years, with recycled ICs contributing the largest share of reported incidents. Since recycled ICs have already been used in the field, their performance and reliability have been degraded by aging effects and the harsh recycling process. (iii) Reverse engineering (RE) is the process of extracting a circuit's gate-level netlist and/or inferring its functionality. RE threatens a design because attackers can steal and pirate it (IP piracy), identify the device technology, or facilitate other hardware attacks. (iv) Traditional tools for uniquely identifying devices are vulnerable to non-invasive or invasive physical attacks; securing the ID/key is of utmost importance, since leakage of even a single device ID/key could be exploited by an adversary to hack other devices or produce pirated ones. In this work, we have developed a series of design and test methodologies to deal with these four challenging issues and thus enhance the security, trustworthiness, and reliability of ICs. The techniques proposed in this thesis include: a path-delay fingerprinting technique for detection of hardware Trojans, recycled ICs, and other types of counterfeit ICs, including remarked, overproduced, and cloned ICs with their unique identifiers; a Built-In Self-Authentication (BISA) technique to prevent hardware Trojan insertion by untrusted fabrication facilities; an efficient and secure split manufacturing approach via Obfuscated Built-In Self-Authentication (OBISA) to prevent reverse engineering by untrusted fabrication facilities; and a novel bit-selection approach for obtaining the most reliable bits for an SRAM-based physical unclonable function (PUF) across environmental conditions and silicon aging effects.
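
    The PUF bit-selection idea admits a simple sketch (my rendering of the general approach, not the thesis' exact algorithm): power up the SRAM many times across environmental corners and keep only the bits that never flip.

```python
# Hedged sketch of reliable-bit selection for an SRAM PUF.
def select_reliable_bits(powerup_trials):
    """powerup_trials: list of equal-length 0/1 lists, one per power-up
    (ideally gathered across voltage/temperature corners and aging).
    Returns indices of bits whose value never changed."""
    n_bits = len(powerup_trials[0])
    return [i for i in range(n_bits)
            if len({trial[i] for trial in powerup_trials}) == 1]

# e.g. select_reliable_bits([[1, 0, 1, 1],
#                            [1, 1, 1, 1],
#                            [1, 0, 1, 1]]) -> [0, 2, 3]
```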

    Design Disjunction for Resilient Reconfigurable Hardware

    Contemporary reconfigurable hardware devices can achieve the high performance, power efficiency, and adaptability required to meet a wide range of design goals. With scaling challenges facing current complementary metal oxide semiconductor (CMOS) technology, new concepts and methodologies supporting efficient adaptation to handle reliability issues are becoming increasingly prominent. Reconfigurable hardware, with its ability to realize self-organization features, is expected to play a key role in designing future dependable hardware architectures. However, the exponential increase in density and complexity of current commercial SRAM-based field-programmable gate arrays (FPGAs) has escalated the overhead associated with dynamic runtime design adaptation. Traditionally, static modular redundancy techniques are used to surmount this limitation; however, they can incur substantial overheads in both area and power. To achieve a better trade-off among performance, area, power, and reliability, this research proposes design-time approaches that enable fine selection of the redundancy level based on target reliability goals and autonomous adaptation to runtime demands. To this end, three studies were conducted. First, a graph- and set-theoretic approach named Hypergraph-Cover Diversity (HCD) is introduced as a preemptive design technique that shifts the dominant costs of resiliency to design time. In particular, union-free hypergraphs are exploited to partition the reconfigurable resource pool into highly separable subsets of resources, each of which can be utilized by the same synthesized application netlist. The diverse implementations provide reconfiguration-based resilience throughout the system lifetime while avoiding the significant overheads associated with runtime placement and routing phases. Evaluation on a Motion-JPEG image compression core using a Xilinx 7-series-based FPGA hardware platform has demonstrated the potential of the proposed FT method to achieve 37.5% area saving and up to 66% reduction in power consumption compared to the frequently used TMR scheme while providing superior fault tolerance. Second, Design Disjunction, based on non-adaptive group testing, is developed to realize a low-overhead fault-tolerant system capable of self-testing and self-recovery using runtime partial reconfiguration. Reconfiguration is guided by resource-grouping procedures that employ non-linear measurements given by the constructive property of f-disjunctness to extend runtime resilience to a large fault space and realize a favorable range of trade-offs. Disjunct designs are created using the developed mosaic convergence algorithm such that at least one configuration in the library evades any occurrence of up to d resource faults, where d is lower-bounded by f. Experimental results for a set of MCNC and ISCAS benchmarks have demonstrated f-diagnosability at the individual slice level with an average isolation resolution of 96.4% (94.4%) for f=1 (f=2), while incurring an average critical-path delay impact of only 1.49% and area cost roughly comparable to conventional 2-MR approaches. Finally, the proposed Design Disjunction method is evaluated as a design-time method to improve timing yield in the presence of large random within-die (WID) process variations for an application with a moderately high production capacity.
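
    The runtime use of a disjunct configuration library can be sketched in a few lines (an illustrative rendering of the property described above, with invented data structures): because the library is built so that some configuration avoids any set of up to d faulty resources, recovery reduces to scanning for a configuration disjoint from the diagnosed fault set.

```python
# Hedged sketch: selecting a pre-compiled configuration that evades faults.
def find_working_config(config_resources, faulty):
    """config_resources: list of sets of resource indices used by each
    pre-compiled configuration; faulty: set of diagnosed faulty resources.
    Returns the index of a configuration avoiding all faults, else None."""
    for i, used in enumerate(config_resources):
        if used.isdisjoint(faulty):
            return i
    return None  # more faults than the library was provisioned for (> d)

# e.g. find_working_config([{0, 1}, {2, 3}, {4, 5}], faulty={1, 2}) -> 2
```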