
    Multilevel Clustering Fault Model for IC Manufacture

    A hierarchical approach to the construction of compound distributions for process-induced faults in IC manufacture is proposed. Within this framework, the negative binomial distribution is treated as the level-1 model. The hierarchical approach to fault distribution offers an integrated picture of how fault density varies from region to region within a wafer, from wafer to wafer within a batch, and so on. A theory of compound-distribution hierarchies is developed by means of generating functions. A study of the correlations that naturally appear in microelectronics due to the batch character of IC manufacture is also proposed. Taking these correlations into account is of significant importance for developing statistical quality-control procedures in IC manufacture. With respect to applications, hierarchies of yield means and yield probability-density functions are considered.
    Comment: 10 pages, the International Conference "Micro- and Nanoelectronics-2003" (ICMNE-2003), Zvenigorod, Moscow district, Russia, October 6-10, 2003
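
    To make the level-1 model concrete, the sketch below works through the standard negative-binomial yield calculation via its probability generating function (Stapper-style notation assumed here; the paper's own symbols may differ):

```latex
% Negative-binomial (level-1) fault-count model; assumed notation:
% \lambda = mean fault density, A = chip area, \alpha = clustering parameter.
G(z) = \left[\, 1 + \frac{\lambda A}{\alpha}\,(1 - z) \right]^{-\alpha}
% Yield is the probability of zero faults, i.e. the PGF evaluated at z = 0:
Y = G(0) = \left( 1 + \frac{\lambda A}{\alpha} \right)^{-\alpha}
% In the no-clustering limit \alpha \to \infty this recovers the Poisson
% yield Y = e^{-\lambda A}; small \alpha models strongly clustered faults.
```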

    A Review of Bayesian Methods in Electronic Design Automation

    The utilization of Bayesian methods has been widely acknowledged as a viable solution for tackling various challenges in electronic integrated circuit (IC) design under stochastic process variation, including circuit performance modeling, yield/failure-rate estimation, and circuit optimization. As the post-Moore era brings new technologies (such as silicon photonics and quantum circuits), many of the associated issues are similar to those encountered in electronic IC design and can be addressed using Bayesian methods. Motivated by this observation, we present a comprehensive review of Bayesian methods in electronic design automation (EDA). By doing so, we hope to equip researchers and designers with the ability to apply Bayesian methods to stochastic problems in electronic circuits and beyond.
    Comment: 24 pages, a draft version. We welcome comments and feedback, which can be sent to [email protected]
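
    As a small, self-contained illustration of the kind of machinery such a review covers (a toy example invented here, not taken from the paper), a conjugate Beta-Binomial model estimates die yield from wafer-test counts:

```python
# Toy Bayesian yield estimation with a conjugate Beta-Binomial model.
# All figures are hypothetical and purely illustrative.
from scipy import stats

a_prior, b_prior = 2.0, 2.0   # weakly informative Beta prior on yield
n_dies, n_pass = 500, 437     # assumed wafer-test outcome

# Conjugacy: Beta prior + Binomial likelihood -> Beta posterior.
posterior = stats.beta(a_prior + n_pass, b_prior + (n_dies - n_pass))

print(f"posterior mean yield: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")
```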

    Delay Measurements and Self Characterisation on FPGAs

    This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and transition probability, are proposed for accurate, precise and efficient measurement of propagation delays. The transition-probability-based method is especially attractive, since it requires no modification of the circuit-under-test and few hardware resources, making it an ideal method for physical delay analysis of FPGA circuits. The relentless advancements in process technology have led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this in terms of increased hardware resources for more complex designs, the actual productivity with FPGAs in terms of timing performance (operating frequency, latency and throughput) has lagged behind the potential improvements from the improved technology, due to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of any arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be accurately tracked and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self-measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for cross-chip delay characterisation and accurate delay measurement of both complex combinatorial and sequential circuits, further reinforcing their position in solving the delay variability problem in FPGAs.
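
    The failure-rate idea reduces to locating the shortest clock period at which the circuit-under-test still produces correct outputs. A minimal sketch, assuming a hypothetical measure_failure_rate() hook into the FPGA test harness (not the thesis's actual implementation):

```python
# Sketch of failure-rate-based delay measurement (illustrative only).
# measure_failure_rate(period_ns) is a hypothetical hardware hook that
# clocks the circuit-under-test at the given period and returns the
# fraction of sampled outputs that were captured incorrectly.

def estimate_delay(measure_failure_rate, lo_ns=0.1, hi_ns=10.0, tol_ns=0.001):
    """Binary-search for the shortest failure-free clock period,
    which approximates the propagation delay of the tested path."""
    while hi_ns - lo_ns > tol_ns:
        mid = (lo_ns + hi_ns) / 2
        if measure_failure_rate(mid) > 0.0:
            lo_ns = mid   # failures observed: period is shorter than the delay
        else:
            hi_ns = mid   # failure-free: the delay is at most this period
    return hi_ns
```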

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process the data within and across different abstraction levels via automated learning algorithms. This, in turn, improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML automated approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of AI/ML applications at various abstraction levels in the future to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.
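
    For flavour, the sketch below shows what one such automated learning task might look like (a generic illustration with synthetic data, not a model drawn from the reviewed literature): regressing a path delay against process-variation parameters.

```python
# Generic illustration: learn path delay from process parameters.
# The data are synthetic; names and coefficients are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                # e.g. Vth, Leff, tox variations
delay = 1.0 + 0.4 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * X[:, 0] * X[:, 2]
delay += rng.normal(scale=0.02, size=len(X))  # measurement noise

X_tr, X_te, y_tr, y_te = train_test_split(X, delay, random_state=0)
model = GradientBoostingRegressor().fit(X_tr, y_tr)
print(f"held-out R^2: {model.score(X_te, y_te):.3f}")
```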

    Exploiting Application Behaviors for Resilient Static Random Access Memory Arrays in the Near-Threshold Computing Regime

    Near-threshold computing embodies an intriguing choice for mobile processors due to the promise of superior energy efficiency, extending the battery life of these devices while reducing the peak power draw. However, process, voltage, and temperature variations cause a significantly high failure rate of level-one cache cells in the near-threshold regime, in stark contrast to designs in the super-threshold regime, where fault sites are rare. This thesis work shows that faulty cells in the near-threshold regime are highly clustered in certain regions of the cache. In addition, popular mobile benchmarks are studied to investigate the impact of run-time workloads on the manifestation of timing faults. A technique to mitigate these run-time faults is proposed; the scheme maps frequently used data to healthy cache regions by exploiting application cache behaviors. The results show up to a 78% gain in performance over two other state-of-the-art techniques.
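
    The mapping idea can be sketched in a few lines (hypothetical data structures, not the thesis's mechanism): rank cache regions by health and application cache sets by access frequency, then steer the hottest sets to the healthiest regions.

```python
# Illustrative sketch of frequency-aware cache remapping (names hypothetical).
# faulty_cells[r] : number of faulty cells in cache region r
# accesses[s]     : run-time access count of application cache set s

def build_remap_table(faulty_cells, accesses):
    """Map the most frequently used sets onto the healthiest regions."""
    healthy_first = sorted(range(len(faulty_cells)), key=lambda r: faulty_cells[r])
    hot_first = sorted(range(len(accesses)), key=lambda s: -accesses[s])
    # Pair the i-th hottest set with the i-th healthiest region.
    return dict(zip(hot_first, healthy_first))

remap = build_remap_table(faulty_cells=[0, 7, 2, 12], accesses=[40, 900, 15, 320])
print(remap)  # {1: 0, 3: 2, 0: 1, 2: 3}
```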

    The doctoral research abstracts. Vol:12 2017 / Institute of Graduate Studies, UiTM

    Foreword: Congratulations to IGS on your continuous efforts to publish the 12th issue of the Doctoral Research Abstracts, which highlights research in various disciplines, from science and technology and business and administration to social sciences and humanities. This issue features the abstracts of 71 PhD doctorates who will receive their scrolls in this momentous 87th UiTM convocation ceremony. To the 71 doctorates: you have most certainly done UiTM proud by journeying through the scholarly world with its endless challenges and obstacles, and by persevering right till the very end. Graduands, your success in achieving the highest academic qualification has demonstrated that you have indeed engineered your destiny well. Registering for a PhD program was not a matter of chance but of choice; a choice made to reach self-actualization, the highest level in Maslow's Hierarchy of Needs, while at the same time unleashing your potential in scholarly research. Again, congratulations to all PhD graduates. As you leave the university as alumni, we hope a new relationship will be fostered between you and the faculty in soaring UiTM to greater heights. I wish you all the best in your future endeavors. Keep UiTM close to your heart and be our ambassador wherever you go. / Prof Emeritus Dato' Dr Hassan Said, Vice Chancellor, Universiti Teknologi MARA

    Design and application of reconfigurable circuits and systems


    ENHANCEMENT OF MARKOV RANDOM FIELD MECHANISM TO ACHIEVE FAULT-TOLERANCE IN NANOSCALE CIRCUIT DESIGN

    As MOSFET dimensions scale down to the nanoscale level, the reliability of circuits based on these devices decreases, and designing reliable systems using these nano-devices is becoming challenging. A mechanism therefore has to be devised that can make nanoscale systems perform reliably using unreliable circuit components: fault-tolerant circuit design. Markov Random Field (MRF) is an effective approach for achieving fault tolerance in integrated circuit design, but previous research on this technique suffers from limitations at the design, simulation and implementation levels. As improvements, the MRF fault-tolerance rules have been validated for a practical circuit example. The simulation framework is extended from thermal noise alone to a combination of thermal and random telegraph signal (RTS) noise sources, providing a more rigorous noise environment for the simulation of circuits built on nanoscale technologies. Moreover, an architecture-level improvement has been proposed in the design of previous MRF gates; the redesigned MRF is termed Improved-MRF. The CMOS, MRF and Improved-MRF designs were simulated under highly noisy inputs. On the basis of simulations conducted for several test circuits, Improved-MRF circuits are found to be 400 times, whereas MRF circuits are only 10 times, more noise-tolerant than the CMOS alternatives. The number of transistors, on the other hand, increases by a factor of 9 for MRF and 15 for Improved-MRF compared with CMOS. Therefore, to capture the trade-off between reliability and the area overhead required for a fault-tolerant circuit, a novel parameter called the 'Reliable Area Index' (RAI) is introduced in this research work. The value of RAI is around 1.3 for MRF and 40 for Improved-MRF as compared to the CMOS design, which makes Improved-MRF still about 30 times more efficient than MRF in terms of maintaining a suitable trade-off between reliability and the area consumption of the circuit.
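
    The abstract does not spell out the RAI formula; one plausible reading, stated here purely as an assumption, is a ratio of noise-tolerance gain to transistor-count overhead, each normalised to CMOS:

```latex
% Assumed form of the Reliable Area Index (not quoted from the thesis):
\mathrm{RAI} = \frac{R_{\text{design}} / R_{\text{CMOS}}}
                    {N_{\text{design}} / N_{\text{CMOS}}}
% R = noise tolerance, N = transistor count. With the abstract's figures
% (10x tolerance at 9x transistors for MRF; 400x at 15x for Improved-MRF)
% this gives roughly 1.1 and 27 -- the same order as, though not equal to,
% the reported RAI values of about 1.3 and 40, so the thesis's exact
% definition likely differs in detail.
```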

    Investigation into yield and reliability enhancement of TSV-based three-dimensional integration circuits

    Three-dimensional integrated circuits (3D ICs) have been acknowledged as a promising technology for overcoming the interconnect delay bottleneck brought about by continuous CMOS scaling. Recent research shows that through-silicon vias (TSVs), which act as vertical links between layers, pose yield and reliability challenges for 3D design. This thesis presents three original contributions. The first contribution is a grouping-based technique to improve the yield of 3D ICs under manufacturing TSV defects, in which regular and redundant TSVs are partitioned into groups. In each group, signals can select good TSVs through rerouting multiplexers, avoiding defective TSVs. The grouping ratio (regular to redundant TSVs in one group) has an impact on yield and hardware overhead. Mathematical probabilistic models are presented for yield analysis under both independent and clustered defect distributions. Simulation results using MATLAB show that, for a given number of TSVs and a given TSV failure rate, careful selection of the grouping ratio achieves 100% yield at minimal hardware cost (number of multiplexers and redundant TSVs) in comparison to a design that does not exploit TSV grouping ratios. The second contribution is an efficient online fault-tolerance technique based on redundant TSVs that detects TSV manufacturing defects and addresses thermally induced reliability issues. The proposed technique accounts for both fault detection and recovery in the presence of three TSV defects: voids, delamination between TSV and landing pad, and TSV short-to-substrate. Simulations using HSPICE and ModelSim validate fault detection and recovery. Results show that regular and redundant TSVs can be divided into groups to minimise area overhead without affecting the fault-tolerance capability of the technique. Synthesis results using a 130-nm design library show that 100% repair capability can be achieved with low area overhead (4% in the best case). The last contribution is a technique that jointly considers temperature mitigation and fault tolerance without introducing additional redundant TSVs. This is achieved by reusing the spare TSVs that are routinely deployed to improve yield and reliability in 3D ICs. The proposed technique consists of two steps: a TSV determination step, which finds an optimal partition of regular and spare TSVs into groups, and a TSV placement step, which targets temperature mitigation while optimizing total wirelength and routing difference. Simulation results show that, using the proposed technique, 100% repair capability is achieved across all five benchmarks with an average temperature reduction of 75.2°C (34.1%) (best case 99.8°C (58.5%)), while increasing wirelength by only a small amount.
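
    For the independent-defect case, the grouping yield model can be sketched as standard redundancy analysis (notation assumed here; the thesis's clustered-defect model would replace the binomial with a compound distribution). A group of r regular and s redundant TSVs survives if at most s of its r + s TSVs are defective:

```latex
% Assumed notation: p = TSV failure probability, r regular and s redundant
% TSVs per group, G independent groups per chip.
Y_{\text{group}} = \sum_{k=0}^{s} \binom{r+s}{k}\, p^{k}\, (1-p)^{\,r+s-k}
% Chip-level TSV yield under independent groups:
Y_{\text{chip}} = \bigl( Y_{\text{group}} \bigr)^{G}
```

    Increasing s (a lower grouping ratio) raises Y_group at the cost of extra redundant TSVs and multiplexers, which is exactly the trade-off that grouping-ratio selection navigates.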