2,788 research outputs found

    Delay Measurements and Self Characterisation on FPGAs

    This thesis examines new timing measurement methods for self delay characterisation of Field-Programmable Gate Array (FPGA) components and for delay measurement of complex circuits on FPGAs. Two novel measurement techniques, based on analysis of a circuit's output failure rate and of its transition probability, are proposed for accurate, precise and efficient measurement of propagation delays. The transition-probability-based method is especially attractive, since it requires no modification of the circuit under test and consumes few hardware resources, making it an ideal method for physical delay analysis of FPGA circuits. Relentless advancement in process technology has led to smaller and denser transistors in integrated circuits. While FPGA users benefit from this through increased hardware resources for more complex designs, actual gains in FPGA timing performance (operating frequency, latency and throughput) have lagged behind the potential of the improved technology, owing to delay variability in FPGA components and the inaccuracy of the timing models used in FPGA timing analysis. The ability to measure the delay of an arbitrary circuit on an FPGA offers many opportunities for on-chip characterisation and physical timing analysis, allowing delay variability to be tracked accurately and variation-aware optimisations to be developed, reducing the productivity gap observed in today's FPGA designs. The measurement techniques are developed into complete self-measurement and characterisation platforms in this thesis, demonstrating their practical use in actual FPGA hardware for cross-chip delay characterisation and accurate delay measurement of both complex combinatorial and sequential circuits, further reinforcing their value in addressing the delay variability problem in FPGAs.
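
    A minimal sketch of how a failure-rate-based delay measurement can work, assuming a simple setup in which a transition is launched every clock cycle and the clock period is swept: once the period drops below the (jittery) path delay, captures start to fail, and the 50%-failure period estimates the propagation delay. All names and values below are illustrative, not the thesis's implementation.

```python
import numpy as np

# Hypothetical sketch of failure-rate-based delay measurement: a
# transition is launched each cycle and a capture register samples
# after one clock period. If the period is shorter than the (jittery)
# path delay, the capture fails. The 50%-failure period is taken as
# the estimate of the propagation delay.

rng = np.random.default_rng(0)
TRUE_DELAY_NS = 2.50   # assumed path delay (unknown to the method)
JITTER_NS = 0.02       # assumed Gaussian delay jitter

def failure_rate(period_ns, trials=10_000):
    """Fraction of cycles in which the transition arrives too late."""
    delays = rng.normal(TRUE_DELAY_NS, JITTER_NS, trials)
    return np.mean(delays > period_ns)

periods = np.arange(2.40, 2.60, 0.005)
rates = np.array([failure_rate(p) for p in periods])

# Estimate the delay as the period where the failure rate crosses 50%.
# rates falls with increasing period, so reverse both for np.interp.
estimate = np.interp(0.5, rates[::-1], periods[::-1])
print(f"estimated delay ~ {estimate:.3f} ns (true {TRUE_DELAY_NS} ns)")
```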

    Redundancy Optimization for Clock-Free Nanowire Crossbar Architecture

    In this paper, a method is proposed to find the optimal dimensions of the Programmable Gate Macro Block (PGMB) in a clock-free nanowire crossbar architecture. A PGMB is a nanowire crossbar matrix with a discrete number of rows and columns onto which NCL (Null Convention Logic) gates can be programmed. The method uses inherent redundancy to route around defective crosspoints. A defect-free 6 × 10 crossbar can be used to program any of the 27 threshold gates. Because of imperfections and variations in nanoscale manufacturing processes, high defect densities are anticipated; such defects must therefore be located during testing, and the logic rerouted around them to maintain proper functionality. This paper discusses this problem and seeks an optimal solution through simulations. In the final submission, more effective logic mapping techniques will be proposed and validated.
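
    As a toy illustration of routing around defective crosspoints, the sketch below brute-forces a row/column assignment that places a gate's required crosspoints only on good junctions of a 6 × 10 crossbar. The gate pattern, defect map, and search strategy are hypothetical stand-ins, not the paper's method.

```python
from itertools import permutations

# Hypothetical sketch of defect-avoidance mapping: a gate's
# personality (the set of crosspoints it must program) is placed on
# the crossbar so that no required crosspoint lands on a defective
# junction. Unused rows/columns act as redundancy.

def can_map(required, defects, n_rows, n_cols):
    """Brute-force search for a row/column assignment avoiding defects.
    `required` is a set of (r, c) crosspoints of the gate;
    `defects` is a set of (r, c) defective junctions."""
    gate_rows = sorted({r for r, _ in required})
    gate_cols = sorted({c for _, c in required})
    for row_map in permutations(range(n_rows), len(gate_rows)):
        for col_map in permutations(range(n_cols), len(gate_cols)):
            placed = {(row_map[gate_rows.index(r)],
                       col_map[gate_cols.index(c)]) for r, c in required}
            if not placed & defects:
                return True
    return False

# A toy 2-input gate needing 4 crosspoints, mapped onto a 6 x 10
# crossbar with two defective junctions (values illustrative).
gate = {(0, 0), (0, 1), (1, 0), (1, 1)}
defects = {(0, 0), (3, 7)}
print(can_map(gate, defects, n_rows=6, n_cols=10))  # True: rerouted
```

    Brute force is tractable at PGMB scale; larger crossbars would call for matching-based mappers rather than exhaustive search.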

    Analyzing big time series data in solar engineering using features and PCA

    In solar engineering, we encounter big time series data, such as satellite-derived irradiance data and string-level measurements from a utility-scale photovoltaic (PV) system. While storing and hosting big data are certainly possible with today's data storage technology, it is challenging to visualize and analyze the data effectively and efficiently. In this work, we consider a data analytics algorithm to mitigate some of these challenges. The algorithm computes a set of generic and/or application-specific features to characterize each time series, and subsequently uses principal component analysis (PCA) to project these features onto a two-dimensional space. Since each time series can be represented by its features, it can be treated as a single data point in the feature space, making many operations more amenable. Three applications are discussed within the overall framework, namely (1) PV system type identification, (2) monitoring network design, and (3) anomalous string detection. The proposed framework can be easily translated to many other solar engineering applications.
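
    The pipeline lends itself to a compact sketch: compute a small feature vector per series, standardize, and project with PCA. The features below are generic stand-ins chosen for illustration; the paper's exact feature set may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def features(ts):
    """A few generic time series features (illustrative choices)."""
    return [
        np.mean(ts),
        np.std(ts),
        np.max(ts) - np.min(ts),             # range
        np.corrcoef(ts[:-1], ts[1:])[0, 1],  # lag-1 autocorrelation
    ]

# Synthetic stand-in for many irradiance / string-level series.
rng = np.random.default_rng(1)
series = [rng.normal(loc=rng.uniform(0, 5), scale=rng.uniform(0.5, 2),
                     size=500) for _ in range(100)]

# Each series collapses to one feature vector, hence one 2-D point.
X = StandardScaler().fit_transform([features(ts) for ts in series])
coords = PCA(n_components=2).fit_transform(X)
print(coords.shape)  # (100, 2): one point per time series
```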

    Experimental Bayesian Quantum Phase Estimation on a Silicon Photonic Chip

    Quantum phase estimation is a fundamental subroutine in many quantum algorithms, including Shor's factorization algorithm and quantum simulation. However, results so far have cast doubt on its practicability for near-term, non-fault-tolerant quantum devices. Here we report experimental results demonstrating that this intuition need not be true. We implement a recently proposed adaptive Bayesian approach to quantum phase estimation and use it to simulate molecular energies on a silicon quantum photonic device. The approach is shown to be well suited to pre-threshold quantum processors by investigating its superior robustness to noise and decoherence compared with the iterative phase estimation algorithm. This shows a promising route to unlocking the power of quantum phase estimation much sooner than previously believed.
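
    A classical toy version conveys the Bayesian idea: keep a posterior over the unknown eigenphase on a grid, simulate iterative-PE-style measurements whose outcome-0 probability is (1 + cos(Mφ + β))/2, and update after each shot. The adaptive rule for choosing M below is a common heuristic and an assumption, not necessarily the paper's exact protocol (the experiment uses a particle-filter variant).

```python
import numpy as np

# Grid-based toy of adaptive Bayesian phase estimation.

rng = np.random.default_rng(2)
TRUE_PHASE = 1.234                             # unknown to the estimator
grid = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
posterior = np.full_like(grid, 1 / grid.size)  # flat prior

def prob_zero(phi, m, beta):
    """P(outcome 0) after M controlled-U applications, feedback beta."""
    return (1 + np.cos(m * phi + beta)) / 2

for _ in range(100):
    mean = np.sum(posterior * grid)
    sigma = np.sqrt(np.sum(posterior * (grid - mean) ** 2))
    # Adaptive heuristic (an assumption): longer evolution as the
    # posterior narrows, capped so the grid still resolves cos(M*phi).
    m = int(np.clip(1.25 / max(sigma, 1e-6), 1, 512))
    beta = rng.uniform(0, 2 * np.pi)
    outcome0 = rng.random() < prob_zero(TRUE_PHASE, m, beta)
    like = prob_zero(grid, m, beta) if outcome0 else 1 - prob_zero(grid, m, beta)
    posterior *= like
    posterior /= posterior.sum()

print(f"estimate ~ {grid[np.argmax(posterior)]:.3f} rad (true {TRUE_PHASE})")
```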

    AI/ML Algorithms and Applications in VLSI Design and Technology

    An evident challenge ahead for the integrated circuit (IC) industry in the nanometer regime is the investigation and development of methods that can reduce the design complexity ensuing from growing process variations and curtail the turnaround time of chip manufacturing. Conventional methodologies employed for such tasks are largely manual, and thus time-consuming and resource-intensive. In contrast, the unique learning strategies of artificial intelligence (AI) provide numerous exciting automated approaches for handling complex and data-intensive tasks in very-large-scale integration (VLSI) design and testing. Employing AI and machine learning (ML) algorithms in VLSI design and manufacturing reduces the time and effort needed to understand and process data within and across different abstraction levels, which in turn improves IC yield and reduces manufacturing turnaround time. This paper thoroughly reviews the AI/ML approaches introduced to date for VLSI design and manufacturing. Moreover, we discuss the scope of future AI/ML applications at various abstraction levels to revolutionize the field of VLSI design, aiming for high-speed, highly intelligent, and efficient implementations.

    Yield and Reliability Analysis for Nanoelectronics

    As technology has continued to advance and more breakthroughs emerge, semiconductor devices with dimensions in nanometers have entered all spheres of our lives. Accordingly, high reliability and high yield are central concerns in guaranteeing the advancement and utilization of nanoelectronic products. However, nanoelectronics faces major reliability challenges: identification of failure mechanisms, enhancement of the low yields of nano products, and management of the scarcity and secrecy of available data [34]. This dissertation therefore investigates four issues related to the yield and reliability of nanoelectronics. Yield and reliability of nanoelectronics are affected by defects generated in the manufacturing processes. An automatic method using model-based clustering has been developed to detect defect clusters and identify their patterns, where the distribution of the clustered defects is modeled by a new mixture of multivariate normal distributions and principal curves. The new mixture model is capable of modeling defect clusters with amorphous, curvilinear, and linear patterns. We evaluate the proposed method using both simulated and experimental data, and promising results have been obtained. Yield is one of the most important performance indexes for measuring the success of nanofabrication and manufacturing. Accurate yield estimation and prediction are essential for evaluating productivity and estimating production cost. This research studies advanced yield modeling approaches that consider the spatial variation of defects or defect counts. Results from real wafer map data show that the new yield models provide significant improvement in yield estimation compared with the traditional Poisson model and negative binomial model. Ultra-thin SiO2 is a major factor limiting the scaling of semiconductor devices; high-k gate dielectric materials such as HfO2 will replace SiO2 in future generations of MOS devices. This study investigates the two-step breakdown mechanisms and breakdown sequences of double-layered high-k gate stacks by monitoring the relaxation of the dielectric films. Finally, the hazard rate is a widely used metric for measuring the reliability of electronic products, and this dissertation studies the hazard rate function of gate dielectric breakdown. A physically feasible failure time distribution is used to model the time-to-breakdown data, and a Bayesian approach is adopted in the statistical analysis.
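
    For context, the two classical yield models cited as baselines have closed forms: the Poisson model gives Y = e^(-AD), while the negative binomial model, which accounts for defect clustering, gives Y = (1 + AD/α)^(-α). The sketch below checks both against a simulated clustered-defect wafer; parameter values are illustrative.

```python
import numpy as np

# With clustered defects, per-die defect counts are overdispersed,
# so the Poisson model underestimates yield while the negative
# binomial (gamma-Poisson) model matches the simulation.

rng = np.random.default_rng(3)
A_D = 1.0      # mean defects per die (defect density x die area)
ALPHA = 0.5    # clustering parameter (small alpha = heavy clustering)

# Gamma-mixed Poisson counts: bursty local defect densities.
lam = rng.gamma(shape=ALPHA, scale=A_D / ALPHA, size=100_000)
defects = rng.poisson(lam)
empirical_yield = np.mean(defects == 0)   # die yields iff zero defects

poisson_yield = np.exp(-A_D)              # Y = e^(-AD)
negbin_yield = (1 + A_D / ALPHA) ** -ALPHA  # Y = (1 + AD/alpha)^(-alpha)

print(f"empirical {empirical_yield:.3f}, Poisson {poisson_yield:.3f}, "
      f"neg. binomial {negbin_yield:.3f}")
```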

    Efficient critical area extraction for photolithographically defined patterns on ICs

    Inherited Redundancy and Configurability Utilization for Repairing Nanowire Crossbars with Clustered Defects

    With the recent development of nanoscale materials and assembly techniques, it is envisioned that reconfigurable systems can be built at densities never achieved by photolithography. Various reconfigurable architectures have been proposed that use the nanowire crossbar structure as the primitive building block. Unfortunately, high-density systems consisting of nanometer-scale elements are likely to have many imperfections and variations; defect tolerance is therefore considered one of the most pressing challenges. In this paper, we evaluate three logic mapping algorithms with defect avoidance to circumvent clustered defective crosspoints in nanowire reconfigurable crossbar architectures. The effectiveness of exploiting inherited redundancy and configurability is demonstrated through extensive parametric simulations.
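
    A rough Monte Carlo sketch of this kind of parametric simulation: generate clustered defect maps on a crossbar, then measure how often a deliberately simple mapper that needs several fully defect-free rows still succeeds. The mapper, cluster model, and parameters are illustrative assumptions, not the paper's three algorithms.

```python
import numpy as np

# How spare rows (inherited redundancy) absorb clustered defects.

rng = np.random.default_rng(4)
ROWS, COLS = 12, 10

def clustered_defects(n_clusters, spread=1.5, per_cluster=6):
    """Defects scattered around random cluster centres (toy model)."""
    d = set()
    for _ in range(n_clusters):
        cr, cc = rng.uniform(0, ROWS), rng.uniform(0, COLS)
        for _ in range(per_cluster):
            r = int(np.clip(rng.normal(cr, spread), 0, ROWS - 1))
            c = int(np.clip(rng.normal(cc, spread), 0, COLS - 1))
            d.add((r, c))
    return d

def map_ok(defects, need_rows=6):
    """Conservative mapper: succeeds iff enough fully clean rows exist."""
    clean = [r for r in range(ROWS)
             if all((r, c) not in defects for c in range(COLS))]
    return len(clean) >= need_rows

for k in (1, 2, 4, 8):
    ok = np.mean([map_ok(clustered_defects(k)) for _ in range(2000)])
    print(f"{k} defect clusters -> mapping success {ok:.2f}")
```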

    A survey of carbon nanotube interconnects for energy efficient integrated circuits

    This article reviews state-of-the-art carbon nanotube interconnects for silicon technology with respect to the recent literature. The work discussed here covers (1) challenges with current copper interconnects, (2) process and growth of carbon nanotube interconnects compatible with back-end-of-line integration, and (3) modeling and simulation for circuit-level benchmarking and performance prediction. The focus is on the evolution of carbon nanotube interconnects from process, theoretical modeling, and experimental characterization to on-chip interconnect applications. We provide an overview of current advancements in carbon nanotube interconnects and of the prospects for designing energy-efficient integrated circuits. Each selected category is presented in an accessible manner, aiming to serve as a survey and informative cornerstone on carbon nanotube interconnects for students and scientists from a range of fields, from physics and processing to circuit design.
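
    To give a feel for the circuit-level benchmarking mentioned in (3), the sketch below compares textbook-order resistance estimates: a metallic single-walled CNT contributes a quantum resistance h/(4e^2) of roughly 6.45 kOhm plus a term that grows beyond its ~1 um mean free path, while the copper wire is modeled optimistically with bulk resistivity. These are back-of-the-envelope assumptions, not the survey's results.

```python
import numpy as np

# Back-of-the-envelope resistance comparison for a local interconnect.

H = 6.626e-34          # Planck constant (J*s)
E = 1.602e-19          # electron charge (C)
R_Q = H / (4 * E**2)   # ~6.45 kOhm quantum resistance per metallic SWCNT

def r_cnt_bundle(length_m, n_tubes, mfp_m=1e-6):
    """n parallel metallic SWCNTs; resistance rises past the mean
    free path due to scattering."""
    r_single = R_Q * (1 + length_m / mfp_m)
    return r_single / n_tubes

def r_copper(length_m, width_m, height_m, rho=1.7e-8):
    """Bulk-resistivity copper wire. Nanoscale Cu is markedly worse
    because of surface and grain-boundary scattering, ignored here."""
    return rho * length_m / (width_m * height_m)

L = 10e-6  # a 10 um local interconnect
print(f"Cu 20 x 40 nm wire:      {r_copper(L, 20e-9, 40e-9):.0f} Ohm")
print(f"CNT bundle (100 tubes):  {r_cnt_bundle(L, 100):.0f} Ohm")
```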