
    Functional testing of faults in asynchronous crossbar architecture

    The challenge of extending Moore's Law past the physical limits of present semiconductor technology calls for novel innovations. Several nanotechnologies have been proposed as alternatives to their CMOS counterparts, with the nanowire crossbar being one of the most promising paradigms. Recently, a clock-free architecture, the Asynchronous Crossbar Architecture, has been proposed to enhance manufacturability and to improve the robustness of digital circuits by removing various timing-related failure modes. Although this clock-free architecture offers several merits, it is not free from the high defect rates induced by nondeterministic nanoscale assembly. In this work, a unique Functional Test Algorithm (FTA) is proposed and validated to test for manufacturing defects in this architecture. The FTA aims to reduce testing overhead, in terms of the time and space complexity associated with the existing sequential test scheme, while providing high fault coverage and strong fault tolerance via reconfiguration. This test scheme can be used to assure the true functionality of any threshold gate realized on a given PGMB. The main motivation behind this research is a comprehensive test scheme that achieves sufficiently high test coverage with acceptable test overhead, as a significant step towards viable nanoscale computation. --Abstract, page iv
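    As a toy illustration of functionally testing a crossbar-mapped threshold gate, the sketch below exhaustively applies input vectors to a small gate and checks which single stuck-open crosspoint defects change the output. The gate weights, the stuck-open defect model, and all function names are illustrative assumptions rather than the FTA described in the abstract.

```python
"""Minimal sketch of functionally testing a threshold gate mapped onto a
crossbar. The gate model, stuck-open defect model, and exhaustive vector set
are illustrative assumptions, not the FTA described in the abstract."""

from itertools import product

def threshold_gate(inputs, weights, threshold):
    """Fault-free threshold gate: fires when the weighted input sum meets T."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def defective_gate(inputs, weights, threshold, open_crosspoints):
    """Same gate with stuck-open crosspoints modelled as zeroed weights."""
    eff = [0 if i in open_crosspoints else w for i, w in enumerate(weights)]
    return int(sum(w * x for w, x in zip(eff, inputs)) >= threshold)

def functional_test(weights, threshold):
    """Apply all input vectors; report which single stuck-open defects are
    detected by at least one vector (output differs from the fault-free value)."""
    n = len(weights)
    detected = set()
    for vector in product([0, 1], repeat=n):
        good = threshold_gate(vector, weights, threshold)
        for fault in range(n):
            if defective_gate(vector, weights, threshold, {fault}) != good:
                detected.add(fault)
    return detected

if __name__ == "__main__":
    weights, threshold = [2, 1, 1], 2      # a small 3-input threshold gate
    covered = functional_test(weights, threshold)
    print(f"single stuck-open faults detected: {sorted(covered)} of {len(weights)}")
```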

    Nonphotolithographic nanoscale memory density prospects

    Technologies are now emerging to construct molecular-scale electronic wires and switches using bottom-up self-assembly. This opens the possibility of constructing nanoscale circuits and memories where active devices are just a few nanometers square and wire pitches may be on the order of ten nanometers. The features can be defined at this scale without using photolithography. The available assembly techniques have relatively high defect rates compared to conventional lithographic integrated circuits and can only produce very regular structures. Nonetheless, with proper memory organization, it is reasonable to expect these technologies to provide memory densities in excess of 10^11 b/cm^2 with modest active power requirements, under 0.6 W/Tb/s for random read operations.
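    A back-of-the-envelope check of that density figure, assuming one bit per crosspoint and the roughly ten-nanometer wire pitch mentioned above; the 10% usable-area factor is purely an illustrative allowance for decoders and defect avoidance.

```python
"""Back-of-the-envelope crossbar memory density, assuming one bit per
crosspoint and a uniform wire pitch. The 10 nm pitch follows the abstract's
"order of ten nanometers"; the overhead factor is an illustrative guess."""

wire_pitch_nm = 10.0                          # assumed nanowire pitch
cell_area_cm2 = (wire_pitch_nm * 1e-7) ** 2   # one crosspoint cell, in cm^2
raw_density_bits_per_cm2 = 1.0 / cell_area_cm2

# Assume decoders and defect avoidance leave only 10% of raw density usable.
overhead_factor = 0.10
usable_density = raw_density_bits_per_cm2 * overhead_factor

print(f"raw crosspoint density  : {raw_density_bits_per_cm2:.1e} b/cm^2")
print(f"usable density (assumed): {usable_density:.1e} b/cm^2")
```

With these assumptions the usable figure lands at 10^11 b/cm^2, consistent with the density quoted in the abstract.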

    Fault-tolerance thresholds for the surface code with fabrication errors

    The construction of topological error correction codes requires the ability to fabricate a lattice of physical qubits embedded on a manifold with a non-trivial topology, such that the quantum information is encoded in the global degrees of freedom (i.e. the topology) of the manifold. However, the manufacturing of large-scale topological devices will undoubtedly suffer from fabrication errors (permanent faulty components such as missing physical qubits or failed entangling gates) that introduce permanent defects into the topology of the lattice and hence significantly reduce the distance of the code and the quality of the encoded logical qubits. In this work we investigate how fabrication errors affect the performance of topological codes, using the surface code as the testbed. A known approach to mitigating defective lattices involves the use of primitive SWAP gates in a long sequence of syndrome extraction circuits. Instead, we show that in the presence of fabrication errors the syndrome can be determined using the supercheck operator approach and the outcomes of the defective gauge stabilizer generators, without any additional computational overhead or the use of SWAP gates. We report numerical fault-tolerance thresholds in the presence of both qubit fabrication and gate fabrication errors using a circuit-based noise model and the minimum-weight perfect matching decoder. Our numerical analysis is most applicable to 2D chip-based technologies, but the techniques presented here can be readily extended to other topological architectures. We find that in the presence of 8% qubit fabrication errors, the surface code can still tolerate a computational error rate of up to 0.1%.
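    The following sketch illustrates the supercheck idea in its simplest form: plaquettes whose support contains a fabrication-disabled data qubit are merged into one larger check. The square-lattice indexing, the 8% defect rate, and the helper names are simplifying assumptions; the paper's construction operates at the level of circuit-based syndrome extraction.

```python
"""Minimal sketch of forming supercheck operators on a distance-d patch:
plaquettes touching a fabrication-disabled data qubit are merged (union-find)
into a single larger check. Lattice indexing and defect model are simplified
assumptions, not the paper's circuit-level construction."""

import random

def superchecks(d, qubit_fab_rate=0.08, seed=0):
    rng = random.Random(seed)
    disabled = {(r, c) for r in range(d) for c in range(d)
                if rng.random() < qubit_fab_rate}          # dead data qubits

    # One plaquette per interior corner; its support is four data qubits.
    plaqs = {(r, c): {(r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)}
             for r in range(d - 1) for c in range(d - 1)}

    parent = {p: p for p in plaqs}                         # union-find forest

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    def union(a, b):
        parent[find(a)] = find(b)

    # Merge every pair of plaquettes that share a disabled qubit.
    for q in disabled:
        touching = [p for p, support in plaqs.items() if q in support]
        for a, b in zip(touching, touching[1:]):
            union(a, b)

    # Collect merged checks; their support excludes the disabled qubits.
    groups = {}
    for p in plaqs:
        groups.setdefault(find(p), set()).update(plaqs[p] - disabled)
    return disabled, list(groups.values())

if __name__ == "__main__":
    dead, checks = superchecks(d=9)
    sizes = sorted(len(c) for c in checks)
    print(f"{len(dead)} disabled qubits -> {len(checks)} checks, "
          f"largest supercheck weight {sizes[-1]}")
```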

    Investigation into voltage and process variation-aware manufacturing test

    Increasing integration and complexity in IC design poses challenges for manufacturing testing. This thesis studies how process and supply voltage variation influence defect behaviour in order to determine the impact on manufacturing test cost and quality. The focus is on logic testing of static CMOS designs with respect to two important defect types in deep submicron CMOS: resistive bridges and full opens.

    The first part of the thesis addresses testing for resistive bridge defects in designs with multiple supply voltage settings. To enable analysis, a fault simulator is developed using a supply voltage-aware model of bridge defect behaviour. The analysis shows that, because of supply voltage-dependent behaviour, tests must be applied at more than one supply voltage setting to achieve high defect coverage. A low-cost and effective test method is presented, consisting of multi-voltage test generation that achieves high defect coverage together with test set size reduction. Experiments on synthesised benchmarks with realistic bridge locations validate the proposed method.

    The second part focuses on the behaviour of full open defects under supply voltage variation. The aim is to determine the appropriate supply voltage to use when testing. Two models of full open defect behaviour are considered, with and without the influence of gate tunnelling leakage. The supply voltage-dependent behaviour of full open defects is analysed to determine whether testing at more than one supply voltage is required to detect all full open defects. Experiments on synthesised benchmarks, using an extended version of the fault simulator mentioned above, measure the quantitative impact of supply voltage variation on defect coverage.

    The final part studies the impact of process variation on the behaviour of bridge defects. Detailed analysis using synthesised ISCAS benchmarks and a realistic bridge model shows that process variation leads to additional faults. If process variation is not considered in test generation, the test will fail to detect some of these faults, leading to test escapes. A novel metric to quantify the impact of process variation on test quality is employed in the development of a new test generation tool, which achieves high bridge defect coverage. The method achieves a user-specified test quality with test sets that are smaller than those generated without consideration of process variation.
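    A minimal sketch of how multi-voltage test selection can work in principle: each fault may only be detectable at certain supply voltages, so (vector, Vdd) pairs are chosen greedily until every fault is covered. The detection table here is synthetic; in the thesis it would come from the supply voltage-aware fault simulator.

```python
"""Minimal sketch of multi-voltage test selection: a bridge fault may only be
detectable at some supply voltages, so vectors are chosen greedily until every
fault is covered at a voltage where it manifests. The detection table below is
synthetic, standing in for output of a supply-voltage-aware fault simulator."""

# detects[(vector, vdd)] = set of bridge faults that vector exposes at that Vdd
detects = {
    ("t1", 1.2): {"b1", "b2"}, ("t1", 0.8): {"b1"},
    ("t2", 1.2): {"b3"},       ("t2", 0.8): {"b3", "b4"},
    ("t3", 1.2): set(),        ("t3", 0.8): {"b2", "b5"},
}
all_faults = set().union(*detects.values())

def greedy_multi_voltage_tests(detects, faults):
    """Pick (vector, Vdd) pairs until every fault is detected somewhere."""
    remaining, plan = set(faults), []
    while remaining:
        best = max(detects, key=lambda k: len(detects[k] & remaining))
        gain = detects[best] & remaining
        if not gain:
            break                      # some faults undetectable at any Vdd
        plan.append(best)
        remaining -= gain
    return plan, remaining

plan, missed = greedy_multi_voltage_tests(detects, all_faults)
print("apply:", plan, "| undetected:", missed or "none")
```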

    Fault and Defect Tolerant Computer Architectures: Reliable Computing With Unreliable Devices

    This research addresses the design of a reliable computer from unreliable device technologies. A system architecture is developed for a fault and defect tolerant (FDT) computer. Trade-offs between different techniques are studied, and yield and hardware cost models are developed. Fault and defect tolerant designs are created for the processor and the cache memory. Simulation results for the content-addressable memory (CAM)-based cache show 90% yield with device failure probabilities of 3 x 10^-6, three orders of magnitude better than non-fault-tolerant caches of the same size. The entire processor achieves 70% yield with device failure probabilities exceeding 10^-6. The required hardware redundancy is approximately 15 times that of a non-fault-tolerant design. While larger than current FT designs, this architecture allows the use of devices much more likely to fail than silicon CMOS. As part of model development, an improved model is derived for NAND multiplexing. The model is the first accurate model for small and medium amounts of redundancy. Previous models are extended to account for dependence between the inputs and produce more accurate results.
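    As a rough illustration of the kind of yield arithmetic such results rest on, the sketch below compares the yield of a memory block with no redundancy against one that can replace a bounded number of faulty rows with spares, using the abstract's 3 x 10^-6 device failure probability. The block size and spare-row scheme are illustrative assumptions, not the dissertation's CAM-based cache or NAND-multiplexing models.

```python
"""Minimal yield arithmetic of the kind behind the numbers above: probability
that a memory block works given a per-device failure probability p, without
redundancy and with row-level sparing. Block size and sparing scheme are
illustrative assumptions, not the dissertation's yield models."""

from math import comb

p_fail = 3e-6                    # per-device failure probability (from abstract)

def block_yield(devices_per_row, rows):
    """Yield of a block that needs every one of rows*devices_per_row devices."""
    return (1 - p_fail) ** (devices_per_row * rows)

def block_yield_with_spares(devices_per_row, rows, spares):
    """Yield when up to `spares` faulty rows can be replaced by spare rows."""
    row_ok = (1 - p_fail) ** devices_per_row
    total = rows + spares
    # The block works if at most `spares` of the (rows + spares) rows are faulty.
    return sum(comb(total, k) * (1 - row_ok) ** k * row_ok ** (total - k)
               for k in range(spares + 1))

if __name__ == "__main__":
    print(f"no redundancy : {block_yield(1024, 4096):.3f}")
    print(f"20 spare rows : {block_yield_with_spares(1024, 4096, 20):.3f}")
```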

    Quiescent current testing of CMOS data converters

    Power supply quiescent current (IDDQ) testing has been very effective at detecting physical defects such as opens, shorts, and bridges in VLSI circuits designed in CMOS processes. However, in sub-micron VLSI circuits, IDDQ is masked by the increased subthreshold (leakage) current of MOSFETs, which reduces the effectiveness of IDDQ testing. In this work, an attempt has been made to perform robust IDDQ testing in the presence of increased leakage current by suitably modifying some of the test methods normally used in industry. Digital CMOS integrated circuits have been tested successfully for physical defects using quiescent current methods. However, testing of analog circuits remains a problem because designs vary from one application to another, and the increased leakage current further complicates not only the design but also the testing. Mixed-signal integrated circuits such as data converters are even more difficult to test because both analog and digital functions are built on the same substrate. We have re-examined quiescent current methods for testing digital CMOS VLSI circuits and added features to minimize the influence of leakage current. We have designed built-in current sensors (BICS) for on-chip testing of analog and mixed-signal integrated circuits. We have also combined quiescent current testing with oscillation and transient current test techniques to map a large number of manufacturing defects on a chip. In testing, we have used a simple method, invented in our VLSI research group, of injecting faults that simulate manufacturing defects. We present the design and testing of analog and mixed-signal integrated circuits with on-chip BICS, including an operational amplifier, a 12-bit charge scaling architecture based digital-to-analog converter (DAC), a 12-bit recycling architecture based analog-to-digital converter (ADC), and an operational amplifier with floating gate inputs. The designed circuits are fabricated in 0.5 μm and 1.5 μm n-well CMOS processes and tested. Experimentally observed results of the fabricated devices are compared with SPICE simulations using MOS level 3 and BSIM3.1 model parameters for the 1.5 μm and 0.5 μm n-well CMOS technologies, respectively. We have also explored the possibility of using noise in VLSI circuits to test for defects and present the method we have developed.
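    A minimal sketch of one generic way to keep quiescent-current testing usable when background leakage is high: rather than a single absolute IDDQ limit, each die's per-vector readings are screened against a robust statistic of its own current signature. This outlier-based screen is an assumed stand-in, not the BICS-based on-chip method developed in this work.

```python
"""Minimal sketch of quiescent-current outlier screening in the presence of
elevated leakage: instead of one absolute IDDQ limit, each die's per-vector
currents are compared against a robust statistic of its own readings. This is
a generic current-signature approach, not the BICS-based method of the work."""

import statistics

def iddq_screen(iddq_readings_ua, k=6.0):
    """Flag a die if any per-vector IDDQ reading is a strong outlier relative
    to the die's own median, which tracks its nominal leakage level."""
    median = statistics.median(iddq_readings_ua)
    mad = statistics.median(abs(x - median) for x in iddq_readings_ua) or 1e-3
    outliers = [x for x in iddq_readings_ua if abs(x - median) > k * mad]
    return ("fail", outliers) if outliers else ("pass", [])

if __name__ == "__main__":
    healthy = [42.1, 43.0, 41.8, 42.6, 42.9, 42.3]    # uniform leakage (uA)
    bridged = [42.1, 43.0, 41.8, 118.4, 42.9, 42.3]   # one vector activates a defect
    print("healthy die:", iddq_screen(healthy)[0])
    print("bridged die:", iddq_screen(bridged)[0])
```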

    Layout regularity metric as a fast indicator of process variations

    Integrated circuit design faces growing challenges as technology scales down, because sensitivity to process variations increases. Systematic variations induced by different steps in the lithography process affect both the parametric and functional yields of designs, and these variations are themselves known to be affected by layout topologies. Design for Manufacturability (DFM) aims at defining techniques that mitigate variations and improve yield. Layout regularity is one of the techniques suggested by DFM to mitigate the effect of process variations. There are several ways to create regular designs, such as restricted design rules and regular fabrics. These regular solutions raise the need for a regularity metric. Metrics in the literature are insufficient for different reasons: either they are qualitative or they are computationally intensive. Furthermore, there is no study relating either lithography or electrical variations to layout regularity. In this work, layout regularity is studied in detail and a new geometry-based layout regularity metric is derived. This metric is verified against lithographic simulations and shows good correlation. Calculating the metric takes only a few minutes on a 1 mm x 1 mm design, which is fast compared to the time taken by simulations. This makes it a good candidate for pre-processing the layout data and selecting areas of interest for lithographic simulation, giving faster throughput. The layout regularity metric is also compared against a model that measures electrical variations due to systematic lithographic variations, and the metric is shown to be valid for flagging circuits with high variability according to that model. The regularity metric and the electrical variability model agree in up to 80% of cases, which means the metric can be used as a fast indicator of designs that are more susceptible to lithographic, and hence electrical, variations.
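    To make the idea of a geometry-based regularity score concrete, the sketch below scores a layer by how tightly the pitches between feature edges cluster. The pitch-variance formulation and the example edge positions are illustrative assumptions, not the metric derived in the thesis.

```python
"""Minimal sketch of a geometry-based regularity score: on one layer, collect
the horizontal pitches between feature edges and score how tightly they
cluster. This pitch-variance formulation is an illustrative stand-in, not the
specific metric developed in the thesis."""

import statistics

def regularity_score(x_edges_nm):
    """Score in (0, 1]: 1.0 when all pitches are identical, lower as pitches
    spread out. Input is the sorted x positions of feature edges on one layer."""
    pitches = [b - a for a, b in zip(x_edges_nm, x_edges_nm[1:])]
    if len(pitches) < 2:
        return 1.0
    spread = statistics.pstdev(pitches)
    mean = statistics.mean(pitches)
    return 1.0 / (1.0 + spread / mean)     # penalise the coefficient of variation

if __name__ == "__main__":
    gridded = [0, 64, 128, 192, 256, 320]      # fixed 64 nm pitch
    freeform = [0, 48, 130, 175, 290, 320]     # irregular pitches
    print(f"gridded  layout score: {regularity_score(gridded):.2f}")
    print(f"freeform layout score: {regularity_score(freeform):.2f}")
```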

    Techniques for Improving Security and Trustworthiness of Integrated Circuits

    The integrated circuit (IC) development process is becoming increasingly vulnerable to malicious activities because untrusted parties can be involved in the IC development flow. Four typical problems impact the security and trustworthiness of ICs used in military, financial, transportation, and other critical systems: (i) Malicious inclusions and alterations, known as hardware Trojans, can be inserted into a design by modifying it during GDSII development and fabrication. Hardware Trojans in ICs may cause malfunctions, lower the reliability of ICs, leak confidential information to adversaries, or even destroy the system under specifically designed conditions. (ii) The number of circuit-related counterfeiting incidents reported by component manufacturers has increased significantly over the past few years, with recycled ICs contributing the largest share of the reported incidents. Because recycled ICs have already been used in the field, their performance and reliability have been degraded by aging effects and the harsh recycling process. (iii) Reverse engineering (RE) is the process of extracting a circuit’s gate-level netlist and/or inferring its functionality. RE threatens a design because attackers can steal and pirate it (IP piracy), identify the device technology, or facilitate other hardware attacks. (iv) Traditional tools for uniquely identifying devices are vulnerable to non-invasive or invasive physical attacks. Securing the ID/key is of utmost importance, since leakage of even a single device ID/key could be exploited by an adversary to hack other devices or produce pirated devices. In this work, we have developed a series of design and test methodologies to address these four challenges and thus enhance the security, trustworthiness, and reliability of ICs. The techniques proposed in this thesis include: a path delay fingerprinting technique for detection of hardware Trojans, recycled ICs, and other types of counterfeit ICs, including remarked, overproduced, and cloned ICs with their unique identifiers; a Built-In Self-Authentication (BISA) technique to prevent hardware Trojan insertion by untrusted fabrication facilities; an efficient and secure split manufacturing approach via an Obfuscated Built-In Self-Authentication (OBISA) technique to prevent reverse engineering by untrusted fabrication facilities; and a novel bit selection approach for obtaining the most reliable bits for an SRAM-based physical unclonable function (PUF) across environmental conditions and silicon aging effects.
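    As an illustration of the last item, reliable-bit selection for an SRAM PUF can be sketched as: collect power-up snapshots over repeated trials (ideally across voltage, temperature, and aging corners) and enroll only cells that never flip. The synthetic data generator and the simple never-flips criterion below are assumptions; the thesis's selection approach is more elaborate.

```python
"""Minimal sketch of reliable-bit selection for an SRAM PUF: power-up values
are collected over repeated trials, and only cells that never flip are
enrolled. The data below is synthetic; the thesis's selection criterion and
corner set (environmental conditions, aging) are more elaborate."""

import random

def collect_powerups(n_cells=64, n_trials=20, flaky_fraction=0.2, seed=1):
    """Synthesize power-up snapshots: most cells are stable, some are flaky."""
    rng = random.Random(seed)
    stable_value = [rng.randint(0, 1) for _ in range(n_cells)]
    flaky = {i for i in range(n_cells) if rng.random() < flaky_fraction}
    return [[rng.randint(0, 1) if i in flaky else stable_value[i]
             for i in range(n_cells)] for _ in range(n_trials)]

def select_reliable_bits(snapshots):
    """Keep only cell indices whose value is identical in every snapshot."""
    n_cells = len(snapshots[0])
    return [i for i in range(n_cells)
            if len({snap[i] for snap in snapshots}) == 1]

if __name__ == "__main__":
    snaps = collect_powerups()
    reliable = select_reliable_bits(snaps)
    print(f"{len(reliable)} of {len(snaps[0])} cells selected for the PUF key")
```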