
    Fault-tolerance techniques for hybrid CMOS/nanoarchitecture

    The authors propose two fault-tolerance techniques for hybrid CMOS/nanoarchitectures that implement logic functions as look-up tables, and compare their efficiency with recently reported methods that use single coding schemes to tolerate high fault rates in nanoscale fabrics. Both proposed techniques are based on error-correcting codes and target different fault rates. In the first technique, the authors implement a combined two-dimensional coding scheme using Hamming and Bose-Chaudhuri-Hocquenghem (BCH) codes to address fault rates greater than 5%. In the second technique, Hamming coding is complemented with a bad-line-exclusion technique to tolerate fault rates higher than the first (up to 20%). The authors also estimate the improvement in circuit reliability achievable in the presence of don't-care conditions, as well as the area, latency and energy costs of the proposed techniques in the CMOS domain.
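
    For illustration only, a minimal Python sketch of the kind of single-error-correcting Hamming code that one dimension of such a scheme builds on; the function names and bit layout are hypothetical, not the authors' implementation, and the paper's scheme pairs a Hamming code with BCH along the second dimension:

        # Hamming(7,4): 4 data bits protected by 3 parity bits; corrects one bit flip,
        # e.g. a single faulty nanodevice in a stored look-up-table row.
        def hamming74_encode(d):
            """d: 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
            d1, d2, d3, d4 = d
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p3 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]

        def hamming74_correct(c):
            """c: received 7-bit word -> the 4 data bits, after fixing one bit flip."""
            p1, p2, d1, p3, d2, d3, d4 = c
            s1 = p1 ^ d1 ^ d2 ^ d4            # parity check over positions 1, 3, 5, 7
            s2 = p2 ^ d1 ^ d3 ^ d4            # parity check over positions 2, 3, 6, 7
            s3 = p3 ^ d2 ^ d3 ^ d4            # parity check over positions 4, 5, 6, 7
            syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the error, 0 if none
            if syndrome:
                c = list(c)
                c[syndrome - 1] ^= 1
            return [c[2], c[4], c[5], c[6]]

        word = hamming74_encode([1, 0, 1, 1])
        word[5] ^= 1                          # inject a single fault
        assert hamming74_correct(word) == [1, 0, 1, 1]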

    Fault Secure Encoder and Decoder for NanoMemory Applications

    Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rates in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to designing fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean geometry low-density parity-check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of 10^(-18) upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of 10^(11) bit/cm^2 with a nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead.
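
    A back-of-envelope sketch of the scale involved, using the figures quoted above; the operating frequency is an assumed illustrative value, not taken from the paper:

        # Uncorrected upset rate for a 10 Mb block at 1e-18 upsets/device/cycle.
        upset_rate = 1e-18          # upsets per device per cycle (figure quoted above)
        bits       = 10e6           # a 10 Mb memory block
        clock_hz   = 1e9            # ASSUMED clock rate, for illustration only

        upsets_per_hour = bits * upset_rate * clock_hz * 3600
        fit_uncorrected = upsets_per_hour * 1e9     # FIT = failures per 10^9 hours

        print(f"uncorrected upsets/hour ~ {upsets_per_hour:.1f}")         # ~36
        print(f"uncorrected FIT ~ {fit_uncorrected:.1e} (target: <= 1)")  # ~3.6e+10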

    Tagged repair techniques for defect tolerance in hybrid nano/CMOS architecture

    We propose two new repair techniques for hybrid nano/CMOS computing architectures with lookup-table-based Boolean logic. The proposed techniques use a tagging mechanism to provide a high level of defect tolerance, and we present theoretical equations that predict the repair capability, including an estimate of the repair cost. The repair techniques use spare units efficiently and can target defect rates of up to 20%, which is higher than recently reported repair techniques.
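
    A generic spare-allocation model, sketched in Python for intuition only; the paper's theoretical repair-capability equations are specific to the tagged techniques and are not reproduced here:

        from math import comb

        def block_usable_probability(n, s, p):
            """P(a block needing n working units is usable) when n + s units are
            fabricated with s spares and each unit is defective independently
            with probability p: at most s of the n + s units may be defective."""
            total = n + s
            return sum(comb(total, k) * p**k * (1 - p)**(total - k)
                       for k in range(s + 1))

        # Example: 64-unit blocks at a 20% defect rate with varying numbers of spares.
        for spares in (8, 16, 24, 32):
            print(spares, round(block_usable_probability(64, spares, 0.20), 4))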

    CMOL: Second Life for Silicon?

    This report is a brief review of recent work on architectures for prospective hybrid CMOS/nanowire/nanodevice ("CMOL") circuits, including digital memories, reconfigurable Boolean-logic circuits, and mixed-signal neuromorphic networks. The basic idea of CMOL circuits is to combine the advantages of CMOS technology (including its flexibility and high fabrication yield) with the extremely high potential density of molecular-scale two-terminal nanodevices. The relatively large critical dimensions of CMOS components and the "bottom-up" approach to nanodevice fabrication may keep CMOL fabrication costs at an affordable level. At the same time, the density of active devices in CMOL circuits may be as high as 10^12 per cm^2, and they may provide unparalleled information-processing performance, up to 10^20 operations per cm^2 per second, at manageable power consumption. Comment: Submitted on behalf of TIMA Editions (http://irevues.inist.fr/tima-editions).
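
    Quick arithmetic on the density and throughput figures quoted above (illustrative only):

        device_density = 1e12    # active devices per cm^2
        throughput     = 1e20    # operations per cm^2 per second

        # Implied per-device rate: each nanodevice switches on the order of 1e8 times per second.
        print(f"~{throughput / device_density:.0e} operations per device per second")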

    Spiers Memorial Lecture: Molecular mechanics and molecular electronics

    We describe our research into building integrated molecular electronic circuitry for a diverse set of functions, with a focus on the fundamental scientific issues that surround this project. In particular, we discuss experiments aimed at understanding the function of bistable [2]rotaxane molecular electronic switches by correlating the switching kinetics and ground-state thermodynamic properties of those switches in various environments, ranging from the solution phase to a Langmuir monolayer of the switching molecules sandwiched between two electrodes. We discuss various devices, low-bit-density memory circuits, and ultra-high-density memory circuits that utilize the electrochemical switching characteristics of these molecules in conjunction with novel patterning methods. We also discuss interconnect schemes that are capable of bridging the micrometre to submicrometre length scales of conventional patterning approaches to the near-molecular length scales of the ultra-dense memory circuits. Finally, we discuss some of the challenges associated with fabricating ultra-dense molecular electronic integrated circuits.

    Analysis of integrated single-electron memory operation

    Various aspects of single-electron memory are discussed. In particular, we analyze single-electron charging by Fowler-Nordheim tunneling, propose the idea of background charge compensation, and discuss a defect-tolerant architecture based on nanofuses. Comment: 6 pages.
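
    For orientation, the generic functional form of the Fowler-Nordheim tunneling current density mentioned above (textbook field dependence only; the prefactor and the barrier-dependent constant E_0 are left unspecified here):

        J(E) \;\propto\; E^{2} \exp\!\left(-\frac{E_{0}}{E}\right),
        \qquad E_{0} \propto \phi^{3/2},

    where E is the electric field across the tunnel barrier and \phi is the barrier height.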

    Layered architecture for quantum computing

    We develop a layered quantum computer architecture, which is a systematic framework for tackling the individual challenges of developing a quantum computer while constructing a cohesive device design. We discuss many of the prominent techniques for implementing circuit-model quantum computing and introduce several new methods, with an emphasis on employing surface-code quantum error correction. In doing so, we propose a new quantum computer architecture based on optical control of quantum dots. The timescales of physical hardware operations and of logical, error-corrected quantum gates differ by several orders of magnitude. By dividing functionality into layers, we can design and analyze subsystems independently, demonstrating the value of our layered architectural approach. Using this concrete hardware platform, we provide resource analysis for executing fault-tolerant quantum algorithms for integer factoring and quantum simulation, finding that the quantum dot architecture we study could solve such problems on the timescale of days. Comment: 27 pages, 20 figures.
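
    A rough sketch of the surface-code bookkeeping such a layered resource analysis rests on (generic textbook scaling with assumed illustrative constants, not the paper's numbers):

        def physical_qubits_per_logical(d):
            """Approximate physical qubits per logical qubit at code distance d."""
            return 2 * d**2

        def logical_error_rate(p, d, p_th=1e-2, a=0.1):
            """Rough below-threshold scaling ~ a * (p / p_th)^((d + 1) / 2);
            p_th and a are ASSUMED illustrative constants."""
            return a * (p / p_th) ** ((d + 1) // 2)

        # Example: physical error rate 1e-3, increasing code distance.
        for d in (5, 15, 25):
            print(d, physical_qubits_per_logical(d), f"{logical_error_rate(1e-3, d):.1e}")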

    Optimized Surface Code Communication in Superconducting Quantum Computers

    Quantum computing (QC) is at the cusp of a revolution. Machines with 100 quantum bits (qubits) are anticipated to be operational by 2020 [googlemachine, gambetta2015building], and several-hundred-qubit machines are around the corner. Machines of this scale have the capacity to demonstrate quantum supremacy, the tipping point where QC is faster than the fastest classical alternative for a particular problem. Because error correction techniques will be central to QC and will be the most expensive component of quantum computation, choosing the lowest-overhead error correction scheme is critical to overall QC success. This paper evaluates two established quantum error correction codes, the planar and double-defect surface codes, using a set of compilation, scheduling and network simulation tools. In considering scalable methods for optimizing both codes, we do so in the context of a full microarchitectural and compiler analysis. Contrary to previous predictions, we find that the simpler planar codes are sometimes more favorable for implementation on superconducting quantum computers, especially under conditions of high communication congestion. Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International Symposium on Microarchitecture.

    ToPoliNano: Nanoarchitectures Design Made Real

    Many facts about emerging nanotechnologies are yet to be assessed. There are still major concerns, for instance, about the maximum achievable device density, or about which architecture best fits a specific application. Growing complexity requires taking into account many aspects of technology, application and architecture at the same time. Researchers face problems that are not new per se, but are now subject to very different constraints that need to be captured by design tools. Among the emerging nanotechnologies, two-dimensional nanowire-based arrays represent promising nanostructures, especially for massively parallel computing architectures. Few attempts have been made to enable the exploration of architectural solutions by deriving information from extensive and reliable nanoarray characterization. Moreover, there is still no clear winner in the nanotechnology arena, so it is important to be able to target different technologies so as not to miss the next big thing. We present a tool, ToPoliNano, that enables such a multi-technology characterization in terms of logic behavior, power and timing performance, and area and layout constraints, on the basis of specific technological and topological descriptions. The tool can aid the design process, besides providing a comprehensive simulation framework for DC and timing simulations and detailed power analysis. Design and simulation results are shown for nanoarray-based circuits. ToPoliNano is the first real design tool that tackles the top-down design of a circuit based on emerging technologies.

    Nonphotolithographic nanoscale memory density prospects

    Technologies are now emerging to construct molecular-scale electronic wires and switches using bottom-up self-assembly. This opens the possibility of constructing nanoscale circuits and memories in which active devices are just a few nanometers square and wire pitches may be on the order of ten nanometers. These features can be defined at this scale without using photolithography. The available assembly techniques have relatively high defect rates compared to conventional lithographic integrated circuits and can only produce very regular structures. Nonetheless, with proper memory organization, it is reasonable to expect these technologies to provide memory densities in excess of 10^11 b/cm^2 with modest active power requirements, under 0.6 W/Tb/s for random read operations.
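
    Sanity-check arithmetic for the density figure above (illustrative only; the 10x overhead factor is an assumption standing in for addressing, sparing and coding costs):

        pitch_cm      = 10e-7                   # 10 nm wire pitch, in cm
        raw_junctions = 1 / pitch_cm**2         # one crosspoint per pitch^2 -> ~1e12 per cm^2
        usable_bits   = raw_junctions / 10      # ASSUMED 10x organizational overhead

        print(f"raw crosspoints/cm^2 ~ {raw_junctions:.0e}")   # ~1e+12
        print(f"usable bits/cm^2     ~ {usable_bits:.0e}")     # ~1e+11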