
    Current Mirror With Programmable Floating Gate

    Get PDF
    Systems and methods are discussed for using a floating-gate MOSFET as a programmable reference circuit. One example of the programmable reference circuit is a programmable voltage reference source; a second is a programmable reference current source. The programmable voltage reference source and/or the reference current source may be incorporated into several types of circuits, such as comparator circuits, current-mirror circuits, and converter circuits. Comparator circuits and current-mirror circuits are often incorporated into circuits such as converter circuits. Converter circuits include analog-to-digital converters and digital-to-analog converters. (Georgia Tech Research Corporation)
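
    As a rough illustration of the idea (not the patented circuit itself), the C++ behavioral sketch below models how charge stored on a floating-gate pFET shifts its effective gate voltage, so the same bias produces a programmable reference current that a current mirror can then copy to the rest of the circuit. The subthreshold expression and every device parameter here are illustrative assumptions, not values from the patent.

    // Behavioral sketch of a programmable floating-gate current reference.
    // All coefficients are assumed for illustration only.
    #include <cmath>
    #include <cstdio>
    #include <initializer_list>

    struct FloatingGatePFET {
        double kappa = 0.7;     // gate coupling coefficient (assumed)
        double i0    = 1e-15;   // subthreshold pre-exponential current [A] (assumed)
        double ut    = 0.0258;  // thermal voltage at room temperature [V]
        double c_t   = 1e-14;   // total floating-gate capacitance [F] (assumed)
        double q_fg  = 0.0;     // programmed floating-gate charge [C]

        // Subthreshold drain current of the pFET for a given control-gate voltage.
        double drain_current(double v_cg, double vdd) const {
            double v_fg = v_cg + q_fg / c_t;          // stored charge shifts the floating-gate voltage
            double v_sg = vdd - v_fg;                 // source (at Vdd) to floating gate
            return i0 * std::exp(kappa * v_sg / ut);  // simple subthreshold expression
        }
    };

    int main() {
        FloatingGatePFET ref;
        // "Programming" in this model is just assigning charge; on silicon it is
        // done with tunneling and hot-electron injection.
        for (double q : {0.0, -1e-15, -2e-15}) {
            ref.q_fg = q;
            std::printf("Q_fg = %+.0e C  ->  I_ref = %.3e A\n",
                        q, ref.drain_current(/*v_cg=*/3.0, /*vdd=*/3.3));
        }
        return 0;
    }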

    Analog-to-digital Converter With Programmable Floating Gate

    Get PDF
    Systems and methods are discussed for using a floating-gate MOSFET as a programmable reference circuit. One example of the programmable reference circuit is a programmable voltage reference source; a second is a programmable reference current source. The programmable voltage reference source and/or the reference current source may be incorporated into several types of circuits, such as comparator circuits, current-mirror circuits, and converter circuits. Comparator circuits and current-mirror circuits are often incorporated into circuits such as converter circuits. Converter circuits include analog-to-digital converters and digital-to-analog converters. (Georgia Tech Research Corporation)

    Transformations of High-Level Synthesis Codes for High-Performance Computing

    Full text link
    Specialized hardware architectures promise a major step in performance and energy efficiency over the traditional load/store devices currently employed in large-scale computing systems. The adoption of high-level synthesis (HLS) from languages such as C/C++ and OpenCL has greatly increased programmer productivity when designing for such platforms. While this has enabled a wider audience to target specialized hardware, the optimization principles known from traditional software design are no longer sufficient to implement high-performance codes. Fast and efficient codes for reconfigurable platforms are thus still challenging to design. To alleviate this, we present a set of optimizing transformations for HLS, targeting scalable and efficient architectures for high-performance computing (HPC) applications. Our work provides a toolbox for developers, where we systematically identify classes of transformations, the characteristics of their effect on the HLS code and the resulting hardware (e.g., increased data reuse or resource consumption), and the objectives that each transformation can target (e.g., resolving interface contention or increasing parallelism). We show how these can be used to efficiently exploit pipelining, on-chip distributed fast memory, and on-chip streaming dataflow, allowing for massively parallel architectures. To quantify the effect of our transformations, we use them to optimize a set of throughput-oriented FPGA kernels, demonstrating that our enhancements are sufficient to scale up parallelism within the hardware constraints. With the transformations covered, we hope to establish a common framework for performance engineers, compiler developers, and hardware developers to tap into the performance potential offered by specialized hardware architectures using HLS.
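
    As a loose illustration of the kind of transformation described (not an example taken from the paper), the sketch below rewrites a streaming kernel with an on-chip buffer and a small shift register so that the compute loop can be pipelined with an initiation interval of one. The #pragma HLS directives follow Xilinx/AMD Vitis HLS spellings; other HLS tools use different syntax.

    #include <cstdio>

    constexpr int N = 1024;

    void moving_sum(const float in[N], float out[N]) {
        // Stage the input in on-chip memory once, so the compute loop does not
        // contend for the external memory interface on every iteration.
        float buf[N];
    copy_in:
        for (int i = 0; i < N; ++i) {
            #pragma HLS PIPELINE II=1
            buf[i] = in[i];
        }

        // Pipelined compute loop: a 4-entry shift register keeps recent samples
        // on-chip, so each result needs only one new read per iteration.
        float window[4] = {0, 0, 0, 0};
        #pragma HLS ARRAY_PARTITION variable=window complete dim=1
    compute:
        for (int i = 0; i < N; ++i) {
            #pragma HLS PIPELINE II=1
            window[3] = window[2];
            window[2] = window[1];
            window[1] = window[0];
            window[0] = buf[i];
            out[i] = window[0] + window[1] + window[2] + window[3];
        }
    }

    int main() {
        static float in[N], out[N];
        for (int i = 0; i < N; ++i) in[i] = 1.0f;
        moving_sum(in, out);
        std::printf("out[100] = %.1f\n", out[100]);   // expect 4.0 once the window is full
    }

    Fully partitioning the small window array is what typically lets all four additions issue in the same pipeline stage; left as a memory, the shift register would have too few ports to sustain an initiation interval of one.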

    A Parallel Programmer for Non-Volatile Analog Memory Arrays

    Get PDF
    Since their introduction in 1967, floating-gate transistors have enjoyed widespread success as non-volatile digital memory elements in EEPROM and flash memory. In recent decades, however, a renewed interest in floating-gate transistors has focused on their viability as non-volatile analog memory, as well as programmable voltage and current sources. They have been used extensively in this capacity to solve traditional problems associated with analog circuit design, such as correcting for fabrication mismatch, reducing comparator offset, and auto-zeroing amplifiers. They have also been used to implement adaptive circuits, learning systems, and reconfigurable systems. Despite these applications, their proliferation has been limited by complex programming procedures, which typically require high-precision test equipment and intimate knowledge of the programmer circuit to perform. This work strives to alleviate this limitation by presenting an improved method for fast and accurate programming of floating-gate transistors. This novel programming circuit uses a digital-to-analog converter and an array of sample-and-hold circuits to facilitate fast parallel programming of floating-gate memory arrays and eliminate the need for high accuracy voltage sources. Additionally, this circuit employs a serial peripheral interface which digitizes control of the programmer, simplifying the programming procedure and enabling the implementation of software applications that obscure programming complexity from the end user. The efficient and simple parallel programming system was fabricated in a 0.5 µm standard CMOS process and will be used to demonstrate the effectiveness of this new method.
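
    A hypothetical host-side view of the flow described might look like the sketch below: an individual drain voltage is staged through the DAC into each column's sample-and-hold stage, then a single shared pulse injects charge onto every cell in parallel. The SPI helper functions and the proportional update rule are illustrative placeholders, not the thesis's actual command set or control law.

    #include <cstdio>
    #include <vector>

    // --- placeholders for the SPI-controlled programmer hardware ---------------
    void spi_write_dac(double volts)       { (void)volts; }   // load the drain DAC
    void spi_sample_hold(int column)       { (void)column; }  // latch DAC value into column S/H
    void spi_injection_pulse(double secs)  { (void)secs; }    // shared injection pulse
    double read_cell_current(int column)   { (void)column; return 1e-9; }  // stub measurement [A]

    int main() {
        const std::vector<double> target_na = {10.0, 25.0, 40.0, 5.0};  // per-cell targets [nA]
        const int columns = static_cast<int>(target_na.size());

        for (int pass = 0; pass < 8; ++pass) {
            // 1. Stage an individual drain voltage into each column's sample-and-hold.
            for (int c = 0; c < columns; ++c) {
                double error_na = target_na[c] - read_cell_current(c) * 1e9;
                double v_drain  = 5.0 + 0.01 * error_na;   // illustrative proportional update
                spi_write_dac(v_drain);
                spi_sample_hold(c);
            }
            // 2. A single shared pulse injects charge onto every cell in parallel.
            spi_injection_pulse(10e-6);
        }
        std::printf("parallel programming pass complete\n");
    }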

    Increasing Flash Memory Lifetime by Dynamic Voltage Allocation for Constant Mutual Information

    Full text link
    The read channel in Flash memory systems degrades over time because the Fowler-Nordheim tunneling used to apply charge to the floating gate gradually degrades the tunnel oxide and, with it, the integrity of the cell. While degradation is commonly measured in the number of program/erase cycles experienced by a cell, the degradation is proportional to the number of electrons forced into the floating gate and later released by the erasing process. By managing the amount of charge written to the floating gate to maintain a constant read-channel mutual information, Flash lifetime can be extended. This paper proposes an overall system approach based on information theory to extend the lifetime of a Flash memory device. Using the instantaneous storage capacity of a noisy Flash memory channel, our approach dynamically allocates the read voltage of each Flash cell as it gradually wears out over time. A practical estimation of the instantaneous capacity is also proposed based on soft information via multiple reads of the memory cells.
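
    The core idea can be illustrated numerically (this is a sketch of the concept, not the paper's algorithm): model the read channel of a worn cell as a binary-input Gaussian channel, and choose the smallest write-voltage swing whose mutual information still meets a target rate. The noise-versus-wear schedule and the 0.95 bits-per-cell target below are assumptions for illustration.

    #include <cmath>
    #include <cstdio>

    // Mutual information (bits per cell) of a binary-input Gaussian channel with
    // amplitude a and noise standard deviation sigma, by numerical integration:
    // I(X;Y) = h(Y) - h(Y|X), with equiprobable inputs +/- a.
    double mi_biawgn(double a, double sigma) {
        const double pi = 3.141592653589793;
        const int steps = 4000;
        const double lo = -a - 8 * sigma, hi = a + 8 * sigma;
        const double dy = (hi - lo) / steps;
        double h_y = 0.0;                            // differential entropy of the output
        for (int i = 0; i < steps; ++i) {
            double y  = lo + (i + 0.5) * dy;
            double p0 = std::exp(-(y - a) * (y - a) / (2 * sigma * sigma));
            double p1 = std::exp(-(y + a) * (y + a) / (2 * sigma * sigma));
            double p  = 0.5 * (p0 + p1) / (sigma * std::sqrt(2 * pi));
            if (p > 0) h_y -= p * std::log2(p) * dy;
        }
        double h_y_given_x = 0.5 * std::log2(2 * pi * std::exp(1.0) * sigma * sigma);
        return h_y - h_y_given_x;
    }

    int main() {
        const double target_bits = 0.95;             // required information per cell (assumed)
        for (int kcycles = 0; kcycles <= 10; kcycles += 2) {
            double sigma = 0.10 + 0.02 * kcycles;    // read noise grows with wear (assumed)
            // Bisection for the smallest voltage swing that still meets the target.
            double lo = 0.01, hi = 2.0;
            for (int it = 0; it < 40; ++it) {
                double mid = 0.5 * (lo + hi);
                if (mi_biawgn(mid, sigma) >= target_bits) hi = mid; else lo = mid;
            }
            std::printf("%2dk P/E cycles: sigma = %.2f V, required swing = %.3f V\n",
                        kcycles, sigma, hi);
        }
    }

    Early in life the required swing is small, so fewer electrons are cycled through the oxide per write; as the noise grows, the allocator spends more voltage to hold the rate constant, which is the lifetime-extension mechanism the abstract describes.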

    Developing large-scale field-programmable analog arrays for rapid prototyping

    Get PDF
    Field-programmable analog arrays (FPAAs) provide a method for rapidly prototyping analog systems. While currently available FPAAs vary in architecture and interconnect design, they are often limited in size and flexibility. For FPAAs to be as useful and marketable as modern digital reconfigurable devices, new technologies must be explored to provide area-efficient, accurately programmable analog circuitry that can be easily integrated into a larger digital/mixed-signal system. By leveraging recent advances in floating-gate transistors, a new generation of FPAAs is achievable that will dramatically advance the current state of the art in terms of size, functionality, and flexibility.

    Large scale reconfigurable analog system design enabled through floating-gate transistors

    Get PDF
    This work is concerned with the implementation and implications of non-volatile charge storage in VLSI system design. To that end, the floating-gate pFET (fg-pFET) is considered in the context of large-scale arrays. The programming of the element in an efficient and predictable way is essential to the implementation of these systems, and is thus explored. The overhead of the control circuitry for the fg-pFET, a key scalability issue, is examined. A light-weight, trend-accurate model is absolutely necessary for VLSI system design and simulation, and is also provided. Finally, several reconfigurable and reprogrammable systems that were built are discussed. (Ph.D. dissertation. Committee chair: Paul E. Hasler; committee members: David V. Anderson, Farrokh Ayazi, F. Levent Degertekin, William D. Hunt.)
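
    One way a trend-level model of the programming behavior can be structured is sketched below. It uses a commonly cited empirical form in which hot-electron injection current is a fraction of the channel current that grows exponentially with the source-to-drain voltage; the coefficients are invented for illustration and are not the dissertation's equations or fitted values.

    #include <cmath>
    #include <cstdio>

    struct FgPfetProgramModel {
        double alpha = 3e-12;    // injection efficiency prefactor (assumed)
        double v_inj = 0.25;     // injection voltage scale [V] (assumed)
        double c_fg  = 1e-13;    // total floating-gate capacitance [F] (assumed)

        // Magnitude of charge deposited on the floating gate by one pulse.
        double charge_per_pulse(double i_channel, double v_sd, double t_pulse) const {
            double i_inj = alpha * i_channel * std::exp(v_sd / v_inj);
            return i_inj * t_pulse;
        }
    };

    int main() {
        FgPfetProgramModel model;
        double q_fg = 0.0;       // accumulated injected charge (magnitude)
        for (int pulse = 1; pulse <= 5; ++pulse) {
            q_fg += model.charge_per_pulse(/*i_channel=*/1e-7, /*v_sd=*/5.5, /*t_pulse=*/1e-6);
            std::printf("after pulse %d: |Q_fg| = %.2e C, floating-gate shift = %.1f mV\n",
                        pulse, q_fg, 1e3 * q_fg / model.c_fg);
        }
    }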

    Array-based architecture for FET-based, nanoscale electronics

    Get PDF
    Advances in our basic scientific understanding at the molecular and atomic level place us on the verge of engineering designer structures with key features at the single nanometer scale. This offers us the opportunity to design computing systems at what may be the ultimate limits on device size. At this scale, we are faced with new challenges and a new cost structure which motivates different computing architectures than we found efficient and appropriate in conventional very large scale integration (VLSI). We sketch a basic architecture for nanoscale electronics based on carbon nanotubes, silicon nanowires, and nanoscale FETs. This architecture can provide universal logic functionality with all logic and signal restoration operating at the nanoscale. The key properties of this architecture are its minimalism, defect tolerance, and compatibility with emerging bottom-up nanoscale fabrication techniques. The architecture further supports micro-to-nanoscale interfacing for communication with conventional integrated circuits and bootstrap loading.