339 research outputs found

    Monte Carlo investigation of optimal device architectures for SiGe FETs

    Strained silicon channel FETs grown on virtual SiGe substrates show clear potential for RF applications, in a material system compatible with silicon VLSI. However, the optimisation of practical RF devices requires some care. Designs with 0.1-0.12 μm gate lengths are investigated using Monte Carlo techniques. Although structures based on III-V design experience show fT values of up to 94 GHz, more realistic designs are shown to be limited by parallel conduction and ill-constrained effective channel lengths. Aggressively scaled SiGe devices, following state-of-the-art CMOS technologies, show fT values of up to 80 GHz.
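    For orientation on the quoted figures, the cut-off frequency follows from the small-signal quantities as fT ≈ gm/(2π(Cgs + Cgd)). A minimal sketch of the arithmetic, with illustrative per-width values that are assumptions rather than numbers from the paper:

```python
import math

# Illustrative small-signal values (assumed, not taken from the paper):
gm = 1.2e-3    # transconductance per unit width, S/um
cgg = 2.0e-15  # Cgs + Cgd per unit width, F/um

# Unity-current-gain (cut-off) frequency: fT = gm / (2*pi*(Cgs + Cgd));
# the per-width normalisation cancels in the ratio.
fT = gm / (2 * math.pi * cgg)
print(f"fT ~ {fT / 1e9:.0f} GHz")  # ~95 GHz for these numbers
```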

    Can deep-sub-micron device noise be used as the basis for probabilistic neural computation?

    This thesis explores the potential of probabilistic neural architectures for computation with future nanoscale Metal-Oxide-Semiconductor Field Effect Transistors (MOSFETs). In particular, the performance of a Continuous Restricted Boltzmann Machine (CRBM) implemented with generated noise of Random Telegraph Signal (RTS) and 1/f form has been studied with reference to the 'typical' Gaussian implementation. In this study, a time-domain RTS-based noise analysis capability has been developed for future nanoscale MOSFETs, to represent the effect of nanoscale MOSFET noise on circuit implementation, in particular the synaptic analogue multiplier, which is subsequently used to implement the stochastic behaviour of the CRBM. The results of this thesis indicate little degradation in performance from that of the typical Gaussian CRBM. Through simulation experiments, the CRBM with nanoscale MOSFET noise shows the ability to reconstruct training data, although it takes longer to converge to equilibrium. The results in this thesis do not prove that nanoscale MOSFET noise can be exploited in all contexts and with all data for probabilistic computation. However, they indicate, for the first time, that nanoscale MOSFET noise has the potential to be used in probabilistic neural computation hardware. This thesis thus introduces a methodology for a form of technology-downstreaming and highlights the potential of probabilistic architectures for computation with future nanoscale MOSFETs.
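    As a minimal sketch of the noise forms involved (an illustrative generator, not the thesis's MOSFET-calibrated one): a random telegraph signal switches between two levels with exponentially distributed dwell times, and superposing several RTS traces with log-spaced time constants approximates 1/f noise.

```python
import numpy as np

def rts_trace(n_samples, tau_up, tau_down, amplitude, rng):
    """Two-level random telegraph signal; dwell times are exponential."""
    x = np.empty(n_samples)
    state, i = 0, 0
    while i < n_samples:
        dwell = max(1, int(rng.exponential(tau_up if state else tau_down)))
        x[i:i + dwell] = amplitude * state
        state = 1 - state
        i += dwell
    return x

rng = np.random.default_rng(0)
# Crude 1/f approximation: superpose RTS traces with log-spaced taus.
trace = sum(rts_trace(10_000, tau, tau, 1.0, rng)
            for tau in (10, 100, 1000))
```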

    An Energy-Efficient Reconfigurable Mobile Memory Interface for Computing Systems

    The need for higher power efficiency and bandwidth in transceiver design has grown significantly as mobile devices, such as smartphones, laptops, tablets, and ultra-portable personal digital assistants, continue to be built from heterogeneous intellectual properties, including central processing units (CPUs), graphics processing units (GPUs), digital signal processors, dynamic random-access memories (DRAMs), sensors, and graphics/image processing units, and to gain enhanced graphics computing and video processing capabilities. However, the current mobile interface technologies that support CPU-to-memory communication (e.g. baseband-only signaling) have critical limitations, in particular super-linear energy consumption, limited bandwidth, and non-reconfigurable data access. As a consequence, there is a critical need to improve both energy efficiency and bandwidth for future mobile devices.

    The primary goal of this study is to design an energy-efficient reconfigurable mobile memory interface for mobile computing systems that dramatically improves circuit- and system-level bandwidth and power efficiency. The proposed interface, which combines advanced baseband (BB) signaling with RF-band signaling, is capable of simultaneous bidirectional communication and reconfigurable data access. It also increases power efficiency and bandwidth between mobile CPUs and memory subsystems on a single-ended shared transmission line. Moreover, because multiple data streams share a single-ended transmission line, the number of transmission lines between the mobile CPU and the memories is considerably reduced, enabling significant technological innovations (e.g. more compact devices and lower-cost packaging for the mobile communication interface) and establishing the principles and feasibility of these technologies for future mobile system applications. The operation and performance of the proposed transceiver are analyzed and its circuit implementation is discussed in detail. A chip prototype of the transceiver was implemented in a 65nm CMOS process technology. In measurements, the transceiver exhibits higher aggregate data throughput and better energy efficiency than prior work.
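    A minimal behavioural sketch of simultaneous bidirectional signaling on a shared line (an idealised model for illustration, not the thesis's BB/RF-band circuit): each end sees the superposition of both drivers and recovers the far-end symbol by subtracting a replica of its own transmitted level.

```python
# Idealised simultaneous bidirectional link: the shared line carries the
# superposition of both ends' drive levels; each receiver cancels its own
# contribution (echo cancellation) to recover the far-end symbol.
import random

def link_step(tx_a, tx_b):
    line = 0.5 * (tx_a + tx_b)   # ideal superposition on the shared line
    rx_at_a = 2 * line - tx_a    # A subtracts its own replica -> tx_b
    rx_at_b = 2 * line - tx_b    # B subtracts its own replica -> tx_a
    return rx_at_a, rx_at_b

bits_a = [random.choice((-1, 1)) for _ in range(8)]
bits_b = [random.choice((-1, 1)) for _ in range(8)]
recovered = [link_step(a, b) for a, b in zip(bits_a, bits_b)]
assert all(ra == b and rb == a
           for (ra, rb), a, b in zip(recovered, bits_a, bits_b))
```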

    Power-efficient current-mode analog circuits for highly integrated ultra low power wireless transceivers

    In this thesis, current-mode low-voltage and low-power techniques are applied to implement novel analog circuits for zero-IF receiver back-end design, focusing on the amplification, filtering and detection stages. The structure of the thesis follows a bottom-up scheme: basic device-level techniques for low-voltage low-power operation are proposed first, followed by novel circuit topologies at cell level, and finally new designs at system level. At device level, the main contribution of this work is the use of Floating-Gate (FG) and Quasi-Floating-Gate (QFG) transistors to reduce power consumption. New current-mode basic topologies are proposed at cell level: current mirrors and current conveyors. Different topologies for low-power or high-performance operation are shown, and these circuits form the basis for the system-level designs. At system level, novel current-mode amplification, filtering and detection stages using the aforementioned basic cells are proposed. The presented current-mode filter uses companding techniques to achieve high dynamic range and very low power consumption over a very wide tuning range. The amplification stage avoids the fixed gain-bandwidth product, achieving a constant bandwidth across different gain configurations by means of a non-linear active feedback network, which also makes it possible to tune the bandwidth. Finally, the proposed current zero-crossing detector is a very power-efficient mixed-signal detector for phase modulations. All these designs contribute to very low-power compact zero-IF wireless receivers. The proposed circuits have been fabricated in a 0.5μm double-poly n-well CMOS technology, and the corresponding measurement results are provided and analyzed to validate their operation. In addition, theoretical analysis has been carried out to fully explore the potential of the resulting circuits and systems in low-power low-voltage applications.
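    As an illustration of why zero crossings suffice for phase detection (a numerical sketch, not the thesis's current-mode circuit): a phase step in a carrier shifts the times at which the waveform crosses zero, so comparing crossing times against a reference grid recovers the modulation.

```python
import numpy as np

fs, fc = 1e6, 50e3                             # sample rate, carrier (assumed)
t = np.arange(2000) / fs
phase = np.where(t < t[1000], 0.0, np.pi / 2)  # quarter-cycle phase step
x = np.sin(2 * np.pi * fc * t + phase)

# Zero-crossing times, refined by linear interpolation between samples.
idx = np.flatnonzero(np.signbit(x[:-1]) != np.signbit(x[1:]))
tc = t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

# Nominal crossing spacing is half a carrier period (10 us here); the
# phase step perturbs one interval by a quarter period, which a timing
# comparison against a reference clock detects as the modulation.
print(np.diff(tc).min(), np.diff(tc).max())
```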

    Gallium arsenide bit-serial integrated circuits


    Towards Reliability- & Variability-aware Design-Technology Co-optimization in Advanced Nodes: Defect Characterization, Industry-friendly Modelling and ML-assisted Prediction

    Reliability- & variability-aware design-technology co-optimization (RV-DTCO) becomes indispensable at advanced nodes. However, four key issues hinder its practical adoption: the lack of characterization techniques that offer both accuracy and efficiency, the lack of a defect model with long-term prediction capability, the lack of a compact model compatible with most EDA platforms, and the low efficiency of circuit-level prediction needed to support frequent iterations during co-optimization. Using a 7nm technology as the demonstration vehicle, this work tackles these issues by developing an efficient characterization method for separating defects, introducing a comprehensive, test-data-verified, defect-centric physics-based model and an industry-friendly OMI-based compact model, and proposing a machine-learning-assisted approach to accelerate circuit-level prediction. With these achievements, an RV-DTCO flow is established and demonstrated on a 3nm GAA technology, bridging the material level to the circuit level. The work paves the way for broader adoption of RV-DTCO in both circuit design & process development for ultimate nodes. Index Terms: design-technology co-optimization (DTCO), FinFET, reliability, variability, discharging-based multi-pulse technique (DMP), OMI, ST-GN
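    A minimal sketch of the ML-assisted prediction idea (illustrative only; the features, target, and model choice here are assumptions, not the paper's): fit a regressor on a small set of reference circuit evaluations that map aged device parameters to a circuit metric, then use it as a fast surrogate inside co-optimization loops.

```python
# Surrogate model for circuit-level prediction (illustrative sketch).
# Features: per-device shifts such as delta-Vth and delta-mobility after
# stress; target: a circuit metric such as ring-oscillator delay shift.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(500, 4))  # [dVth_n, dVth_p, dMu_n, dMu_p]
# Stand-in for reference SPICE evaluations of the circuit metric:
y = 0.6 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 2] * X[:, 3]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
fast_prediction = model.predict(X[:5])  # surrogate replaces re-simulation
```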

    Hardware for Memristive Neuromorphic Systems with Reliable Programming and Online Learning

    Alternative computing technologies are highly sought after due to limits on further transistor fabrication improvements. Fabricated memristive technology provides a non-volatile analog memory for neuromorphic computing. In an integrated CMOS process, the synapse circuits designed for a spiking neuromorphic system can use memristors to regulate accumulation in the neuron circuits. Testing the fabricated hafnium-oxide memristive devices and developing a model that captures the key device characteristics led to specific design choices in implementing the analog memory core of the synapse circuit. The circuits I designed for neuromorphic computing in this process take advantage of the unique capabilities of the memristive device to store a programmable analog memory reliably and efficiently. I also designed the required peripheral circuitry, including the circuits for programming the memristor and for online learning.
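    A minimal behavioural memristor model of the kind such design work starts from (a generic linear-ion-drift sketch with a window function; the parameter values are assumptions, not the thesis's HfO2-fitted model):

```python
import numpy as np

def memristor_step(w, v, dt, D=10e-9, mu=1e-14, r_on=1e3, r_off=100e3):
    """One Euler step of a linear ion-drift memristor model.

    w: doped-region width (m), v: applied voltage (V), dt: timestep (s).
    The Joglekar p=1 window 4*(w/D)*(1-w/D) suppresses drift at bounds.
    """
    r = r_on * (w / D) + r_off * (1 - w / D)  # state-dependent resistance
    i = v / r
    window = 4 * (w / D) * (1 - w / D)
    w = np.clip(w + dt * mu * r_on / D * i * window, 0.0, D)
    return w, i

# Apply a train of write pulses and watch the state move (analog memory).
w = 5e-9
for _ in range(100):
    w, i = memristor_step(w, v=1.0, dt=1e-3)
```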

    Simulation and Modeling of Silicon Based Emerging Nanodevices: From Device to Circuit Level

    Nanostructure-based devices are very promising candidates for emerging nanotechnologies, with advantages in terms of power consumption and functional density. The Nanowire Field Effect Transistor (NWFET) and the Single Electron Transistor (SET) are the focus of this work. These devices can address the serious challenges the MOSFET faces at its scaling limits. The NWFET provides better gate control and suppresses short-channel effects. The SET operates in the quantum confinement regime, where the basic operation of the MOSFET becomes a challenge, and it works better as dimensions shrink, encouraging further scaling. Because of these characteristics, such nanodevices have attracted great interest from the viewpoint of theoretical as well as applied electronics. The studies focus on understanding the basic transport characteristics of the devices. What is needed is a model that is efficient, usable at circuit level, and that also provides physical insight into the device. The first part of this work focuses on developing a model for the SET and implementing it at circuit level. The transport properties of the SET are studied through quantum simulations. The behavioral characterization of the device is performed and the effect of different device parameters on transport is studied. Furthermore, the impact of the gate voltage, which modulates the current by shifting the energy levels of the device, is analyzed. Building on the observed transport through the SET, a model is developed that efficiently evaluates the I-V characteristics of the device. The quantum simulations are used as reference, and a large reduction in computational overhead is achieved while maintaining accuracy. The model is then implemented in a hardware description language, and its usability at circuit level is shown by designing logic circuits such as AND, OR and a full adder (FA). In the second part, the performance of nanoarrays based on the NWFET is characterized. A device-level model is developed to evaluate the gate capacitance and drain current of the NWFET. Starting from the output of the model, an in-house simulator is modified and used to evaluate the switching activity of the devices in the nanoarray. A nanoarray implementation of bio-sequence alignment based on a systolic array is realized and its essential performance is evaluated. The power consumption, area and performance of the nanoarray implementation are compared with a CMOS implementation. A wide solution space can be explored to find the optimal trade-off between power and performance while considering the technological limitations of a realistic implementation.
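    A minimal sketch of the table-based compact-model idea (illustrative; the grid, data, and interpolation choice are assumptions, not the thesis's model): tabulate I(VG, VD) once from reference quantum simulations, then answer circuit-level queries by interpolation at negligible cost.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Reference grid that would come from quantum transport simulations;
# here a stand-in surface so the sketch runs end to end.
vg = np.linspace(0.0, 1.0, 21)
vd = np.linspace(0.0, 1.0, 21)
VG, VD = np.meshgrid(vg, vd, indexing="ij")
I_ref = 1e-9 * np.sin(8 * VG) ** 2 * np.tanh(5 * VD)  # oscillatory in VG

# The "compact model": cheap interpolation in place of re-simulation.
i_model = RegularGridInterpolator((vg, vd), I_ref)
print(i_model([[0.45, 0.30]]))  # one circuit-level query
```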

    PHY Link Design and Optimization For High-Speed Low-Power Communication Systems

    The ever-growing demand for high-bandwidth data transfer has been driving research efforts in the field of high-performance communication systems. Studies on single-chip performance, e.g. faster multi-core processors and higher system memory capacity, have been explored. To further enhance system performance, research has focused on improving the data-transfer bandwidth of chip-to-chip communication over high-speed serial links. Many solutions have been proposed to overcome the bottleneck caused by non-idealities such as the bandwidth-limited electrical channel that connects two link devices and the various undesired noise sources in the communication system. Nevertheless, even with these solutions, data transfer runs into timing-margin limitations for high-speed interfaces running at multi-gigabit-per-second data rates on low-cost Printed Circuit Board (PCB) material under a constrained power budget. The challenge in designing a physical layer (PHY) link for high-speed communication systems is therefore to make it power-efficient, reliable and cost-effective. In this context, this dissertation focuses on the architectural design and the system-level and circuit-level verification of a PHY link, as well as on system performance optimization with respect to power, reliability and adaptability in high-speed communication systems. The PHY mainly comprises clock and data recovery (CDR), equalizers (EQs) and high-speed I/O drivers. The symmetrical structure of the PHY link is usually duplicated in both link devices for bidirectional data transmission. By introducing training mechanisms into high-speed communication systems, the timing in one link device is adaptively aligned to the timing condition specified in the other link device, despite differing skews or induced jitter resulting from process, voltage and temperature (PVT) variations in the individual link. With reliable timing relationships among the interface signals, the total system bandwidth is dramatically improved. Moreover, interface training offers high flexibility for reuse without requiring further investment in highly demanding, costly components. In the training mode, a CDR module is essential for reconstructing the transmitted bitstream to achieve the best data eye and to detect the edges of the data stream in asynchronous or source-synchronous systems. Generally, the CDR works as a feedback control system that aligns its output clock to the center of the received data. In systems that contain multiple data links, the overall CDR power consumption increases linearly with the number of links, since one CDR is required per link. Therefore, a power-efficient CDR plays a significant role in such systems with parallel links. Furthermore, a high-performance CDR must keep jitter generation low in spite of high input jitter. To minimize the trade-off between power consumption and CDR jitter, a novel CDR architecture is proposed that combines a proportional-integral (PI) controller with a three-times sampling scheme, as sketched below.
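    A minimal behavioural sketch of a bang-bang CDR loop with a PI controller (a generic model for illustration; the dissertation's three-times sampling scheme and loop gains are not reproduced here):

```python
import numpy as np

# Bang-bang CDR: an early/late decision (+1/-1) from the phase detector
# drives a proportional-integral controller that steers the sampling phase.
rng = np.random.default_rng(0)
kp, ki = 0.02, 0.001         # illustrative loop gains
phase, integ = 0.3, 0.0      # recovered phase (UI), integral state
true_phase = 0.0             # ideal sampling point at the data-eye center

history = []
for _ in range(2000):
    jitter = 0.02 * rng.standard_normal()       # input jitter (UI rms)
    err = np.sign(true_phase + jitter - phase)  # early/late: +/-1 only
    integ += ki * err                           # integral path (freq offset)
    phase += kp * err + integ                   # proportional + integral
    history.append(phase)

# The loop converges toward the eye center; residual wander is the dither
# and jitter that the PI gains must trade off against tracking speed.
print(np.mean(history[-500:]))  # ~0.0
```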
    Meanwhile, signal integrity (SI) becomes critical as the data rate exceeds several gigabits per second. Data distorted by system non-idealities can degrade signal quality severely and cause intolerable transmission errors in worst-case scenarios, thus reducing the system's effective bandwidth. Hence, additional training steps, such as transmitter (Tx) and receiver (Rx) EQ training for SI purposes, are inserted into the interface training. In addition, a simplified system architecture with an asymmetric placement of the adaptive Rx and Tx EQs in a single link device is proposed and analyzed using different coefficient adaptation algorithms. This architecture makes it possible to eliminate a large number of EQs through the training, especially in the case of parallel links, saving considerable power and chip area. Finally, high-speed I/O driver design under PVT variations is discussed. Critical issues such as overshoot and undershoot interfering with the data stem primarily from impedance mismatch between the I/O driver and its transmission channel. By applying a PVT compensation technique, I/O driver impedances can be effectively calibrated close to the target value. Different digital impedance calibration algorithms robust to PVT variations are implemented and compared with respect to calibration speed and power.
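    As one concrete example of the coefficient adaptation mentioned above (a standard sign-sign LMS update for a feed-forward equalizer; an illustrative sketch, not necessarily the algorithm selected in the dissertation):

```python
import numpy as np

# Sign-sign LMS adaptation of a 5-tap FFE against a lossy-channel model.
rng = np.random.default_rng(0)
channel = np.array([0.1, 0.8, 0.3, 0.1])     # illustrative pulse response
bits = rng.choice([-1.0, 1.0], size=20_000)
rx = np.convolve(bits, channel)[:bits.size]  # ISI-distorted received signal

ntaps, mu, delay = 5, 1e-3, 1
w = np.zeros(ntaps); w[0] = 1.0              # start from a pass-through EQ
buf = np.zeros(ntaps)
for n in range(delay, bits.size):
    buf = np.roll(buf, 1); buf[0] = rx[n]    # shift register of rx samples
    y = w @ buf                              # equalizer output
    e = bits[n - delay] - y                  # error vs. known training data
    w += mu * np.sign(e) * np.sign(buf)      # sign-sign LMS tap update
```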