    Low-Voltage Analog Circuit Design Using the Adaptively Biased Body-Driven Circuit Technique

    The scaling of MOSFET dimensions and power supply voltages, together with rising system- and circuit-level performance requirements, is the most important factor driving the development of new technologies and design techniques for analog and mixed-signal integrated circuits. Though scaling has been a fact of life for analog circuit designers for many years, the approach of 1-V and sub-1-V power supplies, combined with applications that have increasingly divergent technology requirements, means that the analog and mixed-signal IC designs of the future will probably look quite different from those of the past. Foremost among the challenges that analog designers will face in highly scaled technologies are low power supply voltages, which limit dynamic range and even circuit functionality, and ultra-thin gate oxides, which give rise to significant levels of gate leakage current. The goal of this research is to develop novel analog design techniques commensurate with the challenges that designers will face in highly scaled CMOS technologies. To that end, a new body-driven design technique called adaptive gate biasing has been developed. Adaptive gate biasing guarantees that the MOSFETs in a body-driven simple current mirror, cascode current mirror, or regulated cascode current source are biased in saturation, independent of operating region, temperature, or supply voltage, and is an enabling technology for high-performance, low-voltage analog circuits. To prove the usefulness of the technique, a body-driven operational amplifier that heavily leverages adaptive gate biasing has been developed. Fabricated on a 3.3-V/0.35-μm partially depleted silicon-on-insulator (PD-SOI) CMOS process with nMOS and pMOS threshold voltages of 0.65 V and 0.85 V, respectively, the body-driven amplifier displayed an open-loop gain of 88 dB, a bandwidth of 9 MHz, and a PSRR greater than 50 dB from a 1-V power supply.
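The saturation condition that adaptive gate biasing enforces can be illustrated with a small numerical check. This is a generic textbook sketch, not code from the research; the bias voltages below are invented for illustration:

```python
def in_saturation(v_gs, v_ds, v_t):
    """An nMOS device is saturated when V_DS >= V_GS - V_T (the overdrive)."""
    v_ov = v_gs - v_t
    return v_ov > 0 and v_ds >= v_ov

# With a 1-V supply and a 0.65-V threshold, gate drive leaves only ~0.35 V
# of headroom -- the squeeze that motivates body-driven, adaptively biased designs.
print(in_saturation(v_gs=0.9, v_ds=0.4, v_t=0.65))  # True  (V_ov = 0.25 V)
print(in_saturation(v_gs=0.9, v_ds=0.2, v_t=0.65))  # False (triode region)
```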

    Low Voltage Regulator Modules and Single Stage Front-end Converters

    Evolution in microprocessor technology poses new challenges for supplying power to these devices. To meet demands for faster and more efficient data processing, modern microprocessors are being designed for lower voltage operation. More devices will be packed on a single processor chip, and processors will operate at higher frequencies, exceeding 1 GHz. New high-performance microprocessors may require 40 to 80 watts of power for the CPU alone, and load current must be supplied at slew rates of up to 30 A/µs while keeping the output voltage within tight regulation and response-time tolerances. Special power supplies, known as Voltage Regulator Modules (VRMs), are therefore needed to provide lower voltages at higher currents with fast response. In part one of this dissertation (Chapters 2, 3, and 4), several low-voltage, high-current VRM technologies are proposed for future generations of microprocessors and ICs. VRMs built on these new technologies have advantages over conventional ones in efficiency, transient response, and cost. In most cases, VRMs draw current from a DC bus, for which front-end converters serve as the DC source. As the use of AC/DC front-end converters continues to increase, more distorted mains current is drawn from the line, resulting in lower power factor and higher total harmonic distortion. Among active power factor correction (PFC) techniques, the single-stage technique receives particular attention because of its low-cost implementation. Moreover, with continuing demands for ever higher power density, switch-mode power supplies operating at high frequency are required, because at high switching frequencies the size and weight of circuit components can be markedly reduced. To boost the switching frequency, soft-switching techniques were introduced to alleviate the switching losses. Part two of the dissertation (Chapters 5 and 6) presents several topologies for this front-end application.
The design considerations, simulation results, and experimental verification are discussed.
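The 30 A/µs slew-rate requirement translates directly into an output-capacitance budget. The first-cut estimate below is a generic sketch, not a design from the dissertation; the load-step size and allowed droop are assumed values:

```python
def min_output_cap(delta_i, slew_rate, droop_v):
    """First-cut VRM sizing: while the load current ramps up at `slew_rate`
    (A/s) and the converter has not yet responded, the output capacitor
    supplies the charge deficit Q = delta_i**2 / (2 * slew_rate)."""
    charge = delta_i ** 2 / (2 * slew_rate)
    return charge / droop_v

# Hypothetical step: 50 A load step at the quoted 30 A/us, 50 mV allowed droop
c_min = min_output_cap(delta_i=50.0, slew_rate=30e6, droop_v=0.05)
print(f"minimum output capacitance: {c_min * 1e6:.0f} uF")
```

Such estimates explain why fast transient response, rather than steady-state regulation, dominates VRM capacitor selection.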

    Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique

    Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short-pulse radars: no need for high-power RF circuitry or high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected through range sidelobes may mask the returns from nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out using a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator, and the results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. Traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; these radars therefore tend to be large and heavy and to suffer reliability issues on space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact, high-performance airborne and spaceborne remote sensing radars. The primary objective of this effort is to develop and test a new pulse compression technique that achieves ultra-low range sidelobes, so that it can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements.
By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression technique could have a significant impact on future radar development. The novel feature of this innovation is the non-linear FM (NLFM) waveform design, since traditional linear FM has a limit (−20 log BT − 3 dB) for achieving ultra-low range sidelobes in pulse compression. For this study, combinations of 20- or 40-microsecond chirp pulse widths and 2- or 4-MHz chirp bandwidths were used; these are typical operating parameters for airborne or spaceborne weather radars. The NLFM waveform design was then implemented on an FPGA board to generate a real chirp signal, which was sent to the radar transceiver simulator. The final results show a significant improvement in sidelobe performance compared to that obtained with a traditional linear FM chirp.
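The sidelobe floor that NLFM designs aim to beat can be reproduced with a minimal matched-filter experiment. The sketch below uses a plain linear FM chirp with the 20-µs/2-MHz parameters quoted in the text; the sampling rate and mainlobe-exclusion width are my assumptions, and the result lands near the classic ~−13 dB linear-FM sidelobe level:

```python
import numpy as np

def lfm_chirp(T, B, fs):
    """Baseband linear-FM chirp of duration T seconds sweeping -B/2..+B/2 Hz."""
    t = np.arange(int(T * fs)) / fs
    k = B / T  # chirp rate in Hz/s
    return np.exp(1j * np.pi * k * (t - T / 2) ** 2)

def peak_sidelobe_db(s, mainlobe_halfwidth):
    """Matched-filter the pulse against itself and report the peak sidelobe
    level relative to the compressed mainlobe, in dB."""
    y = np.abs(np.correlate(s, s, mode="full"))
    peak = y.argmax()
    mask = np.ones(y.size, dtype=bool)
    mask[peak - mainlobe_halfwidth : peak + mainlobe_halfwidth + 1] = False
    return 20 * np.log10(y[mask].max() / y[peak])

fs = 10e6                      # assumed sampling rate (5x the chirp bandwidth)
s = lfm_chirp(T=20e-6, B=2e6, fs=fs)
psl = peak_sidelobe_db(s, mainlobe_halfwidth=int(fs / 2e6))
print(f"linear-FM peak sidelobe: {psl:.1f} dB")  # near the classic ~-13 dB floor
```

Swapping the quadratic phase for an NLFM phase law (or applying amplitude weighting) in the same harness shows how far the sidelobes can be pushed down.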

    Potentials and challenges of the fuel cell technology for ship applications. A comprehensive techno-economic and environmental assessment of maritime power system configurations

    The decarbonization of global ship traffic is one of the industry's greatest challenges for the coming decades and will likely only be achieved with new, energy-efficient power technologies. To evaluate the performance of such technologies, a system modeling and optimization approach is introduced and tested, covering three elementary topics: shipboard solid oxide fuel cells (SOFCs), the benefits of decentralizing ship power systems, and the assessment of potential future power technologies and synthetic fuels. In the following, the analyses' motivations, scopes, and derived conclusions are presented. SOFCs are a much-discussed technology with promising efficiency, fuel versatility, and low operating emissions. However, complex processes and high temperature levels inhibit their stand-alone dynamic operation. Their operability in a hybrid system is therefore investigated, focusing on component configurations and corrections to the evaluation approach. It is demonstrated that moderate storage support satisfies the requirements for uninterrupted ship operation. Depending on the load characteristics, energy-intensive and power-intensive storage applications with diverging challenges are identified. The analysis also emphasizes that degradation modeling must be treated with particular care, since technically optimal and cost-optimal design solutions differ meaningfully when annual expenses are assessed. Decentralizing a power system with modular components placed in accordance with the load demand reduces both grid size and transmission losses, decreasing investment and operating costs. A cruise-ship-based case study considering variable installation locations and potential component failures is used to quantify these benefits. Transmission costs in a distributed system are reduced meaningfully, with and without consideration of component failures, when compared to a central configuration.
Minor modifications also ensure that the component-redundancy requirements are met, at comparatively marginal extra expense. Numerous synthetic fuels are currently seen as candidates for future ship applications in combination with either combustion engines or fuel cells. To inform this ongoing technology discussion, performance indicators for the envisioned system configurations are assessed as a function of mission characteristics and critical price trends. Even though gaseous hydrogen is often considered unsuitable for ship applications due to its low volumetric energy density, its low operating costs account for its superior performance on short passages. For extended missions, fuel cells operating on methanol or ammonia surpass hydrogen economically.
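The mission-cost trade described above (cheap-to-use hydrogen on short passages versus denser fuels on long ones) can be sketched with a one-line operating-cost model. All efficiencies and fuel prices below are hypothetical placeholders, not figures from the thesis:

```python
def mission_fuel_cost(energy_demand_mwh, system_efficiency, fuel_price_per_mwh):
    """Operating fuel cost of one mission: purchased fuel energy is the
    propulsion demand divided by the power system's overall efficiency."""
    return energy_demand_mwh / system_efficiency * fuel_price_per_mwh

# All numbers below are made-up placeholders for a short 100-MWh passage
scenarios = {
    "hydrogen + fuel cell": mission_fuel_cost(100, 0.55, 90),
    "methanol + fuel cell": mission_fuel_cost(100, 0.50, 110),
    "ammonia + engine":     mission_fuel_cost(100, 0.45, 100),
}
for name, cost in sorted(scenarios.items(), key=lambda kv: kv[1]):
    print(f"{name}: {cost:,.0f} per mission")
```

A full techno-economic assessment would add investment, storage-volume, and maintenance terms; this sketch only shows why the efficiency/price pairing, not energy density alone, decides the ranking.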

    Voltage controlled oscillator for mm-wave radio systems

    Abstract. The advancement of silicon technology has accelerated the development of integrated millimeter-wave transceiver systems operating up to 100 GHz with sophisticated functionality at reduced consumer cost. Owing to progress in signal processing, frequency modulated continuous wave (FMCW) radar has become common in recent years. A high-performance local oscillator (LO) is required to generate the reference signals used in these millimeter-wave radar transceivers. To accomplish this, novel design techniques for fundamental voltage controlled oscillators (VCOs) are necessary to achieve low phase noise, a wide frequency tuning range, and good power efficiency. Although integrated VCOs have been studied for decades, as we move higher in the radio frequency spectrum there are new trade-offs in the performance parameters that require further characterization. The work described in this thesis aims to design a fully integrated fundamental VCO targeting 150 GHz, i.e., the D-band, in order to observe and analyze the design limitations at these high frequencies and their corresponding trade-offs during the design procedure. Two cross-coupled LC tank VCO topologies were considered: a conventional cross-coupled LC tank VCO and an inductive-divider cross-coupled LC tank VCO. The conventional LC tank VCO yields better performance in terms of phase noise and tuning range. The VCO is observed to be highly sensitive to parasitic contributions from the transistors and the layout interconnects, which limit the targeted frequency range, so the dimensions of the LC tank and the transistors are selected carefully. Moreover, VCO performance is limited by the low Q factor of the LC tank, governed by the varactor, which degrades both the phase noise performance and the tuning range. The output buffer's load capacitance and the core power consumption of the VCO are optimized.
The layout is drawn carefully, with strategies to minimize parasitic effects. Considering all the design challenges, a 126 GHz VCO with a tuning range of 3.9% is designed. It achieves a figure of merit (FOMT) of −172 dBc/Hz and a phase noise of −99.14 dBc/Hz at 10 MHz offset, with a core power consumption of 8.9 mW from a 1.2 V supply. Although it falls just short of the targeted frequency, the design is suitable for FMCW radar applications in future technologies. The design was realized in a Silicon-on-Insulator (SOI) CMOS technology.
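The reported numbers can be cross-checked with the standard VCO figure-of-merit formula. Definitions that include the tuning range vary between papers; the plain power- and frequency-normalized FOM below lands within rounding of the quoted −172 dBc/Hz:

```python
import math

def vco_fom(pn_dbc_hz, f0_hz, offset_hz, p_mw):
    """Standard VCO figure of merit:
    FOM = PN(offset) - 20*log10(f0/offset) + 10*log10(P / 1 mW)."""
    return pn_dbc_hz - 20 * math.log10(f0_hz / offset_hz) + 10 * math.log10(p_mw)

# Reported design point: -99.14 dBc/Hz at 10 MHz offset, 126 GHz, 8.9 mW
fom = vco_fom(pn_dbc_hz=-99.14, f0_hz=126e9, offset_hz=10e6, p_mw=8.9)
print(f"FOM = {fom:.1f} dBc/Hz")  # -171.7
```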

    Cross-Layer Design for Multi-Antenna Ultra-Wideband Systems

    Ultra-wideband (UWB) is an emerging technology that offers great promise for satisfying the growing demand for low-cost, high-speed digital wireless home networks. The enormous available bandwidth, the potential for high data rates, and the potential for small size and low processing power, along with low implementation cost, present a unique opportunity for UWB to become a widely adopted radio solution for future wireless home-networking technology. Nevertheless, for UWB devices to coexist with other existing wireless technologies, the transmitted power level of UWB is strictly limited by the FCC spectral mask. This limitation poses significant design challenges for any UWB system. This thesis introduces various means of coping with these challenges. Advanced technologies including multiple-input multiple-output (MIMO) coding, cooperative communications, and cross-layer design are employed to enhance the performance and coverage range of UWB systems. First, a MIMO-coding framework for multi-antenna UWB communication systems is developed. Through band hopping combined with joint coding across the spatial, temporal, and frequency domains, the proposed scheme exploits all of the spatial and frequency diversity richly inherent in UWB channels. Then, UWB performance in realistic UWB channel environments is characterized; the proposed performance analysis successfully captures the multipath-rich nature and random-clustering phenomenon of UWB channels. Next, a cross-layer channel allocation scheme for UWB multiband OFDM systems is proposed. The scheme optimally allocates subbands, transmitted power, and data rates among users by taking into consideration the performance requirements, the power limitation, and the band hopping of users with different data rates.
Also, cooperative communication among UWB devices is proposed to enhance UWB performance and coverage by exploiting the broadcasting nature of wireless channels. Furthermore, an OFDM cooperative protocol is developed and applied to enhance the performance of UWB systems. The proposed cooperative protocol not only achieves full diversity but also efficiently utilizes the available bandwidth.
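As a toy illustration of the cross-layer allocation idea, a subband assignment can be sketched as a greedy choice by channel gain. This is a hypothetical simplification: the scheme described above additionally accounts for power limits, rate requirements, and band hopping:

```python
def allocate_subbands(gains, n_subbands):
    """Greedy cross-layer allocation sketch: repeatedly give the next free
    subband to the (user, subband) pair with the best channel gain.
    gains[u][s] is user u's channel gain on subband s (illustrative values)."""
    assignment = {}
    free = set(range(n_subbands))
    while free:
        u, s = max(((u, s) for u in range(len(gains)) for s in free),
                   key=lambda p: gains[p[0]][p[1]])
        assignment[s] = u
        free.remove(s)
    return assignment

gains = [[0.9, 0.2, 0.5],   # user 0
         [0.3, 0.8, 0.6]]   # user 1
print(allocate_subbands(gains, 3))  # {0: 0, 1: 1, 2: 1}
```

Greedy assignment is not optimal in general; the thesis formulates the full problem jointly over subbands, power, and rates.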

    Energy efficient core designs for upcoming process technologies

    Energy efficiency has been a first-order constraint in the design of microprocessors for the last decade. As Moore's law sunsets, new technologies are being actively explored to extend the march of increasing computational power and efficiency. It is essential for computer architects to understand the opportunities and challenges of the upcoming process technology trends in order to design the most efficient processors. In this work, we consider three process technology trends, each expected to become viable over a different timeline, and propose core designs best suited to each. We first consider the most popular method currently available for improving energy efficiency: lowering the operating voltage. We make key observations regarding the factors that limit scaling down the operating voltage in general-purpose high-performance processors. We then propose a novel core design, ScalCore, which can work in a high-performance mode at nominal Vdd and in a very energy-efficient mode at low Vdd. The resulting core can operate at much lower voltages, providing higher parallel performance while consuming less energy. While lowering Vdd improves energy efficiency, CMOS devices are fundamentally limited in their low-voltage operation. We therefore next consider an upcoming device technology, Tunneling Field-Effect Transistors (TFETs), which are expected to supplement CMOS in the near future. TFETs can attain much higher energy efficiency than CMOS at low voltages; however, their performance saturates at high voltages, so they cannot entirely replace CMOS when high performance is needed. Ideally, we desire a core that is as energy-efficient as TFET and provides as much performance as CMOS. To reach this goal, we characterize TFET device behavior for core design and judiciously integrate TFET and CMOS units in a single core.
The resulting core, called HetCore, provides very high energy efficiency while limiting the slowdown relative to a CMOS core. Finally, we analyze Monolithic 3D (M3D) integration, a technology widely considered the only way to continue integrating more transistors on a chip. We present the first analysis of the architectural implications of using M3D for core design and show how to partition the core across the different layers. We also address one of the key challenges in realizing the technology, namely the performance degradation of the top layer: we propose critical-path-based partitioning for logic stages and asymmetric bit/port partitioning for storage stages. The result is a core that performs nearly as well as a core without any top-layer slowdown. Compared to a 2D baseline design, the M3D core not only provides much higher performance but also reduces energy consumption. In summary, this thesis addresses one of the fundamental challenges in computer architecture: overcoming the fact that CMOS is no longer scaling. As we increase the computing power on a single chip, our ability to power the entire chip keeps decreasing. This thesis proposes three solutions aimed at this problem over different timelines. Across all our solutions, we improve energy efficiency without compromising core performance; as a result, we are able to operate twice as many cores within the same power budget as regular cores, significantly alleviating the problem of dark silicon.
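The voltage-scaling trade-off that motivates a dual-mode design like ScalCore follows from the classic dynamic-power model. The capacitance, voltage, and frequency values below are illustrative assumptions, not figures from the thesis:

```python
def dynamic_power(c_farads, vdd, freq_hz, activity=1.0):
    """Classic CMOS dynamic-power model: P = a * C * Vdd^2 * f."""
    return activity * c_farads * vdd ** 2 * freq_hz

# Illustrative numbers: halving Vdd while quartering the clock frequency
nominal_mode = dynamic_power(c_farads=1e-9, vdd=1.0, freq_hz=2e9)
low_v_mode   = dynamic_power(c_farads=1e-9, vdd=0.5, freq_hz=0.5e9)
print(f"power ratio: {nominal_mode / low_v_mode:.0f}x")  # 16x
# Energy per operation scales as C * Vdd^2, so the low-Vdd mode also spends
# 4x less energy per operation -- the trade a dual-Vdd core exploits.
```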

    Challenges and Approaches in Green Data Center

    Cloud computing is a fast-evolving area of information and communication technologies (ICTs) that has created new environmental issues. Cloud computing technologies have a wide range of applications due to their scalability, dependability, and trustworthiness, as well as their ability to deliver high performance at low cost. The cloud computing revolution is altering modern networking, offering economic and technological benefits as well as potential environmental benefits. These innovations have the potential to improve energy efficiency while simultaneously reducing carbon emissions and e-waste, traits that can make cloud computing more environmentally friendly. Green cloud computing is the science and practice of properly designing, manufacturing, using, and disposing of computers, servers, and associated subsystems such as displays, printers, storage devices, and networking and communication systems while minimizing or eliminating environmental impact. The most significant reasons for a data centre review are to understand capacity, dependability, durability, algorithmic efficiency, resource allocation, virtualization, power management, and other elements. The green cloud design aims to reduce data centre power consumption; its main advantage is that it ensures real-time performance while reducing the energy consumption of internet data centers (IDCs). This paper analyzes the difficulties faced by data centres, such as capacity planning and management, uptime and performance maintenance, energy efficiency and cost cutting, and real-time monitoring and reporting, and presents solutions to the identified problems based on a DCIM (data centre infrastructure management) system. Finally, it discusses the market report's coverage of green data centres, green computing principles, and future research challenges.
This comprehensive green cloud analysis will assist green-computing research fellows in learning about green cloud concerns and understanding future research challenges in the field.
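One of the standard metrics behind such data-centre energy reviews is Power Usage Effectiveness (PUE). A minimal calculation, with made-up monthly energy figures:

```python
def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness: total facility energy over IT energy.
    1.0 is the theoretical ideal; cooling and power delivery push it higher."""
    return total_facility_kwh / it_equipment_kwh

# Made-up monthly figures for illustration
print(pue(total_facility_kwh=1_500_000, it_equipment_kwh=1_000_000))  # 1.5
```

Lowering PUE toward 1.0 is exactly the goal of the green data-centre designs the paper surveys.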

    High Frequency Signaling Analysis Of Inter-Chip Package Routing For Multi-Chip Package

    Multi-Chip Package (MCP) is becoming a customary form of integration in many high-performance, advanced electronic devices. The broad adoption of this technology is driven mainly by its advantages: lower power consumption, heterogeneous integration of multiple silicon process technologies and manufacturers, shorter time-to-market, and lower costs. However, the high-density inter-chip I/O routing within the package presents unique signaling challenges when coupled with high operating data rates, and tackling the right issues at an early design stage is essential to avoid the pitfall of redesign. Thus, with the aim of establishing design guidelines for high-performance MCP channels, this research focuses on signaling analysis of the inter-chip I/O package routing between silicon devices in an MCP. In this study, signal quality and eye-margin sensitivity were evaluated from 2.5 GHz up to 7.5 GHz. Microwave effects are found to dominate the transmission-line behavior of the channel, resulting in signal quality deterioration. Key limiting factors that degrade signal quality, such as crosstalk coupling, signal reflections, and frequency-dependent losses, were identified and categorized from 2.5 GHz to 7.5 GHz for channel lengths of 3 mm to 30 mm, for future MCP design consideration. Moreover, various low-power passive signaling enhancement techniques, i.e., equalization and termination, have been analyzed to mitigate the signal integrity challenges of high-speed on-package inter-chip channels.
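A quick way to see why microwave (transmission-line) effects dominate at these lengths and frequencies is the λ/10 rule of thumb for electrical length. The effective permittivity below is an assumed placeholder for a typical package substrate, not a value from the study:

```python
def is_electrically_long(length_m, freq_hz, eps_eff, fraction=0.1):
    """Rule of thumb: treat an interconnect as a transmission line once its
    length exceeds ~1/10 of the guided wavelength c / (f * sqrt(eps_eff))."""
    c = 3e8  # free-space speed of light, m/s
    wavelength = c / (freq_hz * eps_eff ** 0.5)
    return length_m > fraction * wavelength

# 30-mm on-package route at 7.5 GHz, assumed effective permittivity of 3.5
print(is_electrically_long(0.030, 7.5e9, 3.5))  # True: transmission-line regime
print(is_electrically_long(0.003, 2.5e9, 3.5))  # False: 3-mm route at 2.5 GHz
```

By this criterion the longer channels in the studied 3-30 mm range are many wavelength-tenths long at 7.5 GHz, so reflections and frequency-dependent losses must be modeled rather than lumped away.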