4 research outputs found

    GPU NTC Process Variation Compensation with Voltage Stacking

    Near-threshold computing (NTC) has the potential to significantly improve efficiency in high-throughput architectures, such as general-purpose computing on graphics processing units (GPGPU). Nevertheless, NTC is more sensitive to process variation (PV), which also complicates power delivery. We propose GPU stacking, a novel method based on voltage stacking, to manage the effects of PV and improve power delivery simultaneously. To evaluate our methodology, we first explore the design space of GPGPUs in the NTC regime to find a suitable baseline configuration and then apply GPU stacking to mitigate the effects of PV. Compared with an equivalent NTC GPGPU without PV management, we achieve 37% more performance on average. When considering high production volume, our approach shifts all chips closer to the nominal non-PV case, delivering on average (across chips) ~80% of the performance of a nominal NTC GPGPU, whereas without our technique, chips would achieve only ~50% of nominal performance. We also show that our approach can be applied on top of multi-frequency-domain designs, further improving overall performance.
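    The power-delivery benefit of voltage stacking can be sketched with a first-order model (my own illustration; the numbers and function below are hypothetical, not from the paper): stacking N voltage domains in series lets the external supply deliver N×Vdd at roughly 1/N of the total current, shrinking resistive I²R losses in the power delivery network by about 1/N².

```python
# Illustrative sketch: resistive PDN loss with and without voltage stacking.
# Assumed first-order model; parameter values are made up for illustration.

def pdn_loss(total_power_w, vdd_v, n_stacked, r_pdn_ohm):
    """I^2*R loss in the PDN for n_stacked series voltage domains (idealized)."""
    supply_voltage = vdd_v * n_stacked      # stack raises the delivery voltage
    supply_current = total_power_w / supply_voltage  # ...and lowers the current
    return supply_current ** 2 * r_pdn_ohm

flat = pdn_loss(total_power_w=100.0, vdd_v=0.5, n_stacked=1, r_pdn_ohm=0.001)
stacked = pdn_loss(total_power_w=100.0, vdd_v=0.5, n_stacked=4, r_pdn_ohm=0.001)
print(f"flat: {flat:.1f} W, 4-high stack: {stacked:.1f} W")
# prints "flat: 40.0 W, 4-high stack: 2.5 W" -- a 16x reduction for 4-high stacking
```

    The quadratic loss reduction is what makes stacking attractive at near-threshold voltages, where currents would otherwise be very high.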

    Light-load power management in differential power processing systems

    Series stacking is used as a means of implicitly raising dc bus voltages without additional power processing and has been explored widely in the context of photovoltaic sources and batteries. More recently, it has also been explored in the context of server loads and microprocessor cores. Supplying power at a higher voltage reduces conduction losses and simplifies power supply design, which must otherwise cope with the high-current, low-voltage nature of microprocessor loads. However, series stacking forces the dc voltage domains to share the same current; in the context of series-stacked loads, this would cause voltage regulation of the individual dc voltage domains to fail. Additional power electronics, commonly referred to as differential power processing (DPP) units, are required to perform this vital task. The idea is to let the DPP converters (which must be bidirectional) process only the difference between the currents of adjacent voltage domains, so that the load voltages remain regulated. Although series stacking and DPP have been explored in significant detail, the importance of the light-load efficiency of these DPP converters has not been highlighted enough in the past. In this document, we discuss the importance of light-load control in common series-stacked systems with DPP and propose a light-load power management scheme for bidirectional buck-boost converters, the building block of most DPP converter topologies. We also discuss extending the efficient operating load range of converters with multiphase converters and asymmetric current sharing, in the context of DPP converters, to process higher power in rare, heavily mismatched conditions while maintaining good light-load efficiency. Finally, we propose to build a series-stacked system of low-voltage loads and DPP regulators to demonstrate the advantages of series stacking over the conventional parallel connection.
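    The differential-current idea can be made concrete with a toy model (my own sketch, not the dissertation's analysis). Assuming N equal-voltage series domains with ideal, lossless element-to-bus DPP converters, the string current settles at the mean load current, and each converter carries only its domain's mismatch from that mean:

```python
# Toy model: currents processed by ideal DPP converters in a series stack.
# Assumes equal domain voltages and lossless element-to-bus converters
# (an illustrative simplification, not the proposed hardware).

def dpp_currents(load_currents_a):
    """Return (string current, per-domain converter currents).

    Positive converter current means the DPP unit supplies extra current
    to that domain; negative means it diverts surplus current away.
    """
    string_current = sum(load_currents_a) / len(load_currents_a)
    return string_current, [i - string_current for i in load_currents_a]

# Four stacked loads drawing mismatched currents:
i_string, diffs = dpp_currents([10.0, 12.0, 9.0, 13.0])
print(i_string, diffs)  # prints 11.0 [-1.0, 1.0, -2.0, 2.0]
```

    Note that the converters handle only a few amps of mismatch while the 11 A of bulk current flows through the stack unprocessed; when loads are well matched, the converters sit at very light load, which is exactly why their light-load efficiency matters.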

    Efficient and Scalable Computing for Resource-Constrained Cyber-Physical Systems: A Layered Approach

    With the evolution of computing and communication technology, cyber-physical systems such as self-driving cars, unmanned aerial vehicles, and mobile cognitive robots are achieving increasing levels of multifunctionality and miniaturization, enabling them to execute versatile tasks in a resource-constrained environment. Therefore, the computing systems that power these resource-constrained cyber-physical systems (RCCPSs) have to achieve high efficiency and scalability. First of all, given a fixed amount of onboard energy, these computing systems should not only be power-efficient but also exhibit sufficiently high performance to gracefully handle complex algorithms for learning-based perception and AI-driven decision-making. Meanwhile, scalability requires that the current computing system and its components can be extended both horizontally, with more resources, and vertically, with emerging advanced technology. To achieve efficient and scalable computing systems in RCCPSs, my research broadly investigates a set of techniques and solutions via a bottom-up layered approach. This layered approach leverages the characteristics of each system layer (e.g., the circuit, architecture, and operating system layers) and their interactions to discover and explore the optimal system tradeoffs among performance, efficiency, and scalability. At the circuit layer, we investigate the benefits of novel power delivery and management schemes enabled by integrated voltage regulators (IVRs). Then, between the circuit and microarchitecture/architecture layers, we present a voltage-stacked power delivery system that offers best-in-class power delivery efficiency for many-core systems. After this, using Graphics Processing Units (GPUs) as a case study, we develop a real-time resource scheduling framework at the architecture and operating system layers for heterogeneous computing platforms with guaranteed task deadlines. 
Finally, fast dynamic voltage and frequency scaling (DVFS)-based power management across the circuit, architecture, and operating system layers is studied through a learning-based hierarchical power management strategy for multi-/many-core systems.
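    The payoff of DVFS-based management can be sketched with the standard first-order CMOS dynamic-power model (a textbook approximation, not this dissertation's model; the parameter values are invented): dynamic power scales roughly as C·V²·f, so lowering voltage and frequency together yields a roughly cubic power reduction for a linear slowdown.

```python
# First-order dynamic power model illustrating why DVFS pays off.
# C, V, and f values below are hypothetical.

def dynamic_power(c_eff_farads, v_volts, f_hz):
    """Approximate CMOS dynamic power: P ~ C * V^2 * f."""
    return c_eff_farads * v_volts ** 2 * f_hz

nominal = dynamic_power(1e-9, 1.0, 2e9)    # ~2.0 W at nominal V and f
scaled = dynamic_power(1e-9, 0.8, 1.6e9)   # 20% slower, ~49% less power
print(nominal, scaled)
```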

    Improving data center power delivery efficiency and power density with differential power processing and multilevel power converters

    Existing data center power delivery architectures consist of many cascaded power conversion stages. The system-level power delivery efficiency decreases each time the requisite power is processed through an individual stage, and the total power converter footprint grows with each cascaded conversion stage. Innovative approaches are investigated in this dissertation for dc-dc step-down conversion and single-phase ac-dc conversion to improve power delivery efficiency and power density in data centers. This dissertation proposes a series-stacked architecture that provides inherently higher efficiency between a dc bus and dc loads through architectural changes, reporting above 99% power delivery efficiency. The proposed series-stacked architecture increases power delivery efficiency by connecting the dc loads in series, allowing the bulk of the requisite power to be delivered without being processed, and by reducing overall power conversion using differential power processing. The series-stacked architecture exhibits voltage regulation and hot-swapping while delivering power to rapidly changing computational loads. This dissertation experimentally demonstrates series-stacked power delivery using real-life computational loads in a custom-designed four-server rack. In order to provide complete grid-to-12 V power delivery for data center applications, this dissertation also proposes a buck-type power factor correction converter that yields high power density between a single-phase grid and the dc bus, achieving 79 W/in³ power density. The proposed buck-type power factor correction converter improves power density by eliminating the high-voltage step-down dc-dc conversion stage, which is typically cascaded after boost-type power factor correction converters in conventional data center power delivery architectures, and by leveraging recent developments in flying capacitor multilevel converters using wide-bandgap transistors.
The buck-type flying capacitor multilevel power factor correction converter presents a unique operating condition in which the flying capacitor voltages are required to follow the input voltage at 50/60 Hz. This dissertation experimentally explores the applicability of such an operation using a digitally controlled six-level flying capacitor multilevel converter prototype.
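    The efficiency argument behind the series-stacked architecture can be sketched with back-of-the-envelope arithmetic (illustrative numbers of my own, not the dissertation's measurements): cascaded stages multiply their losses, whereas a series-stacked/DPP architecture passes the bulk power through unprocessed and incurs conversion loss only on the mismatch fraction.

```python
# Illustrative comparison: cascaded conversion vs. series-stacked delivery.
# Stage efficiencies and the mismatch fraction below are assumed values.

def cascaded_efficiency(stage_effs):
    """Cascaded stages: overall efficiency is the product of stage efficiencies."""
    eta = 1.0
    for e in stage_effs:
        eta *= e
    return eta

def dpp_delivery_efficiency(mismatch_fraction, converter_eff):
    """Series stack: only the mismatch fraction is processed by converters."""
    return 1.0 - mismatch_fraction * (1.0 - converter_eff)

print(cascaded_efficiency([0.96, 0.95, 0.94]))  # ~0.857 for three good stages
print(dpp_delivery_efficiency(0.10, 0.90))      # ~0.99 with 10% mismatch
```

    Even with a modest 90%-efficient DPP converter, processing only 10% of the power keeps delivery efficiency near 99%, consistent with the architectural argument above.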