
    A Temperature and Reliability Oriented Simulation Framework for Multi-core Architectures

    The increasing complexity of multi-core architectures demands a comprehensive evaluation of different solutions and alternatives at every stage of the design process, considering several aspects at the same time. Simulation frameworks are attractive tools to fulfil this requirement, due to their flexibility. Nevertheless, state-of-the-art simulation frameworks lack a joint analysis of power, performance, temperature profile and reliability projection at system level, focusing only on a specific aspect. This paper presents a comprehensive estimation framework that jointly exploits these design metrics at system level, considering processing cores, interconnect design and storage elements. We describe the framework in detail and provide a set of experiments that highlight its capability and flexibility, focusing on the temperature and reliability analysis of multi-core architectures supported by a Network-on-Chip interconnect.
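
    The kind of coupling the abstract describes can be sketched with a toy model: per-component power feeds a lumped thermal resistance, and the resulting temperature feeds an Arrhenius-style acceleration factor for wear-out. This is a hedged illustration with made-up parameters, not the paper's actual framework.

# Hypothetical sketch (not the paper's framework): couple per-component power to a
# lumped thermal model and an Arrhenius-style acceleration factor, illustrating a
# joint power/temperature/reliability estimation loop at system level.
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
T_AMB = 318.0           # ambient/reference temperature, K (assumed)

def steady_temp(power_w, r_th_k_per_w):
    """Steady-state temperature of a block from a lumped thermal resistance."""
    return T_AMB + power_w * r_th_k_per_w

def arrhenius_af(temp_k, t_ref_k=T_AMB, ea_ev=0.7):
    """Acceleration factor of temperature-driven wear-out vs. a reference temperature."""
    return math.exp((ea_ev / K_B) * (1.0 / t_ref_k - 1.0 / temp_k))

# Example: a core dissipating 2.5 W through 8 K/W, and a NoC router at 0.4 W, 15 K/W.
for name, p, r in [("core0", 2.5, 8.0), ("router0", 0.4, 15.0)]:
    t = steady_temp(p, r)
    print(f"{name}: T = {t:.1f} K, lifetime derating ~ 1/{arrhenius_af(t):.2f}")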

    Managing Many-Core Aging

    Many-core scaling now faces a power wall. The gap between the number of cores that fit on a die and the number that can operate simultaneously under the power budget is rapidly increasing with technology scaling. In future designs, the majority of the cores will necessarily have to be dormant at any given time to meet the power budget. To push back the many-core power wall, this work introduces Dynamic Voltage Scaling for Aging Management (DVSAM), a new scheme for trading off processor aging for performance and power. DVSAM can be used to maximize performance, minimize power, or boost performance for a short life. In addition, this work introduces the BubbleWrap many-core, an architecture that makes use of DVSAM. BubbleWrap identifies the most power-efficient cores on a variation-affected chip and designates them as Throughput cores dedicated to parallel-section execution; the rest of the cores (Expendable cores) are dedicated to sequential sections. In one use of DVSAM, BubbleWrap sacrifices Expendable cores one at a time by running them at elevated Vdd for a month or so each, until they completely wear out. Our simulations show that a 32-core BubbleWrap many-core provides substantial improvements over a plain chip. For example, on average, one design runs fully sequential applications at a 22% higher frequency, and fully parallel applications with a 33% higher throughput.
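
    A minimal sketch of the aging-for-performance trade the abstract describes, using a generic power-law wear-out model and alpha-power frequency scaling. All parameters (wear-out exponent, Vth, nominal lifetime) are assumptions for illustration, not DVSAM's actual model.

# Hedged sketch, not the paper's DVSAM model: running a core above nominal Vdd buys
# frequency at the cost of lifetime, under assumed scaling laws.
def relative_lifetime(vdd, vdd_nom=1.0, exponent=8.0):
    """Lifetime relative to nominal, assuming wear-out ~ Vdd**(-exponent) (assumed exponent)."""
    return (vdd / vdd_nom) ** (-exponent)

def relative_frequency(vdd, vdd_nom=1.0, vth=0.3, alpha=1.3):
    """Alpha-power-law frequency scaling with supply voltage (assumed parameters)."""
    return ((vdd - vth) ** alpha / vdd) / ((vdd_nom - vth) ** alpha / vdd_nom)

nominal_life_years = 7.0          # assumed nominal service life
vdd_boost = 1.25                  # run an Expendable core 25% above nominal Vdd
life = nominal_life_years * relative_lifetime(vdd_boost)
print(f"boosted frequency: {relative_frequency(vdd_boost):.2f}x nominal")
print(f"expected lifetime at boosted Vdd: {life:.2f} years (vs. {nominal_life_years:.0f} nominal)")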

    Thermal/performance trade-off in network-on-chip architectures

    Multi-core architectures are a promising paradigm to exploit the huge integration density reached by high-performance systems. However, integration density and technology scaling are causing undesirable operating temperatures, which reduce reliability and increase cooling costs. Dynamic Thermal Management (DTM) approaches have been proposed in the literature to control the temperature profile at run-time, while design-time approaches generally provide floorplan-driven solutions to cope with temperature constraints. Nevertheless, a suitable approach that jointly captures performance, thermal and reliability metrics has not yet been proposed. This work presents a novel methodology to jointly optimize the temperature/performance trade-off in reliable high-performance parallel architectures with security constraints, achieved by physical isolation of workloads on each core. The proposed methodology is based on a linear formal model relating temperature and duty cycle on one side, and performance and duty cycle on the other. Extensive experimental results on real-world use-case scenarios show the accuracy of the proposed model, which is suitable for design-time system-wide optimization in conjunction with DTM techniques.
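
    The linear model mentioned above can be illustrated with made-up coefficients: if both temperature and throughput are affine functions of a core's duty cycle, a temperature cap translates directly into a duty-cycle bound that maximizes performance. This is an illustrative sketch, not the paper's calibrated model.

# Illustrative linear temperature/duty-cycle and performance/duty-cycle model
# (coefficients assumed): pick the largest duty cycle d in [0, 1] that respects a
# temperature cap, which also maximizes throughput.
T_IDLE = 45.0      # deg C at d = 0 (assumed)
K_TEMP = 40.0      # deg C rise from idle to d = 1 (assumed)
PERF_MAX = 1.0     # normalized throughput at d = 1

def temperature(d):
    return T_IDLE + K_TEMP * d          # T(d) = a*d + b

def performance(d):
    return PERF_MAX * d                 # Perf(d) = c*d

T_CAP = 75.0                            # thermal constraint from packaging / DTM
d_max = min(1.0, (T_CAP - T_IDLE) / K_TEMP)
print(f"duty-cycle bound: {d_max:.2f}, achievable throughput: {performance(d_max):.2f}")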

    CROSS-LAYER DESIGN, OPTIMIZATION AND PROTOTYPING OF NoCs FOR THE NEXT GENERATION OF HOMOGENEOUS MANY-CORE SYSTEMS

    This thesis provides a whole set of design methods to enable and manage the runtime heterogeneity of feature-rich, industry-ready Tile-Based Networks-on-Chip at different abstraction layers (Architecture Design, Network Assembling, Testing of NoC, Runtime Operation). The key idea is to maintain the functionalities of the original layers, and to improve the performance of architectures by allowing joint optimization and layer coordination. In general-purpose systems, we address the microarchitectural challenges by co-designing and co-optimizing feature-rich architectures. In application-specific NoCs, we emphasize event notification, so that the platform is continuously under control. At the network assembly level, this thesis proposes a Hold Time Robustness technique to tackle the hold-time issue in synchronous NoCs. At the network architectural level, the choice of a suitable synchronization paradigm requires an enhanced synthesis flow as well as coexistence with DVFS. On one hand, this implies the coexistence of mesochronous synchronizers in the network with dual-clock FIFOs at network boundaries. On the other hand, dual-clock FIFOs may be placed across inter-switch links, hence removing the need for mesochronous synchronizers. This thesis studies the implications of the above approaches both on the design flow and on the performance and power quality metrics of the network. Once the many-core system is composed, the issue of testing it arises. This thesis takes on this challenge and engineers various testing infrastructures. At the upper abstraction layer, the thesis addresses the issue of managing the fully operational system and proposes a congestion management technique named HACS. Moreover, some of the ideas of this thesis undergo FPGA prototyping. Finally, we provide some features for emerging technologies by characterizing the power consumption of Optical NoC Interfaces.

    Approaching the theoretical limits of a mesh NoC with a 16-node chip prototype in 45nm SOI

    In this paper, we present a case study of our chip prototype of a 16-node 4x4 mesh NoC fabricated in 45nm SOI CMOS that aims to simultaneously optimize energy, latency and throughput for unicasts, multicasts and broadcasts. We first define and analyze the theoretical limits of a mesh NoC in latency, throughput and energy, then describe how we approach these limits through a combination of microarchitecture and circuit techniques. Our 1.1V, 1GHz NoC chip achieves 1-cycle router-and-link latency at each hop and energy-efficient router-level multicast support, delivering 892Gb/s (87.1% of the theoretical bandwidth limit) at 531.4mW for mixed traffic of unicasts and broadcasts. Through this fabrication, we derive insights that help guide our research and that, we believe, will also be useful to the NoC and multicore research community.
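
    A quick back-of-the-envelope check of the quoted figures: 892 Gb/s at 87.1% of the limit implies a theoretical limit of roughly 1024 Gb/s. One way such a limit could arise (purely an assumption here; the paper defines its own limit) is saturating the 4x4 mesh bisection of 4 bidirectional links with 128-bit channels at 1 GHz.

# Sanity-check arithmetic for the numbers quoted in the abstract.
measured_gbps = 892.0
fraction_of_limit = 0.871
print(f"implied theoretical limit: {measured_gbps / fraction_of_limit:.0f} Gb/s")

bisection_links = 4        # links cut by the bisection of a 4x4 mesh
directions = 2             # bidirectional channels
channel_bits = 128         # assumed channel/flit width
clock_ghz = 1.0
print(f"hypothetical bisection bandwidth: "
      f"{bisection_links * directions * channel_bits * clock_ghz:.0f} Gb/s")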

    Dynamic power management: from portable devices to high performance computing

    Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data-centers and high-performance computing systems. Even though society as a whole is ready to embrace this revolution, there is another side to the story. Portable devices need batteries to operate far from power plugs, and battery capacity does not scale as fast as power requirements grow. At the other end, processing clusters such as data-centers and server farms are built upon the integration of thousands of multiprocessors. For each of them, technology scaling over the last decade has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Furthermore, all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of worldwide carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce power consumption and related issues for two corner cases: Mobile Devices and High Performance Computing.

    Memory architectures for exaflop computing systems

    Most computing systems are heavily dependent on their main memories, either as their primary storage or as an intermediate cache for slower storage systems (HDDs). The capacity of memory systems, as well as their performance, has a direct impact on the overall computing capabilities of the system, and is also a major contributor to its initial and operating costs. Dynamic Random Access Memory (DRAM) technology has dominated the main memory landscape from its beginnings in the 1970s until today. However, due to DRAM's inherent limitations, its steady rate of development has saturated over the past decade, creating a disparity between CPU and main memory performance known as the memory wall. Modern parallel architectures, such as High-Performance Computing (HPC) clusters and manycore solutions, put even more stress on their memory systems. It is not trivial to estimate the memory requirements that these systems will have in the future, and whether DRAM technology will be able to meet them or whether we will need to look for a novel memory solution. This thesis attempts to give insight into the most important technological challenges that future memory systems need to address in order to meet the ever-growing requirements of users and their applications in the manycore and HPC context. We describe the limitations of DRAM, the dominant technology in today's main memory systems, that may impede performance or increase the cost of future systems. We discuss some of the emerging memory technologies and, by comparing them with DRAM, try to estimate their potential usage in future memory systems. The thesis evaluates the requirements of manycore scientific applications, in terms of memory bandwidth and footprint, and estimates how these requirements may change in the future. With this evaluation in mind, we propose a hybrid memory solution that employs DRAM and PCM, as well as several page placement and page migration policies, to bridge the gap between fast but small DRAM and larger but slower non-volatile memory. As the aforementioned evaluations required custom software solutions, we present the tools produced over the course of this PhD, which continue to be used in the Heterogeneous Computer Architectures group at the Barcelona Supercomputing Center. First, Limpio, a LIghtweight MPI instrumentatiOn framework that provides an interface for low-overhead instrumentation and profiling of MPI applications with user-defined routines. Second, MemTraceMPI, a Valgrind tool used to produce memory access traces of MPI applications, with several innovative concepts included (filter-cache, iteration tracing, compressed trace files).
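
    A minimal sketch of the general idea behind hotness-driven page placement in a hybrid DRAM+PCM memory: pages accessed frequently in an epoch are promoted to DRAM, the rest stay in (or are demoted to) PCM. The capacities, thresholds and epoch logic are assumptions for illustration, not the thesis's actual policies.

# Toy hotness-based placement for a hybrid DRAM+PCM memory (assumed policy).
from collections import Counter

DRAM_CAPACITY_PAGES = 4          # toy DRAM capacity
HOT_THRESHOLD = 8                # accesses per epoch considered "hot" (assumed)
NUM_PAGES = 16

dram, pcm = set(), set(range(NUM_PAGES))   # all pages start in PCM
accesses = Counter()

def touch(page):
    accesses[page] += 1

def end_of_epoch():
    """Promote the hottest pages to DRAM, keep everything else in PCM."""
    hot = [p for p, n in accesses.most_common(DRAM_CAPACITY_PAGES) if n >= HOT_THRESHOLD]
    dram.clear()
    dram.update(hot)
    pcm.clear()
    pcm.update(p for p in range(NUM_PAGES) if p not in dram)
    accesses.clear()

# Example epoch: pages 3 and 7 are hot, page 5 is cold.
for _ in range(10):
    touch(3); touch(7)
touch(5)
end_of_epoch()
print("DRAM pages:", sorted(dram), "| PCM pages:", len(pcm))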

    Managing lifetime reliability, performance, and power tradeoffs in multicore microarchitectures

    The objective of this research is to characterize and manage lifetime reliability, microarchitectural performance, and power tradeoffs in multicore processors. This dissertation comprises three research themes: 1) a modeling and simulation method for interacting multicore processor physics, 2) characterization and management of the performance and lifetime reliability tradeoff, and 3) extending Amdahl's Law to understand the lifetime reliability, performance, and energy efficiency of heterogeneous processors. With continued technology scaling, processor operation is increasingly dominated by multiple distinct physical phenomena and their coupled interactions. Understanding these behaviors requires the modeling of complex physical interactions. This dissertation first presents a novel simulation framework that orchestrates interactions between multiple physical models and microarchitecture simulators to enable research explorations at the intersection of application, microarchitecture, energy, power, thermal, and reliability. Using this framework, workload-induced variation of device degradation is characterized, and its impacts on processor lifetime and performance are analyzed. This research introduces a new metric to quantify the performance-reliability tradeoff. Lastly, theoretical models of heterogeneous multicore processors are proposed for understanding their performance, energy efficiency, and lifetime reliability consequences. It is shown that these system metrics are governed by Amdahl's Law and correlated as a function of processor composition, scheduling method, and Amdahl's scaling factor. This dissertation highlights the importance of multidimensional analysis and extends the scope of microarchitectural studies by incorporating the physical aspects of processor operation and design.
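
    The style of Amdahl's-Law analysis the abstract alludes to can be illustrated with a standard heterogeneous variant (not the dissertation's exact formulation): the sequential fraction runs on one big core and the parallel fraction on n small cores, with assumed per-core performance factors.

# Hedged illustration: Amdahl's Law for a heterogeneous chip (assumed parameters).
def amdahl_hetero_speedup(f_parallel, n_small, perf_big=2.0, perf_small=1.0):
    """Speedup vs. one small core; perf_* are per-core performance factors (assumed)."""
    serial_time = (1.0 - f_parallel) / perf_big
    parallel_time = f_parallel / (n_small * perf_small)
    return 1.0 / (serial_time + parallel_time)

for f in (0.5, 0.9, 0.99):
    print(f"parallel fraction {f:.2f}: speedup = {amdahl_hetero_speedup(f, n_small=32):.1f}x")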

    Computational Sprinting: Exceeding Sustainable Power in Thermally Constrained Systems

    Although process technology trends predict that transistor sizes will continue to shrink for a few more generations, voltage scaling has stalled, and thus future chips are projected to be increasingly more power hungry than previous generations. Particularly in mobile devices, which are severely cooling constrained, it is estimated that the peak operation of a future chip could generate heat ten times faster than the device can sustainably vent. However, many mobile applications do not demand sustained performance; rather, they comprise short bursts of computation in response to sporadic user activity. To improve responsiveness for such applications, this dissertation proposes computational sprinting, in which a system greatly exceeds sustainable power margins (by up to 10x) to provide up to a few seconds of high-performance computation when a user interacts with the device. Computational sprinting exploits the material property of thermal capacitance to temporarily store the excess heat generated when sprinting. After sprinting, the chip returns to sustainable power levels and dissipates the stored heat while the system is idle. This dissertation: (i) broadly analyzes thermal, electrical, hardware, and software considerations to assess the feasibility of engineering a system which can provide the responsiveness of a platform with 10x higher sustainable power within today's cooling constraints, (ii) leverages existing sources of thermal capacitance to demonstrate sprinting on a real system today, and (iii) identifies the energy-performance characteristics of sprinting operation to determine runtime sprint pacing policies.
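
    The thermal-capacitance argument admits a back-of-the-envelope estimate: the heat a sprint can "bank" is roughly C * dT, so the sprint duration before hitting the thermal limit is about C*dT / (P_sprint - P_sustain). The values below are assumptions chosen only to illustrate the few-second budget the abstract mentions, not numbers from the dissertation.

# Back-of-the-envelope sprint budget from thermal capacitance (all values assumed).
heat_capacity_j_per_k = 2.0       # effective thermal capacitance near the die, J/K
delta_t_k = 25.0                  # allowed temperature rise before throttling, K
p_sustain_w = 1.0                 # sustainable (vent-limited) power, W
p_sprint_w = 10.0                 # ~10x sprint power, as in the abstract

sprint_seconds = heat_capacity_j_per_k * delta_t_k / (p_sprint_w - p_sustain_w)
print(f"sprint budget: {sprint_seconds:.1f} s at {p_sprint_w:.0f} W")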