100,774 research outputs found

    Architectural study of reconfigurable photonic networks-on-chip for multi-core processors

    Get PDF
    Photonic Networks-on-Chip (NoCs) have become a promising route to interconnecting processor cores on chip multiprocessors (CMPs) in a power-efficient way. Although several photonic NoC proposals exist, their use is limited to the communication of large data messages due to the relatively long set-up time of the photonic channels. In this work, we evaluate a reconfigurable photonic NoC in which the topology is adapted automatically to the evolving traffic situation. In this way, long photonic channel set-up times can be tolerated, which makes our approach better suited to shared-memory CMPs.
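    The abstract does not spell out the reconfiguration policy, but a minimal sketch of one plausible traffic-driven heuristic (the epoch-based scheme, the channel count and all names below are assumptions, not the paper's algorithm) illustrates the idea: periodically give dedicated photonic channels to the busiest source-destination pairs, so the slow channel set-up is paid once per epoch rather than per message.

```python
# Hypothetical sketch of traffic-driven topology reconfiguration (not the
# paper's actual algorithm): each monitoring epoch, the most heavily used
# source-destination pairs get a dedicated photonic channel; everything else
# stays on the electrical fallback network.
from collections import Counter

NUM_PHOTONIC_CHANNELS = 4          # assumed number of configurable channels

def reconfigure(traffic_counts: Counter, num_channels: int = NUM_PHOTONIC_CHANNELS):
    """Return the (src, dst) pairs that get a photonic channel this epoch."""
    return {pair for pair, _ in traffic_counts.most_common(num_channels)}

# One monitoring epoch: flits observed per (src, dst) pair.
observed = Counter({(0, 5): 900, (2, 7): 640, (1, 3): 20, (4, 6): 510, (3, 0): 480})
photonic_pairs = reconfigure(observed)

def route(src, dst):
    # The long set-up time is tolerated because channels persist across epochs.
    return "photonic" if (src, dst) in photonic_pairs else "electrical"

print(route(0, 5))   # photonic
print(route(1, 3))   # electrical
```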

    Low-power reconfigurable network architecture for on-chip photonic interconnects

    Get PDF
    Photonic Networks-on-Chip have emerged as a viable solution for interconnecting multicore computer architectures in a power-efficient manner. Current architectures, however, focus on large messages, which are not compatible with the coherence traffic found on chip multiprocessor networks. In this paper, we introduce a reconfigurable optical interconnect in which the topology is adapted automatically to the evolving traffic situation. This allows a large fraction of the (short) coherence messages to use the optical links, making our technique a better match for CMP networks than existing solutions. We also evaluate the performance and power efficiency of our architecture using an assumed physical implementation based on ultra-low-power optical switching devices and under realistic traffic load conditions.
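    As an illustration of why short coherence messages benefit, here is a hedged sketch of one possible routing policy (SETUP_CYCLES, COHERENCE_MAX_BYTES and the Message type are invented for this example, not taken from the paper): short messages only ride optical links that are already configured, so they never wait for channel set-up.

```python
# Illustrative routing-policy sketch under assumed parameters: short coherence
# messages use an optical link only if one is already configured to the
# destination; large data messages may trigger and await a channel set-up.
from dataclasses import dataclass

SETUP_CYCLES = 1000       # assumed optical channel set-up cost
COHERENCE_MAX_BYTES = 64  # assumed size threshold for "short" messages

@dataclass
class Message:
    dst: int
    size_bytes: int

def pick_link(msg: Message, configured_optical: set[int]) -> str:
    if msg.dst in configured_optical:
        return "optical"                 # free ride on an existing channel
    if msg.size_bytes <= COHERENCE_MAX_BYTES:
        return "electrical"              # too short to amortise SETUP_CYCLES
    return "optical-after-setup"         # long message: worth paying the set-up

print(pick_link(Message(dst=5, size_bytes=32), configured_optical={5}))   # optical
print(pick_link(Message(dst=7, size_bytes=32), configured_optical={5}))   # electrical
```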

    Quarc: an architecture for efficient on-chip communication

    Get PDF
    The exponential downscaling of the feature size has enforced a paradigm shift from computation-based design to communication-based design in system on chip development. Buses, the traditional communication architecture in systems on chip, are incapable of addressing the increasing bandwidth requirements of future large systems. Networks on chip have emerged as an interconnection architecture offering unique solutions to the technological and design issues related to communication in future systems on chip. The transition from buses as a shared medium to networks on chip as a segmented medium has given rise to new challenges in the system on chip realm. By leveraging the shared nature of the communication medium, buses have been highly efficient in delivering multicast communication. The segmented nature of networks, however, prevents multicast messages from being delivered as efficiently by networks on chip. Building on extensive research on multicast communication in parallel computers, several network on chip architectures have offered mechanisms to perform the operation while conforming to the resource constraints of the network on chip paradigm. Multicast communication in the majority of these networks on chip is implemented by establishing a connection between the source and all multicast destinations before message transmission commences. Establishing the connections incurs an overhead and is therefore undesirable, particularly for latency-sensitive services such as cache coherence. To address high-performance multicast communication, this research presents Quarc, a novel network on chip architecture. The Quarc architecture targets an area-efficient, low-power, high-performance implementation. The thesis covers a detailed presentation of the building blocks of the architecture, including the topology, router and network interface. The cost and performance comparison of the Quarc architecture against other network on chip architectures reveals that Quarc is a highly efficient architecture. Moreover, the thesis introduces novel performance models of complex traffic patterns, including multicast and quality-of-service-aware communication.
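    To make the cost of multicast concrete, the following back-of-the-envelope sketch (an assumed 16-node ring with shortest-path routing, not Quarc's actual topology or routing) compares the link traversals of repeated unicast against a shared multicast path; the gap is what connection-free multicast support tries to capture.

```python
# Toy comparison of repeated unicast vs. a shared multicast path on an
# assumed 16-node ring (illustrative only, not Quarc's scheme).
def ring_hops(src, dst, n=16):
    """Shortest-path hops between two nodes on an n-node ring."""
    d = abs(src - dst) % n
    return min(d, n - d)

def unicast_links(src, dsts):
    # One independent shortest-path message per destination.
    return sum(ring_hops(src, d) for d in dsts)

def multicast_ring_links(src, dsts, n=16):
    # One copy goes clockwise and one counter-clockwise, each only as far
    # as the farthest destination that is cheaper to reach in that direction.
    cw = [(d - src) % n for d in dsts]
    cw_far = max((h for h in cw if h <= n - h), default=0)
    ccw_far = max((n - h for h in cw if h > n - h), default=0)
    return cw_far + ccw_far

dsts = [2, 5, 9, 13]
print("repeated unicast links :", unicast_links(0, dsts))        # 17
print("shared multicast links :", multicast_ring_links(0, dsts)) # 12
```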

    MorphIC: A 65-nm 738k-Synapse/mm² Quad-Core Binary-Weight Digital Neuromorphic Processor with Stochastic Spike-Driven Online Learning

    Full text link
    Recent trends in the field of neural network accelerators investigate weight quantization as a means to increase the resource- and power-efficiency of hardware devices. As full on-chip weight storage is necessary to avoid the high energy cost of off-chip memory accesses, memory reduction requirements for weight storage pushed toward the use of binary weights, which were demonstrated to have a limited accuracy reduction on many applications when quantization-aware training techniques are used. In parallel, spiking neural network (SNN) architectures are explored to further reduce power when processing sparse event-based data streams, while on-chip spike-based online learning appears as a key feature for applications constrained in power and resources during the training phase. However, designing power- and area-efficient spiking neural networks still requires the development of specific techniques in order to leverage on-chip online learning on binary weights without compromising the synapse density. In this work, we demonstrate MorphIC, a quad-core binary-weight digital neuromorphic processor embedding a stochastic version of the spike-driven synaptic plasticity (S-SDSP) learning rule and a hierarchical routing fabric for large-scale chip interconnection. The MorphIC SNN processor embeds a total of 2k leaky integrate-and-fire (LIF) neurons and more than two million plastic synapses for an active silicon area of 2.86 mm² in 65nm CMOS, achieving a high density of 738k synapses/mm². MorphIC demonstrates an order-of-magnitude improvement in the area-accuracy tradeoff on the MNIST classification task compared to previously-proposed SNNs, while having no penalty in the energy-accuracy tradeoff. Comment: This document is the paper as accepted for publication in the IEEE Transactions on Biomedical Circuits and Systems journal (2019); the fully-edited paper is available at https://ieeexplore.ieee.org/document/876400
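    The exact S-SDSP conditions are not reproduced in the abstract; the sketch below is a simplified, assumed stochastic spike-driven update on a binary weight (the probability and threshold values are illustrative), showing how stochasticity can stand in for an analog internal variable when only one bit per synapse is stored.

```python
# Simplified, assumed sketch of a stochastic spike-driven plasticity rule on
# binary weights (SDSP-inspired; not MorphIC's exact S-SDSP conditions):
# on a presynaptic spike, the binary weight is set or cleared with a small
# probability, with the direction chosen by the postsynaptic state.
import random

P_UPDATE = 0.05          # assumed update probability (replaces an analog trace)
V_THRESHOLD = 0.8        # assumed membrane-potential threshold

def s_sdsp_update(weight: int, post_membrane_v: float) -> int:
    """Return the (possibly flipped) binary weight after a presynaptic spike."""
    if random.random() >= P_UPDATE:
        return weight                      # most spikes leave the weight untouched
    return 1 if post_membrane_v >= V_THRESHOLD else 0

# A synapse onto a strongly depolarised neuron tends toward potentiation.
w = 0
for _ in range(200):
    w = s_sdsp_update(w, post_membrane_v=0.9)
print(w)   # very likely 1 after enough presynaptic spikes
```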

    Head-of-Line Blocking Reduction in Power-Efficient Networks-on-Chip

    Full text link
    Nowadays, thanks to continuous improvements in the integration scale, more and more cores are added on the same chip, leading to higher system performance. In order to interconnect all the nodes, a network-on-chip (NoC) is used, which is in charge of delivering data between cores. However, increasing the number of cores leads to a significant increase in power consumption, making the NoC one of the most expensive components of the chip in terms of power. Because of this, during the last years several mechanisms have been proposed to address NoC power consumption by means of DVFS (Dynamic Voltage and Frequency Scaling) and power-gating strategies. Nevertheless, the improvements achieved by these mechanisms come, to a greater or lesser extent, at the cost of system performance, potentially increasing the risk of saturating the network by forming congestion points which, in turn, compromise the functionality of the rest of the system. One side effect is head-of-line blocking, where congested packets at the head of a queue prevent other non-blocked packets from advancing. To address this issue, in this thesis we propose, on the one hand, novel congestion control techniques that improve system performance by removing the head-of-line blocking effect. On the other hand, we propose combined solutions adapted to DVFS in order to achieve improvements in terms of both performance and power. In addition, we propose a path-aware power-gating mechanism, which is capable of detecting the flows sharing buffer resources along data paths and switching those resources off when they are not needed. With all these combined solutions we can significantly reduce the power consumption of the NoC when compared with state-of-the-art proposals.
    Escamilla López, JV. (2017). Head-of-Line Blocking Reduction in Power-Efficient Networks-on-Chip [Unpublished doctoral thesis]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/90419
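    A toy model (not the thesis' simulation infrastructure; the congested port and packet stream are invented) makes the head-of-line blocking effect concrete: with a single FIFO input queue, one packet headed for a congested output stalls everything behind it, while per-output queues let the non-blocked packets drain.

```python
# Minimal illustration of head-of-line (HoL) blocking: a single FIFO input
# queue stalls every packet behind a head packet whose output port is
# congested; per-output queues (virtual channels) let the rest advance.
from collections import deque

congested_outputs = {"east"}                      # assumed congestion point
arrivals = ["east", "north", "west", "north"]     # output port of each packet

def drain_single_fifo(arrivals):
    q = deque(arrivals)
    delivered = []
    while q and q[0] not in congested_outputs:    # head blocks everything behind it
        delivered.append(q.popleft())
    return delivered

def drain_per_output_queues(arrivals):
    queues = {}
    for out in arrivals:
        queues.setdefault(out, deque()).append(out)
    delivered = []
    for out, q in queues.items():
        if out not in congested_outputs:          # only the congested flow stalls
            delivered.extend(q)
    return delivered

print(drain_single_fifo(arrivals))        # [] -- HoL blocking stalls all four packets
print(drain_per_output_queues(arrivals))  # ['north', 'north', 'west']
```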

    Significance Driven Hybrid 8T-6T SRAM for Energy-Efficient Synaptic Storage in Artificial Neural Networks

    Full text link
    Multilayered artificial neural networks (ANNs) have found widespread utility in classification and recognition applications. The scale and complexity of such networks, together with the inadequacies of general-purpose computing platforms, have led to significant interest in the development of efficient hardware implementations. In this work, we focus on designing energy-efficient on-chip storage for the synaptic weights. In order to minimize the power consumption of typical digital CMOS implementations of such large-scale networks, the digital neurons could be operated reliably at scaled voltages by reducing the clock frequency. In contrast, on-chip synaptic storage designed using a conventional 6T SRAM is susceptible to bitcell failures at reduced voltages. However, the intrinsic error resiliency of NNs to small synaptic weight perturbations enables us to scale the operating voltage of the 6T SRAM. Our analysis on a widely used digit recognition dataset indicates that the voltage can be scaled by 200mV from the nominal operating voltage (950mV) for practically no loss (less than 0.5%) in accuracy (22nm predictive technology). Scaling beyond that causes substantial performance degradation owing to the increased probability of failures in the MSBs of the synaptic weights. We therefore propose a significance-driven hybrid 8T-6T SRAM, wherein the sensitive MSBs are stored in 8T bitcells that are robust at scaled voltages due to decoupled read and write paths. In an effort to further minimize the area penalty, we present a synaptic-sensitivity-driven hybrid memory architecture consisting of multiple 8T-6T SRAM banks. Our circuit-to-system-level simulation framework shows that the proposed synaptic-sensitivity-driven architecture provides a 30.91% reduction in memory access power with a 10.41% area overhead, for less than 1% loss in classification accuracy. Comment: Accepted in the Design, Automation and Test in Europe 2016 conference (DATE-2016).
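    A rough numerical sketch (assumed 8-bit weights and a uniform single-bit-flip model, not the paper's circuit-level failure analysis) shows why MSB failures dominate the weight perturbation, which is the motivation for placing the MSBs in the more robust 8T cells.

```python
# Why MSB bit-flips in a fixed-point synaptic weight hurt far more than LSB
# flips: mean absolute weight error under a uniform single-bit-flip model.
import random

BITS = 8                                  # assumed 8-bit weights

def flip_random_bit(weight: int, allowed_bits: range) -> int:
    return weight ^ (1 << random.choice(list(allowed_bits)))

def mean_abs_error(allowed_bits, trials=10_000):
    err = 0
    for _ in range(trials):
        w = random.randrange(0, 1 << BITS)
        err += abs(flip_random_bit(w, allowed_bits) - w)
    return err / trials

print("flips confined to 4 LSBs:", mean_abs_error(range(0, 4)))     # ~3.75
print("flips anywhere (plain 6T):", mean_abs_error(range(0, BITS))) # ~31.9
```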

    Efficient Sharing of Optical Resources in Low-Power Optical Networks-on-Chip

    Get PDF
    With the ever-growing core counts in modern computing systems, NoCs consume an increasing part of the power budget due to the bandwidth and power-density limitations of electrical interconnects. To maintain performance and power scaling, alternative technologies such as silicon photonics are required; even so, sophisticated network designs are needed to minimize static power overheads. In this paper, we propose Amon, a low-power ONoC that decreases the number of μRings, wavelengths and path losses to reduce power consumption. Amon performs destination checking prior to data transmission on an underlying control network, allowing optical resources to be shared and improving performance per Watt by at least 23% (up to 70%), while reducing power without latency overheads on both synthetic and realistic applications. For aggressive optical technology parameters, Amon considerably outperforms all alternative NoCs in terms of power, underlining its increasing superiority as the technology matures.
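    The control-network handshake is only described at a high level; the sketch below assumes one plausible shape for it (the ControlNetwork class and its reservation semantics are invented for illustration): check and reserve the destination receiver over the electrical control network before driving the shared optical medium.

```python
# Hedged sketch of "destination checking before transmission" (assumed
# protocol shape, not Amon's actual control messages): a node asks over the
# electrical control network whether the destination's receiver is free
# before sending on the shared optical channel, avoiding wasted transmissions.
class ControlNetwork:
    def __init__(self, num_nodes: int):
        self.rx_busy = [False] * num_nodes    # assumed per-node receiver state

    def check_and_reserve(self, dst: int) -> bool:
        if self.rx_busy[dst]:
            return False                      # destination busy: hold the data
        self.rx_busy[dst] = True              # reserve the receiver for this transfer
        return True

    def release(self, dst: int):
        self.rx_busy[dst] = False

ctrl = ControlNetwork(num_nodes=8)
if ctrl.check_and_reserve(dst=3):
    print("send payload on optical channel to node 3")
    ctrl.release(3)
else:
    print("defer: destination receiver busy")
```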