
    Low-Power, High-Speed Transceivers for Network-on-Chip Communication

    Networks on chips (NoCs) are becoming popular as they provide a solution to the interconnection problems of large integrated circuits (ICs). But even in a NoC, link power can become unacceptably high and data rates are limited when conventional data transceivers are used. In this paper, we present a low-power, high-speed source-synchronous link transceiver which enables a factor-of-3.3 reduction in link power together with an 80% increase in data rate. A low-swing capacitive pre-emphasis transmitter in combination with a double-tail sense amplifier enables speeds in excess of 9 Gb/s over a 2 mm twisted differential interconnect, while consuming only 130 fJ/transition and without the need for an additional supply. Multiple transceivers can be connected back-to-back to create a source-synchronous transceiver chain with a wave-pipelined clock, operating with 6σ offset reliability at 5 Gb/s.
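
    As a rough illustration of how the figures quoted above relate to link power, the sketch below converts an energy-per-transition figure into average link power for a given data rate. The activity factor is an assumption made for the sketch, not a value reported by the paper.

```python
# Back-of-the-envelope link power estimate (illustrative only; the 130 fJ/transition
# and 9 Gb/s figures come from the abstract, the activity factor is an assumption).

def link_power_watts(energy_per_transition_j, data_rate_bps, transition_activity=0.5):
    """Average link power given the energy spent per signal transition.

    transition_activity is the fraction of bit periods that actually toggle the
    wire (0.5 is a common assumption for random data).
    """
    return energy_per_transition_j * data_rate_bps * transition_activity

if __name__ == "__main__":
    p = link_power_watts(130e-15, 9e9)           # roughly 0.59 mW per link
    print(f"Estimated link power: {p * 1e3:.3f} mW")
```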

    Energy-Efficient Interconnection Networks for High-Performance Computing

    In recent years, energy has become one of the most important factors for designing and operating large-scale computing systems. This is particularly true in high-performance computing, where systems often consist of thousands of nodes. Especially after the end of Dennard scaling, the demand for energy proportionality in components, where energy consumption depends linearly on utilization, increases continuously. As the main contributor to the overall power consumption, processors have received the main attention so far. The increasing energy proportionality of processors, however, shifts the focus to other components such as interconnection networks. Their share of the overall power consumption is expected to increase to 20% or more as other components further increase their efficiency in the near future. Hence, it is crucial to likewise improve energy proportionality in interconnection networks to reduce overall power and energy consumption. To facilitate these attempts, this work provides comprehensive studies about energy saving in interconnection networks at different levels. First, interconnection networks differ fundamentally from other components in their underlying technology. To gain a deeper understanding of these differences and to identify targets for energy savings, this work provides a detailed power analysis of current network hardware. Furthermore, various applications at different scales are analyzed regarding their communication patterns and locality properties. The findings show that communication makes up only a small fraction of the execution time and networks are actually idling most of the time. Another observation is that point-to-point communication often occurs only within various small subsets of all participants, which indicates that a coordinated mapping could further decrease network traffic. Based on these studies, three different energy-saving policies are designed, which differ in their implementation and focus. These policies are then evaluated in an event-based, power-aware network simulator. While the two policies that operate completely locally at the link level enable significant energy savings of more than 90% in most analyses, the hybrid one does not provide further benefits despite significant additional design effort. Additionally, these studies include network design parameters, such as the transition time between different link configurations, as well as the three most common topologies in supercomputing systems. The final part of this work addresses the interactions of congestion management and energy-saving policies. Although both network management strategies aim for different goals and use opposite approaches, they complement each other and can increase energy efficiency in all studies as well as reduce the performance overhead compared to plain energy saving.
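
    A minimal sketch of the kind of local, link-level policy the abstract alludes to is given below: a link is switched to a low-power configuration when its recent utilization falls below a threshold and reactivated when traffic returns. The window length, thresholds, and hysteresis are assumptions for illustration, not parameters taken from the work.

```python
# Illustrative link-level energy-saving policy: power a link down when its average
# utilization over a sliding window drops below a threshold, and wake it again once
# utilization recovers. All numeric values below are assumptions for the sketch.

from collections import deque

class LinkPowerPolicy:
    def __init__(self, window=8, sleep_threshold=0.1, wake_threshold=0.3):
        self.history = deque(maxlen=window)   # recent per-window utilizations (0..1)
        self.sleep_threshold = sleep_threshold
        self.wake_threshold = wake_threshold
        self.low_power = False

    def update(self, utilization):
        """Feed the utilization of the last window and decide the link state."""
        self.history.append(utilization)
        avg = sum(self.history) / len(self.history)
        if self.low_power and avg > self.wake_threshold:
            self.low_power = False            # reactivate the link
        elif not self.low_power and avg < self.sleep_threshold:
            self.low_power = True             # switch the link to a low-power state
        return self.low_power

policy = LinkPowerPolicy()
for u in [0.4, 0.05, 0.02, 0.0, 0.0, 0.5, 0.6]:
    print(f"utilization={u:.2f} -> {'low-power' if policy.update(u) else 'active'}")
```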

    An Energy and Performance Exploration of Network-on-Chip Architectures

    In this paper, we explore the designs of a circuit-switched router, a wormhole router, a quality-of-service (QoS) supporting virtual channel router and a speculative virtual channel router and accurately evaluate the energy-performance tradeoffs they offer. Power results from the designs placed and routed in a 90-nm CMOS process show that all the architectures dissipate significant idle-state power. The additional energy required to route a packet through the router is then shown to be dominated by the data path. This leads to the key result that, if this trend continues, the use of more elaborate control can be justified and will not be immediately limited by the energy budget. A performance analysis also shows that dynamic resource allocation leads to the lowest network latencies, while static allocation may be used to meet QoS goals. Combining the power and performance figures then allows an energy-latency product to be calculated to judge the efficiency of each of the networks. The speculative virtual channel router is shown to have a very similar efficiency to the wormhole router while providing better performance, supporting its use for general-purpose designs. Finally, area metrics are also presented to allow a comparison of implementation costs.
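
    The energy-latency product used as the figure of merit above can be computed as sketched below. The per-router numbers in the snippet are placeholders chosen for illustration, not the measured values from the paper.

```python
# Energy-latency product (ELP) as a single efficiency figure of merit for comparing
# router architectures. The energy and latency numbers below are assumed placeholders,
# not the paper's 90-nm measurements.

routers = {
    # name: (energy per packet in pJ, average latency in cycles) -- assumed values
    "wormhole":            (10.0, 20),
    "speculative_vc":      (12.0, 16),
    "qos_virtual_channel": (14.0, 18),
}

for name, (energy_pj, latency_cycles) in routers.items():
    elp = energy_pj * latency_cycles   # lower is better
    print(f"{name:>20}: ELP = {elp:.1f} pJ*cycles")
```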

    Dynamic Voltage and Frequency Scaling for Wireless Network-on-Chip

    Previously, research and design of Network-on-Chip (NoC) paradigms were mainly focused on improving the performance of the interconnection networks. With the emergence of a wide range of low-power applications and energy-constrained high-performance applications, it is highly desirable to have NoCs that are highly energy efficient without incurring a performance penalty. In the design of high-performance massive multi-core chips, power and heat have become dominant constraints. Increased power consumption can raise chip temperature, which in turn can decrease chip reliability and performance and increase cooling costs. It has been shown that the Small-world Wireless Network-on-Chip (SWNoC) architecture, which replaces multi-hop wire-line paths in a NoC with high-bandwidth single-hop long-range wireless links, reduces the overall energy dissipation when compared to a wire-line mesh-based NoC architecture. However, the overall energy dissipation of the wireless NoC is still dominated by the wire-line links and switches (buffers). Dynamic Voltage and Frequency Scaling (DVFS) is an efficient technique for significant power savings in microprocessors; it has been proposed and deployed in modern microprocessors by exploiting the variance in processor utilization. In a Network-on-Chip, the wire-line links and buffers are likewise not always fully utilized, even across different applications. Hence, by exploiting these characteristics of the links and buffers under different traffic, DVFS can be applied to the switches and wire-line links for substantial power savings. In this thesis, a history-based DVFS mechanism is proposed. This mechanism uses the past utilization of the wire-line links and buffers to predict the future traffic and accordingly tunes the voltage and frequency of the links and buffers dynamically for each time window. The mechanism dynamically minimizes power consumption while largely maintaining high system performance. Performance analysis of the DVFS-enabled Wireless NoC shows that the overall energy dissipation is improved by around 40% when compared to the Small-world Wireless NoC.
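
    A minimal sketch of a history-based DVFS controller of the kind described above is given below: the predicted utilization for the next time window is derived from an exponentially weighted history and mapped to a voltage/frequency level. The predictor form, the operating points, and the thresholds are assumptions for the sketch, not the thesis's actual design.

```python
# Sketch of a history-based DVFS controller for a NoC link or buffer: predict the
# next window's utilization from past utilization and pick a voltage/frequency
# level accordingly. Operating points, alpha, and thresholds are assumed values.

VF_LEVELS = [          # (voltage in V, frequency in GHz), assumed operating points
    (0.7, 0.5),
    (0.9, 1.0),
    (1.1, 2.0),
]

class HistoryDVFS:
    def __init__(self, alpha=0.5):
        self.alpha = alpha         # weight given to the most recent window
        self.predicted = 0.0       # predicted utilization for the next window

    def next_level(self, observed_utilization):
        # Exponentially weighted moving average acts as the traffic predictor.
        self.predicted = (self.alpha * observed_utilization
                          + (1 - self.alpha) * self.predicted)
        if self.predicted < 0.3:
            return VF_LEVELS[0]    # low traffic -> lowest V/F point
        if self.predicted < 0.7:
            return VF_LEVELS[1]
        return VF_LEVELS[2]        # heavy traffic -> full speed

dvfs = HistoryDVFS()
for util in [0.1, 0.2, 0.8, 0.9, 0.4, 0.05]:
    v, f = dvfs.next_level(util)
    print(f"util={util:.2f} -> predicted={dvfs.predicted:.2f}, V={v} V, f={f} GHz")
```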

    Overcoming the Challenges for Multichip Integration: A Wireless Interconnect Approach

    The physical limitations in area, power density, and yield restrict the scalability of the single-chip multicore system to a relatively small number of cores. Instead of having a large chip, aggregating multiple smaller chips can overcome these physical limitations. Combining multiple dies can be done either by stacking vertically or by placing them side-by-side on the same substrate within a single package. However, in order to be widely accepted, both multichip integration techniques need to overcome significant challenges. In the horizontally integrated multichip system, traditional inter-chip I/O does not scale well with technology scaling due to pitch limitations. Moreover, to transfer data between cores or memory components from one chip to another, state-of-the-art inter-chip communication over wireline channels requires data signals to travel from internal nets to the peripheral I/O ports and then get routed over the inter-chip channels to the I/O port of the destination chip. Following this, the data is finally routed from the I/O to the internal nets of the target chip over a wireline interconnect fabric. This multi-hop communication increases energy consumption while decreasing data bandwidth in a multichip system. On the other hand, in a vertically integrated multichip system, the high power density resulting from the placement of computational components on top of each other aggravates the thermal issues of the chip, leading to degraded performance and reduced reliability. Liquid cooling through microfluidic channels can provide the cooling capabilities required for effective management of chip temperatures in vertical integration. However, to reduce the mechanical stresses and, at the same time, to ensure temperature uniformity and adequate cooling capacity, the height and width of the microchannels need to be increased. This limits the area available to route Through-Silicon-Vias (TSVs) across the cooling layers and makes the co-existence and co-design of TSVs and microchannels extremely challenging. Research in recent years has demonstrated that on-chip and off-chip wireless interconnects are capable of establishing radio communication within as well as between multiple chips. The primary goal of this dissertation is to propose design principles targeting both horizontally and vertically integrated multichip systems to provide high-bandwidth, low-latency, and energy-efficient data communication by utilizing mm-wave wireless interconnects. The proposed solution has two parts: the first part proposes a design methodology for a seamless hybrid wired and wireless interconnection network for the horizontally integrated multichip system to enable direct chip-to-chip communication between internal cores. The second part proposes a Wireless Network-on-Chip (WiNoC) architecture for the vertically integrated multichip system to realize data communication across interlayer microfluidic coolers, eliminating the need to place and route signal TSVs through the cooling layers. The integration of wireless interconnects will significantly reduce the complexity of the co-design of TSV-based interconnects and microchannel-based interlayer cooling. Finally, this dissertation presents a combined trade-off evaluation of such a wireless integration system in both the horizontal and vertical sense and provides future directions for the design of multichip systems.
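
    To make the multi-hop argument above concrete, the sketch below compares the energy of the wireline chip-to-chip path (internal hops, peripheral I/O crossings, then internal hops on the destination chip) against a direct single-hop wireless transfer. All per-hop energy values are assumptions made purely for illustration, not measurements from the dissertation.

```python
# Rough comparison of the multi-hop wireline chip-to-chip path described above with
# a direct single-hop mm-wave wireless link. All per-bit energy figures below are
# assumed placeholders, not values reported in the dissertation.

E_NOC_HOP_PJ  = 1.0    # energy per bit per on-chip router/link hop (assumed)
E_IO_PJ       = 5.0    # energy per bit to cross a peripheral I/O port (assumed)
E_WIRELESS_PJ = 2.0    # energy per bit for one wireless hop (assumed)

def wireline_chip_to_chip(src_hops, dst_hops):
    """Core -> source-chip I/O -> inter-chip channel -> destination-chip I/O -> core."""
    return src_hops * E_NOC_HOP_PJ + 2 * E_IO_PJ + dst_hops * E_NOC_HOP_PJ

def wireless_chip_to_chip():
    """Direct core-to-core transfer over a single wireless hop."""
    return E_WIRELESS_PJ

print("wireline :", wireline_chip_to_chip(src_hops=4, dst_hops=4), "pJ/bit")
print("wireless :", wireless_chip_to_chip(), "pJ/bit")
```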

    Carbon nanotubes as interconnect for next generation network on chip

    Multi-core processors provide better performance when compared with their single-core equivalents. Recently, Networks-on-Chip (NoCs) have emerged as a communication methodology for multi-core chips. A Network-on-Chip uses packet-based communication to establish communication paths between multiple cores connected via interconnects. Clock frequency, energy consumption, and chip size are largely determined by these interconnects. According to the International Technology Roadmap for Semiconductors (ITRS), in the next five years up to 80% of microprocessor power will be consumed by interconnects. In the sub-100nm scaling range, interconnect behavior limits the performance and correctness of VLSI systems. The performance of copper interconnects tends to degrade in the sub-100nm range, and hence other interconnect options need to be examined. Single-Wall Carbon Nanotubes exhibit better performance in sub-100nm process technology due to their very large current-carrying capacity and long electron mean free paths. This work suggests using Single-Wall Carbon Nanotubes (SWCNTs) as interconnects for Networks-on-Chip, as they consume less energy and provide higher throughput and bandwidth when compared with traditional copper wires.

    Scalability of broadcast performance in wireless network-on-chip

    Networks-on-Chip (NoCs) are currently the paradigm of choice to interconnect the cores of a chip multiprocessor. However, conventional NoCs may not suffice to fulfill the on-chip communication requirements of processors with hundreds or thousands of cores. The main reason is that the performance of such networks drops as the number of cores grows, especially in the presence of multicast and broadcast traffic. This not only limits the scalability of current multiprocessor architectures, but also sets a performance wall that prevents the development of architectures that generate moderate-to-high levels of multicast. In this paper, a Wireless Network-on-Chip (WNoC) where all cores share a single broadband channel is presented. Such a design is conceived to provide low latency and ordered delivery for multicast/broadcast traffic, in an attempt to complement a wireline NoC that will transport the rest of the communication flows. To assess the feasibility of this approach, the network performance of the WNoC is analyzed as a function of the system size and the channel capacity, and then compared to that of wireline NoCs with embedded multicast support. Based on this evaluation, preliminary results on the potential performance of the proposed hybrid scheme are provided, together with guidelines for the design of MAC protocols for WNoC.
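
    The scalability argument can be illustrated with a simple first-order latency model: a broadcast on a shared wireless channel costs one serialization regardless of system size, whereas a tree-based broadcast on a 2-D mesh must at least cover the network diameter. The packet size, channel capacity, and per-hop delay below are assumptions for the sketch, not the parameters evaluated in the paper.

```python
# Illustrative broadcast-latency comparison between a single shared wireless channel
# and a wireline 2-D mesh, as a function of system size. All constants are assumed.

import math

PACKET_BITS  = 512      # broadcast packet size in bits (assumed)
CHANNEL_GBPS = 16       # shared wireless channel capacity in Gb/s (assumed)
HOP_CYCLES   = 3        # router pipeline + link delay per mesh hop (assumed)
CLOCK_GHZ    = 1.0      # NoC clock frequency (assumed)

def wireless_broadcast_ns(num_cores):
    # One serialization over the shared channel reaches every core at once,
    # so the latency is independent of num_cores.
    return PACKET_BITS / CHANNEL_GBPS

def mesh_broadcast_ns(num_cores):
    # A tree-based broadcast must at least span the mesh diameter.
    side = math.isqrt(num_cores)
    diameter_hops = 2 * (side - 1)
    return diameter_hops * HOP_CYCLES / CLOCK_GHZ

for n in (64, 256, 1024):
    print(f"{n:4d} cores: wireless {wireless_broadcast_ns(n):6.1f} ns, "
          f"mesh {mesh_broadcast_ns(n):6.1f} ns")
```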

    Using Proportional-Integral-Differential approach for Dynamic Traffic Prediction in Wireless Network-on-Chip

    The massive integration of cores in multi-core systems has enabled chip designers to design systems that meet the power and performance demands of applications. Wireless interconnection has emerged as an energy-efficient solution to the challenges of multi-hop communication over the wireline paths in conventional Networks-on-Chip (NoCs). However, to ensure the full benefits of this novel interconnect technology, the design of simple, fair, and efficient Medium Access Control (MAC) mechanisms to grant access to the on-chip wireless communication channel is needed. Moreover, to adapt to the varying traffic demands of the applications running in a multicore environment, MAC mechanisms should dynamically adjust the transmission slots of the wireless interfaces (WIs). To ensure an efficient utilization of the wireless medium in a Wireless NoC (WiNoC), in this work we present the design of a prediction model that is used by two dynamic MAC mechanisms to predict the traffic demand of the WIs and respond accordingly by adjusting the transmission slots of the WIs. Through system-level simulations, we show that the traffic-aware MAC mechanisms are more energy efficient and capable of sustaining higher data bandwidth in WiNoCs.
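
    A minimal sketch of a PID-style slot controller along the lines suggested by the title is given below: the error between a target and the observed buffer occupancy of a WI drives the number of transmission slots it is granted in the next window. The gains, slot bounds, and occupancy target are assumptions for illustration, not the tuned values of the proposed mechanism.

```python
# Sketch of a Proportional-Integral-Derivative controller that adjusts a wireless
# interface's transmission slots from the error between target and observed buffer
# occupancy. All gains and bounds are assumed values chosen for illustration.

class PIDSlotController:
    def __init__(self, kp=2.0, ki=0.5, kd=1.0, min_slots=1, max_slots=8):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_slots, self.max_slots = min_slots, max_slots
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target_occupancy, observed_occupancy, current_slots):
        error = observed_occupancy - target_occupancy   # positive -> WI needs more slots
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        slots = round(current_slots + adjustment)
        return max(self.min_slots, min(self.max_slots, slots))

ctrl = PIDSlotController()
slots = 2
for occupancy in [0.8, 0.9, 0.7, 0.3, 0.2]:
    slots = ctrl.update(target_occupancy=0.5, observed_occupancy=occupancy,
                        current_slots=slots)
    print(f"occupancy={occupancy:.1f} -> slots={slots}")
```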