1,007 research outputs found

    Characterizing 10 Gbps network interface energy consumption


    Trade-off between power and bandwidth consumption in a reconfigurable xhaul network architecture

    The increasing number of wireless devices, the high traffic bandwidth they require, and their power consumption will lead to a revolution of mobile access networks rather than a simple evolution of traditional ones. Cloud radio access network technologies are seen as a promising solution for meeting the demanding requirements defined for 5G mobile networks. The common public radio interface (CPRI) technology allows some access functions to be centralized in the BaseBand Unit (BBU), with power-saving advantages when switch-off algorithms are implemented. Unfortunately, the advantages of CPRI come at the cost of an increase in the bandwidth required to carry traffic between the BBU and the remote radio unit (RRU), in which only the radio functions are implemented. For this reason, a trade-off solution between power and bandwidth consumption is proposed and evaluated. The proposed solution consists of: 1) handling the traffic generated by the users through both RRUs and traditional radio base stations (RBSs), and 2) carrying the traffic generated by the RRUs and RBSs (CPRI and Ethernet flows) over a reconfigurable network. The proposed solution is investigated under a lognormal spatial traffic distribution assumption. After proposing resource dimensioning analytical models validated by simulation, we show how the sum of the bandwidth and power consumption may be minimized by deploying a given percentage of RRUs. For instance, we show that in 5G traffic scenarios this percentage can vary from 30% to 50%, according to the total traffic handled by a switching node of the reconfigurable network.
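    To illustrate the kind of trade-off the abstract describes, the following is a minimal sketch, not the authors' dimensioning model: it sweeps the share of traffic served by RRUs and picks the value minimizing a normalized sum of power and bandwidth cost. The cost functions and constants are illustrative assumptions.

```python
# Hypothetical sketch of the power/bandwidth trade-off described above: sweep
# the share of traffic served by RRUs and pick the value minimizing a
# normalized cost. The cost functions and constants are illustrative
# assumptions, not the dimensioning models proposed in the paper.

def power_cost(rru_share: float) -> float:
    """Normalized power: centralization in the BBU allows switch-off, so power falls with RRU share."""
    return 1.0 - 0.8 * rru_share

def bandwidth_cost(rru_share: float) -> float:
    """Normalized bandwidth: CPRI fronthaul inflates the carried bandwidth as RRU share grows."""
    return 0.4 + rru_share ** 2

def total_cost(rru_share: float) -> float:
    return power_cost(rru_share) + bandwidth_cost(rru_share)

best = min((i / 100 for i in range(101)), key=total_cost)
print(f"cost-minimizing RRU share: {best:.0%} (total cost {total_cost(best):.2f})")
```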

    Optimization of energy efficiency in data and WEB hosting centers

    This thesis tackles the optimization of energy efficiency in data centers in terms of network and server utilization. For networking, the work focuses on Energy Efficient Ethernet (EEE) - the IEEE 802.3az standard - which is the energy-aware alternative to legacy Ethernet and an important component of current and future green data centers. The first contribution of this thesis is an analytical model of gigabit EEE links with coalescing, based on M/G/1 queues with sleep and wake-up periods. Packet coalescing has been proposed to save energy by extending the sojourn in the Low Power Idle state of EEE. The model presented in this thesis approximates both the energy savings and the average packet delay with good accuracy, using a few significant traffic descriptors. While coalescing greatly improves the energy efficiency of EEE, it is still far from achieving energy consumption proportional to traffic, and it can introduce high delays. To this end, the thesis uses sensitivity analysis to evaluate the impact of coalescing timers and buffer sizes, and sheds light on the delay incurred by coalescing schemes. Accordingly, a first family of dynamic algorithms, measurement-based coalescing control (MBCC), is designed and studied. MBCC schemes tune the coalescing parameters on the fly, according to the instantaneous load and the coalescing delay experienced by the packets. The thesis also discusses a second family of dynamic algorithms, NT-policy coalescing control (NTCC), which adjusts the coalescing parameters based solely on the occurrence of timeouts and buffer fill-ups. Furthermore, the performance of static as well as dynamic coalescing schemes is investigated using real traffic traces. The results show that, by relying on run-time delay measurements, simple and practical MBCC adaptive coalescing schemes outperform traditional static and dynamic coalescing, while NTCC coalescing offers practically no advantage over static coalescing when delay guarantees have to be provided. Notably, MBCC schemes double the energy savings of legacy EEE coalescing and allow the coalescing delay to be controlled.
    For server utilization, the thesis presents an exhaustive empirical characterization of the power requirements of multiple components of data center servers. This characterization, the second key contribution of the thesis, is achieved by devising experiments that stress the server components, taking into account the multiple available CPU frequencies and the presence of multicore servers. These experiments measure the energy consumption of the server components and identify their optimal operating points. The study proves that the curve defining the minimal CPU power utilization, as a function of the load expressed in Active Cycles Per Second, is neither concave nor purely convex; instead, it shows a definite superlinear dependence on the load. The results also illustrate how to improve the efficiency of network cards and disks. Finally, the accuracy of the model derived from the characterization of server component consumption is validated by comparing the real energy consumed by two Hadoop applications - PageRank and WordCount - with the model's estimates, obtaining errors below 4.1% on average.
    International Mention in the doctoral degree. This work has been partially supported by IMDEA Networks Institute and the Greek State Scholarships Foundation. Official Doctoral Programme in Telematics Engineering. Committee - Chair: Marco Giuseppe Ajmone Marsan; Secretary: Jose Luis Ayala Rodrigo; Member: Gianluca Antonio Rizz
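    As a rough illustration of the coalescing mechanism discussed above, here is a sketch under assumed Poisson arrivals and made-up EEE timing constants; it is not the M/G/1 model or the MBCC/NTCC algorithms from the thesis. It simulates a buffer that wakes the link when a coalescing timer expires or the buffer fills, and reports the fraction of time the link is awake and the average added delay.

```python
# Minimal packet-coalescing sketch: packets arriving while the link sleeps are
# held until a coalescing timer expires or the buffer fills, then sent in a
# burst after the PHY wake-up. Arrival process and timing constants are
# assumptions; this is not the M/G/1 model or the MBCC/NTCC schemes.
import random

def simulate(rate_pps=20_000, timer_s=100e-6, buffer_pkts=16,
             wake_s=4.5e-6, tx_s=12e-6, n_packets=200_000, seed=1):
    random.seed(seed)
    t, arrivals = 0.0, []
    for _ in range(n_packets):
        t += random.expovariate(rate_pps)       # Poisson arrivals (assumption)
        arrivals.append(t)

    awake_time, total_delay, i = 0.0, 0.0, 0
    while i < len(arrivals):
        deadline = arrivals[i] + timer_s        # first packet of the batch starts the timer
        batch = [a for a in arrivals[i:i + buffer_pkts] if a <= deadline]
        release = deadline if len(batch) < buffer_pkts else batch[-1]
        awake_time += wake_s + tx_s * len(batch)
        total_delay += sum(release + wake_s - a for a in batch)   # coalescing + wake delay
        i += len(batch)

    span = arrivals[-1]
    print(f"approx. fraction of time awake : {awake_time / span:.1%}")
    print(f"average added delay per packet : {total_delay / n_packets * 1e6:.1f} us")

simulate()
```

    Lowering the timer or buffer threshold reduces the added delay but shrinks the sleep periods, which is exactly the tension the dynamic MBCC/NTCC schemes are designed to manage at run time.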

    Performance Implications of NoCs on 3D-Stacked Memories: Insights from the Hybrid Memory Cube

    Memories that exploit three-dimensional (3D) stacking technology, which integrates memory and logic dies in a single stack, are becoming popular. These memories, such as the Hybrid Memory Cube (HMC), use a network-on-chip (NoC) design to connect their internal structural organization. This novel use of a NoC, in addition to aiding processing-in-memory capabilities, enables numerous benefits such as high bandwidth and memory-level parallelism. However, the implications of NoCs for the characteristics of 3D-stacked memories, in terms of memory access latency and bandwidth, have not been fully explored. This paper addresses this knowledge gap by (i) characterizing an HMC prototype on the AC-510 accelerator board and revealing its access latency behaviors, and (ii) investigating the implications of such behaviors on system and software designs.
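    For a flavor of the kind of access-latency characterization mentioned above, here is a host-side sketch, not the FPGA/AC-510 methodology used in the paper: a pointer-chasing loop issues dependent random loads that defeat prefetching, while a sequential scan shows the prefetch-friendly case. Array sizes and iteration counts are arbitrary assumptions.

```python
# Hypothetical pointer-chasing microbenchmark: dependent random loads expose
# average memory-access latency, while a sequential scan shows the
# prefetch-friendly case. This is a host-DRAM illustration in Python (absolute
# numbers are dominated by interpreter overhead), not the HMC/AC-510 setup.
import random, time

def build_cycle(n_slots, seed=0):
    """Link all slots into one random cycle so every load depends on the previous one."""
    random.seed(seed)
    order = list(range(n_slots))
    random.shuffle(order)
    nxt = [0] * n_slots
    for a, b in zip(order, order[1:] + order[:1]):
        nxt[a] = b
    return nxt

def chase_ns(nxt, steps):
    idx, start = 0, time.perf_counter()
    for _ in range(steps):
        idx = nxt[idx]                      # random, dependent access
    return (time.perf_counter() - start) / steps * 1e9

def scan_ns(data, steps):
    acc, start = 0, time.perf_counter()
    for i in range(steps):
        acc += data[i % len(data)]          # sequential, prefetch-friendly access
    return (time.perf_counter() - start) / steps * 1e9

N, STEPS = 1 << 22, 1 << 20
nxt = build_cycle(N)
print(f"random pointer chase : {chase_ns(nxt, STEPS):.0f} ns/access")
print(f"sequential scan      : {scan_ns(nxt, STEPS):.0f} ns/access")
```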

    A Measurement-based Analysis of the Energy Consumption of Data Center Servers

    Energy consumption is a growing issue in data centers, impacting their economic viability and their public image. In this work we empirically characterize the power and energy consumed by different types of servers. In particular, in order to understand the behavior of their energy and power consumption, we perform measurements on different servers. In each of them, we exhaustively measure the power consumed by the CPU, the disk, and the network interface under different configurations, identifying the optimal operational levels. One interesting conclusion of our study is that the curve defining the minimal CPU power as a function of the load is neither linear nor purely convex, as has been previously assumed. Moreover, we find that the efficiency of the various server components can be maximized by tuning the CPU frequency and the number of active cores as a function of the system and network load, while the block size of I/O operations should always be maximized by applications. We also show how to estimate the energy consumed by an application as a function of a few simple parameters, such as the CPU load and the disk and network activity. We validate the proposed approach by accurately estimating the energy of a map-reduce computation on a Hadoop platform.
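    A minimal sketch of the kind of component-wise energy estimation described above: an idle baseline plus a CPU term plus per-byte disk and network costs. Every coefficient is an assumed placeholder rather than a measured value, and the simple power law used for the CPU term is only a stand-in for the measured curve, which the paper finds to be neither linear nor purely convex.

```python
# Hypothetical component-wise model in the spirit of the abstract:
# idle baseline + a CPU term + per-byte disk and NIC costs. Every coefficient
# is a made-up placeholder, and the simple power law used for the CPU term is
# only a stand-in for the measured (non-convex, superlinear) curve.

def server_power_w(cpu_load, disk_mb_s, net_mb_s,
                   p_idle=55.0, p_cpu_max=45.0, alpha=1.3,
                   e_disk_j_per_mb=0.10, e_net_j_per_mb=0.02):
    """Instantaneous power (W) for a CPU load in [0, 1] and I/O rates in MB/s."""
    cpu_w = p_cpu_max * cpu_load ** alpha
    io_w = e_disk_j_per_mb * disk_mb_s + e_net_j_per_mb * net_mb_s   # J/MB * MB/s = W
    return p_idle + cpu_w + io_w

def job_energy_j(duration_s, avg_cpu_load, disk_mb, net_mb):
    """Energy (J) of a job from its average CPU load and total I/O volumes."""
    return server_power_w(avg_cpu_load, disk_mb / duration_s, net_mb / duration_s) * duration_s

# Example: a hypothetical 10-minute map-reduce-style job at 60% average CPU load
print(f"{job_energy_j(600, 0.6, disk_mb=20_000, net_mb=5_000):.0f} J")
```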

    A Vision for 5G Channel Coding

    Channel coding is a vital but complex component of cellular communication systems, used to correct the communication errors caused by noise, interference, and poor signal strength. The turbo code was selected as the main channel code in 3G and 4G cellular systems, but the 3GPP standardization group is currently debating whether it should be replaced by the Low Density Parity Check (LDPC) code in 5G. This debate is driven by the requirements for 5G, which include throughputs of up to 20 Gbps in the downlink to user devices, ultra-low latencies, and much greater flexibility to support diverse use-cases, including broadband data, the Internet of Things (IoT), vehicular communications, and cloud computing. In our previous white paper, we demonstrated that flexible turbo codes can achieve these requirements with hardware and energy efficiencies superior to those of flexible LDPC decoders. However, the proponents of LDPC codes have highlighted that inflexible LDPC decoders can achieve throughputs of 20 Gbps with particularly attractive hardware and energy efficiencies. This white paper outlines a vision for 5G in which channel coding is provided by a flexible turbo code for most use-cases, supported by an inflexible LDPC code for 20 Gbps downlink use-cases such as fixed wireless broadband. We demonstrate that this approach can meet all of the 5G requirements while offering hardware and energy efficiencies that are significantly better than those of an LDPC-only solution. Furthermore, the proposed approach benefits from synergy with the 3G and 4G turbo code, as well as a significantly faster time-to-market for 5G. These benefits translate to a 5G that is significantly more capable, easier to deploy, and lower in cost.
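    To make the proposed split concrete, here is a small illustrative selection rule; it is one interpretation of the white paper's vision, with made-up thresholds and data types rather than any 3GPP-specified logic: flexible turbo coding serves general use-cases, while the inflexible high-throughput LDPC decoder is engaged only for fixed-wireless-style downlink traffic that actually needs around 20 Gbps.

```python
# Illustrative channel-code selection following the white paper's vision:
# a flexible turbo code for most use-cases, an inflexible high-throughput LDPC
# code only for ~20 Gbps downlink scenarios. Thresholds and types are assumptions.
from dataclasses import dataclass
from enum import Enum

class Code(Enum):
    TURBO = "flexible turbo"
    LDPC = "inflexible 20 Gbps LDPC"

@dataclass
class UseCase:
    name: str
    downlink_gbps: float
    needs_flexibility: bool   # e.g. variable block sizes/rates for IoT or vehicular traffic

def select_code(uc: UseCase, ldpc_threshold_gbps: float = 10.0) -> Code:
    # Only inflexible, very-high-throughput downlink traffic goes to the LDPC decoder.
    if uc.downlink_gbps >= ldpc_threshold_gbps and not uc.needs_flexibility:
        return Code.LDPC
    return Code.TURBO

for uc in [UseCase("IoT sensor uplink", 0.001, True),
           UseCase("mobile broadband", 1.0, True),
           UseCase("fixed wireless broadband downlink", 20.0, False)]:
    print(f"{uc.name:35s} -> {select_code(uc).value}")
```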