
    Information and communication technologies in Germany : Is there a remaining role for sector-specific regulation?

    In order to analyze the remaining role for sector-specific regulation, this paper focuses on those elements of the Internet periphery and Internet service provision that are strongly based on telecommunications, in particular Internet access and the Internet backbone. Section 2 deals with the role of telecommunications for the Internet, differentiating between local network access and long-distance network capacity. Section 3 explains the new regulatory arrangements for communications services within Europe, with particular emphasis on Germany. In order to analyze the future role of sector-specific regulation from a normative point of view, section 4 presents the network economic concept of a disaggregated regulatory approach. Section 5 deals with the potential for phasing out sector-specific regulation as competition within the local loop increases. Section 6 considers the role of technology-neutral regulation, which implies that in an environment of competing network infrastructures sector-specific regulation should not be extended but removed. Finally, section 7 explains the role of competition in the markets for backbone interconnectivity.

    Disaggregated Computing. An Evaluation of Current Trends for Datacentres

    Next-generation data centers will likely be based on the emerging paradigm of disaggregated function-blocks-as-a-unit, departing from the current state of mainboard-as-a-unit. Multiple functional blocks or bricks, such as compute, memory and peripherals, will be spread throughout the system and interconnected via one or more high-speed networks. The amount of memory available will be very large and distributed among multiple bricks. This new architecture brings various benefits that are desirable in today's data centers, such as fine-grained technology upgrade cycles, fine-grained resource allocation, and access to a larger amount of memory and accelerators. An analysis of the impact and benefits of memory disaggregation is presented in this paper. One of the biggest challenges when analyzing these architectures is that memory accesses must be modeled correctly in order to obtain accurate results; however, modeling every memory access would generate an overhead so high that simulation becomes unfeasible for real data center applications. A model to represent and analyze memory disaggregation has been designed, and a statistics-based, queueing-based full-system simulator was developed to rapidly and accurately analyze application performance in disaggregated systems. With a mean error of 10%, simulation results point out that the network layers may introduce overheads that degrade application performance by up to 66%. Initial results also suggest that low memory access bandwidth may degrade application performance by up to 20%. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 687632 (dReDBox project) and from TIN2015-65316-P, Computacion de Altas Prestaciones VII.
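
    A back-of-the-envelope version of such a model can be sketched in a few lines. The following Python sketch is illustrative only, not the simulator described above: it approximates contention on the compute-to-memory interconnect with an M/M/1 queue, and every latency and rate constant in it is an invented placeholder.

    # Illustrative sketch, not the dReDBox simulator: estimate the slowdown of an
    # application whose memory accesses travel over a disaggregation network,
    # approximating interconnect contention with an M/M/1 queue.

    def mm1_sojourn_s(arrival_rate_hz, service_rate_hz):
        """Mean time (waiting + service) in an M/M/1 queue, in seconds."""
        assert arrival_rate_hz < service_rate_hz, "queue must be stable"
        return 1.0 / (service_rate_hz - arrival_rate_hz)

    def remote_access_ns(dram_ns, hops, per_hop_ns, arrival_rate_hz, service_rate_hz):
        """One remote memory access: DRAM latency + propagation + queueing."""
        queueing_ns = mm1_sojourn_s(arrival_rate_hz, service_rate_hz) * 1e9
        return dram_ns + hops * per_hop_ns + queueing_ns

    def slowdown(mem_bound_fraction, local_ns, remote_ns):
        """Linear model: only the memory-bound share of runtime stretches."""
        return 1.0 + mem_bound_fraction * (remote_ns / local_ns - 1.0)

    local = 80.0                                  # ns, local DRAM access (assumed)
    remote = remote_access_ns(local, hops=2, per_hop_ns=500.0,
                              arrival_rate_hz=8e6, service_rate_hz=10e6)
    s = slowdown(0.10, local, remote)             # app 10% memory-bound (assumed)
    print(f"remote access {remote:.0f} ns, slowdown {s:.2f}x, "
          f"performance degraded by {100 * (1 - 1 / s):.0f}%")

    With these invented constants the model lands in the same order of magnitude as the degradations quoted above, which is all such a sketch can be expected to do; the paper's simulator models the full system statistically rather than with a single queue.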

    The future of computing beyond Moore's Law.

    Moore's Law is a techno-economic model that has enabled the information technology industry to double the performance and functionality of digital electronics roughly every 2 years within a fixed cost, power and area. Advances in silicon lithography have enabled this exponential miniaturization of electronics, but, as transistors reach atomic scale and fabrication costs continue to rise, the classical technological driver that has underpinned Moore's Law for 50 years is failing and is anticipated to flatten by 2025. This article provides an updated view of what a post-exascale system will look like and the challenges ahead, based on our most recent understanding of technology roadmaps. It also discusses the tapering of historical improvements and how it affects the options available for continued scaling in successors to the first exascale machine. Lastly, this article covers the many different opportunities and strategies available to continue computing performance improvements in the absence of historical technology drivers. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
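
    For concreteness (an illustration of the doubling model, not a figure from the article), a 2-year doubling period compounds as

    P(t) = P_0 \cdot 2^{t/2},  so over the 50 years cited above  P(50)/P_0 = 2^{25} \approx 3.4 \times 10^{7},

    which is why even a modest flattening of this curve forecloses enormous prospective gains.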

    Future Energy Efficient Data Centers With Disaggregated Servers

    The popularity of the Internet and the demand for 24/7 service uptime are driving system performance and reliability requirements to levels that today's data centers can no longer support. This paper examines the traditional monolithic conventional server (CS) design and compares it to a new design paradigm: the disaggregated server (DS) data center design. The DS design arranges data center resources in physical pools, such as processing, memory, and IO module pools, rather than packing each subset of such resources into a single server box. In this paper, we study energy-efficient resource provisioning and virtual machine (VM) allocation in DS-based data centers compared to CS-based data centers. First, we present our new design for the photonic DS-based data center architecture, supplemented with a complete description of the architectural components. Second, we develop a mixed integer linear programming (MILP) model to optimize VM allocation for the DS-based data center, including the power consumption of the data center communication fabric. Our results indicate that, in DS data centers, the optimum allocation of pooled resources and their communication power yields average savings of up to 42% in total power consumption compared with the CS approach. Because of the MILP model's high computational complexity, we developed an energy-efficient resource provisioning heuristic for DS with communication fabric (EERP-DSCF), based on the insights of the MILP model and with power efficiency comparable to it. With EERP-DSCF we can extend the number of served VMs to scales at which solving the MILP model becomes challenging. Furthermore, we assess the energy efficiency of the DS design under stringent conditions by increasing the CPU-to-memory traffic and by including high non-communication power consumption, to determine the conditions under which the DS and CS designs become comparable in power consumption. Finally, we present a complete analysis of the communication patterns in our new DS design and some recommendations for design and implementation challenges.
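
    The structure of such an allocation MILP can be illustrated compactly. The sketch below, in Python with PuLP, is not the paper's model: the pools, VM demands and per-unit power coefficients are invented, and the communication-fabric power term is collapsed into per-pool coefficients to keep the example linear and short.

    # Illustrative MILP sketch of VM allocation over disaggregated pools (PuLP).
    import pulp

    vms = {"vm1": {"cpu": 4, "mem": 8}, "vm2": {"cpu": 2, "mem": 16}}  # demands (assumed)
    cpu_pools = {"cp1": 8, "cp2": 8}      # cores available per processing pool
    mem_pools = {"mp1": 16, "mp2": 16}    # GB available per memory pool
    p_cpu = {"cp1": 10.0, "cp2": 12.0}    # W per allocated core (assumed)
    p_mem = {"mp1": 2.0, "mp2": 2.5}      # W per allocated GB (assumed)

    prob = pulp.LpProblem("ds_vm_allocation", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", [(v, c) for v in vms for c in cpu_pools], cat="Binary")
    y = pulp.LpVariable.dicts("y", [(v, m) for v in vms for m in mem_pools], cat="Binary")

    for v in vms:  # every VM gets exactly one processing pool and one memory pool
        prob += pulp.lpSum(x[v, c] for c in cpu_pools) == 1
        prob += pulp.lpSum(y[v, m] for m in mem_pools) == 1
    for c, cap in cpu_pools.items():  # respect pool capacities
        prob += pulp.lpSum(vms[v]["cpu"] * x[v, c] for v in vms) <= cap
    for m, cap in mem_pools.items():
        prob += pulp.lpSum(vms[v]["mem"] * y[v, m] for v in vms) <= cap

    # Objective: power drawn by the allocated cores and memory.
    prob += (pulp.lpSum(p_cpu[c] * vms[v]["cpu"] * x[v, c] for v in vms for c in cpu_pools)
             + pulp.lpSum(p_mem[m] * vms[v]["mem"] * y[v, m] for v in vms for m in mem_pools))

    prob.solve()
    for v in vms:
        cp = next(c for c in cpu_pools if x[v, c].value() > 0.5)
        mp = next(m for m in mem_pools if y[v, m].value() > 0.5)
        print(v, "->", cp, mp)

    The paper's full model additionally prices the traffic between each CPU-memory pool pairing, which requires linearized products of the x and y variables; that term is what drives the MILP's computational cost and motivates the EERP-DSCF heuristic.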

    Energy Efficient Disaggregated Servers for Future Data Centers

    With the dawn of cloud computing, data centers' power consumption has received increased attention. In this paper we evaluate the energy efficiency potential of the Disaggregated Server (DS) design concept in data centers for efficient resource provisioning. A DS is a new approach for future racks in which servers are disaggregated and resources such as processors, memory and IO ports are arranged in resource pools, forming processing pools, memory pools and IO pools. We developed a mixed integer linear programming (MILP) model for energy minimization of the virtual machine (VM) placement problem in data centers implementing the DS approach. The results show average power savings of up to 49% across the different VM types considered.

    A Scalable Telemetry Framework for Zero Touch Optical Network Management

    Interest in Zero Touch Network and Service Management (ZSM) is growing rapidly. As defined by ETSI, the ZSM architecture is based on closed-loop feedback control of the network and the services. Such closed-loop control can be based on Boyd's Observe, Orient, Decide and Act (OODA) loop, which maps onto specific management functions such as Data Collection, Data Analytics, Intelligence, Orchestration and Control. An efficient implementation of this control loop allows the network to adapt to changes in a timely manner and maintain the required quality of service.

    Many solutions for collecting network parameters (i.e., implementing ZSM data collection) have been proposed, falling under the broad umbrella of network telemetry. An example is Google's gRPC, which represented one of the first solutions providing a framework for data collection. Since then, the number of available frameworks has proliferated. In this paper we propose the use of Apache Kafka as a framework for collecting optical network parameters. The paper then goes beyond that by proposing and showing how Apache Kafka can effectively support data exchange and management across the whole ZSM closed loop.

    Experimental evaluation results show that, even when a large amount of data is collected, the solution is scalable and the time to disseminate parameter values is short. Indeed, the difference between the reception time and the generation time of data is, on average, 40-50 ms when about four thousand messages are generated.
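
    A minimal sketch of that measurement, in Python with the kafka-python client, is given below; the topic name, broker address and telemetry fields are assumptions, not taken from the paper.

    # Illustrative sketch: publish optical telemetry through Apache Kafka and
    # measure dissemination delay (reception time minus generation time).
    # Assumes producer and consumer clocks are synchronised.
    import json
    import time
    from kafka import KafkaProducer, KafkaConsumer

    TOPIC = "optical-telemetry"    # assumed topic name
    BROKER = "localhost:9092"      # assumed broker address

    def publish(n=4000):
        producer = KafkaProducer(
            bootstrap_servers=BROKER,
            value_serializer=lambda m: json.dumps(m).encode("utf-8"))
        for i in range(n):
            producer.send(TOPIC, {"port": i % 64,          # example parameter source
                                  "osnr_db": 18.5,         # example parameter value
                                  "gen_ts": time.time()})  # generation timestamp
        producer.flush()

    def consume(n=4000):
        consumer = KafkaConsumer(
            TOPIC, bootstrap_servers=BROKER, auto_offset_reset="earliest",
            value_deserializer=lambda m: json.loads(m.decode("utf-8")))
        delays = []
        for msg in consumer:
            delays.append(time.time() - msg.value["gen_ts"])
            if len(delays) >= n:
                break
        print(f"mean dissemination delay: {1000 * sum(delays) / len(delays):.1f} ms")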