
    Power Management Strategies for Wired Communication Networks.

    With the exponential traffic growth and the rapid expansion of communication infrastructures worldwide, the energy expenditure of the Internet has become a major concern in an IT-reliant society. This energy problem has motivated an urgent demand for new strategies to reduce the consumption of telecommunication networks, with a particular focus on IP networks. In addition to the development of a new generation of energy-efficient network equipment, a significant body of research has concentrated on incorporating power/energy-awareness into network control and management, which aims at reducing network power/energy consumption either by dynamically scaling the speed of each active network component to adapt to its current load, or by putting lightly loaded network elements to sleep and reconfiguring the network. However, the fundamental challenge of greening the Internet is to achieve a balance between the power/energy savings and the demands of quality-of-service (QoS) performance, an issue that has received less attention but is becoming a major problem in future green network designs. In this dissertation, we study how energy consumption can be reduced through different power/energy- and QoS-aware strategies for wired communication networks. To sufficiently reduce energy consumption while meeting the desired QoS requirements, we introduce several schemes combining power management techniques with different scheduling strategies, which can be classified into experimental power management (EPM) and algorithmic power management (APM). In these proposed schemes, the power management techniques that we focus on are speed scaling and sleep mode. When the network processor is active, its speed and supply voltage can be decreased to reduce energy consumption (speed scaling); when the processor is idle, it can be put into a low-power mode to save energy (sleep mode). The resulting problem is to determine how and when to adjust processor speeds and/or put a device into sleep mode. In this dissertation, we first discuss three families of dynamic voltage/frequency scaling (DVFS) based, QoS-aware EPM schemes, which aim to reduce the energy consumption of network equipment by using different packet scheduling strategies while adhering to the QoS requirements of supported applications. Then, we explore the problem of energy minimization under QoS constraints through a mathematical programming model, a DVFS-based, delay-aware APM scheme combining the speed scaling technique with the existing rate monotonic scheduling policy. Among these speed-scaling-based schemes, dynamic power savings of up to 26.76% of the total power consumption can be achieved. In addition to speed scaling approaches, we further propose a sleep-based, traffic-aware EPM scheme, which reduces power consumption by rerouting light loads (green routing) and putting the related network equipment into sleep mode according to twelve flow traffic density changes over the 24 hours of an arbitrarily selected day. A speed scaling technique that does not violate network QoS performance is also applied in this scheme when traffic is rerouted. Applying this sleep-based strategy can lead to power savings of up to 62.58% of the total power consumption.
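    As a rough illustration of the speed scaling idea described in this abstract (not the dissertation's actual schemes), the sketch below picks the slowest available processor speed that still meets a packet delay bound, under the common assumption that dynamic power grows roughly with the cube of the operating frequency. The function names, speed levels, and the constant k are hypothetical.

```python
# Illustrative sketch, not the dissertation's algorithm: choose the lowest
# processor speed that still drains the current backlog within a QoS delay
# bound.  Assumes dynamic power ~ k * f^3, a common DVFS approximation.

def min_speed_for_deadline(backlog_bits, delay_bound_s, speed_levels):
    """Return the slowest available speed (bits/s) that clears the backlog in time."""
    for f in sorted(speed_levels):
        if backlog_bits / f <= delay_bound_s:
            return f
    return max(speed_levels)   # saturate at the top speed if the bound is infeasible

def dynamic_power(f, k=1e-27):
    """Hypothetical dynamic power model P = k * f^3."""
    return k * f ** 3

if __name__ == "__main__":
    speeds = [0.5e9, 1.0e9, 2.0e9]          # available processing rates, bits/s
    f = min_speed_for_deadline(backlog_bits=40e6, delay_bound_s=0.05, speed_levels=speeds)
    print(f"chosen speed: {f / 1e9:.1f} Gb/s, power: {dynamic_power(f):.3f} W")
```

    The same trade-off drives the EPM and APM schemes above: running slower saves dynamic power, but only as long as the QoS delay bound is still met.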

    Theories and Models for Internet Quality of Service

    We survey recent advances in theories and models for Internet Quality of Service (QoS). We start with the theory of network calculus, which lays the foundation for support of deterministic performance guarantees in networks, and illustrate its applications to integrated services, differentiated services, and streaming media playback delays. We also present mechanisms and architecture for scalable support of guaranteed services in the Internet, based on the concept of a stateless core. Methods for scalable control operations are also briefly discussed. We then turn our attention to statistical performance guarantees, and describe several new probabilistic results that can be used for a statistical dimensioning of differentiated services. Lastly, we review recent proposals and results in supporting performance guarantees in a best-effort context. These include models for elastic throughput guarantees based on TCP performance modeling, techniques for some quality of service differentiation without access control, and methods that allow an application to control the performance it receives, in the absence of network support.
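    To make the network calculus result concrete, the following minimal sketch (our own illustration, not taken from the survey) computes the classical worst-case delay bound for a flow constrained by a token-bucket arrival curve crossing a rate-latency server, D <= T + sigma/R, valid when R >= rho.

```python
# Minimal network-calculus example: worst-case delay for a token-bucket
# constrained flow (burst sigma, sustained rate rho) served by a rate-latency
# server (rate R, latency T).  Classic bound: D <= T + sigma / R.

def delay_bound(sigma_bits, rho_bps, R_bps, T_s):
    if R_bps < rho_bps:
        raise ValueError("service rate must be at least the sustained arrival rate")
    return T_s + sigma_bits / R_bps

if __name__ == "__main__":
    # e.g. 100 kbit burst, 1 Mb/s sustained rate, 10 Mb/s server with 2 ms latency
    print(f"delay bound: {delay_bound(100e3, 1e6, 10e6, 0.002) * 1e3:.1f} ms")   # -> 12.0 ms
```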

    Advances in Internet Quality of Service

    We describe recent advances in theories and architecture that support the performance guarantees needed for quality of service networks. We start with deterministic computations and give applications to integrated services, differentiated services, and playback delays. We review the methods used for obtaining scalable integrated services support, based on the concept of a stateless core. New probabilistic results that can be used for a statistical dimensioning of differentiated services are explained; some are based on classical queuing theory, while others capitalize on the deterministic results. Then we discuss performance guarantees in a best-effort context; we review methods to provide some quality of service in a pure best-effort environment, methods to provide some quality of service differentiation without access control, and methods that allow an application to control the performance it receives, in the absence of network support.
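    As one hedged example of the classical queuing theory mentioned above (our own sketch, not a result from the survey), the code below dimensions a link under M/M/1 assumptions: it finds the smallest service rate for which the sojourn-time tail P(T > t) = e^{-(mu - lambda) t} stays below a target violation probability. The traffic figures are hypothetical.

```python
# Illustrative dimensioning sketch using standard M/M/1 results: find the
# smallest service rate mu such that the probability of a packet's sojourn
# time exceeding a target delay stays below a violation probability eps.

import math

def violation_prob(lam, mu, t):
    """M/M/1 sojourn-time tail P(T > t), valid for mu > lam."""
    return math.exp(-(mu - lam) * t)

def dimension_capacity(lam, t_target, eps):
    """Closed form: need mu >= lam - ln(eps) / t_target."""
    return lam - math.log(eps) / t_target

if __name__ == "__main__":
    lam = 800.0                      # offered load in packets/s (hypothetical)
    mu = dimension_capacity(lam, t_target=0.01, eps=1e-3)
    print(f"required service rate: {mu:.0f} packets/s,"
          f" check tail = {violation_prob(lam, mu, 0.01):.1e}")
```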

    Providing guaranteed QoS in the hose-modeled VPN

    With the development of the Internet, Internet service providers (ISPs) are required to offer revenue-generating and value-added services instead of only providing bandwidth and access services. The Virtual Private Network (VPN) is one of the most important value-added services for ISPs. The classical VPN service is provided by implementing layer 2 technologies, either Frame Relay (FR) or Asynchronous Transfer Mode (ATM). With FR or ATM, virtual circuits are created before data delivery. Since bandwidth and buffers are reserved, the QoS requirements can be naturally guaranteed. In the past few years, layer 3 VPN technologies have been widely deployed due to their desirable performance in terms of flexibility, scalability, and simplicity. Layer 3 VPNs are built upon IP tunnels, e.g., by using PPTP, L2TP, or IPSec. Since IP is best-effort in nature, QoS requirements cannot be guaranteed in layer 3 VPNs. In fact, layer 3 VPN service can only provide secure connectivity, i.e., protecting and authenticating IP packets between gateways or hosts in a VPN. With more voice, audio, and video applications being used in the Internet, the provision of QoS is undoubtedly one of the most important parts of the emerging services provided by ISPs. An intriguing question is: Is it possible to obtain the best of both layer 2 and layer 3 VPNs? Is it possible to provide guaranteed or predictable QoS, as in layer 2 VPNs, while maintaining the flexibility and simplicity of layer 3 VPNs? This question is the starting point of this study. The recently proposed hose model for VPN possesses desirable properties in terms of flexibility, scalability, and multiplexing gain. However, the classic fair bandwidth allocation schemes and weighted fair queuing schemes raise the issue of low overall utilization in this model. A new fluid model for the provider-provisioned virtual private network (PPVPN) is proposed in this dissertation. Based on the proposed model, an idealized fluid bandwidth allocation scheme is developed. This scheme is proven, analytically, to have the following properties: 1) it maximizes the overall throughput of the VPN without compromising fairness; 2) it provides a mechanism that enables VPN customers to allocate bandwidth according to their requirements by assigning different weights to different hose flows, and thus obtain predictable QoS performance; and 3) it improves the overall throughput of the ISPs' network. To approximate the idealized fluid scheme in the real world, the 2-dimensional deficit round robin (2-D DRR and 2-D DRR+) schemes are proposed. The integration of the proposed schemes with best-effort traffic within the framework of virtual-router-based VPNs is also investigated. The 2-D DRR and 2-D DRR+ schemes can be extended to multi-dimensional schemes for applications that require a hierarchical scheduling architecture. To enhance scalability, a more scalable non-per-flow-based scheme for output queued switches is developed as well, and the integration of this scheme within the framework of the MPLS VPN and applications to multicast traffic is discussed. The performance and properties of these schemes are analyzed.
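    For background, the sketch below shows ordinary one-dimensional Deficit Round Robin, the base mechanism that the proposed 2-D DRR schemes extend to hose flows; this is our own illustration, not the dissertation's algorithm, and the flow names and quanta are hypothetical.

```python
# Sketch of plain (1-D) Deficit Round Robin: each flow receives a quantum
# proportional to its weight per round, and may dequeue a packet only when
# its accumulated deficit covers the packet size.

from collections import deque

def drr_schedule(queues, quanta, rounds):
    """queues: {flow: deque of packet sizes}; quanta: {flow: quantum per round}."""
    deficit = {f: 0 for f in queues}
    order = []
    for _ in range(rounds):
        for f, q in queues.items():
            if not q:
                deficit[f] = 0              # empty flows do not bank credit
                continue
            deficit[f] += quanta[f]
            while q and q[0] <= deficit[f]:
                pkt = q.popleft()
                deficit[f] -= pkt
                order.append((f, pkt))
    return order

if __name__ == "__main__":
    queues = {"hose_A": deque([1500, 1500, 1500]), "hose_B": deque([500, 500, 500])}
    # quanta in a 3:1 ratio -> hose_A receives roughly three times the bandwidth
    print(drr_schedule(queues, quanta={"hose_A": 3000, "hose_B": 1000}, rounds=2))
```

    The 2-D variants described above apply this idea at two levels, so that bandwidth is shared fairly both among VPNs and among the hose flows within each VPN.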

    Towards Terabit Carrier Ethernet and Energy Efficient Optical Transport Networks


    WING/WORLD: An Open Experimental Toolkit for the Design and Deployment of IEEE 802.11-Based Wireless Mesh Networks Testbeds

    Wireless Mesh Networks represent an interesting instance of light-infrastructure wireless networks. Due to their flexibility and resiliency to network failures, wireless mesh networks are particularly suitable for incremental and rapid deployments of wireless access networks in both metropolitan and rural areas. This paper illustrates the design and development of an open toolkit aimed at supporting the design of different solutions for wireless mesh networking by enabling real evaluation, validation, and demonstration. The resulting testbed is based on off-the-shelf hardware components and open-source software and is focused on IEEE 802.11 commodity devices. The software toolkit is based on an "open" philosophy and aims at providing the scientific community with a tool for effective and reproducible performance analysis of WMNs. The paper describes the architecture of the toolkit and its core functionalities, as well as its potential evolutions.

    Design of traffic shaper/scheduler for packet switches and DiffServ networks: algorithms and architectures

    The convergence of communications, information, commerce, and computing is creating a significant demand and opportunity for multimedia and multi-class communication services. In such environments, controlling the network behavior and guaranteeing the user's quality of service is required. A flexible hierarchical sorting architecture which can function either as a traffic shaper or as a scheduler, according to the traffic load, is presented to meet this requirement. The core structure can be implemented as a hierarchical traffic shaper which can support a large number of connections with a wide variety of rates and burstiness without loss of granularity in the cells' conforming departure times. The hierarchical traffic shaper can implement the exact sorting scheme with a substantially reduced memory size by using two stages of timing queues, and with a substantial reduction in complexity, without introducing any sorting inaccuracy. By setting a suitable threshold on the length of the departure queue and using a lookahead algorithm, the core structure can be converted into a hierarchical rate-adaptive scheduler. Based on the traffic load, it can work as an exact-sorting traffic shaper or as a Generic Cell Rate Algorithm (GCRA) scheduler. Such a rate-adaptive scheduler can greatly reduce the Cell Transfer Delay and the Maximum Memory Occupancy while keeping the fairness in bandwidth assignment that is an inherent characteristic of GCRA. By introducing a best-effort queue to accommodate best-effort traffic, the hierarchical sorting architecture can be changed into a nearly work-conserving scheduler. It assigns the remaining bandwidth to best-effort traffic, improving the utilization of the outlink while still guaranteeing the quality of service requirements of those services that require such guarantees. The inherent flexibility of the hierarchical sorting architecture, combined with intelligent algorithms, determines its multiple functions. Its implementation not only manages buffer and bandwidth resources effectively, but also requires no more than off-the-shelf hardware technology. The correlation between the extra shaping delay and the rate of the connections is revealed, and an improved fair traffic shaping algorithm, the Departure Event Driven plus Completing Service Time Resorting algorithm, is presented. The proposed algorithm introduces a resorting process into the Departure Event Driven Traffic Shaping Algorithm to resolve the contention of multiple cells that are all eligible for transmission in the traffic shaper. By using the resorting process based on each connection's rate, better fairness and flexibility in bandwidth assignment can be provided for connections with a wide range of rates. A Dual Level Leaky Bucket Traffic Shaper (DLLBTS) architecture is proposed for implementation at the edge nodes of Differentiated Services networks in order to facilitate the quality of service management process. The proposed architecture can guarantee not only the class-based Service Level Agreement, but also fair resource sharing among flows belonging to the same class. A simplified DLLBTS architecture is also given, which achieves the goals of DLLBTS while maintaining a very low implementation complexity, so that it can be implemented with current VLSI technology. In summary, the shaping and scheduling algorithms in high-speed packet switches and DiffServ networks are studied, and intelligent implementation schemes are proposed for them.
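    As a behavioural illustration of the GCRA referenced above (our own sketch, not the proposed shaper architecture), the code below uses GCRA(I, L) in shaping mode: each cell is delayed to its earliest conforming departure time rather than discarded, with a theoretical arrival time (TAT) tracking conformance. The arrival times, increment, and limit are hypothetical.

```python
# GCRA(I, L) used as a shaper: I is the cell spacing increment, L the jitter
# tolerance.  A cell arriving at t departs at max(t, TAT - L); TAT then
# advances by I.

def gcra_shaper(arrivals, increment, limit):
    """Return the conforming departure time of each cell (arrivals sorted)."""
    tat = None
    departures = []
    for t in arrivals:
        if tat is None:
            tat = t                        # initialise on the first cell
        eligible = max(t, tat - limit)     # earliest conforming departure time
        departures.append(eligible)
        tat = max(eligible, tat) + increment
    return departures

if __name__ == "__main__":
    # increment I = 10 (one cell per 10 time units), limit L = 2 (jitter tolerance)
    print(gcra_shaper(arrivals=[0, 1, 2, 30], increment=10, limit=2))
    # -> [0, 8, 18, 30]: back-to-back arrivals are spaced out to conform
```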

    A Survey on the Contributions of Software-Defined Networking to Traffic Engineering

    Since the appearance of OpenFlow back in 2008, software-defined networking (SDN) has gained momentum. Although there are some discrepancies between the standards developing organizations working with SDN about what SDN is and how it is defined, they all outline traffic engineering (TE) as a key application. One of the most common objectives of TE is congestion minimization, where techniques such as traffic splitting among multiple paths or advanced reservation systems are used. In such a scenario, this manuscript surveys the role of a comprehensive list of SDN protocols in TE solutions, in order to assess how these protocols can benefit TE. The SDN protocols have been categorized using the SDN architecture proposed by the Open Networking Foundation, which differentiates among data-controller plane interfaces, application-controller plane interfaces, and management interfaces, in order to state how the type of interface at which they operate influences TE. In addition, the impact of the SDN protocols on TE has been evaluated by comparing them with the path computation element (PCE)-based architecture. The PCE-based architecture has been selected to measure the impact of SDN on TE because it is the most recent TE architecture to date, and because it already defines a set of metrics to measure the performance of TE solutions. We conclude that using the three types of interfaces simultaneously will result in more powerful and enhanced TE solutions, since they benefit TE in complementary ways.
    Funding: European Commission through the Horizon 2020 Research and Innovation Programme (GN4) under Grant 691567; Spanish Ministry of Economy and Competitiveness under the Secure Deployment of Services Over SDN and NFV-based Networks Project S&NSEC under Grant TEC2013-47960-C4-3-
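    To illustrate the congestion-minimization objective that the survey highlights (a toy sketch of our own, not any surveyed solution), the code below greedily splits a demand across candidate paths so that the maximum link utilization stays low; production TE controllers, whether PCE- or SDN-based, would typically solve this as a linear program instead. The topology, capacities, and demand are hypothetical.

```python
# Greedy traffic splitting: assign the demand in small chunks, each time to
# the candidate path whose worst (bottleneck) link would remain least
# utilized, and report the resulting maximum link utilization.

def split_demand(paths, capacity, demand, chunk):
    """paths: list of tuples of link names; capacity: {link: capacity}."""
    load = {l: 0.0 for l in capacity}
    sent = {i: 0.0 for i in range(len(paths))}
    remaining = demand
    while remaining > 1e-9:
        step = min(chunk, remaining)
        best = min(range(len(paths)),
                   key=lambda i: max((load[l] + step) / capacity[l] for l in paths[i]))
        for l in paths[best]:
            load[l] += step
        sent[best] += step
        remaining -= step
    return sent, max(load[l] / capacity[l] for l in capacity)

if __name__ == "__main__":
    paths = [("a-b", "b-d"), ("a-c", "c-d")]                   # two candidate paths a -> d
    capacity = {"a-b": 10.0, "b-d": 10.0, "a-c": 5.0, "c-d": 5.0}
    print(split_demand(paths, capacity, demand=9.0, chunk=1.0))  # ~6/3 split, max util 0.6
```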

    System-on-Chip Packet Processor for an Experimental Network Services Platform

    As the focus of networking research shifts from raw performance to the delivery of advanced network services, there is a growing need for open-platform systems for extensible networking research. The Applied Research Laboratory at Washington University in Saint Louis has developed a flexible Network Services Platform (NSP) to meet this need. The NSP provides an extensible platform for prototyping next-generation network services and applications. This paper describes the design of a system-on-chip Packet Processor for the NSP which performs all core packet processing functions, including segmentation and reassembly, packet classification, route lookup, and queue management. Targeted at a commercial configurable logic device, the system is designed to support gigabit links and switch fabrics with a 2:1 speed advantage. We provide resource consumption results for each component of the Packet Processor design.
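    As a purely behavioural analogue of one of the functions listed above (route lookup), the sketch below performs an IPv4 longest-prefix match in software; the routing table and port names are hypothetical, and the actual Packet Processor implements this in configurable logic rather than in code like this.

```python
# Behavioural sketch of IPv4 route lookup: longest-prefix match over a small
# table, returning the next hop of the most specific matching prefix.

import ipaddress

def longest_prefix_match(table, addr):
    """table: {CIDR string: next hop}."""
    ip = ipaddress.ip_address(addr)
    best, best_len = None, -1
    for prefix, nexthop in table.items():
        net = ipaddress.ip_network(prefix)
        if ip in net and net.prefixlen > best_len:
            best, best_len = nexthop, net.prefixlen
    return best

if __name__ == "__main__":
    routes = {"10.0.0.0/8": "port1", "10.1.0.0/16": "port2", "0.0.0.0/0": "port0"}
    print(longest_prefix_match(routes, "10.1.2.3"))   # -> port2 (most specific match)
```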