
    Dynamic Bandwidth Allocation in Heterogeneous OFDMA-PONs Featuring Intelligent LTE-A Traffic Queuing

    A heterogeneous, optical/wireless dynamic bandwidth allocation framework is presented, exhibiting intelligent traffic queuing for practically controlling the quality of service (QoS) of mobile traffic backhauled via orthogonal frequency division multiple access PON (OFDMA-PON) networks. A converged data link layer is presented between long term evolution-advanced (LTE-A) and next-generation passive optical network (NGPON) topologies, extending beyond NGPON2. This is achieved by incorporating, in a new protocol design, a consistent mapping between LTE-A QoS class identifiers (QCIs) and OFDMA-PON queues. Novel inter-ONU algorithms have been developed, based on the distribution of weights, to allocate subcarriers to both enhanced node B/optical network units (eNB/ONUs) and residential ONUs sharing the same infrastructure. A weighted intra-ONU scheduling mechanism is also introduced to further control QoS across the network load. The inter- and intra-ONU algorithms are both dynamic and adaptive, providing customized bandwidth allocation for different priority queues at different network traffic loads while exhibiting practical fairness in bandwidth distribution. As a result, middle- and low-priority packets are not unjustifiably deprived in favor of high-priority packets at low network traffic loads, yet the protocol's adaptability still allows the high-priority queues to automatically take precedence when the traffic load increases and the available bandwidth needs to be rationally redistributed. Computer simulations have confirmed that, following the application of adaptive weights, the fairness index of the new scheme (representing the achieved throughput of each queue) improves to above 0.9 across the traffic load range. A packet delay reduction of more than 40 ms has been recorded for the low-priority queues, while the high-priority queues still achieve sufficiently low packet delays in the range of 20 to 30 ms.
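
    As a hedged illustration of the weighted allocation idea described above (not the paper's actual protocol), the Python sketch below splits OFDMA-PON subcarriers among ONUs in proportion to an assumed per-ONU weight times reported demand, and evaluates Jain's fairness index, a common definition of the throughput-fairness metric the abstract alludes to; the ONU names, weights, and demand figures are illustrative assumptions.

        # Minimal sketch: weighted inter-ONU subcarrier allocation plus Jain's
        # fairness index over per-queue throughput. Names and numbers are
        # illustrative assumptions, not values from the paper.

        def allocate_subcarriers(demands, weights, total_subcarriers):
            """Split subcarriers among ONUs in proportion to weight * reported demand."""
            shares = {onu: weights[onu] * demands[onu] for onu in demands}
            total_share = sum(shares.values()) or 1.0
            return {onu: int(total_subcarriers * share / total_share)
                    for onu, share in shares.items()}

        def jain_fairness(throughputs):
            """Jain's index: 1.0 means perfectly even throughput across queues."""
            n, s, sq = len(throughputs), sum(throughputs), sum(t * t for t in throughputs)
            return (s * s) / (n * sq) if sq else 0.0

        demands = {"eNB_ONU_1": 120, "eNB_ONU_2": 80, "residential_ONU_1": 40}    # queued demand
        weights = {"eNB_ONU_1": 1.5, "eNB_ONU_2": 1.5, "residential_ONU_1": 1.0}  # priority weights
        print(allocate_subcarriers(demands, weights, total_subcarriers=512))
        print(round(jain_fairness([95.0, 90.0, 88.0]), 3))  # close to 1.0, i.e. above 0.9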

    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources necessary for today's cloud computing needs. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, the various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control, including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have recently been receiving increasing attention, and pose interesting and novel research problems.
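
    As one concrete example of the traffic shaping techniques this survey covers (a generic mechanism, not an algorithm taken from the paper), the sketch below implements a simple token-bucket shaper in Python; the rate and burst figures are illustrative assumptions.

        # Generic token-bucket traffic shaper: packets are admitted only while
        # enough tokens (bytes) have accumulated at the configured rate.
        import time

        class TokenBucket:
            def __init__(self, rate_bps, burst_bytes):
                self.rate = rate_bps / 8.0      # refill rate in bytes per second
                self.capacity = burst_bytes     # maximum burst size in bytes
                self.tokens = float(burst_bytes)
                self.last = time.monotonic()

            def allow(self, packet_bytes):
                """Return True if the packet conforms (send now), False if it must wait."""
                now = time.monotonic()
                self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if packet_bytes <= self.tokens:
                    self.tokens -= packet_bytes
                    return True
                return False

        # Shape a flow to roughly 10 Mb/s with a 15 kB burst allowance (assumed figures).
        shaper = TokenBucket(rate_bps=10_000_000, burst_bytes=15_000)
        print(shaper.allow(1500))  # first MTU-sized packet conforms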

    Improvement of indoor VLC network downlink scheduling and resource allocation

    Indoor visible light communication (VLC) combines illumination and communication by exploiting the high modulation speed of LEDs. VLC is anticipated to be complementary to radio frequency communications and an important part of next generation heterogeneous networks. In order to make the maximum use of VLC technology in a networking environment, existing research needs to expand from studies of traditional point-to-point links to scheduling and resource allocation in multi-user scenarios. This work aims to maximize the downlink throughput of an indoor VLC network while taking both user fairness and time latency into consideration. Inter-user interference is eliminated by appropriately allocating LEDs to users with the aid of graph theory. A three-term priority factor model is derived and is shown to improve the throughput performance of the network scheduling scheme over those previously reported. Simulations of VLC downlink scheduling have been performed under proportional fairness scheduling principles with the newly formulated priority factor model applied. The downlink throughput is improved by 19.6% compared to previous two-term priority models, while achieving similar fairness and latency performance. As the number of users grows, the three-term priority model also shows improved fairness performance compared to two-term priority model scheduling.
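
    The paper's exact three-term priority factor is not reproduced in this listing, so the sketch below only illustrates the general shape of such a model under assumed terms: a proportional-fairness ratio of instantaneous rate to average served throughput, extended by a waiting-time term; the exponents and user figures are illustrative assumptions.

        # Hedged sketch of a multi-term scheduling priority factor for a VLC downlink.
        # The terms (instantaneous rate, average throughput, waiting time) and the
        # exponents alpha/beta/gamma are assumptions, not the paper's model.

        def priority(inst_rate, avg_throughput, waiting_time,
                     alpha=1.0, beta=1.0, gamma=1.0):
            """Higher value -> this user is scheduled first on the contended LEDs."""
            return (inst_rate ** alpha) * ((1.0 + waiting_time) ** gamma) \
                   / (max(avg_throughput, 1e-9) ** beta)

        users = {
            "user_a": {"inst_rate": 40e6, "avg_throughput": 25e6, "waiting_time": 0.002},
            "user_b": {"inst_rate": 15e6, "avg_throughput": 5e6,  "waiting_time": 0.010},
        }
        chosen = max(users, key=lambda u: priority(**users[u]))
        print(chosen)  # user_b: low served throughput and longer wait outweigh its lower rate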

    Power efficient dynamic resource scheduling algorithms for LTE


    Scheduling Policies in Time and Frequency Domains for LTE Downlink Channel: A Performance Comparison

    A key feature of the Long-Term Evolution (LTE) system is that the packet scheduler can make use of the channel quality information (CQI) that is periodically reported by user equipment, either in an aggregate form for the whole downlink channel or separately for each available subchannel. This mechanism allows wide discretion in resource allocation, thus promoting the flourishing of several scheduling algorithms with different purposes. It is therefore of great interest to compare the performance of such algorithms under different scenarios. Here, we carry out a thorough performance analysis of different scheduling algorithms for saturated User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) traffic sources, considering both the time- and frequency-domain versions of the schedulers and both flat and frequency-selective channels. The analysis makes it possible to appreciate the differences among the scheduling algorithms and to assess the performance gain, in terms of cell capacity, user fairness, and packet service time, obtained by exploiting the richer, but heavier, information carried by subchannel CQI. An important part of this analysis is a throughput guarantee scheduler, which we propose in this paper. The analysis reveals that the proposed scheduler provides a good tradeoff between cell capacity and fairness for both TCP and UDP traffic sources.
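
    To make the use of per-subchannel CQI concrete, the sketch below shows a generic frequency-domain proportional-fair assignment in Python: each resource block goes to the user with the highest CQI-derived rate normalized by past throughput. The CQI-to-rate table and the user data are illustrative assumptions, not the paper's simulation setup or its proposed throughput guarantee scheduler.

        # Generic frequency-domain proportional-fair scheduler (illustrative only).

        # Assumed mapping from CQI index to achievable rate (bits per resource block).
        CQI_TO_RATE = {1: 50, 4: 150, 7: 300, 10: 500, 13: 700, 15: 900}

        def schedule_rbs(cqi_reports, avg_throughput):
            """cqi_reports[user][rb] -> CQI index; returns {rb: chosen user}."""
            n_rbs = len(next(iter(cqi_reports.values())))
            allocation = {}
            for rb in range(n_rbs):
                allocation[rb] = max(
                    cqi_reports,
                    key=lambda u: CQI_TO_RATE[cqi_reports[u][rb]] / max(avg_throughput[u], 1e-9),
                )
            return allocation

        cqi_reports = {"ue1": [15, 7, 4], "ue2": [7, 13, 10]}   # per-RB CQI per user
        avg_throughput = {"ue1": 2.0e6, "ue2": 1.0e6}           # bits/s served so far
        print(schedule_rbs(cqi_reports, avg_throughput))        # {0: 'ue1', 1: 'ue2', 2: 'ue2'}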

    Enabling preemptive multiprogramming on GPUs

    GPUs are being increasingly adopted as compute accelerators in many domains, spanning environments from mobile systems to cloud computing. These systems are usually running multiple applications, from one or several users. However, GPUs do not provide the support for resource sharing traditionally expected in these scenarios. Thus, such systems are unable to provide key multiprogrammed workload requirements, such as responsiveness, fairness, or quality of service. In this paper, we propose a set of hardware extensions that allow GPUs to efficiently support multiprogrammed GPU workloads. We argue for preemptive multitasking and design two preemption mechanisms that can be used to implement GPU scheduling policies. We extend the architecture to allow concurrent execution of GPU kernels from different user processes and implement a scheduling policy that dynamically distributes the GPU cores among concurrently running kernels according to their priorities. We extend an NVIDIA GK110 (Kepler) like GPU architecture with our proposals and evaluate them on a set of multiprogrammed workloads with up to eight concurrent processes. Our proposals improve the execution time of high-priority processes by 15.6x, the average application turnaround time by 1.5x to 2x, and system fairness by up to 3.4x.
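
    In the spirit of the scheduling policy described above (distributing GPU cores among concurrent kernels by priority), the sketch below computes a priority-proportional split of streaming multiprocessors; the preemption mechanisms themselves are not modeled, and the kernel names, priorities, and SM count are illustrative assumptions.

        # Hedged sketch: priority-proportional distribution of GPU SMs among
        # concurrently running kernels. Not the paper's hardware mechanism.

        def distribute_sms(kernels, total_sms):
            """kernels: {name: priority}; returns {name: SM count}, each kernel gets >= 1 SM."""
            total_priority = sum(kernels.values())
            shares = {k: max(1, total_sms * p // total_priority) for k, p in kernels.items()}
            # Hand any SMs left over by integer rounding to the highest-priority kernels.
            leftover = total_sms - sum(shares.values())
            for k in sorted(kernels, key=kernels.get, reverse=True):
                if leftover <= 0:
                    break
                shares[k] += 1
                leftover -= 1
            return shares

        # Example: 15 SMs (a GK110-class GPU) shared by kernels from three processes.
        print(distribute_sms({"interactive": 8, "batch_a": 2, "batch_b": 2}, total_sms=15))
        # -> {'interactive': 11, 'batch_a': 2, 'batch_b': 2}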