
    Datacenter Traffic Control: Understanding Techniques and Trade-offs

    Datacenters provide cost-effective and flexible access to the scalable compute and storage resources needed for today's cloud computing. A typical datacenter is made up of thousands of servers connected with a large network and usually managed by one operator. To provide quality access to the variety of applications and services hosted on datacenters and to maximize performance, it is necessary to use datacenter networks effectively and efficiently. Datacenter traffic is often a mix of several classes with different priorities and requirements, including user-generated interactive traffic, traffic with deadlines, and long-running traffic. To this end, custom transport protocols and traffic management techniques have been developed to improve datacenter network performance. In this tutorial paper, we review the general architecture of datacenter networks, various topologies proposed for them, their traffic properties, general traffic control challenges in datacenters, and general traffic control objectives. The purpose of this paper is to bring out the important characteristics of traffic control in datacenters, not to survey all existing solutions (which is virtually impossible given the massive body of existing research). We hope to provide readers with a wide range of options and factors to consider when evaluating a variety of traffic control mechanisms. We discuss various characteristics of datacenter traffic control including management schemes, transmission control, traffic shaping, prioritization, load balancing, multipathing, and traffic scheduling. Next, we point to several open challenges as well as new and interesting networking paradigms. At the end of this paper, we briefly review inter-datacenter networks, which connect geographically dispersed datacenters, have been receiving increasing attention recently, and pose interesting and novel research problems.
    Comment: Accepted for publication in IEEE Communications Surveys and Tutorials
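    To make one of the load-balancing and multipathing techniques mentioned above concrete, the sketch below illustrates flow-level hashing in the spirit of ECMP: a flow's 5-tuple is hashed so that all of its packets follow one path (avoiding reordering) while different flows spread across the available paths. The function, field names, and path list are illustrative assumptions, not taken from the paper.

import hashlib

def pick_path(src_ip, dst_ip, src_port, dst_port, proto, paths):
    """Flow-level, ECMP-style load balancing: hash the 5-tuple so that all
    packets of a flow use the same path, while distinct flows spread out."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

# Hypothetical example: four equal-cost uplinks in a leaf-spine fabric.
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]
print(pick_path("10.0.0.1", "10.0.1.2", 40312, 443, "tcp", paths))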

    Traffic-splitting networks operating under alpha-fair sharing policies and balanced fairness

    We consider a data network in which, besides classes of users that use specific routes, one class of users can split its traffic over several routes. We consider load balancing at the packet level, meaning that traffic of this class of users can be divided among several routes at the same time. Assuming that load balancing is based on an alpha-fair sharing policy, we show that the network has multiple possible behaviors. In particular, we show that some classes of users, depending on the state of the network, share capacity according to a Discriminatory Processor Sharing (DPS) model, whereas each of the remaining classes of users behaves as in a single-class single-node model. We compare the performance of this network with that of a similar network in which packet-level load balancing is based on balanced fairness. We derive explicit expressions for the mean number of users under balanced fairness, and show through extensive simulation experiments that these provide accurate approximations for the corresponding quantities under alpha-fair sharing.
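    For reference, the alpha-fair allocation referred to above is commonly defined (in standard notation, not this paper's) as the rate vector maximizing the aggregate alpha-fair utility:

U_\alpha(x) =
  \begin{cases}
    \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha > 0,\ \alpha \neq 1, \\
    \log x, & \alpha = 1,
  \end{cases}
\qquad
\max_{x \ge 0} \; \sum_{k} n_k \, U_\alpha(x_k)
\ \ \text{subject to the link capacity constraints,}

    where n_k is the number of class-k users and x_k the per-user rate; alpha = 1 recovers proportional fairness, and alpha tending to infinity approaches max-min fairness.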

    Improving software middleboxes and datacenter task schedulers

    Over the last few decades, shared systems have contributed to the popularity of many technologies. From operating systems to the Internet, they have brought significant cost savings by allowing the underlying infrastructure to be shared. A common challenge in these systems is to ensure that resources are fairly divided without compromising utilization efficiency. In this thesis, we look at problems in two shared systems—software middleboxes and datacenter task schedulers—and propose ways of improving both efficiency and fairness. We begin by presenting Sprayer, a system that uses packet spraying to load balance packets to cores in software middleboxes. Sprayer eliminates the imbalance problems of per-flow solutions and addresses the new challenges of handling shared flow state that come with packet spraying. We show that Sprayer significantly improves fairness and seamlessly uses the entire capacity, even when there is a single flow in the system. After that, we present Stateful Dominant Resource Fairness (SDRF), a task scheduling policy for datacenters that looks at past allocations and enforces fairness in the long run. We prove that SDRF keeps the fundamental properties of DRF—the allocation policy it is built on—while benefiting users with lower usage. To efficiently implement SDRF, we also introduce the live tree, a general-purpose data structure that keeps elements with predictable time-varying priorities sorted. Our trace-driven simulations indicate that SDRF reduces users' waiting time on average. This improves fairness by increasing the number of completed tasks for users with lower demands, with only a small impact on high-demand users.
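    As background for SDRF, the sketch below illustrates the progressive-filling idea behind plain Dominant Resource Fairness (DRF), the policy it builds on: each new task goes to the user with the smallest dominant share. The cluster capacities, per-task demands, and the simplified stopping rule are assumptions for illustration; this is not the thesis's SDRF or its live-tree implementation.

def drf_schedule(capacity, demands, rounds=100):
    """Plain DRF progressive filling: repeatedly give the next task to the
    user with the smallest dominant share. `capacity` maps resource -> total;
    `demands` maps user -> per-task demand per resource."""
    used = {r: 0.0 for r in capacity}
    alloc = {u: {r: 0.0 for r in capacity} for u in demands}
    tasks = {u: 0 for u in demands}
    for _ in range(rounds):
        # Dominant share = largest fraction of any resource allocated to the user.
        def dominant_share(u):
            return max(alloc[u][r] / capacity[r] for r in capacity)
        user = min(demands, key=dominant_share)
        d = demands[user]
        # Simplification: stop as soon as the selected user's next task does not fit.
        if any(used[r] + d[r] > capacity[r] for r in capacity):
            break
        for r in capacity:
            used[r] += d[r]
            alloc[user][r] += d[r]
        tasks[user] += 1
    return tasks

# Hypothetical 2-resource cluster (9 CPUs, 18 GB RAM) and per-task demands.
print(drf_schedule({"cpu": 9, "mem": 18},
                   {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}))
# -> {'A': 3, 'B': 2}, matching the classic DRF example.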

    A Priority-based Fair Queuing (PFQ) Model for Wireless Healthcare System

    Healthcare is a very active research area, primarily due to the growth of the elderly population, which leads to an increasing number of emergency situations that require urgent action. In recent years, some wireless networked medical devices have been equipped with different sensors to measure and report patients' vital signs remotely. The most important are heart rate (ECG), pressure, and glucose sensors. However, the strict requirements and real-time nature of medical applications make appropriate Quality of Service (QoS) and the fast, accurate delivery of a patient's measurements essential in a reliable e-health ecosystem. As the older adult population (65 years and above) grows thanks to advances in medicine and medical care over the last two decades, providing high QoS and a reliable e-health ecosystem has become a major challenge in healthcare, especially for patients who require continuous monitoring and attention. Predictions indicate that the elderly population in developing countries will reach approximately 2 billion by 2050, and the availability of medical staff will be unable to cope with this growth and with emergency cases that need immediate intervention. At the same time, limited communication network capacity, congestion, and the enormous increase in devices, applications, and IoT traffic on the available networks add a further layer of challenges to the e-health ecosystem, such as time constraints and the quality of measurements and signals reaching healthcare centres. This research therefore tackles the delay and jitter parameters in e-health M2M wireless communication and succeeds in reducing them compared with currently available models. The novelty of this research lies in a new priority queuing model, Priority-based Fair Queuing (PFQ), in which a new priority level based on the concept of a Patient's Health Record (PHR) is integrated with the Priority Parameter (PP) values of each sensor to add a second level of priority. Results and data analysis of the PFQ model under different scenarios simulating a real M2M e-health environment show that PFQ outperforms widely used models such as First In First Out (FIFO) and Weighted Fair Queuing (WFQ). The PFQ model improves the transmission of ECG sensor data in emergency cases by decreasing delay and jitter by 83.32% and 75.88% respectively compared with FIFO, and by 46.65% and 60.13% compared with WFQ. Similarly, for the pressure sensor the improvements are 82.41% and 71.5% compared with FIFO and 68.43% and 73.36% compared with WFQ. Data transmission for the glucose sensor also improves, by 80.85% and 64.7% compared with FIFO and 92.1% and 83.17% compared with WFQ. However, data transmission in non-emergency cases is negatively affected and exhibits higher delay and jitter than under FIFO and WFQ, since PFQ gives higher priority to emergency cases. Thus, a derivative of the PFQ model, Priority-based Fair Queuing-Tolerated Delay (PFQ-TD), has been developed to balance data transmission between emergency and non-emergency cases by allowing a tolerated delay for emergency cases. PFQ-TD succeeds in balancing this trade-off, reducing the total average delay and jitter of emergency and non-emergency cases across all sensors and keeping them within acceptable standards. PFQ-TD improves the overall average delay and jitter in emergency and non-emergency cases across all sensors by 41% and 84% respectively compared with the PFQ model.
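    As a rough illustration of the kind of two-level priority queuing described above (not the thesis's exact PFQ model), the sketch below orders sensor readings first by an emergency flag, as could be derived from a patient's health record, and then by a per-sensor priority value; the sensor names, priority values, and class design are assumptions for illustration.

import heapq
import itertools

# Hypothetical per-sensor priority parameters (lower value = higher priority).
SENSOR_PRIORITY = {"ecg": 0, "pressure": 1, "glucose": 2}

class TwoLevelQueue:
    """Illustrative two-level priority queue: emergency readings are always
    served before routine ones; within a level, readings are ordered by
    sensor priority and then by arrival order."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker preserving FIFO order

    def push(self, sensor, value, emergency):
        level = 0 if emergency else 1
        key = (level, SENSOR_PRIORITY[sensor], next(self._seq))
        heapq.heappush(self._heap, (key, sensor, value))

    def pop(self):
        _, sensor, value = heapq.heappop(self._heap)
        return sensor, value

q = TwoLevelQueue()
q.push("glucose", 5.4, emergency=False)
q.push("ecg", 148, emergency=True)   # emergency ECG reading jumps the queue
print(q.pop())                       # -> ('ecg', 148)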
    • 
