Comparative Study Of Congestion Control Techniques In High Speed Networks
Network congestion occurs when aggregate demand exceeds the available capacity of the resources. Congestion grows as network speed increases, and new, effective congestion control methods are needed, especially to handle the bursty traffic of today's very high-speed networks. Since the late 1990s, numerous schemes [1]–[10] have been proposed. This paper presents a comparative study of different congestion control schemes based on some key performance metrics. An effort has been made to judge the performance of a Maximum Entropy (ME) based solution for steady-state GE/GE/1/N censored queues with a partial buffer sharing scheme against these key performance metrics.
Comment: 10 pages, IEEE format, International Journal of Computer Science and Information Security, IJCSIS November 2009, ISSN 1947-5500, http://sites.google.com/site/ijcsis
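A full ME solution for GE/GE/1/N queues with partial buffer sharing involves GE-type moment constraints beyond the scope of this listing. As a rough illustration of the maximum-entropy principle alone: maximising entropy subject only to normalisation and a mean-occupancy constraint yields a truncated geometric state distribution, whose parameter can be fitted numerically. The function name, constraint, and parameter values below are illustrative assumptions, not the paper's model.

```python
import math

def max_entropy_geometric(N, mean_target):
    """Fit x so that the truncated geometric distribution p_n ∝ x**n on
    states 0..N has the target mean queue length. This geometric form is
    what the maximum-entropy principle yields under normalisation and a
    mean-occupancy constraint (a simplification of the GE/GE/1/N case)."""
    def mean_of(x):
        weights = [x**n for n in range(N + 1)]
        z = sum(weights)
        return sum(n * w for n, w in zip(range(N + 1), weights)) / z

    lo, hi = 1e-12, 1e6
    for _ in range(200):              # bisection in log space
        mid = math.sqrt(lo * hi)
        if mean_of(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    x = math.sqrt(lo * hi)
    z = sum(x**n for n in range(N + 1))
    return [x**n / z for n in range(N + 1)]

probs = max_entropy_geometric(N=10, mean_target=3.0)
blocking = probs[-1]  # probability the finite buffer is full
```

The last state's probability plays the role of the blocking (loss) probability that such comparative studies evaluate.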
Optimised Traffic Flow at a Single Intersection: Traffic Responsive signalisation
We propose a stochastic model for the intersection of two urban streets. The
vehicular traffic at the intersection is controlled by a set of traffic lights
which can be operated subject to fixed-time as well as traffic-adaptive schemes.
Vehicular dynamics is simulated within the framework of the probabilistic
cellular automata and the delay experienced by the traffic at each individual
street is evaluated for specified time intervals. Minimising the total delay of
both streets gives rise to the optimum signalisation of traffic lights. We
propose some traffic-responsive signalisation algorithms based on the
concepts of cut-off queue length and cut-off density.
Comment: 10 pages, 11 eps figs, to appear in J. Phys.
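The cut-off-queue-length idea described above can be sketched as a simple switching rule: the green light moves to the competing street once that street's queue exceeds a threshold, subject to a minimum green period that prevents rapid oscillation. The function, thresholds, and street labels are illustrative assumptions, not the paper's exact algorithm.

```python
def step_signal(queue_ns, queue_ew, green, timer, cutoff=5, min_green=10):
    """One time step of cut-off-queue-length signalisation.
    green: which street currently has green ("NS" or "EW").
    timer: time steps elapsed in the current green phase.
    Switches away from the current street once the waiting street's
    queue exceeds `cutoff`, but only after `min_green` steps."""
    timer += 1
    waiting = queue_ew if green == "NS" else queue_ns
    if timer >= min_green and waiting > cutoff:
        green = "EW" if green == "NS" else "NS"
        timer = 0
    return green, timer
```

In a cellular-automaton simulation this rule would be evaluated once per update step, with the queues read off the lattice.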
Controlling Network Latency in Mixed Hadoop Clusters: Do We Need Active Queue Management?
With the advent of big data, data center applications are processing vast amounts of unstructured and semi-structured data, in parallel on large clusters, across hundreds to thousands of nodes. The highest performance for these batch big data workloads is achieved using expensive network equipment with large buffers, which accommodate bursts in network traffic and allocate bandwidth fairly even when the network is congested. Throughput-sensitive big data applications are, however, often executed in the same data center as latency-sensitive workloads. For both workloads to be supported well, the network must provide both maximum throughput and low latency. Progress has been made in this direction, as modern network switches support Active Queue Management (AQM) and Explicit Congestion Notification (ECN), both mechanisms to control the level of queue occupancy and reduce total network latency. This paper is the first study of the effect of Active Queue Management on both throughput and latency in the context of Hadoop and the MapReduce programming model. We give a quantitative comparison of four different approaches for controlling buffer occupancy and latency: RED and CoDel, both standalone and combined with ECN and the DCTCP network protocol, and identify the AQM configurations that maintain Hadoop execution time gains from larger buffers within 5%, while reducing network packet latency caused by bufferbloat by up to 85%. Finally, we provide recommendations to administrators of Hadoop clusters as to how to improve latency without degrading the throughput of batch big data workloads.

The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007–2013) under grant agreement number 610456 (Euroserver). The research was also supported by the Ministry of Economy and Competitiveness of Spain under the contracts TIN2012-34557 and TIN2015-65316-P, the Generalitat de Catalunya (contracts 2014-SGR-1051 and 2014-SGR-1272), the HiPEAC-3 Network of Excellence (ICT-287759), and the Severo Ochoa Program (SEV-2011-00067) of the Spanish Government.
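Of the AQM schemes the abstract compares, RED is the simplest to illustrate: the drop (or, with ECN, mark) probability of an arriving packet grows linearly with an exponentially weighted moving average of the queue length. The thresholds and weight below are illustrative defaults, not the tuned values the paper identifies.

```python
def red_drop_probability(avg_q, min_th=5, max_th=15, max_p=0.1):
    """Classic RED marking/drop probability as a linear function of the
    averaged queue length: zero below min_th, max_p at max_th, and 1.0
    beyond max_th (forced drop). With ECN, packets between the
    thresholds are marked rather than dropped, and DCTCP senders react
    in proportion to the fraction of marked packets."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def ewma(avg_q, sample, weight=0.002):
    """RED's exponentially weighted moving average of the queue length,
    updated on each packet arrival."""
    return (1 - weight) * avg_q + weight * sample
```

CoDel, by contrast, works on per-packet sojourn time rather than queue length, which is why the two schemes behave differently under the bursty shuffle traffic that Hadoop generates.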
Apparatus and method for congestion control in high speed networks
An available bit rate (ABR) feedback control scheme is provided in which the effects of multi-loop delays and high-priority traffic transmission are built into the control model. The data traffic is filtered by a low-pass filter. Then, the low-frequency bandwidth of the filtered traffic is measured and compared to a predetermined threshold. If the measured value exceeds the threshold, the ABR traffic flow is reduced; if the measured value is less than the threshold, the ABR traffic flow is increased. In addition, a General Prediction Control (GPC) method may be applied to the control model for optimal performance. An object of the invention is to minimize the unused link capacity subject to no congestion, where the ABR traffic is adapted to the low-frequency variation of the high-priority traffic flow for high efficiency.
Board of Regents, University of Texas System
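The control loop described in this abstract can be sketched as: low-pass filter the high-priority traffic, compare the resulting low-frequency bandwidth against a threshold, and back off or ramp up the ABR allowance accordingly. The filter constant, threshold fraction, and step sizes below are assumptions for illustration, not values from the patent.

```python
def abr_update(abr_rate, hp_samples, link_capacity, alpha=0.1,
               threshold_fraction=0.9, step=0.05):
    """One iteration of a sketched ABR feedback loop.
    hp_samples: recent high-priority bandwidth measurements.
    Returns the updated ABR rate, capped by the capacity left over
    after the filtered high-priority traffic."""
    filtered = 0.0
    for s in hp_samples:                  # first-order low-pass filter
        filtered = (1 - alpha) * filtered + alpha * s
    threshold = threshold_fraction * link_capacity
    if filtered + abr_rate > threshold:   # congestion risk: reduce ABR
        abr_rate *= (1 - step)
    else:                                 # spare capacity: increase ABR
        abr_rate *= (1 + step)
    return min(abr_rate, link_capacity - filtered)
```

The patent's GPC refinement would replace the fixed multiplicative steps with a model-based prediction of the high-priority traffic, which is beyond this sketch.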
Intelligent Controlling Simulation of Traffic Flow in a Small City Network
We propose a two dimensional probabilistic cellular automata for the
description of traffic flow in a small city network composed of two
intersections. The traffic in the network is controlled by a set of traffic
lights which can be operated in both a fixed-time and a traffic-responsive
manner. Vehicular dynamics is simulated and the total delay experienced by the
traffic is evaluated within specified time intervals. We investigate both
decentralized and centralized traffic responsive schemes and in particular
discuss the implementation of the {\it green-wave} strategy. Our investigations
show that the network delay depends strongly on the signalisation strategy. In
particular, under some traffic conditions the application of the green-wave
scheme can be counterproductive and increase the global delay.
Comment: 8 pages, 10 eps figures, RevTeX
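The green-wave strategy coordinates successive lights so that a platoon released on green at one intersection arrives at the next just as it turns green; the abstract's point is that this coordination can backfire under some traffic conditions. A minimal sketch of the offset computation, with illustrative distances and cruise speed (not the paper's parameters):

```python
def green_wave_offsets(distances_m, cruise_speed_ms=14.0):
    """Offset each downstream light's green phase by the free-flow
    travel time from the previous intersection, so an undisturbed
    platoon meets green at every light. distances_m: spacing between
    consecutive intersections in metres."""
    offsets = [0.0]
    for d in distances_m:
        offsets.append(offsets[-1] + d / cruise_speed_ms)
    return offsets
```

The offsets assume free-flow speed; when queues slow the platoon below that speed, vehicles arrive after the coordinated green has passed, which is one way the scheme can increase rather than decrease the global delay.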
Controlling delay differentiation with priority jumps: analytical study
Supporting different services with different Quality of Service (QoS) requirements is not an easy task in modern telecommunication systems: an efficient priority scheduling discipline is of great importance. Fixed or static priority achieves maximal delay differentiation between different types of traffic, but may have too severe an impact on the performance of lower-priority traffic. In this paper, we propose a priority scheduling discipline with priority jumps to control the delay differentiation. In this scheduling discipline, packets can be promoted to a higher priority level in the course of time. We use probability generating functions to study the queueing system analytically; some interesting mathematical challenges thereby arise. With some numerical examples, we finally show the impact of the priority jumps and of the system parameters.
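The jump mechanism described above can be sketched as a discrete-time simulation: each slot serves the high-priority queue first, and a low-priority packet whose waiting time reaches a jump threshold is promoted to the tail of the high-priority queue. The arrival encoding and parameter name are illustrative assumptions, not the paper's notation, and the paper's analysis is via generating functions rather than simulation.

```python
from collections import deque

def simulate(arrivals, jump_after=3):
    """Discrete-time priority queue with priority jumps.
    arrivals: dict mapping slot -> list of (packet_id, cls) with cls
    "H" (high priority) or "L" (low priority). One packet is served
    per slot; a low-priority packet waiting jump_after slots or more
    jumps to the high-priority queue. Returns the service order."""
    high, low, served, slot = deque(), deque(), [], 0
    while high or low or any(s >= slot for s in arrivals):
        for pkt, cls in arrivals.get(slot, []):
            (high if cls == "H" else low).append((pkt, slot))
        while low and slot - low[0][1] >= jump_after:
            high.append(low.popleft())       # priority jump
        if high:
            served.append(high.popleft()[0])
        elif low:
            served.append(low.popleft()[0])
        slot += 1
    return served
```

With a small jump threshold, an aged low-priority packet overtakes later high-priority arrivals; with a large threshold the discipline degenerates to strict static priority, which is exactly the delay-differentiation trade-off the paper controls analytically.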