Comparative Study Of Congestion Control Techniques In High Speed Networks
Congestion occurs in a network when the aggregate demand exceeds the accessible capacity of its resources. Network congestion will increase as network speeds increase, so new, effective congestion control methods are needed, especially to handle the bursty traffic of today's very high speed networks. Since the late 1990s, numerous schemes, e.g. [1]...[10], have been proposed. This paper presents a comparative study of different congestion control schemes based on some key performance metrics. An effort has been made to judge the performance of a Maximum Entropy (ME) based solution for steady-state GE/GE/1/N censored queues with a partial buffer sharing scheme against these key performance metrics.
Comment: 10 pages, IEEE format. International Journal of Computer Science and Information Security (IJCSIS), November 2009, ISSN 1947-5500, http://sites.google.com/site/ijcsis
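The partial buffer sharing scheme mentioned in the abstract can be illustrated with a minimal sketch. The values of N and T below are assumptions for illustration, not the paper's parameters: a finite buffer of size N admits all arrivals while occupancy is below a threshold T; at or above T, only high-priority packets are admitted, and a full buffer censors (loses) everything.

```python
# Hedged sketch of a partial buffer sharing (PBS) admission rule.
# N and T are illustrative assumptions, not values from the paper.
N = 10   # total buffer capacity (assumed)
T = 7    # PBS threshold (assumed)

def admit(queue_len, high_priority):
    """Return True if an arriving packet is accepted under PBS."""
    if queue_len >= N:
        return False              # buffer full: arrival censored (lost)
    if queue_len >= T:
        return high_priority      # shared region exhausted: low priority dropped
    return True                   # below threshold: admit everyone

# quick check of the rule
assert admit(3, False) and admit(3, True)
assert not admit(8, False) and admit(8, True)
assert not admit(10, True)
```

Reserving the region above T for high-priority traffic is what lets such a queue trade low-priority loss for high-priority delay and loss guarantees, which is the behaviour the paper's metrics evaluate.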
A practical approach to network-based processing
The usage of general-purpose processors externally attached to routers, acting virtually as active coprocessors, seems a safe and cost-effective approach to adding active network capabilities to existing routers. This paper reviews this router-assistant way of building active nodes, addresses the benefits and limitations of the technique, and describes a new platform based on it using an enhanced commercial router. The features new to this type of architecture are transparency, IPv4 and IPv6 support, and full control over layer 3 and above. A practical experience with two applications, for path characterization and a transport gateway managing multi-QoS, is described. Most of this work has been funded by the IST project GCAP (Global Communication Architecture and Protocols for new QoS services over IPv6 networks) IST-1999-10504. Further development and application to practical scenarios is being supported by the IST project Opium (Open Platform for Integration of UMTS Middleware) IST-2001-36063 and the Spanish MCYT under projects TEL99-0988-C02-01 and AURAS TIC2001-1650-C02-01.
ABE: providing a low-delay service within best effort
Alternative Best Effort (ABE) is a novel service for IP networks, based on the idea of providing low delay at the expense of possibly lower throughput. The objective is to retain the simplicity of the original Internet single-class best-effort service while providing low delay to interactive adaptive applications.
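The delay-for-throughput trade-off ABE describes can be sketched with a toy two-queue dequeue rule. This is an illustration of the trade-off only, not the paper's actual mechanism: low-delay ("green") packets are served first but are discarded once a per-packet deadline expires, so they get low delay at the cost of possible throughput loss, while best-effort ("blue") packets keep their throughput.

```python
from collections import deque

GREEN_DEADLINE = 5   # max queueing delay for green packets (assumed units)

green, blue = deque(), deque()   # each entry: (enqueue_time, payload)

def enqueue(now, payload, is_green):
    (green if is_green else blue).append((now, payload))

def dequeue(now):
    """Serve green first, discarding green packets that missed their deadline."""
    while green:
        t, payload = green.popleft()
        if now - t <= GREEN_DEADLINE:
            return payload        # low delay honoured
        # deadline missed: drop, trading throughput for low delay
    if blue:
        return blue.popleft()[1]  # blue keeps best-effort throughput
    return None

enqueue(0, "g1", True); enqueue(0, "b1", False); enqueue(1, "g2", True)
assert dequeue(2) == "g1"
assert dequeue(10) == "b1"   # g2, enqueued at t=1, missed its deadline by t=10
```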
User-Centric Quality of Service Provisioning in IP Networks
The Internet has become the preferred transport medium for almost every type of communication, continuing to grow, both in terms of the number of users and delivered services. Efforts have been made to ensure that time sensitive applications receive sufficient resources and subsequently receive an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time, as they are instead engaged in a multimedia-rich experience, comprising many different concurrent services. Given the scalability problems raised by the diversity of the users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of providing priority to specific traffic types over coexisting services; either through explicit resource reservation, or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (DiffServ). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, thus highlighting a need for a QoS solution reflecting the user services.
The aim of this thesis is to investigate and propose a novel QoS architecture, which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness.
This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation, in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on a specific traffic; instead, it adapts QoS policies to each individual’s Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach was to enable a QoS optimised experience to each Internet user and not just those using preferred services. Furthermore, unresponsive bandwidth intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services.
The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional DiffServ, and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
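The user-centric allocation idea behind CAPS can be sketched as follows. This is a hypothetical illustration, not the thesis's actual scheduler: capacity at the aggregation point is first shared fairly between users, then split among each user's active services according to weights derived from that user's own traffic profile, rather than from a globally preferred service class.

```python
# Hypothetical two-level, user-centric bandwidth split (illustrative only).
def share(link_capacity, users):
    """users: {user: {service: weight}} -> {user: {service: rate}}"""
    per_user = link_capacity / len(users)          # inter-user fairness first
    alloc = {}
    for user, services in users.items():
        total = sum(services.values())
        # each user's share is divided per their own traffic profile
        alloc[user] = {s: per_user * w / total for s, w in services.items()}
    return alloc

profiles = {
    "alice": {"voip": 3, "web": 1},      # interactive-heavy profile (assumed)
    "bob":   {"p2p": 1, "video": 4},     # streaming-heavy profile (assumed)
}
rates = share(10.0, profiles)            # 10 Mbit/s aggregate link (assumed)
assert rates["alice"]["voip"] == 3.75
assert abs(sum(rates["bob"].values()) - 5.0) < 1e-9
```

Note how no service class is globally privileged: Alice's VoIP and Bob's video each dominate only within their owner's share, which mirrors the abstract's claim that QoS policies adapt to each individual's traffic profile.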
Methods of Congestion Control for Adaptive Continuous Media
Since the first exchange of data between machines in different locations in the early 1960s,
computer networks have grown exponentially with millions of people now using the
Internet. With this, there has also been a rapid increase in different kinds of services offered
over the World Wide Web from simple e-mails to streaming video. It is generally accepted
that the commonly used protocol suite TCP/IP alone is not adequate for a number of
modern applications with high bandwidth and minimal delay requirements. Many
technologies are emerging, such as IPv6, DiffServ, and IntServ, which aim to replace the one-size-fits-all approach of the current IPv4. There is a consensus that the networks will have
to be capable of multi-service and will have to isolate different classes of traffic through
bandwidth partitioning such that, for example, low priority best-effort traffic does not cause
delay for high priority video traffic. However, this research identifies that even within a
class there may be delays or losses due to congestion and the problem will require different
solutions in different classes.
The focus of this research is on the requirements of the adaptive continuous media
class. These are traffic flows that require a good Quality of Service but are also able to
adapt to the network conditions by accepting some degradation in quality. It is potentially
the most flexible traffic class and therefore, one of the most useful types for an increasing
number of applications.
This thesis discusses the QoS requirements of adaptive continuous media and
identifies an ideal feedback based control system that would be suitable for this class. A
number of current methods of congestion control have been investigated and two methods
that have been shown to be successful with data traffic have been evaluated to ascertain if
they could be adapted for adaptive continuous media. A novel method of control based on
percentile monitoring of the queue occupancy is then proposed and developed. Simulation
results demonstrate that the percentile monitoring based method is more appropriate to this
type of flow. The problem of congestion control at aggregating nodes of the network
hierarchy, where thousands of adaptive flows may be aggregated to a single flow, is then
considered. A unique method of pricing mean and variance is developed such that each
individual flow is charged fairly for its contribution to the congestion.
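The percentile-monitoring idea described above can be sketched in a few lines. The parameter names and values are assumptions, not the thesis's: the node samples its queue occupancy and signals adaptive flows to degrade quality only when a chosen percentile of recent samples crosses a threshold, so short bursts do not trigger adaptation but sustained congestion does.

```python
from collections import deque

# Illustrative parameters (assumed, not from the thesis)
PERCENTILE = 0.95     # percentile of queue occupancy to monitor
THRESHOLD = 80        # occupancy (packets) above which flows should adapt
WINDOW = 100          # number of recent samples kept

samples = deque(maxlen=WINDOW)

def observe(queue_occupancy):
    """Record one queue-occupancy sample."""
    samples.append(queue_occupancy)

def congested():
    """Signal adaptation when the monitored percentile exceeds the threshold."""
    ordered = sorted(samples)
    idx = min(int(PERCENTILE * len(ordered)), len(ordered) - 1)
    return ordered[idx] > THRESHOLD

# a short burst does not trigger adaptation...
for q in [10] * 95 + [200] * 3:
    observe(q)
assert not congested()
# ...but sustained high occupancy does
for q in [200] * 20:
    observe(q)
assert congested()
```

Monitoring a percentile rather than the instantaneous or mean occupancy is what makes the signal tolerant of transient bursts, which suits flows that can absorb brief congestion but should adapt to persistent overload.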