Design & Performance Study of a Flexible Traffic Shaper for High Speed Networks
In networks supporting distributed multimedia, maximizing bandwidth
utilization and providing performance guarantees are two incompatible
goals. The heterogeneity of multimedia sources calls for effective
congestion control schemes to satisfy the diverse Quality of Service (QoS)
requirements of each application. Such schemes include admission control at
connection setup, traffic control at the source ends, and efficient
scheduling at the switches. The emphasis in this paper is on
traffic control at the source ends.
Traffic control schemes have two functional roles. One is traffic
enforcement as a supplement to the admission control policy. The other is
shaping the input traffic so that it becomes amenable to the scheduling
mechanism at the switches for providing the required QoS guarantees.
Studies on bursty sources have shown that burstiness promotes statistical
multiplexing at the cost of possible congestion, while smoothing the
traffic helps in providing guarantees at the cost of bandwidth utilization.
There is thus a pressing need for a flexible scheme that can provide a
reasonable compromise between utilization and guarantees.
We present the design and performance study of a flexible traffic shaper
which can adjust the burstiness of input traffic to obtain reasonable
utilization while maintaining statistical service guarantees. The
performance of the traffic shaper for bursty sources is studied using
simulation.
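The trade-off the shaper navigates can be illustrated with a token-bucket sketch (an illustrative stand-in, not the paper's actual design): the bucket depth bounds the burst size admitted into the network, while the token rate bounds the sustained rate, so tuning the depth adjusts burstiness between full smoothing and full pass-through.

```python
from collections import deque

class TokenBucketShaper:
    """Token-bucket traffic shaper (illustrative sketch).

    `rate` tokens refill per second; `depth` caps the burst size.
    A larger depth admits longer bursts (better statistical multiplexing);
    a depth of 1 smooths traffic to the token rate (tighter guarantees).
    """

    def __init__(self, rate, depth):
        self.rate = rate        # token refill rate (cells per second)
        self.depth = depth      # bucket depth: maximum burst size in cells
        self.tokens = depth     # bucket starts full
        self.last = 0.0         # time of the last refill
        self.queue = deque()    # cells waiting for tokens

    def arrive(self, cell, now):
        """Accept a cell at time `now`; return the cells released."""
        self._refill(now)
        self.queue.append(cell)
        return self._drain()

    def _refill(self, now):
        # Accumulate tokens for the elapsed interval, capped at the depth.
        self.tokens = min(self.depth,
                          self.tokens + self.rate * (now - self.last))
        self.last = now

    def _drain(self):
        # Release one queued cell per available token, in FIFO order.
        out = []
        while self.queue and self.tokens >= 1:
            out.append(self.queue.popleft())
            self.tokens -= 1
        return out
```

With `depth=2` and `rate=10`, a burst of three simultaneous cells releases two immediately and delays the third until a token accumulates 0.1 s later.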
(Also cross-referenced as UMIACS-TR-95-72.)
Survey of traffic control schemes and error control schemes for ATM networks
Among the techniques proposed for the B-ISDN transfer mode, the ATM concept is considered the most promising transfer technique because of its flexibility and efficiency. This paper surveys and reviews a number of topics related to ATM networks, covering congestion control, the provision of multiple classes of traffic, and error control. Due to the nature of ATM networks, these issues are far more challenging than in conventional networks. Some of the more promising solutions are surveyed, and the corresponding performance results are summarized. Future research problems in ATM protocol design are also presented.
Efficient memory management in video on demand servers
In this article we present, analyse and evaluate a new memory management technique for video-on-demand servers. Our proposal, Memory Reservation Per Storage Device (MRPSD), relies on the allocation of a fixed, small number of memory buffers per storage device. By selecting adequate scheduling algorithms, information storage strategies and admission control mechanisms, we demonstrate that MRPSD is suited for the deterministic service of variable bit rate streams to intolerant clients. MRPSD allows large memory savings compared to traditional memory management techniques, based on the allocation of a certain amount of memory per client served, without a significant performance penalty.
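The memory argument behind per-device allocation can be sketched with a hypothetical comparison (the stream counts and buffer sizes below are illustrative assumptions, not figures from the article): per-stream allocation grows with the number of admitted clients, while per-device allocation depends only on the small, fixed number of storage devices.

```python
# Hypothetical sketch of the per-device vs. per-stream memory trade-off.
# All sizes and counts are illustrative, not taken from the article.

def per_stream_memory(num_streams: int, buffer_bytes: int) -> int:
    # Traditional schemes: one buffer per admitted client stream,
    # so memory grows linearly with the number of clients.
    return num_streams * buffer_bytes

def per_device_memory(num_devices: int, buffers_per_device: int,
                      buffer_bytes: int) -> int:
    # MRPSD-style schemes: a fixed, small pool per storage device,
    # independent of how many streams are admitted.
    return num_devices * buffers_per_device * buffer_bytes

MIB = 1 << 20
# Example: 200 admitted clients with 1 MiB buffers each, versus an
# 8-disk array with 2 buffers reserved per disk.
traditional = per_stream_memory(200, MIB)   # 200 MiB, grows with clients
mrpsd_style = per_device_memory(8, 2, MIB)  # 16 MiB, fixed
```

Under these assumed numbers the per-device pool is more than an order of magnitude smaller, and the gap widens as more clients are admitted.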
Survey of congestion control techniques for an ATM network
The emerging broadband integrated services digital network is expected to adopt ATM (Asynchronous Transfer Mode) as the transport network. This new network must support several classes of service with varying delay and loss requirements. It must also operate with link speeds in the hundreds of megabits per second and be scalable up to potential link speeds on the order of gigabits per second. The requirements to support multiple services and high speed make congestion control in an ATM network difficult. This paper reviews some of the techniques for the prevention and control of congestion in an ATM network.
Dynamic bandwidth allocation in ATM networks
Includes bibliographical references. This thesis investigates bandwidth allocation methodologies for transporting new emerging bursty traffic types in ATM networks. Existing ATM traffic management solutions are not readily able to handle the congestion that inevitably results from the bursty traffic of these new services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it to traditional static bandwidth allocation schemes.
Efficient memory management in VOD disk array servers using Per-Storage-Device buffering
We present a buffering technique that reduces video-on-demand server memory requirements by more than one order of magnitude. This technique, Per-Storage-Device Buffering (PSDB), is based on the allocation of a fixed number of buffers per storage device, as opposed to existing solutions based on per-stream buffer allocation. The combination of this technique with disk array servers is studied in detail, as is the influence of variable bit rate (VBR) streams. We also present an interleaved data placement strategy, Constant Time Length declustering, that results in optimal performance in the service of VBR streams. PSDB is evaluated by extensive simulation of a disk array server model that incorporates a simulation-based admission test. This research was supported in part by the National R&D Program of Spain, Project Number TIC97-0438.
Priority Control in ATM Network for Multimedia Services
The communication network of the near future is going to be based on Asynchronous
Transfer Mode (ATM), which has been widely accepted by equipment vendors and
service providers. Statistical multiplexing, high transmission speeds and
multimedia services render traditional approaches to network protocols and control
ineffective. The ATM technology is tailored to support data, voice and video traffic
using a common 53-byte fixed-length cell format with connection-oriented
routing.
Traffic sources in ATM networks, such as coded video and bulk data
transfer, are bursty: they generate cells at a near-peak rate during their active
periods and few cells during relatively long inactive periods. Severe network
congestion can occur as a consequence of this dynamic nature of bursty traffic.
Even when Call Admission Control (CAC) is appropriately carried out to decide
the acceptance of a new call, Quality of Service (QoS) may fall outside the required
limits as bursty traffic piles up. Priority control, in which traffic streams are
classified into several classes according to their QoS requirements and transferred
according to their priorities, therefore becomes an important research issue in ATM
networks. There are basically two kinds of priority management schemes: the time
priority scheme, which gives higher priority to services requiring short delay times,
and the space priority scheme, which gives higher priority to cells requiring a small
cell loss ratio.
A possible drawback of the time priority scheme is the processing
overhead required for monitoring cells for priority changes; in addition, each
arriving cell needs to be time-stamped. The drawback of the space priority scheme
is that buffer management complexity increases as the buffer size grows, because
cell sequence preservation requires more complicated buffer management logic.
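One common realization of space priority, used here purely as an illustrative sketch and not necessarily the mechanism studied in this thesis, is a push-out buffer: when the buffer is full, an arriving high-priority cell displaces the most recently queued low-priority cell, while low-priority arrivals to a full buffer are simply lost.

```python
from collections import deque

class PushOutBuffer:
    """Illustrative push-out space-priority buffer.

    Cells are served FIFO. On overflow, a high-priority arrival
    pushes out the last-queued low-priority cell, if one exists;
    low-priority arrivals to a full buffer are dropped.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.cells = deque()  # (is_high_priority, cell), FIFO order

    def arrive(self, cell, high_priority):
        """Return True if the cell is admitted, False if it is lost."""
        if len(self.cells) < self.capacity:
            self.cells.append((high_priority, cell))
            return True
        if not high_priority:
            return False  # low-priority cell lost on a full buffer
        # Push out the most recently queued low-priority cell, if any.
        for i in range(len(self.cells) - 1, -1, -1):
            if not self.cells[i][0]:
                del self.cells[i]
                self.cells.append((True, cell))
                return True
        return False  # buffer already full of high-priority cells

    def depart(self):
        """Serve the next cell in FIFO order, or None if empty."""
        return self.cells.popleft()[1] if self.cells else None
```

Note that FIFO departure is preserved for admitted cells, which is exactly why, as the paragraph above observes, the management logic grows more complex as the buffer gets larger: a push-out may have to scan the occupancy.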
In this thesis, a Mixed Priority Queueing (MPQ) scheme is proposed
that combines three distinct strategies for priority control: buffer
partitioning, allocation of cells into the buffer, and the service discipline. The MPQ
scheme is, by nature, a non-fixed priority method in which the delay times and loss
probabilities of each service class are taken into account, and both can be
controlled with less interdependence than in the fixed priority method, where the
priority grant rule is fixed according to the service class and priority is always
given to the highest-class cell among the cells in the buffer. The proposed priority
control is executed independently at each switching node as local buffer
management. Buffer partitioning is applied to overcome the weakness of the single
buffer.
Performance analysis of error recovery and congestion control in high-speed networks
In the past few years, the Broadband Integrated Services Digital Network (B-ISDN) has received increasing attention as a communication architecture capable of supporting multimedia applications. Among the techniques proposed to implement B-ISDN, Asynchronous Transfer Mode (ATM) is considered to be the most promising transfer technique because of its efficiency and flexibility.
In ATM networks, the performance bottleneck, which was once the channel transmission speed, has shifted to the processing speed at the network switching nodes and the propagation delay of the channel. This shift occurs because the high-speed channel increases both the ratio of processing time to packet transmission time and the ratio of propagation delay to packet transmission time. The increased processing overhead makes it difficult to implement hop-by-hop schemes, which may impose prohibitively high processing loads at each switching node. The increased propagation delay makes traffic control in ATM a challenge, since a large number of packets can be in transit between two ATM switching nodes. Because of these fundamental changes, control schemes developed for traditional networks may not perform efficiently, and thus new network architectures (congestion control schemes, error control schemes, etc.) are required in ATM networks.
In this dissertation, we first present an extensive survey of various traffic control schemes and network protocols for ATM networks. In this survey, possible traffic control schemes are examined, and problems of those schemes and their possible solutions are presented. Next, we investigate two key research issues in ATM networks (and other types of high-speed networks): the effects of protocol-processing overhead and the efficiency of traffic control schemes.
We first investigate the effects of protocol-processing overhead on the performance of error recovery schemes. Specifically, we investigate the performance trade-offs between link-by-link and edge-to-edge error recovery schemes. Our results show that for a network with high-speed, low-error-rate channels, an edge-to-edge scheme gives a smaller delay than a link-by-link scheme. We then investigate the effectiveness of a priority packet discarding scheme, a congestion control mechanism suitable for high-speed networks. We derive loss probabilities for each stream and investigate the impact of the burstiness of traffic streams on the performance of individual streams.
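A common realization of priority packet discarding, sketched here for illustration and not necessarily the exact scheme analysed in this dissertation, is partial buffer sharing: low-priority cells are refused admission once the queue length reaches a threshold, while high-priority cells may occupy the whole buffer. The capacity and threshold values below are assumptions.

```python
from collections import deque

class ThresholdDiscardQueue:
    """Illustrative partial-buffer-sharing discard queue.

    Low-priority cells are discarded once the queue length reaches
    `threshold`; high-priority cells are admitted until the full
    `capacity` is reached. Per-class loss counts are tracked so that
    per-stream loss probabilities can be estimated from a simulation.
    """

    def __init__(self, capacity, threshold):
        assert threshold <= capacity
        self.capacity = capacity
        self.threshold = threshold
        self.q = deque()
        self.lost = {"high": 0, "low": 0}

    def arrive(self, cell, high_priority):
        """Return True if the cell is admitted, False if discarded."""
        limit = self.capacity if high_priority else self.threshold
        if len(self.q) >= limit:
            self.lost["high" if high_priority else "low"] += 1
            return False
        self.q.append(cell)
        return True

    def serve(self):
        """Serve the next cell in FIFO order, or None if empty."""
        return self.q.popleft() if self.q else None
```

Driving such a queue with bursty on/off arrival processes and counting the per-class losses is one simple way to observe the effect studied above: burstier streams push the queue past the threshold more often, so low-priority cells bear most of the loss.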