Survey of switching techniques in high-speed networks and their performance
One of the most promising approaches to high-speed networks for integrated-service applications is fast packet switching, or ATM (Asynchronous Transfer Mode). ATM can be characterized by very high-speed transmission links and simple, hard-wired protocols within a network. To match the transmission speed of the network links, and to minimize the overhead due to the processing of network protocols, the switching of cells in ATM networks is done in hardware switching fabrics. A number of designs have been proposed for implementing ATM switches. While many differences exist among the proposals, the vast majority of them are based on self-routing multistage interconnection networks, owing to desirable features such as the self-routing capability and suitability for VLSI implementation. Existing ATM switch architectures can be classified into two major classes: blocking switches, where blocking of cells may occur within a switch when more than one cell contends for the same internal link, and non-blocking switches, where no internal blocking occurs. A large number of techniques have also been proposed to improve the performance of blocking and non-blocking switches. In this paper, we present an extensive survey of the existing proposals for ATM switch architectures, focusing on their performance issues.
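The self-routing property that makes multistage interconnection networks attractive can be illustrated with a minimal sketch (an illustrative construction, not any specific proposal from the survey): in a banyan-style N x N network with log2 N stages, the switching element at stage i simply inspects bit i of the cell's destination address, most significant bit first, and forwards the cell to its upper or lower output accordingly.

```python
def route(dest: int, n_stages: int):
    """Trace the self-routing path of a cell through a banyan-style
    multistage network: at stage i the switching element inspects bit i
    (MSB first) of the destination port address and forwards the cell
    to its upper (0) or lower (1) output.  No global routing table is
    needed -- the address itself steers the cell."""
    return [(dest >> (n_stages - 1 - i)) & 1 for i in range(n_stages)]

# In an 8x8 network (3 stages), a cell destined for port 5 (0b101)
# takes the lower, upper, then lower output at successive stages.
print(route(5, 3))  # [1, 0, 1]
```

Because each element decides from one address bit, the routing logic is trivially hard-wired, which is the VLSI-friendliness the abstract refers to.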
Information Switching Processor (ISP) contention analysis and control
Future satellite communications, as a viable means of communications and an alternative to terrestrial networks, demand flexibility and low end-user cost. On-board switching/processing satellites potentially provide these features, allowing flexible interconnection among multiple spot beams, direct-to-user communications services using very small aperture terminals (VSATs), independent uplink and downlink access/transmission system designs optimized to users' traffic requirements, efficient TDM downlink transmission, and better link performance. A flexible switching system on the satellite, in conjunction with low-cost user terminals, will likely benefit future satellite network users.
T-WAS and T-XAS algorithms for fiber-loop optical buffers
In optical packet/burst switched networks, fiber loops provide a viable and compact means of contention resolution. For fixed-size packets it is known that a basic void-avoiding schedule (VAS) can vastly outperform a more classical pre-reservation algorithm such as FCFS. For the setting of a uniformly distributed packet size and a restricted buffer size, we proposed two novel forward-looking algorithms, WAS and XAS, that, in specific settings, outperform VAS by up to 20% in terms of packet loss. This contribution extends the usage and improves the performance of the WAS and XAS algorithms by introducing an additional threshold variable. By optimizing this threshold, the process of selectively delaying packets longer than strictly necessary can be made more or less strict and as such be fitted to each setting. Monte Carlo simulation shows that the resulting T-WAS and T-XAS algorithms are most effective in those instances where the algorithms without a threshold offer no or only limited performance improvement.
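The constraint that makes fiber-loop buffering hard is that a packet can only be delayed by an integer number of loop circulations. The following is a minimal sketch of the classical pre-reservation (FCFS-style) baseline that VAS and the threshold algorithms improve on; it is an assumption-level illustration, not the T-WAS/T-XAS algorithms themselves, and all names are ours.

```python
import math

def schedule(arrivals, lengths, D, N):
    """FCFS-style pre-reservation in a fiber-loop buffer: a packet may
    only be delayed by k * D (one loop circulation takes D), with
    0 <= k <= N.  Each arriving packet gets the smallest feasible
    delay so transmission starts no earlier than the instant the
    output channel frees up; if even N * D is insufficient, the
    packet is lost.  Returns the loss count."""
    horizon = 0.0            # time at which the output channel frees up
    lost = 0
    for t, ln in zip(arrivals, lengths):
        gap = horizon - t    # delay needed to avoid collision
        k = 0 if gap <= 0 else math.ceil(gap / D)
        if k > N:
            lost += 1        # set of achievable delays exhausted
            continue
        start = t + k * D    # note the void of size k*D - gap created
        horizon = start + ln
    return lost
```

The void `k*D - gap` left before each scheduled packet is exactly the waste that a void-avoiding schedule, and the threshold variants above, try to reduce.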
Performance modelling of wormhole-routed hypercubes with bursty traffic and finite buffers
An open queueing network model (QNM) is proposed for wormhole-routed hypercubes with finite buffers and deterministic routing subject to a compound Poisson arrival process (CPP) with geometrically distributed batches or, equivalently, a generalised exponential (GE) interarrival time distribution. The GE/G/1/K queue and appropriate GE-type flow formulae are adopted, as cost-effective building blocks, in a queue-by-queue decomposition of the entire network. Consequently, analytic expressions for the channel holding time, buffering delay, contention blocking and mean message latency are determined. The validity of the analytic approximations is demonstrated against results obtained through simulation experiments. Moreover, it is shown that wormhole-routed hypercubes suffer progressive performance degradation with increasing traffic variability (burstiness).
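The GE interarrival process mentioned above has a simple two-point sampling construction (a standard one; the function name is ours): with probability 1 - tau the next arrival belongs to the same batch, giving a zero interarrival time, otherwise the gap is exponential, where tau = 2 / (SCV + 1).

```python
import random

def ge_interarrival(lam, scv, rng=random):
    """Sample one interarrival time from a generalised exponential (GE)
    distribution with arrival rate `lam` and squared coefficient of
    variation `scv` >= 1.  Equivalent to a compound Poisson process
    with geometric batch sizes: with probability 1 - tau the arrival
    is batched (zero gap), otherwise the gap is exponential with
    rate tau * lam, so the mean interarrival time stays 1 / lam."""
    tau = 2.0 / (scv + 1.0)
    if rng.random() >= tau:
        return 0.0                      # batched arrival
    return rng.expovariate(tau * lam)   # exponential gap

# Empirical mean should approach 1/lam regardless of scv (burstiness).
rng = random.Random(42)
samples = [ge_interarrival(2.0, 9.0, rng) for _ in range(200_000)]
mean = sum(samples) / len(samples)
```

Raising `scv` leaves the mean rate unchanged but concentrates arrivals into batches, which is exactly the burstiness knob whose effect on latency the model quantifies.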
Priority Control in ATM Network for Multimedia Services
The communication network of the near future is going to be based on Asynchronous Transfer Mode (ATM), which has been widely accepted by equipment vendors and service providers. Statistical multiplexing, high transmission speed and multimedia services render traditional approaches to network protocol and control ineffective. The ATM technology is tailored to support data, voice and video traffic using a common 53-byte fixed-length cell format with connection-oriented routing.
Traffic sources in ATM networks, such as coded video and bulk data transfer, are bursty. These sources generate cells at a near-peak rate during their active periods and generate few cells during relatively long inactive periods. Severe network congestion might occur as a consequence of this dynamic nature of bursty traffic. Even if Call Admission Control (CAC) is appropriately carried out to decide the acceptance of a new call, Quality of Service (QOS) may fall outside the requirement limits as bursty traffic piles up. Priority control, in which traffic streams are classified into several classes according to their QOS requirements and transferred according to their priorities, therefore becomes an important research issue in ATM networks.
There are basically two kinds of priority management schemes: the time priority scheme, which gives higher priority to services requiring a short delay time, and the space priority scheme, which gives high priority to cells requiring a small cell loss ratio. A possible drawback of these schemes is the processing overhead required for monitoring cells for priority change, especially in the case of time priority schemes, where each arriving cell also needs to be time-stamped. The drawback of the space priority scheme lies in the fact that buffer management complexity increases as the buffer size grows, because cell sequence preservation requires more complicated buffer management logic.
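One common space-priority policy is push-out: a full buffer admits a high-priority cell by discarding the newest low-priority cell. The sketch below is illustrative of that generic policy only, not of the thesis's scheme; it also shows why sequence preservation makes the management logic grow with buffer size (a full scan may be needed on every push-out).

```python
from collections import deque

class PushOutBuffer:
    """Illustrative space-priority (push-out) buffer: when full, a
    high-priority cell (prio 0) is admitted by discarding the most
    recently queued low-priority cell (prio 1); low-priority arrivals
    to a full buffer are dropped.  Accepted cells keep FIFO order."""
    def __init__(self, size):
        self.size = size
        self.q = deque()             # holds (priority, cell) pairs

    def arrive(self, prio, cell):
        if len(self.q) < self.size:
            self.q.append((prio, cell))
            return True
        if prio == 0:                # push out the newest low-priority cell
            for i in range(len(self.q) - 1, -1, -1):
                if self.q[i][0] == 1:
                    del self.q[i]
                    self.q.append((prio, cell))
                    return True
        return False                 # cell is lost

    def serve(self):
        return self.q.popleft() if self.q else None
```

The linear scan in `arrive` is the buffer-management cost the abstract alludes to: it is paid on every push-out and grows with the buffer size.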
In this thesis, a Mixed Priority Queueing (MPQ) scheme is proposed which comprises three distinct strategies for priority control: buffer partitioning, allocation of cells into the buffer, and the service discipline. The MPQ scheme is, by nature, a non-fixed priority method in which the delay times and loss probabilities of each service class are taken into account, and both can be controlled with less interdependency than in the fixed priority method, where the priority grant rule is fixed according to the service class and priority is always given to the highest-class cell among the cells present in the buffer. The proposed priority control is executed independently at each switching node as local buffer management. Buffer partitioning is applied to overcome the weakness of the single buffer.
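The buffer-partitioning idea can be sketched as follows; the class names, limits, and per-slot `order` argument (standing in for whatever non-fixed priority decision the service discipline makes) are illustrative assumptions, not the thesis's exact MPQ rules.

```python
class PartitionedBuffer:
    """Illustrative buffer partitioning: each service class gets its
    own reserved share of the buffer, so a burst in one class cannot
    fill the whole buffer and starve the others -- the weakness of a
    single shared buffer."""
    def __init__(self, limits):
        self.limits = limits                  # per-class capacity
        self.queues = {c: [] for c in limits}

    def arrive(self, cls, cell):
        q = self.queues[cls]
        if len(q) < self.limits[cls]:
            q.append(cell)
            return True
        return False                          # this class's partition is full

    def serve(self, order):
        # `order` is the priority ranking chosen for this slot by the
        # service discipline; a non-fixed scheme recomputes it per slot.
        for cls in order:
            if self.queues[cls]:
                return cls, self.queues[cls].pop(0)
        return None
```

Because `order` is an argument rather than a constant, the same structure supports both a fixed priority rule and a non-fixed one that reorders classes from observed delays and losses.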
Design and analysis of a scalable terabit multicast packet switch: architecture and scheduling algorithms
The growth and success of the Internet have not only opened a primary route of information exchange for millions of people around the world, but have also created unprecedented demand for core network capacity. Existing switches/routers, limited by bottlenecks in either switch architecture or arbitration complexity, can reach capacities on the order of gigabits per second, but few of them scale to capacities of terabits per second.
In this dissertation, we propose three novel switch architectures with cooperating scheduling algorithms to design a terabit backbone switch/router able to deliver large capacity, multicasting, and high performance along with Quality of Service (QoS). Our switch designs benefit from the unique features of a modular switch architecture and a distributed resource allocation scheme.
Switch I is a unique and modular design characterized by input and output link sharing. Link sharing resolves output contention and eliminates the speedup requirement for the central switch fabric; hence, the architecture is scalable to any large size. We propose a distributed round robin (RR) scheduling algorithm which provides fairness and has very low arbitration complexity. Switch I achieves good performance under uniform traffic, but it does not perform well under non-uniform traffic.
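The core of a round-robin scheduler of the kind described above is a rotating-pointer arbiter; the following is a generic sketch of that building block (names ours), not the dissertation's exact algorithm.

```python
def round_robin_grant(requests, pointer):
    """Round-robin arbiter for one output link: starting from
    `pointer`, grant the first requesting input, then advance the
    pointer just past the granted input, so no input waits more than
    N slots while it has a request pending (fairness).  Returns
    (granted_input_or_None, next_pointer)."""
    n = len(requests)
    for offset in range(n):
        i = (pointer + offset) % n
        if requests[i]:
            return i, (i + 1) % n
    return None, pointer            # no requests this slot
```

Each output link can run its own arbiter independently, which is what keeps the arbitration complexity low and the scheme distributed.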
Switch II, a modified design, employs link sharing as well as a token ring to overcome the drawback of Switch I. We propose a round robin prioritized link reservation (RR+POLR) algorithm which results in improved performance, especially under non-uniform traffic. However, the RR+POLR algorithm is not flexible enough to adapt to the input traffic, and in Switch II the link reservation rate has a great impact on switch performance.
Finally, Switch III is proposed as an enhanced design using link sharing and dual round robin rings. Packet forwarding is based on link reservation. We propose a queue occupancy based dynamic link reservation (QOBDLR) algorithm which adapts to the input traffic to provide fast and fair link resource allocation. QOBDLR is a distributed resource allocation scheme in the sense that dynamic link reservation is carried out according to locally available information, and its arbitration complexity is very low. Compared to the output queued (OQ) switch, which is known to offer the best performance under any traffic pattern, Switch III not only achieves performance as good as the OQ switch, but also overcomes the speedup problem that prevents the OQ switch from scaling. Hence, Switch III would be a good choice for high-performance, scalable, large-capacity core switches.
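The essence of occupancy-driven reservation can be sketched with a watermark rule; the linear adjustment, parameter names, and watermarks below are all illustrative assumptions, not the dissertation's exact QOBDLR rule.

```python
def adjust_reservation(rate, occupancy, hi, lo, step, r_min, r_max):
    """Hedged sketch of queue-occupancy-based dynamic link reservation:
    using only its locally observed queue occupancy, an input raises
    its reserved share of the link when the queue grows past a high
    watermark, and releases bandwidth when the queue drains below a
    low watermark.  No global state is consulted, which is what makes
    the scheme distributed."""
    if occupancy > hi:
        rate = min(r_max, rate + step)   # queue building up: reserve more
    elif occupancy < lo:
        rate = max(r_min, rate - step)   # queue draining: give back
    return rate
```

Because each input runs this rule on local information only, heavily loaded inputs gradually claim more of the link while idle ones release it, approximating the adaptivity attributed to QOBDLR.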