
    SIMULATIVE ANALYSIS OF ROUTING AND LINK ALLOCATION STRATEGIES IN ATM NETWORKS

    For Broadband Integrated Services Digital Network (B-ISDN), ATM is a promising technology because it supports a wide range of services with different bandwidth demands, traffic characteristics and QoS requirements. This diversity of services makes traffic control in these networks much more complicated than in existing circuit- or packet-switched networks. Traffic control procedures include both the actions necessary for setting up virtual connections (VCs), such as bandwidth assignment, call admission, routing and resource allocation, and the congestion control measures necessary to maintain throughput in overload situations. This paper deals with routing and link allocation, and analyses the performance of such algorithms in terms of call blocking probability, link capacity utilization and QoS parameters. In our model the network carries out the following steps when a call is offered: (1) assign an appropriate bandwidth to the offered call (bandwidth assignment); (2) find a transmission path between the source and destination with enough available transmission capacity (routing); (3) allocate resources along that path (link allocation). We consider an example 5-node network [7] and conduct an extensive survey of routing and link allocation algorithms. For step (1) we employ the equivalent link capacity assignment presented in [1]-[5]. We find that the choice of routing and link allocation algorithms has a great impact on network performance, and that different routing algorithms perform best under different network load values: shortest path routing (SPR) is a good candidate for low, alternate routing (AR) for medium and non-alternate routing (NAR) for high traffic load values. Concerning link allocation strategies, we find that partial overlap (POL) strategies, which appear to offer near-optimal performance, are superior to complete sharing (CS) and complete partitioning (CP) strategies. As a further improvement of the POL scheme, we propose a 2-level link allocation algorithm which yields the highest link utilization. In this scheme, not only is the access of different service classes to different virtual paths (VPs) controlled, but each VP's transmission capacity is also allocated to the service classes according to their bandwidth requirements in order to assure high link utilization. This method appears to adapt well to the fine granularity of bandwidth demands in B-ISDN networks. It is shown that call-level resource allocation plays a significant role in minimizing cell loss: networks with switches of the same buffer size display different cell loss probabilities in the nodes and impose different end-to-end delays on cells if the link allocation and routing differ. Again, we find that while the network can still tolerate the offered traffic, SPR causes the least cell loss. This can be explained by the fact that SPR spreads the incoming calls across the network: it eagerly seeks new routes instead of utilizing already used but not yet congested routes. As the traffic load grows, SPR consequently exhausts link and buffer capacity more rapidly than AR, which chooses a new route only when it has to, i.e. when the route of higher priority becomes congested. That is why we observe that as soon as SPR starts losing cells, the available resources have been consumed, and blocking probability rises to very high values after only a small further increase in load.
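    The three call-level steps above can be made concrete with a short sketch. The Python snippet below is my own illustration, not the paper's simulator: the topology, the per-class equivalent bandwidths and the helper names (find_route, allocate) are assumed. It walks one offered call through bandwidth assignment, shortest path routing (SPR) over links with enough residual capacity, and link allocation on a toy 5-node network.

        from collections import deque

        # Assumed per-class equivalent bandwidths in Mb/s (step 1 input).
        CLASS_BW = {"voice": 0.1, "data": 0.5, "video": 2.0}

        # Toy 5-node topology; every undirected link starts with 150 Mb/s of residual capacity.
        links = {frozenset(e): 150.0 for e in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]}

        def find_route(src, dst, bw):
            """Step 2, SPR: fewest hops using only links with at least bw free."""
            paths, queue = {src: [src]}, deque([src])
            while queue:
                u = queue.popleft()
                if u == dst:
                    return paths[u]
                for link, cap in links.items():
                    if u in link and cap >= bw:
                        v = next(n for n in link if n != u)
                        if v not in paths:
                            paths[v] = paths[u] + [v]
                            queue.append(v)
            return None  # no feasible path: the call is blocked

        def allocate(path, bw):
            """Step 3: reserve bw on every link of the chosen path."""
            for a, b in zip(path, path[1:]):
                links[frozenset((a, b))] -= bw

        bw = CLASS_BW["video"]       # step 1: bandwidth assignment
        path = find_route(0, 3, bw)  # step 2: routing
        if path:
            allocate(path, bw)       # step 3: link allocation
            print("call accepted on path", path)
        else:
            print("call blocked")

    Swapping find_route for an alternate-routing or non-alternate-routing variant is, at this level of abstraction, what the paper's comparison of SPR, AR and NAR amounts to.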

    Design of traffic shaper/scheduler for packet switches and DiffServ networks: algorithms and architectures

    The convergence of communications, information, commerce and computing is creating a significant demand and opportunity for multimedia and multi-class communication services. In such environments, controlling the network behavior and guaranteeing the user's quality of service is required. To meet this requirement, a flexible hierarchical sorting architecture is presented which can function either as a traffic shaper or as a scheduler according to the traffic load. The core structure can be implemented as a hierarchical traffic shaper that supports a large number of connections with a wide variety of rates and burstiness without loss of granularity in the cells' conforming departure times. The hierarchical traffic shaper implements the exact sorting scheme with a substantially reduced memory size by using two stages of timing queues, and with a substantial reduction in complexity, without introducing any sorting inaccuracy. By setting a suitable threshold on the length of the departure queue and using a lookahead algorithm, the core structure can be converted into a hierarchical rate-adaptive scheduler. Based on the traffic load, it works either as an exact-sorting traffic shaper or as a Generic Cell Rate Algorithm (GCRA) scheduler. Such a rate-adaptive scheduler greatly reduces the Cell Transfer Delay and the Maximum Memory Occupancy while preserving the fairness in bandwidth assignment that is inherent to GCRA. By introducing a best-effort queue to accommodate best-effort traffic, the hierarchical sorting architecture can be changed into a near work-conserving scheduler. It assigns the remaining bandwidth to best-effort traffic, improving the utilization of the output link while still guaranteeing the quality of service of those services that require guarantees. The inherent flexibility of the hierarchical sorting architecture, combined with intelligent algorithms, gives it its multiple functions. Its implementation not only manages buffer and bandwidth resources effectively, but also requires no more than off-the-shelf hardware technology. The correlation between the extra shaping delay and the rate of the connections is revealed, and an improved fair traffic shaping algorithm, the Departure Event Driven plus Completing Service Time Resorting algorithm, is presented. The proposed algorithm introduces a resorting process into the Departure Event Driven Traffic Shaping Algorithm to resolve the contention of multiple cells that are all eligible for transmission in the traffic shaper. By using the resorting process based on each connection's rate, better fairness and flexibility in bandwidth assignment can be given to connections with a wide range of rates. A Dual Level Leaky Bucket Traffic Shaper (DLLBTS) architecture is proposed for implementation at the edge nodes of Differentiated Services networks in order to facilitate the quality of service management process. The proposed architecture guarantees not only the class-based Service Level Agreement but also fair resource sharing among flows belonging to the same class. A simplified DLLBTS architecture is also given, which achieves the goals of DLLBTS while maintaining very low implementation complexity, so that it can be implemented with current VLSI technology. In summary, the shaping and scheduling algorithms in high-speed packet switches and DiffServ networks are studied, and intelligent implementation schemes are proposed for them.
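    The Generic Cell Rate Algorithm that the rate-adaptive scheduler falls back on is standardized, so its conformance test can be sketched in a few lines. The snippet below is the textbook virtual-scheduling form of GCRA, not the thesis's hierarchical sorting architecture; the class name and the example parameters are mine.

        class GCRA:
            """Virtual-scheduling form of the Generic Cell Rate Algorithm."""

            def __init__(self, increment, limit):
                self.T = increment    # emission interval, i.e. 1 / sustainable cell rate
                self.tau = limit      # burst tolerance
                self.tat = 0.0        # theoretical arrival time of the next cell

            def conforms(self, arrival):
                """Return True if a cell arriving at time `arrival` conforms; update state if so."""
                if arrival < self.tat - self.tau:
                    return False                        # too early: non-conforming, state unchanged
                self.tat = max(arrival, self.tat) + self.T
                return True

        # One cell per 10 time units with tolerance 15: short bursts pass, sustained ones do not.
        shaper = GCRA(increment=10.0, limit=15.0)
        print([shaper.conforms(t) for t in (0, 2, 4, 30, 31, 32)])
        # -> [True, True, False, True, True, False]

    A shaper built on this test would hold a non-conforming cell until roughly tat - tau, which is the kind of conforming departure time the hierarchical timing queues keep sorted.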

    ATM virtual connection performance modeling


    Application of integrated modeling technique for data services resource allocation in ATM based private WAN

    Computer simulation modelling has the advantages of flexibility and modelling accuracy. However, it is of limited use for simulating cell loss rates when deriving the optimum resources required for data services to guarantee a specific Quality of Service (QoS) requirement, because cell loss rates can only be simulated with excessive and unacceptable run times. This limitation was overcome in an earlier publication by the author using an integrated simulation technique. This paper therefore describes the application of the integrated simulation technique to deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee a specific QoS requirement. The simulation tool drastically cuts the simulation run time and is much more accurate. The effectiveness of the technique with data services is demonstrated by the simulation of an ATM switching node and by comparison with the traditional approach. Keywords: asynchronous transfer mode node; integrated modelling; simulation; cell loss
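    The run-time limitation is easy to quantify. The back-of-the-envelope calculation below is my own illustration with assumed numbers, not taken from the paper; it shows why estimating very small cell loss rates by direct simulation is impractical, which is the problem the integrated technique sidesteps.

        # Assumed figures: a 155 Mb/s link, 53-byte ATM cells and a target cell loss ratio of 1e-9.
        target_clr = 1e-9
        loss_events_needed = 100                       # rule of thumb for a statistically usable estimate
        cells_needed = loss_events_needed / target_clr
        cells_per_second = 155.52e6 / (53 * 8)         # line rate in cells per second
        print(f"cells to simulate      : {cells_needed:.1e}")
        print(f"real time at line rate : {cells_needed / cells_per_second / 3600:.1f} hours")
        # About 1e11 cells, i.e. tens of hours even at full line rate, and far longer in simulated time.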

    Self-Similarity in a multi-stage queueing ATM switch fabric

    Recent studies of digital network traffic have shown that arrival processes in such an environment are more accurately modeled as a statistically self-similar process, rather than as a Poisson-based one. We present a simulation of a combination shared-output queueing ATM switch fabric, sourced by two models of self-similar input. The effect of self-similarity on the average queue length and cell loss probability for this multi-stage queue is examined for varying load, buffer size, and internal speedup. The results using two self-similar input models, Pareto-distributed interarrival times and a Poisson-Zeta ON-OFF model, are compared with each other and with results using Poisson interarrival times and an ON-OFF bursty traffic source with geometrically distributed burst lengths. The results show that at a high utilization and at a high degree of self-similarity, switch performance improves slowly with increasing buffer size and speedup, as compared to the improvement using Poisson-based traffic.
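    The contrast between the two kinds of arrival model is easy to show in code. The sketch below, with parameter values of my own choosing, draws Pareto-distributed interarrival times, whose heavy tail produces the long bursts behind self-similar aggregate traffic, and exponential (Poisson) interarrival times with the same mean for comparison.

        import random

        def pareto_interarrivals(n, alpha=1.4, mean=1.0):
            """Pareto gaps with shape alpha; 1 < alpha < 2 gives infinite variance (heavy tail)."""
            xm = mean * (alpha - 1) / alpha                  # scale chosen so the mean equals `mean`
            return [xm / (1.0 - random.random()) ** (1 / alpha) for _ in range(n)]

        def poisson_interarrivals(n, mean=1.0):
            return [random.expovariate(1 / mean) for _ in range(n)]

        random.seed(1)
        p = pareto_interarrivals(100_000)
        e = poisson_interarrivals(100_000)
        print("Pareto  mean %.2f  max %.1f" % (sum(p) / len(p), max(p)))
        print("Poisson mean %.2f  max %.1f" % (sum(e) / len(e), max(e)))
        # The occasional very long Pareto gap, and the dense bursts between such gaps, is what
        # buffering and speedup struggle to absorb, matching the trend reported above.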

    Architecture design and performance analysis of practical buffered-crossbar packet switches

    Combined input crosspoint buffered (CICB) packet switches were introduced to relax input-output arbitration timing and provide high throughput under admissible traffic. However, the amount of memory required in the crossbar of an N x N switch is N^2 x k x L, where k is the crosspoint buffer size, which needs to be RTT in cells, L is the packet size, and RTT is the round-trip time defined by the distance between the line cards and the switch fabric. When the switch size is large or RTT is not negligible, the required memory makes the implementation of buffered crossbar switches costly or infeasible. To reduce the required memory, a family of shared-memory combined-input crosspoint-buffered (SMCB) packet switches, in which the crosspoint buffers are shared among inputs, is introduced in this thesis. One of the proposed switches uses a memory speedup of m and dynamic memory allocation, and the other avoids speedup by arbitrating the access of inputs to the crosspoint buffers. These two switches reduce the required memory of the buffered crossbar by 50% or more and achieve equivalent throughput under independent and identically distributed uniform traffic when using random selections. The proposed mSMCB switch is extended to support differentiated services and long RTTs. To support P traffic classes with different priorities, CICB switches have been reported to use N^2 x k x L x P memory to avoid blocking of high-priority cells. The proposed SMCB switch with support for differentiated services requires 1/mP of the memory amount of the buffered crossbar and achieves throughput performance similar to that of a CICB switch with similar priority management, while using no speedup in the shared memory. The throughput performance of the SMCB switch with crosspoint buffers shared by inputs (I-SMCB) is studied under multicast traffic. An output-based shared-memory crosspoint-buffered (O-SMCB) packet switch is then proposed, in which the crosspoint buffers are shared by two outputs and no speedup is used. The proposed O-SMCB switch provides high performance under admissible uniform and nonuniform multicast traffic models while using 50% of the memory used in CICB switches. Furthermore, the O-SMCB switch provides higher throughput than the I-SMCB switch. As SMCB switches can efficiently support an RTT twice as long as that supported by CICB switches, and as the performance of SMCB switches is bounded by a matching between inputs and crosspoint buffers, a new family of CICB switches with flexible access to crosspoint buffers is proposed to support longer RTTs than SMCB switches and to provide higher throughput under a wide variety of admissible traffic models. The CICB switches with flexible access allow an input to use any available crosspoint buffer at a given output. The proposed switches reduce the required crosspoint buffer size by a factor of N, keep the service of cells in sequence, and use no speedup. This new class of switches achieves higher throughput than CICB switches under a large variety of traffic models, while supporting long RTTs. Crosspoint buffered switches implemented in a single chip have limited scalability. To support a large number of ports in crosspoint buffered switches, memory-memory-memory (MMM) Clos-network switches are an alternative. An MMM switch that uses the minimum memory amount at the central module is studied. Although this switch provides moderate throughput, it may serve cells out of sequence. As keeping cells in sequence in an MMM switch may require buffers to be distributed per flow, an MMM switch with extended memory in the switch modules is studied. To solve the out-of-sequence problem in MMM switches, a queueing architecture is proposed for the MMM switch, and the in-sequence service of cells is analyzed.
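    The memory figures involved make the motivation concrete. The short calculation below applies the N^2 x k x L rule quoted above to an example switch; the parameter values are assumed, not taken from the thesis.

        N = 64               # switch ports
        L = 64 * 8           # cell size in bits (assumed 64-byte cells)
        k = 10               # crosspoint buffer size = RTT in cells (assumed)
        P = 4                # traffic classes

        cicb_bits = N ** 2 * k * L          # plain CICB crossbar
        cicb_p_bits = cicb_bits * P         # CICB with per-class crosspoint buffers
        smcb_bits = cicb_bits // 2          # SMCB claims a reduction of 50% or more

        for name, bits in [("CICB", cicb_bits), ("CICB, P classes", cicb_p_bits), ("SMCB (>= 50% less)", smcb_bits)]:
            print(f"{name:20s}: {bits / 8 / 2**20:.1f} MB on-chip")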

    Buffer management and cell switching management in wireless packet communications

    Buffer management and cell switching (e.g., packet handoff) management using a buffer management scheme are studied for wireless packet communications. First, a throughput improvement method for multi-class services in a wireless packet system is proposed. Efficient traffic management schemes should be developed to provide seamless access to the wireless network. Specifically, it is proposed to regulate the buffer with the Selective-Delay Push-In (SDPI) scheme, which is applicable to scheduling delay-tolerant non-real-time traffic and delay-sensitive real-time traffic. Simulation results show that the performance observed by real-time traffic is improved in terms of packet loss probability compared to an existing buffer priority scheme. Second, the performance of the proposed SDPI scheme is analyzed for a single CBR server. The arrival process is derived from the superposition of two types of traffic, each of which in turn results from the superposition of homogeneous ON-OFF sources and can be approximated by a two-state Markov Modulated Poisson Process (MMPP). The buffer mechanism enables the ATM layer to adapt the quality of the cell transfer to the QoS requirements and to improve the utilization of network resources. This is achieved by selectively delaying and pushing in cells according to the class they belong to. Analytical expressions for various performance parameters and numerical results are obtained. Simulation results in terms of cell loss probability conform to our numerical analysis. Finally, a novel cell-switching scheme based on a TDMA protocol is proposed to support QoS guarantees on the downlink. New packets and handoff packets for each type of traffic are defined, and a new cutoff prioritization scheme is devised at the buffer of the base station. A procedure for finding the optimal thresholds satisfying the QoS requirements is presented. Using the ON-OFF approximation for aggregate traffic, the packet loss probability and the average packet delay are computed. The performance of the proposed scheme is evaluated by simulation and numerical analysis in terms of packet loss probability and average packet delay.
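    The two-state MMPP used in the analysis is a standard model, so a small generator conveys how such an arrival process behaves. The sketch below does not implement the SDPI scheme itself; the function name and the parameter values are mine. It modulates a Poisson arrival rate with a two-state Markov chain, which is how the superposed ON-OFF sources are approximated.

        import random

        def mmpp2_arrivals(t_end, rates=(5.0, 0.5), switch=(0.1, 0.3)):
            """Two-state MMPP: Poisson arrivals at rates[i] while in state i,
            exponential sojourn times with rate switch[i] before leaving state i."""
            t, state, arrivals = 0.0, 0, []
            next_switch = random.expovariate(switch[0])
            while t < t_end:
                gap = random.expovariate(rates[state])
                if t + gap < next_switch:
                    t += gap
                    if t < t_end:
                        arrivals.append(t)       # an arrival in the current state
                else:
                    t = next_switch              # the modulating chain changes state first
                    state = 1 - state
                    next_switch = t + random.expovariate(switch[state])
            return arrivals

        random.seed(2)
        a = mmpp2_arrivals(1000.0)
        print(f"{len(a)} arrivals, mean rate {len(a) / 1000.0:.2f} per unit time")
        # Steady-state mean rate is 0.75 * 5.0 + 0.25 * 0.5 = 3.875 arrivals per unit time here.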