
    Joint buffer management and scheduling for input queued switches

    Input queued (IQ) switches are highly scalable and have been the focus of many studies in academia and industry. Many scheduling algorithms have been proposed for IQ switches. However, they do not consider the buffer space requirement inside an IQ switch, which may render the scheduling algorithms inefficient in practical applications. In this dissertation, the Queue Length Proportional (QLP) algorithm is proposed for IQ switches. QLP considers both buffer management and the scheduling mechanism to obtain the optimal allocation region for both bandwidth and buffer space according to the real traffic load. In addition, this dissertation introduces the Queue Proportional Fairness (QPF) criterion, which employs the cell loss ratio as the fairness metric. The research in this dissertation shows that the utilization of network resources improves significantly with QPF. Furthermore, to support the diverse Quality of Service (QoS) requirements of heterogeneous and bursty traffic, the Weighted Minmax algorithm (WMinmax) is proposed to efficiently and dynamically allocate network resources. Lastly, to support traffic with multiple priorities and to handle the decoupling problem in practice, this dissertation introduces a multi-dimensional scheduling algorithm that aims to find the optimal scheduling region in a multi-dimensional Euclidean space.
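A minimal sketch (in Python) of the queue-length-proportional idea described above: resource shares scale with each queue's current length. The function name and the simple proportional rule are illustrative assumptions, not the QLP algorithm as defined in the dissertation.

```python
# Minimal sketch of queue-length-proportional resource sharing.
# Illustrative only: the names and the simple proportional rule are
# assumptions, not the QLP algorithm from the dissertation.

def proportional_shares(queue_lengths, total_bandwidth, total_buffer):
    """Split bandwidth and buffer across input queues in proportion
    to their current lengths; with all queues empty, split evenly."""
    total = sum(queue_lengths)
    if total == 0:
        n = len(queue_lengths)
        return [(total_bandwidth / n, total_buffer / n)] * n
    return [(total_bandwidth * q / total, total_buffer * q / total)
            for q in queue_lengths]

# Example: three VOQs with lengths 10, 30, 60 sharing 1.0 unit of
# bandwidth and 120 cells of buffer.
print(proportional_shares([10, 30, 60], 1.0, 120))
```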

    Explicit congestion control algorithms for available bit rate services in asynchronous transfer mode networks

    Congestion control of available bit rate (ABR) services in asynchronous transfer mode (ATM) networks has been the recent focus of the ATM Forum. The focus of this dissertation is to study the impact of queueing disciplines on ABR service congestion control and to develop an explicit rate control algorithm. Two queueing disciplines, namely First-In-First-Out (FIFO) and per-VC (virtual connection) queueing, are examined. Performance in terms of fairness, throughput, cell loss rate, buffer size, and network utilization is benchmarked via extensive simulations. Implementation complexity analysis and the trade-offs associated with each queueing implementation are addressed. Contrary to common belief, our investigation demonstrates that per-VC queueing, which is costlier and more complex, does not necessarily provide any significant improvement over simple FIFO queueing. A new ATM switch algorithm is proposed to complement the ABR congestion control standard. The algorithm is designed to work with the rate-based congestion control framework recently recommended by the ATM Forum for ABR services. The algorithm's primary merits are fast convergence, high throughput, high link utilization, and small buffer requirements. Mathematical analysis shows that the algorithm converges to the max-min fair allocation rates in finite time, and that the convergence time is proportional to the number of distinct fair allocations and the round-trip delays in the network. At steady state, the algorithm operates without causing any oscillations in rates. The algorithm does not require any parameter tuning and proves to be very robust in a large ATM network. The impact of ATM switching and ATM layer congestion control on the performance of TCP/IP traffic is studied and the results are presented. The study shows that ATM layer congestion control improves the performance of TCP/IP traffic over ATM, and that implementing the proposed switch algorithm drastically reduces the required switch buffer size. To validate these claims, many benchmark ATM networks are simulated, and the performance of the switch is evaluated in terms of fairness, link utilization, response time, and buffer size requirements. In terms of performance and complexity, the algorithm proposed here offers many advantages over other algorithms proposed in the literature.
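The max-min fairness objective that the proposed switch algorithm converges to can be illustrated with a short progressive-filling sketch on a single link. This is only a hedged illustration of the fairness criterion, not the dissertation's explicit-rate algorithm; the function name and interface are assumptions.

```python
# Minimal sketch of max-min fair rate allocation on a single link via
# progressive filling: connections demanding less than the equal share
# keep their demand, and the leftover is re-shared among the others.

def max_min_fair(capacity, demands):
    """Return one rate per connection under max-min fairness."""
    rates = [0.0] * len(demands)
    remaining = capacity
    active = list(range(len(demands)))
    while active:
        share = remaining / len(active)
        bottlenecked = [i for i in active if demands[i] <= share]
        if not bottlenecked:
            for i in active:
                rates[i] = share
            break
        for i in bottlenecked:
            rates[i] = demands[i]
            remaining -= demands[i]
        active = [i for i in active if i not in bottlenecked]
    return rates

# Example: a 155 Mb/s link shared by demands of 10, 100, and 200 Mb/s.
print(max_min_fair(155.0, [10.0, 100.0, 200.0]))  # [10.0, 72.5, 72.5]
```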

    Traffic Control in Asynchronous Transfer Mode Networks

    In the 90s, there is an increasing demand for new telecommunication services such as video conferencing, videophone, broadcast television, image transfer and bulk file transfer. At the same time, transmission systems at bit rates of 2.5 Gb/s are now being installed, and the expected next generation of 10 Gb/s systems is emerging from the research laboratories. Coupled with that, the development and deployment of new technologies such as fiber optics and intelligent high-speed switches have made it possible to provide these services in future high-speed integrated services networks like Asynchronous Transfer Mode (ATM). However, because of their new characteristics, these new services pose great challenges not previously encountered in traditional circuit-switched or packet-switched networks. For example, features such as large propagation delay compared to transmission delay, diverse application demands, constraints on call processing capacity, and Quality-of-Service (QoS) support for different applications all present new challenges arising from the new technology and new applications. Thus, much research is needed not just to improve existing technologies, but to seek a fundamentally different approach toward network architectures and protocols. In particular, new bandwidth allocation and call admission control algorithms need to be studied to meet these new challenges. A VP bandwidth allocation problem is studied for services which require a guaranteed connection for a fixed duration of time, leading to extensive use of facilities such as reservation of transmission capacity in advance. In such a case, the network may offer discounts for users reserving capacity in advance, due to the advantage of working with predetermined traffic loads. Similarly, charges may differ for customers wanting to book capacity for a specified time interval. Based on this scenario, various charge classes and booking policies are introduced. An effective bandwidth allocation scheme is proposed at the VP level with multiple nested charge classes, where these various classes are allocated bandwidth optimally through booking policies. The scheme is also shown to be effective in maximizing network revenue. The best trade-off between the revenue gained through greater demand for discount bandwidth units and the revenue lost when full-charge booking requests must be turned away because of prior bookings of discount bandwidth units is also sought.
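A hedged sketch of how nested booking limits for charge classes might be checked on a single VP. The two-class structure, the limit values and the accept/reject rule are assumptions made for illustration, not the booking policies studied in the thesis.

```python
# Minimal sketch of admission with nested booking limits for two charge
# classes on one VP: discount bookings are capped, full-charge bookings
# may use any remaining capacity.

class NestedBooking:
    def __init__(self, capacity, discount_limit):
        self.capacity = capacity              # total VP bandwidth units
        self.discount_limit = discount_limit  # max units bookable at discount
        self.booked_full = 0
        self.booked_discount = 0

    def book(self, units, discount):
        total = self.booked_full + self.booked_discount
        if total + units > self.capacity:
            return False                      # not enough capacity at all
        if discount and self.booked_discount + units > self.discount_limit:
            return False                      # discount class hit its nested limit
        if discount:
            self.booked_discount += units
        else:
            self.booked_full += units
        return True

# Example: 100 units on the VP, at most 40 bookable at the discount rate.
vp = NestedBooking(100, 40)
print(vp.book(30, discount=True), vp.book(20, discount=True))  # True False
```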

    TCP/IP traffic over ATM network with ABR flow and congestion control

    Most traffic over existing ATM networks is generated by applications running over the TCP/IP protocol stack. In the near future, the success of ATM technology will depend largely on how well it supports the huge legacy of existing TCP/IP applications. In this thesis, we study and compare, through extensive simulations, the performance of TCP/IP traffic running over different rate-based ABR flow control algorithms such as EFCI, ERICA and FMMRA. Infinite source-end traffic behavior is chosen to represent an FTP application running over TCP/IP. Background VBR traffic with different ON-OFF frequencies is introduced to produce transient network states as well as congestion. The simulations provide insight into issues such as: ABR queue length in a congested ATM switch, source-end ACR (Allowed Cell Rate), link utilization at the congestion point, effective end-to-end TCP throughput, the TCP congestion control window size, and the TCP round trip time. Based on the simulation results, the switch buffer requirements for zero cell loss of the three algorithms are compared, and the fairness of ABR bandwidth allocation among TCP connections is analyzed. The interaction between the TCP layer and the ATM layer flow and congestion control mechanisms is analyzed. Our simulation results show that in order to obtain good TCP throughput with an affordable switch buffer requirement, some kind of switch queue length monitoring and control mechanism is necessary in the ABR congestion control algorithm.
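As a hedged illustration of the thesis's conclusion that queue-length monitoring is needed, the sketch below scales a per-VC fair share down as the switch queue grows. It is a generic example, not EFCI, ERICA or FMMRA; the threshold and gain parameters are assumptions.

```python
# Minimal sketch of queue-length-based explicit rate feedback. Generic
# illustration only; parameter names and the linear scaling rule are
# assumptions, not any of the algorithms compared in the thesis.

def explicit_rate(link_capacity, active_vcs, queue_length,
                  target_queue=50, gain=0.01):
    """Divide capacity equally among active VCs, then scale the advertised
    rate down as the switch queue grows beyond its target level."""
    fair_share = link_capacity / max(active_vcs, 1)
    scale = max(0.1, 1.0 - gain * max(0, queue_length - target_queue))
    return fair_share * scale

# Example: 155 Mb/s link, 10 active VCs, queue of 150 cells.
print(explicit_rate(155.0, 10, 150))
```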

    Equilibrium bandwidth and buffer allocations for elastic traffics

    Consider a set of users sharing a network node under an allocation scheme that provides each user with a fixed minimum and a random extra amount of bandwidth and buffer. Allocations and prices are adjusted to adapt to resource availability and user demands. Equilibrium is achieved when all users optimize their utility and demand equals supply for non-free resources. We analyze two models of user behavior. We show that at equilibrium the expected return on purchasing variable resources can be higher than that on fixed resources. Thus users must balance the marginal increase in utility due to the higher return on variable resources against the marginal decrease in utility due to their variability. For the first user model we further show that at the equilibrium where this tradeoff is optimized, all users hold strictly positive amounts of variable bandwidth and buffer. For the second model we show that if both variable bandwidth and buffer are scarce, then at equilibrium every user either holds both variable resources or none.
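One minimal way to formalize the setup described above, using hypothetical notation not taken from the paper: each user buys fixed amounts (b, c) and variable amounts (β, γ) of bandwidth and buffer to maximize expected utility net of payment, and equilibrium adds market clearing.

```latex
% Hedged sketch; the symbols (prices p, q, utility U, random share X) are
% illustrative assumptions, not the paper's notation.
\[
  \max_{b,\,c,\,\beta,\,\gamma \,\ge\, 0}\quad
    \mathbb{E}\bigl[\,U(b + X\beta,\; c + X\gamma)\,\bigr]
    \;-\; p_b\, b \;-\; p_c\, c \;-\; q_b\, \beta \;-\; q_c\, \gamma
\]
```

Here X stands for the random fraction of the variable purchase actually received; equilibrium then requires that every user solves this problem and that total demand equals the available supply of each non-free resource.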

    SIMULATIVE ANALYSIS OF ROUTING AND LINK ALLOCATION STRATEGIES IN ATM NETWORKS

    For Broadband Integrated Services Digital Networks (B-ISDN), ATM is a promising technology because it supports a wide range of services with different bandwidth demands, traffic characteristics and QoS requirements. This diversity of services makes traffic control in these networks much more complicated than in existing circuit or packet switched networks. Traffic control procedures include both the actions necessary for setting up virtual connections (VC), such as bandwidth assignment, call admission, routing and resource allocation, and the congestion control measures necessary to maintain throughput in overload situations. This paper deals with routing and link allocation, and analyses the performance of such algorithms in terms of call blocking probability, link capacity utilization and QoS parameters. In our model the network carries out the following steps when a call is offered to the network: (1) assign an appropriate bandwidth to the offered call (bandwidth assignment); (2) find a transmission path between the source and destination with enough available transmission capacity (routing); (3) allocate resources along that path (link allocation). We consider an example 5-node network [7] and conduct an extensive survey of routing and link allocation algorithms. Regarding step (1) we employ the equivalent link capacity assignment presented in several papers [1]-[5]. We find that the choice of routing and link allocation algorithms has a great impact on network performance, and that different routing algorithms perform best under different network load values. Shortest path routing (SPR) is a good candidate for low, alternate routing (AR) for medium, and non-alternate routing (NAR) for high traffic load values. Concerning link allocation strategies, we find that partial overlap (POL) strategies, which appear to offer near-optimal performance, are superior to complete sharing (CS) and complete partitioning (CP) strategies. As a further improvement of the POL scheme, we propose a 2-level link allocation algorithm which yields the highest link utilization. In this scheme, not only is the access of different service classes to different virtual paths (VPs) controlled, but each individual VP's transmission capacity is also optimally allocated to the service classes according to their bandwidth requirements in order to assure high link utilization. This method seems to be adjustable to the fine granularity of bandwidth demands in B-ISDN networks. It is shown that call level resource allocation plays a significant role in minimizing cell loss: networks with switches of the same buffer size display different cell loss probabilities in the nodes and impose different end-to-end delays on cells if the link allocation and routing differ. Again, we find that when the traffic load is tolerable for the network, SPR causes the least cell loss. This can be explained by the fact that SPR spreads the incoming calls over the network: it eagerly seeks new routes instead of utilizing already used but still uncongested routes. SPR therefore wastes link and buffer capacity more rapidly at high traffic loads than AR, which chooses a new route only when it has to, i.e. when the higher-priority route becomes congested. That is why we observe that as soon as SPR starts losing cells, the available resources have been consumed, and blocking probability rapidly climbs to very high values after a small further increase in load.
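A short sketch contrasting the three call-level link allocation strategies compared above (complete sharing, complete partitioning, partial overlap) as admission checks on a single link. The interface and capacity figures are illustrative assumptions, and POL is reduced here to a per-class cap plus a shared total, which only approximates the schemes studied in the paper.

```python
# Minimal sketch of CS / CP / POL admission checks on one link shared by
# several service classes. Illustrative assumptions throughout.

def admits(strategy, link_capacity, class_limits, in_use, cls, bw):
    """Return True if a call of class `cls` needing `bw` units is admitted."""
    total_used = sum(in_use.values())
    if strategy == "CS":   # complete sharing: only the link capacity matters
        return total_used + bw <= link_capacity
    if strategy == "CP":   # complete partitioning: each class has its own slice
        return in_use[cls] + bw <= class_limits[cls]
    if strategy == "POL":  # partial overlap: per-class cap plus the shared link
        return (in_use[cls] + bw <= class_limits[cls]
                and total_used + bw <= link_capacity)
    raise ValueError(strategy)

# Example: 100-unit link, per-class caps of 70 units, with "voice" already
# using 60 units and "video" using 30.
in_use = {"voice": 60, "video": 30}
limits = {"voice": 70, "video": 70}
print(admits("CS", 100, limits, in_use, "voice", 15))  # False: exceeds the link
print(admits("POL", 100, limits, in_use, "voice", 5))  # True
```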

    Marginal Productivity Indices and Linear Programming Relaxations for Dynamic Resource Allocation in Queueing Systems

    Many problems concerning resource management in modern communication systems can be simplified to queueing models under Markovian assumptions. The computation of the optimal policy is, however, often hindered by the curse of dimensionality, especially for models that support multiple traffic or job classes. The research focus naturally turns to computationally efficient bounds and high-performance heuristics. In this thesis, we apply indexability theory to the study of admission control for a single-server queue and to the buffer sharing problem for a multi-class queueing system. Our main contributions are the following: we derive the Marginal Productivity Index (MPI) and give a sufficient indexability condition for the admission control model by viewing the buffer as the resource; we construct hierarchical Linear Programming (LP) relaxations for the buffer sharing problem and propose an MPI-based heuristic whose performance is evaluated by discrete event simulation. In our study, the admission control model is used as the building block for the MPI heuristic deployed for the buffer sharing problem. Our condition for indexability only requires that the reward function is concave-like. We also give an explicit non-recursive expression for the MPI calculation, and compare with the previous indexability condition and MPI for the admission control model that penalizes the rejection action. The study of hierarchical LP relaxations for the buffer sharing problem is based on the exact but intractable LP formulation of the continuous-time Markov Decision Process (MDP). The number of hierarchy levels equals the number of job classes. The last level in the hierarchy is exact and corresponds to the exponentially sized LP formulation of the MDP. The first-order relaxation is obtained by relaxing the constraint that no buffer overflow may occur in any sample path to the constraint that the average buffer utilization does not exceed the available capacity. Based on the Lagrangian decomposition of the first-order relaxation, we propose a heuristic policy built on the concept of MPI. Each of the decomposed subproblems corresponds to the admission control model described above, and they are linked through the Lagrange multiplier for the relaxed buffer size constraint in the first-order relaxation. Our simulation study indicates near-optimal performance of the heuristic in the (randomly generated) instances investigated.
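A hedged sketch of the index-policy idea behind the proposed heuristic: an arriving job is admitted only if its class's marginal index at the current occupancy exceeds the shadow price of a buffer slot. The index values and the price below are placeholders, not the MPI derived in the thesis.

```python
# Minimal sketch of an index-based admission rule for buffer sharing.
# Hypothetical illustration: indices[cls][n] is an assumed marginal value
# of accepting a class `cls` job when that class already holds n slots.

def admit(job_class, occupancy, indices, buffer_price, buffer_size, used):
    """Admit the arrival iff there is space and its index beats the price."""
    if used >= buffer_size:
        return False                      # no physical buffer space left
    return indices[job_class][occupancy[job_class]] >= buffer_price

# Example: two classes with indices decreasing in occupancy (concave rewards).
indices = {
    "gold":   [5.0, 3.0, 1.5, 0.5],
    "silver": [2.0, 1.0, 0.4, 0.1],
}
occupancy = {"gold": 1, "silver": 2}
print(admit("gold", occupancy, indices, buffer_price=1.0,
            buffer_size=8, used=3))       # True
print(admit("silver", occupancy, indices, buffer_price=1.0,
            buffer_size=8, used=3))       # False
```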

    The Design, modeling and simulation of switching fabrics: For an ATM network switch

    The requirements of today's telecommunication systems to support high bandwidth and added flexibility brought about the expansion of Asynchronous Transfer Mode (ATM) as a new method of high-speed data transmission. Various analytical and simulation methods may be used to estimate the performance of ATM switches. Analytical methods considerably limit the range of parameters that can be evaluated, due to the extensive formulae used and time-consuming iterations. They are not as effective for large networks because of excessive computations that do not scale linearly with network size. On the other hand, simulation-based methods allow a wider range of performance parameters to be determined in a shorter amount of time, even for large networks. A simulation model, however, is more elaborate in terms of implementation. Instead of using formulae to obtain results, it has to operate software or hardware modules that require a certain amount of effort to create. In this work, simulation is accomplished by utilizing the ATM library, an object-oriented software tool that uses software chips for building ATM switches. The distinguishing feature of this approach is cut-through routing realized at the bit-level abstraction, treating ATM protocol data units, called cells, as groups of 424 bits. The arrival events of cells to the system are not instantaneous, contrary to commonly used simulation methods that treat cells as instantaneous messages. The simulation was run for basic multistage interconnection network types with varying source arrival rates and buffer sizes, producing a set of graphs of cell delays, throughput, cell loss probability, and queue sizes. The techniques of rearranging and sorting were also considered in the simulation. The results indicate that better performance is always achieved by adding stages of elements to the switching system.
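A minimal sketch of the self-routing rule used in banyan-style multistage interconnection fabrics of the kind simulated above: at each 2x2 stage, a cell follows the upper or lower output according to one bit of its destination address. This illustrates the fabric concept only; it is not the thesis's ATM-library model, and the port-numbering convention is an assumption.

```python
# Minimal sketch of self-routing in an n-stage banyan-style fabric with
# 2**n output ports: stage i uses bit i of the destination address
# (most significant bit first) to pick the upper (0) or lower (1) output.

def self_route(dest_port, num_stages):
    """Return the per-stage output choices taken to reach `dest_port`."""
    assert 0 <= dest_port < 2 ** num_stages
    choices = []
    for stage in range(num_stages):
        bit = (dest_port >> (num_stages - 1 - stage)) & 1
        choices.append(bit)
    return choices

# Example: routing a cell to output port 5 (binary 101) in an 8x8, 3-stage fabric.
print(self_route(5, 3))   # [1, 0, 1]
```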