
    On-board closed-loop congestion control for satellite based packet switching networks

    NASA LeRC is currently investigating a satellite architecture that incorporates on-board packet switching capability. Because of the statistical nature of packet switching, arrival traffic may fluctuate, so a congestion control mechanism must be integrated into the on-board processing unit. This study focuses on closed-loop reactive control. We investigate the impact of the long propagation delay on performance and propose a scheme to overcome the problem. The scheme uses a global feedback signal to regulate the packet arrival rate of the ground stations: the satellite continuously broadcasts the status of its output buffer, and the ground stations respond either by selectively discarding packets or by tagging the excess packets as low priority. The two schemes are evaluated by theoretical queueing analysis and by simulation. The former is used to analyze a simplified model and to establish basic trends and bounds, and the latter is used to assess the performance of a more realistic system and to evaluate the effectiveness of more sophisticated control schemes. The results show that the long propagation delay makes closed-loop congestion control less responsive, so the broadcast information can only be used to extract statistical information. The discarding scheme requires carefully chosen status information and a carefully chosen reduction function, and it normally requires a significant amount of ground discarding to reduce the on-board packet loss probability. The tagging scheme is more effective because it tolerates more uncertainty and allows a larger margin of error in the status information: it can protect high-priority packets from excessive loss while fully utilizing the downlink bandwidth.
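
    The ground-station side of such a feedback loop can be illustrated with a short sketch. What follows is a minimal illustration under stated assumptions, not the paper's actual control law: the occupancy thresholds, the linear reduction function, and names such as GroundStation and handle_packet are invented for the example.

        import random

        # Minimal sketch of a ground station reacting to the satellite's broadcast
        # output-buffer status. Thresholds and the linear reduction function are
        # illustrative assumptions, not the parameters studied in the paper.
        class GroundStation:
            def __init__(self, discard_mode=True, low_threshold=0.5, high_threshold=0.9):
                self.discard_mode = discard_mode      # True: discard excess; False: tag as low priority
                self.low_threshold = low_threshold    # below this occupancy, admit everything
                self.high_threshold = high_threshold  # above this occupancy, admit almost nothing
                self.admit_fraction = 1.0             # fraction of packets sent as normal traffic

            def on_broadcast(self, buffer_occupancy):
                """Update the admitted fraction from the (delayed) broadcast occupancy in [0, 1]."""
                if buffer_occupancy <= self.low_threshold:
                    self.admit_fraction = 1.0
                elif buffer_occupancy >= self.high_threshold:
                    self.admit_fraction = 0.1
                else:
                    span = self.high_threshold - self.low_threshold
                    self.admit_fraction = 1.0 - 0.9 * (buffer_occupancy - self.low_threshold) / span

            def handle_packet(self, packet):
                """Return the packet to transmit, a tagged packet, or None if discarded."""
                if random.random() < self.admit_fraction:
                    return packet                     # within the regulated rate
                if self.discard_mode:
                    return None                       # discarding scheme: drop the excess packet
                packet["priority"] = "low"            # tagging scheme: mark as droppable on board
                return packet

    Because the broadcast arrives only after a long propagation delay, the occupancy value is stale by the time a station reacts; this is why the abstract treats it as statistical information and finds the tagging variant more tolerant of that uncertainty.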

    Traffic Management and Congestion Control in the ATM Network Model.

    Asynchronous Transfer Mode (ATM) networking technology has been chosen by the International Telegraph and Telephone Consultative Committee (CCITT) for use on future local as well as wide area networks to handle a wide range of traffic types. It is a cell-based network architecture that resembles circuit-switched networks, providing Quality of Service (QoS) guarantees not normally found on data networks. Although the specifications for the architecture have been continuously evolving, traffic congestion management techniques for ATM networks are not yet well defined. This thesis studies the traffic management problem in detail, provides some theoretical understanding, and presents a collection of techniques to handle the problem under various operating conditions. A detailed simulation of various ATM traffic types is carried out, and the collected data are analyzed to gain insight into congestion formation patterns. Problems that may arise when planning a migration from legacy LANs to ATM technology are also considered, and we present an algorithm to identify the portions of the network that should be upgraded to ATM first. The concept of adaptive burn-in is introduced to help reduce the computational cost of virtual circuit setup and teardown operations.

    Dynamic bandwidth allocation in ATM networks

    This thesis investigates bandwidth allocation methodologies for transporting the new, emerging bursty traffic types in ATM networks. Existing ATM traffic management solutions are not readily able to handle the congestion that inevitably results from the bursty traffic of these new services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it with traditional static bandwidth allocation schemes.
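
    The contrast between static and dynamic allocation can be made concrete with a small sketch. This is a hedged illustration rather than the thesis' actual scheme: the per-window measurement, the headroom factor, and the function names are assumptions for the example.

        # Illustrative contrast between static (peak-rate) and dynamic bandwidth
        # allocation for a bursty ATM connection. The measurement window and the
        # headroom factor are assumptions of this sketch, not values from the thesis.
        def static_allocation(peak_rate_cells_per_s):
            """Reserve the declared peak rate for the lifetime of the connection."""
            return peak_rate_cells_per_s

        def dynamic_allocation(measured_rates, peak_rate_cells_per_s, headroom=1.2):
            """Re-negotiate the reservation each window from the most recent measured rate."""
            recent = measured_rates[-1] if measured_rates else peak_rate_cells_per_s
            # Reserve the recent rate plus headroom, never exceeding the declared peak.
            return min(peak_rate_cells_per_s, recent * headroom)

        # A bursty source alternating between 10% and 100% of its 10,000 cell/s peak:
        peak = 10_000
        measurements = [1_000, 1_000, 10_000, 1_000, 1_000, 10_000]
        print("static :", [static_allocation(peak)] * len(measurements))
        print("dynamic:", [round(dynamic_allocation(measurements[:i + 1], peak))
                           for i in range(len(measurements))])

    Static allocation ties up the peak rate for the whole connection, while the dynamic variant tracks the measured load; that tracking is the efficiency gain for bursty traffic which the thesis explores.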

    Performance Analysis of an ATM MUX with a New Space Priority Mechanism under ON-OFF Arrival Processes

    We propose a new space priority mechanism and analyze its performance in a single Constant Bit Rate (CBR) server. The arrival process is derived from the superposition of two traffic types, each of which in turn results from the superposition of homogeneous ON-OFF sources and can be approximated by a two-state Markov Modulated Poisson Process (MMPP). The buffer mechanism enables the Asynchronous Transfer Mode (ATM) layer to adapt the quality of cell transfer to the Quality of Service (QoS) requirements and to improve the utilization of network resources. This is achieved by "Selective-Delaying and Pushing-In" (SDPI) cells according to the class they belong to. The scheme is applicable to scheduling delay-tolerant non-real-time traffic and delay-sensitive real-time traffic. Analytical expressions for various performance parameters and numerical results are obtained, and simulation results in terms of cell loss probability conform to the numerical analysis.
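
    A generic space-priority buffer in the spirit of the SDPI mechanism can be sketched as follows: real-time cells are inserted ahead of queued non-real-time cells, and when the buffer is full the newest non-real-time cell is dropped to make room. The exact insertion and displacement rules analysed in the paper are not reproduced here; the class name and the two labels "rt"/"nrt" are assumptions of the sketch.

        # Sketch of a push-in space-priority buffer served by a constant-bit-rate
        # server (one cell per slot). This only illustrates the flavour of
        # "Selective-Delaying and Pushing-In"; the paper's exact SDPI rules may differ.
        class PriorityBuffer:
            def __init__(self, capacity):
                self.capacity = capacity
                self.cells = []                        # head of queue at index 0
                self.lost = {"rt": 0, "nrt": 0}        # per-class cell-loss counters

            def arrive(self, cell_class):
                if cell_class == "nrt":
                    if len(self.cells) < self.capacity:
                        self.cells.append("nrt")       # non-real-time cells join the tail
                    else:
                        self.lost["nrt"] += 1          # full buffer: non-real-time cell lost
                    return
                # Real-time cell: push in just ahead of the first queued non-real-time cell.
                insert_at = next((i for i, c in enumerate(self.cells) if c == "nrt"),
                                 len(self.cells))
                if len(self.cells) >= self.capacity:
                    # Buffer full: drop the newest non-real-time cell, if any, to make room.
                    for i in range(len(self.cells) - 1, -1, -1):
                        if self.cells[i] == "nrt":
                            del self.cells[i]
                            self.lost["nrt"] += 1
                            break
                    else:
                        self.lost["rt"] += 1           # buffer already full of real-time cells
                        return
                self.cells.insert(insert_at, "rt")

            def serve(self):
                """One CBR server slot: transmit the head-of-line cell, if any."""
                return self.cells.pop(0) if self.cells else None

    In the paper's analysis, the arrivals offered to such a buffer come from superposed ON-OFF sources approximated by a two-state MMPP; the sketch above covers only the buffer discipline, not the arrival model.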

    Dynamic Time Windows and Generalized Virtual Clocks-Combined Closed-Loop/Open-Loop Mechanisms for Congestion Control of Data Traffic in High Speed Wide Area Networks

    This paper presents a set of mechanisms for congestion control of data traffic in high speed wide area networks (HSWANs), along with preliminary performance results. The network model assumes reservation of resources based on average requirements. The mechanisms address (a) the different network time constants (short-term and medium-term), (b) admission control that allows a controlled variance of traffic as a function of medium-term congestion, and (c) prioritized scheduling based on a new fairness criterion, which is perceived as the appropriate fairness measure for HSWANs. Preliminary performance studies show that the queue length statistics at switching nodes (mean, variance and max) are approximately proportional to the end-point 'time window' size. Further,
    * when network utilization approaches unity, the time window mechanism can protect the network from buffer overruns and excessive queueing delays, and
    * when the network utilization level is smaller, the time window may be increased to allow a controlled amount of variance that attempts to simultaneously meet the performance goals of the end-user and those of the network.
    The prioritized scheduling algorithms proposed and studied in this paper are a generalization of the Virtual Clock algorithm [Zhang 1989]. The study investigates
    * necessary and sufficient conditions for accomplishing the desired fairness,
    * simulation and (limited) analytical results for expected waiting times,
    * the ability to protect against misbehaving users, and
    * the relationship between end-point admission control (Time Window) and internal scheduling ('Pulse' and Virtual Clock) at the switch.
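
    Since the scheduling algorithms generalize Virtual Clock, a sketch of the basic Virtual Clock discipline [Zhang 1989] is given below. The 'Pulse' generalization and the time-window admission control are not reproduced; the class layout, flow identifiers, and rates are assumptions of the example.

        import heapq

        # Minimal sketch of the basic Virtual Clock scheduler [Zhang 1989]: each flow's
        # virtual clock advances by length/reserved_rate per packet, and packets are
        # served in increasing order of their virtual-clock stamps.
        class VirtualClockScheduler:
            def __init__(self, reserved_rates):
                self.rates = dict(reserved_rates)      # flow id -> reserved rate (bits/s)
                self.vclock = {f: 0.0 for f in self.rates}
                self.queue = []                        # min-heap of (stamp, seq, flow, length)
                self.seq = 0                           # tie-breaker for equal stamps

            def enqueue(self, flow, length_bits, now):
                # Advance the flow's virtual clock by the packet's ideal service time.
                self.vclock[flow] = max(self.vclock[flow], now) + length_bits / self.rates[flow]
                heapq.heappush(self.queue, (self.vclock[flow], self.seq, flow, length_bits))
                self.seq += 1

            def dequeue(self):
                """Serve the queued packet with the smallest virtual-clock stamp."""
                if not self.queue:
                    return None
                stamp, _, flow, length = heapq.heappop(self.queue)
                return flow, length, stamp

        # A flow sending faster than its reservation accumulates larger stamps and is
        # scheduled behind a well-behaved flow, which is how misbehaving users are isolated.
        sched = VirtualClockScheduler({"A": 1e6, "B": 1e6})
        for _ in range(3):
            sched.enqueue("A", 8000, now=0.0)          # flow A bursts three packets at t=0
        sched.enqueue("B", 8000, now=0.0)              # flow B sends a single packet at t=0
        print([sched.dequeue()[0] for _ in range(4)])  # -> ['A', 'B', 'A', 'A']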

    Simulation and analytical performance studies of generic atm switch fabrics.

    As technology improves, exciting new services such as video telephony become possible and economically viable, but their deployment is hampered by the inability of present networks to carry them. The long-term vision is a single network able to carry all present and future services. Asynchronous Transfer Mode (ATM) is the versatile new packet-based switching and multiplexing technique proposed for this single network. Interest in ATM is currently high as both industrial and academic institutions strive to understand more about the technique. Using both simulation and analysis, this research has investigated how the performance of ATM switches is affected by architectural variations in the switch fabric design, and how the stochastic nature of ATM affects the timing of constant bit rate services. As a result, the research has contributed new ATM switch performance data, a general-purpose ATM switch simulator, and analytic models that further research may utilise, and it has uncovered a significant timing problem of the ATM technique. The thesis will also be of interest and assistance to anyone planning to use simulation as a research tool to model an ATM switch.
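
    As an indication of the kind of model such a simulator builds on, the sketch below runs a slot-level simulation of an output-queued NxN switch under uniform Bernoulli cell arrivals. The switch size, load, and queue model are assumptions for illustration only and are not the fabrics or configurations studied in the thesis.

        import random

        # Slot-level sketch of an output-queued NxN ATM switch fed by uniform
        # Bernoulli arrivals; reports the mean number of cells queued per output.
        def simulate_output_queued_switch(n_ports=8, load=0.8, slots=100_000, seed=1):
            rng = random.Random(seed)
            queues = [0] * n_ports                     # cells waiting at each output port
            queue_cell_slots = 0
            for _ in range(slots):
                # Arrivals: each input generates a cell with probability `load`,
                # destined to a uniformly chosen output.
                for _ in range(n_ports):
                    if rng.random() < load:
                        queues[rng.randrange(n_ports)] += 1
                # Service: each output transmits at most one cell per slot.
                for out in range(n_ports):
                    if queues[out] > 0:
                        queues[out] -= 1
                queue_cell_slots += sum(queues)
            return queue_cell_slots / (slots * n_ports)

        print("mean output-queue length:", simulate_output_queued_switch())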

    Flit Scheduling for Cut-through Switching: Towards Near-Zero End-to-end Latency

    Achieving low end-to-end latency with high reliability is one of the key objectives for future mission-critical applications such as the Tactile Internet and real-time interactive Virtual/Augmented Reality (VR/AR). To serve this purpose, cut-through (CT) switching is a promising approach to significantly reduce the transmission delay of store-and-forward switching, via flit-ization of a packet and concurrent forwarding of the flits belonging to the same packet. CT switching, however, has been applied only to well-controlled scenarios like network-on-chip and data center networks, and hence flit scheduling in heterogeneous environments (e.g., the Internet and wide area networks) has received little attention. This paper tries to fill that gap and facilitate the adoption of CT switching in general-purpose data networks. In particular, we first introduce a packet discarding technique that sheds packets expected to violate their delay requirement, and we then propose two flit scheduling algorithms, fEDF (flit-based Earliest Deadline First) and fSPF (flit-based Shortest Processing-time First), aimed at improving both reliability and end-to-end latency. Taking packet delivery ratio (PDR) as the reliability metric, we performed extensive simulations showing that the proposed scheduling algorithms can improve PDR by up to 30.11% (when the delay requirement is 7 ms) and reduce the average end-to-end latency by up to 13.86% (when the delay requirement is 10 ms), compared with first-in first-out (FIFO) scheduling.
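
    The two scheduling rules and the discarding step can be sketched as below. The packet fields, the flit service time, and the deadline test used for discarding are assumptions of this illustration; the paper's exact fEDF/fSPF definitions and discarding rule may differ.

        # Hedged sketch of flit-based Earliest Deadline First (fEDF), flit-based
        # Shortest Processing-time First (fSPF), and deadline-based packet discarding.
        def next_packet_fEDF(queued):
            """fEDF: forward the next flit of the packet with the earliest deadline."""
            return min(queued, key=lambda p: p["deadline"]) if queued else None

        def next_packet_fSPF(queued):
            """fSPF: forward the next flit of the packet with the fewest flits left."""
            return min(queued, key=lambda p: p["flits_left"]) if queued else None

        def discard_hopeless(queued, now, flit_time):
            """Shed packets that cannot meet their deadline even if served without interruption."""
            return [p for p in queued if now + p["flits_left"] * flit_time <= p["deadline"]]

        # Two queued packets contending for the next flit slot (times in seconds):
        packets = [
            {"id": 1, "deadline": 7e-3, "flits_left": 40},
            {"id": 2, "deadline": 10e-3, "flits_left": 5},
        ]
        packets = discard_hopeless(packets, now=0.0, flit_time=1e-4)
        print("fEDF picks packet", next_packet_fEDF(packets)["id"])  # earliest deadline -> 1
        print("fSPF picks packet", next_packet_fSPF(packets)["id"])  # fewest flits left -> 2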