    Some aspects of traffic control and performance evaluation of ATM networks

    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QoS) requirements. To achieve high utilisation of network resources while maintaining the QoS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. It studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed, and the neural controller is shown to be very effective in providing general QoS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated, in which a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic characteristics, and the QoS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation has indicated that CAC schemes based on the effective bandwidth approximation can be very conservative and prevent optimal use of network resources. A modified effective bandwidth CAC approach is therefore proposed to overcome the drawback of conventional methods. Taking account of statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov modulated Poisson process obtained by matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QoS parameters based on the Bahadur-Rao theorem is proposed; it is a refinement of the original effective bandwidth approximation and can lead to higher link utilisation.
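
    As an illustration of the effective bandwidth machinery discussed above, the following minimal Python sketch computes the large-deviations effective bandwidth of a two-state Markov (on/off) source via the standard eigenvalue formula and applies a simple additive admission test. The admission rule, parameter names and numerical values are illustrative assumptions; the thesis's modified CAC, the four-statistic MMPP matching and the Bahadur-Rao refinement are not reproduced here.

        import numpy as np

        def effective_bandwidth_on_off(theta, peak_rate, r_on_off, r_off_on):
            """Effective bandwidth of a two-state Markov (on/off) fluid source.

            Standard large-deviations formula: alpha(theta) is the largest real
            eigenvalue of (Q + theta * diag(h)) divided by theta, where Q is the
            state generator and h the per-state rate vector.
            """
            Q = np.array([[-r_off_on,  r_off_on],     # state 0: off
                          [ r_on_off, -r_on_off]])    # state 1: on
            H = np.diag([0.0, peak_rate])
            return float(np.max(np.linalg.eigvals(Q + theta * H).real)) / theta

        def admit(sources, link_capacity, buffer_size, loss_target):
            """Illustrative CAC rule: admit if the sum of effective bandwidths at
            theta* = ln(1/eps)/B fits on the link (P(queue > B) ~ exp(-theta*B))."""
            theta_star = np.log(1.0 / loss_target) / buffer_size
            total = sum(effective_bandwidth_on_off(theta_star, *s) for s in sources)
            return total <= link_capacity

        # Example: 25 identical on/off sources, peak 10 Mb/s, mean on and off
        # periods of 1 s, on a 300 Mb/s link with a 5 Mb buffer and 1e-6 loss target.
        sources = [(10.0, 1.0, 1.0)] * 25
        print(admit(sources, link_capacity=300.0, buffer_size=5.0, loss_target=1e-6))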

    Traffic control mechanisms with cell rate simulation for ATM networks.

    PhD thesis. Abstract not available.

    Virtual path bandwidth distribution and capacity allocation with bandwidth sharing

    Broadband high-speed networks, such as B-ISDN, are expected to play a dominant role in the future of networking due to their capability to service a variety of traffic types with very different bandwidth requirements, such as video, voice and data. To increase network efficiency in B-ISDN and other such connection-oriented networks, the concept of a virtual path (VP) has been proposed and studied in the literature. A VP is a permanent or semi-permanent reservation of capacity between two nodes. Using VPs can potentially reduce call setup delays, simplify hardware, provide quality of service performance guarantees, and reduce disruption in the event of link or node failure. In order to use VPs efficiently, two problems must be solved with the objective of optimizing network performance: (1) the VPs must be placed within the network, and (2) network link capacity must be divided among the VPs. Most previous work aimed at solving these problems has focused on one problem in isolation from the other. At the same time, previous research efforts that have considered the joint solution of these problems have treated only restricted cases. In addition, these efforts have not explicitly considered the benefits of sharing bandwidth among VPs in the network. We present a heuristic solution method for the joint problem of virtual path distribution and capacity allocation without many of the limitations found in previous studies. Our solution method considers the joint bandwidth allocation and VP placement problem and explicitly accounts for the benefits of shared bandwidth. We demonstrate that our algorithm outperforms previous algorithms in cases where network resources are limited. Because our algorithm provides shared bandwidth, solutions found by our algorithm have a lower setup probability than a network that does not use VPs, as well as a lower loss probability than that provided by VPDBA solutions produced by previous algorithms. In addition, our algorithm provides fairness not found in solutions produced by other algorithms by guaranteeing that some service will be provided to each source-destination pair within the network.
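
    To make the capacity-allocation half of the problem concrete, the sketch below greedily splits a link's capacity units among VPs so that each unit goes to the VP whose Erlang-B blocking probability drops the most. This is a generic complete-partitioning illustration under assumed loads and capacities, not the joint placement-and-sharing heuristic proposed in the work above.

        def erlang_b(servers, load):
            """Erlang-B blocking probability for `servers` circuits and an
            offered `load` in Erlangs, via the standard recursion."""
            b = 1.0
            for c in range(1, servers + 1):
                b = load * b / (c + load * b)
            return b

        def allocate_capacity(vp_loads, total_units):
            """Greedy split of `total_units` capacity units among VPs: each unit
            goes to the VP with the largest marginal reduction in blocking."""
            alloc = [0] * len(vp_loads)
            for _ in range(total_units):
                gains = [erlang_b(alloc[i], a) - erlang_b(alloc[i] + 1, a)
                         for i, a in enumerate(vp_loads)]
                alloc[max(range(len(vp_loads)), key=gains.__getitem__)] += 1
            return alloc

        # Example: three VPs offering 10, 4 and 1 Erlangs share 24 units of link capacity.
        print(allocate_capacity([10.0, 4.0, 1.0], 24))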

    On scheduling input queued cell switches

    Output-queued switching, though able to offer high throughput, guaranteed delay and fairness, lacks scalability owing to the speed-up problem. Input-queued switching, on the other hand, is scalable, and is thus becoming an attractive alternative. This dissertation presents three approaches toward resolving the major problem encountered in input-queued switching that has prohibited the provision of quality of service guarantees. First, we propose a maximum size matching based algorithm, referred to as min-max fair input queueing (MFIQ), which minimizes the additional delay caused by back pressure, and at the same time provides fair service among competing sessions. Like any maximum size matching algorithm, MFIQ performs well for uniform traffic, in which the destinations of the incoming cells are uniformly distributed over all the outputs, but is not stable for non-uniform traffic. Subsequently, we propose two maximum weight matching based algorithms, longest normalized queue first (LNQF) and earliest due date first matching (EDDFM), which are stable for both uniform and non-uniform traffic. LNQF provides fairer service than longest queue first (LQF) and better traffic shaping than oldest cell first (OCF), and EDDFM has a lower probability of cells missing their due dates than LQF, LNQF, and OCF. Our third approach, referred to as store-sort-and-forward (SSF), is a frame-based scheduling algorithm. SSF is proven to achieve 100% throughput in the strict sense and to provide bounded delay and delay jitter for input-queued switches if the traffic conforms to the (r, T) model.
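
    The scheduling decision behind algorithms such as LNQF can be sketched as one maximum weight matching per cell time between inputs and outputs. The Python fragment below uses normalized virtual-output-queue lengths as weights; the normalisation by a per-flow reserved rate is an assumption for illustration and does not reproduce the exact LNQF, EDDFM or SSF definitions from the dissertation.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def max_weight_schedule(queue_len, reserved_rate):
            """One scheduling decision for an N x N input-queued switch.

            queue_len[i, j]     : cells waiting in the VOQ from input i to output j
            reserved_rate[i, j] : positive rate reserved for that flow (normaliser)

            Returns the (input, output) pairs of a maximum weight matching, with
            weights given by the normalized queue lengths (LNQF-style).
            """
            weights = queue_len / reserved_rate
            # linear_sum_assignment minimizes cost, so negate to maximize weight.
            rows, cols = linear_sum_assignment(-weights)
            return [(int(i), int(j)) for i, j in zip(rows, cols) if queue_len[i, j] > 0]

        # Example: 3x3 switch with uniform reserved rates.
        q = np.array([[5., 0., 2.],
                      [0., 7., 1.],
                      [3., 0., 0.]])
        r = np.full((3, 3), 1.0 / 3.0)
        print(max_weight_schedule(q, r))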

    Statistical multiplexing and connection admission control in ATM networks

    Asynchronous Transfer Mode (ATM) technology is widely employed for the transport of network traffic and has the potential to be the base technology for the next generation of global communications. Connection Admission Control (CAC) is an effective traffic control mechanism that is necessary in ATM networks in order to avoid possible congestion at each network node and to achieve the Quality-of-Service (QoS) requested by each connection. CAC determines whether or not the network should accept a new connection. A new connection will only be accepted if the network has sufficient resources to meet its QoS requirements without affecting the QoS commitments already made by the network for existing connections. The design of a high-performance CAC is based on an in-depth understanding of the statistical characteristics of the traffic sources.
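
    The admission decision described above can be illustrated by contrasting peak-rate allocation with a classical Gaussian statistical-multiplexing test. The sketch below is a generic textbook rule under assumed source parameters, not the CAC scheme developed in this thesis.

        import math

        def cac_peak_rate(peak_rates, link_capacity):
            """Peak-rate allocation: always safe, but forfeits the multiplexing gain."""
            return sum(peak_rates) <= link_capacity

        def cac_gaussian(means, variances, link_capacity, loss_target):
            """Gaussian-approximation CAC: admit if the mean aggregate load plus a
            safety margin driven by the loss target fits on the link."""
            alpha = math.sqrt(-2.0 * math.log(loss_target) - math.log(2.0 * math.pi))
            return sum(means) + alpha * math.sqrt(sum(variances)) <= link_capacity

        # Example: 40 on/off sources with 10 Mb/s peak rate and 30% activity
        # (mean 3 Mb/s, variance 21) on a 300 Mb/s link, loss target 1e-6.
        peaks = [10.0] * 40
        means = [3.0] * 40
        variances = [10.0 ** 2 * 0.3 * 0.7] * 40
        print(cac_peak_rate(peaks, 300.0))                  # False: peak allocation rejects
        print(cac_gaussian(means, variances, 300.0, 1e-6))  # True: statistical multiplexing admits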

    Performance Analysis of a Dynamic Bandwidth Allocation Algorithm in a Circuit-Switched Communications Network

    Military communications networks typically employ a gateway multiplexer to aggregate all communications traffic onto a single link. These multiplexers typically use a static bandwidth allocation method via time-division multiplexing (TDM). Inefficiencies occur when a high-bandwidth circuit, e.g., a video teleconferencing circuit, is relatively inactive, leaving a considerable portion of the aggregate bandwidth wasted. Dynamic bandwidth allocation (DBA) reclaims unused bandwidth from circuits with low utilization and reallocates it to circuits with higher utilization without adversely affecting queuing delay. The DBA algorithm developed here measures instantaneous utilization by counting frames arriving during the transmission time of a single frame on the aggregate link. The maximum utilization observed over a monitoring period is then used to calculate the bandwidth available for reallocation. A key advantage of the proposed approach is that it can be applied immediately to existing systems supporting heterogeneous permanent virtual circuits. With the inclusion of DBA, military communications networks can bring information to the warfighter more efficiently and in a shorter time, even for small bandwidths allocated to deployed sites. The algorithm is general enough to be applied to multiple TDM platforms and robust enough to function at any line speed, making it a viable option for high-speed multiplexers. The proposed DBA algorithm provides a powerful performance boost by optimizing the available resources of the communications network. Utilization results indicate that the proposed DBA algorithm significantly outperforms the static allocation model in all cases. The best configuration uses a 65536 bps allocation granularity and a 10 second monitoring period. Utilization gains observed with this configuration were almost 17% over the static allocation method. Queuing delays increased by 50% but remained acceptable, even for real-time traffic.
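
    One plausible reading of the monitoring step described above is sketched below: the number of frames arriving on a circuit during each aggregate-frame transmission time is recorded, the maximum over the monitoring period gives the peak utilization, and the unused portion of the circuit's allocation is reclaimed in multiples of the allocation granularity. The frames_per_slot parameter and the rounding rule are assumptions for illustration, not details taken from the thesis.

        def dba_reclaim(frame_counts, frames_per_slot, circuit_rate_bps, granularity_bps=65536):
            """Sketch of the utilization-monitoring step of a DBA scheme.

            frame_counts    : frames that arrived on the circuit during each
                              aggregate-frame transmission time (one sample per slot)
            frames_per_slot : frames the circuit's current allocation could carry
                              in one such slot (assumed known)
            Returns (peak_utilization, reclaimable_bps), where the reclaimable
            bandwidth is rounded down to the allocation granularity.
            """
            peak_util = max(frame_counts) / frames_per_slot if frame_counts else 0.0
            peak_util = min(peak_util, 1.0)
            unused = circuit_rate_bps * (1.0 - peak_util)
            return peak_util, int(unused // granularity_bps) * granularity_bps

        # Example: a 384 kb/s video-teleconferencing circuit that used at most one of
        # its four frame slots at any point during a 10 s monitoring period.
        print(dba_reclaim([1, 0, 1, 0, 0, 1], frames_per_slot=4, circuit_rate_bps=384_000))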