
    Statistical multiplexing and connection admission control in ATM networks

    Asynchronous Transfer Mode (ATM) technology is widely employed for the transport of network traffic and has the potential to be the base technology for the next generation of global communications. Connection Admission Control (CAC) is the traffic control mechanism needed in ATM networks to avoid congestion at each network node and to achieve the Quality of Service (QoS) requested by each connection. CAC determines whether or not the network should accept a new connection: a new connection is accepted only if the network has sufficient resources to meet its QoS requirements without affecting the QoS commitments already made to existing connections. The design of a high-performance CAC rests on an in-depth understanding of the statistical characteristics of the traffic sources.
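
    The abstract does not commit to a particular admission test, so the sketch below shows only a generic effective-bandwidth style of check, with invented figures: a new connection is accepted only if the aggregate effective bandwidth, including the newcomer, still fits on the link.

```python
def admit(new_effective_bw, existing_effective_bws, link_capacity):
    """Accept the new connection only if the summed effective bandwidths,
    including the newcomer, stay within the link capacity (all in Mbit/s).
    A rule-of-thumb check, not the specific CAC studied in this work."""
    return sum(existing_effective_bws) + new_effective_bw <= link_capacity

# Example: 40 Mbit/s already committed on a 150 Mbit/s link.
print(admit(12.0, [10.0, 30.0], 150.0))   # True: 52 <= 150
```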

    Some aspects of traffic control and performance evaluation of ATM networks

    The emerging high-speed Asynchronous Transfer Mode (ATM) networks are expected to integrate, through statistical multiplexing, large numbers of traffic sources having a broad range of statistical characteristics and different Quality of Service (QOS) requirements. To achieve high utilisation of network resources while maintaining the QOS, efficient traffic management strategies have to be developed. This thesis considers the problem of traffic control for ATM networks. It studies the application of neural networks to various ATM traffic control issues such as feedback congestion control, traffic characterization, bandwidth estimation, and Call Admission Control (CAC). A novel adaptive congestion control approach based on a neural network that uses reinforcement learning is developed, and the neural controller is shown to be very effective in providing general QOS control. A Finite Impulse Response (FIR) neural network is proposed to adaptively predict the traffic arrival process by learning the relationship between past and future traffic variations. On the basis of this prediction, a feedback flow control scheme at the input access nodes of the network is presented. Simulation results demonstrate significant performance improvement over conventional control mechanisms. In addition, an accurate yet computationally efficient approach to effective bandwidth estimation for multiplexed connections is investigated. In this method, a feedforward neural network is employed to model the nonlinear relationship between the effective bandwidth, the traffic characteristics, and a QOS measure. Applications of this approach to admission control, bandwidth allocation and dynamic routing are also discussed. A detailed investigation indicates that CAC schemes based on the effective bandwidth approximation can be very conservative and prevent optimal use of network resources. A modified effective bandwidth CAC approach is therefore proposed to overcome this drawback of conventional methods. Taking account of statistical multiplexing between traffic sources, we directly calculate the effective bandwidth of the aggregate traffic, which is modelled by a two-state Markov-modulated Poisson process obtained by matching four important statistics. We use the theory of large deviations to provide a unified description of effective bandwidths for various traffic sources and of the associated ATM multiplexer queueing performance approximations, illustrating their strengths and limitations. In addition, a more accurate estimation method for ATM QOS parameters based on the Bahadur-Rao theorem is proposed; it refines the original effective bandwidth approximation and can lead to higher link utilisation.
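
    As a minimal sketch of the large-deviations effective-bandwidth idea (the parameters below are invented, not the thesis's fitted model): for a Markov-modulated Poisson process with generator Q and per-state rates, the effective bandwidth at space parameter theta is the largest real eigenvalue of Q + (e^theta - 1) diag(rates), divided by theta.

```python
import numpy as np

def mmpp_effective_bandwidth(Q, rates, theta):
    """Effective bandwidth alpha(theta) of an MMPP with generator Q and
    state-dependent Poisson rates, via the Gartner-Ellis limit
    alpha(theta) = lambda_max(Q + (exp(theta) - 1) * diag(rates)) / theta."""
    M = Q + (np.exp(theta) - 1.0) * np.diag(rates)
    return np.max(np.real(np.linalg.eigvals(M))) / theta

# Two-state MMPP with made-up parameters (cells per time unit).
Q = np.array([[-0.4,  0.4],
              [ 0.1, -0.1]])
rates = np.array([10.0, 1.0])

for theta in (0.01, 0.1, 1.0):
    print(theta, mmpp_effective_bandwidth(Q, rates, theta))
```

    As theta tends to zero the value approaches the mean rate (2.8 here), and it grows towards peak-like behaviour as theta increases; choosing the operating point on this curve is exactly where the conservatism discussed in the abstract enters.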

    Two-dimensional fluid queues with temporary assistance

    We consider a two-dimensional stochastic fluid model with N ON-OFF inputs and temporary assistance, which is an extension of the same model with N = 1 in Mahabhashyam et al. (2008). The rates of change of both buffers are piecewise constant and dependent on the underlying Markovian phase of the model, and the rates of change for Buffer 2 are also dependent on the specific level of Buffer 1. This is because both buffers share a fixed output capacity, the precise proportion of which depends on Buffer 1. The generalization of the number of ON-OFF inputs necessitates modifications in the original rules of output-capacity sharing from Mahabhashyam et al. (2008) and considerably complicates both the theoretical analysis and the numerical computation of various performance measures.
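
    A crude discrete-time simulation of a two-buffer fluid model of this kind is sketched below. The sharing rule used here (a non-empty Buffer 1 takes the whole capacity C, Buffer 2 gets only the leftover) is a deliberate simplification of the paper's level-dependent rule, and all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented parameters.
N = 5                     # number of ON-OFF inputs feeding Buffer 1
ALPHA, BETA = 0.8, 1.2    # OFF->ON and ON->OFF transition rates per input
R_ON = 1.0                # fluid rate of one input while ON
LAM2 = 1.0                # constant inflow rate to Buffer 2
C = 4.0                   # total output capacity shared by the two buffers
DT, T = 0.01, 10_000.0    # time step and horizon

on = rng.random(N) < ALPHA / (ALPHA + BETA)   # start each input near stationarity
b1 = b2 = 0.0
trace1, trace2 = [], []

for _ in range(int(T / DT)):
    # Each input toggles independently at the appropriate rate.
    on ^= rng.random(N) < np.where(on, BETA, ALPHA) * DT

    # Simplified sharing rule: a non-empty Buffer 1 takes all of C;
    # Buffer 2 is served with whatever capacity is left over.
    c1 = C if b1 > 0 else min(C, on.sum() * R_ON)
    c2 = C - c1

    b1 = max(b1 + (on.sum() * R_ON - c1) * DT, 0.0)
    b2 = max(b2 + (LAM2 - c2) * DT, 0.0)
    trace1.append(b1)
    trace2.append(b2)

print("mean content of Buffer 1:", np.mean(trace1))
print("mean content of Buffer 2:", np.mean(trace2))
```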

    Non-Intrusive Measurement in Packet Networks and its Applications

    Network measurement is becoming increasingly important as a means to assess the performance of packet networks. Network performance can involve different aspects such as availability, link failure detection, etc., but in this thesis we focus on Quality of Service (QoS). Among the metrics used to define QoS, we are particularly interested in end-to-end delay performance. Recently, the adoption of Service Level Agreements (SLAs) between network operators and their customers has become a major driving force behind QoS measurement: measurement is necessary to produce evidence of fulfilment of the requirements specified in the SLA. Many attempts at QoS-based packet-level measurement have relied on Active Measurement, in which the properties of the end-to-end path are tested by adding test packets generated from the sending end. The main drawback of active probing is its intrusive nature, which places an extra burden on the network and has been shown to distort the measured condition of the network. The other category of network measurement is known as Passive Measurement. In contrast to Active Measurement, no test packets are injected into the network, so no intrusion is caused. The proposed applications using Passive Measurement are currently quite limited, but Passive Measurement may offer the potential for an entirely different perspective compared with Active Measurement. In this thesis, the objective is to develop a measurement methodology for end-to-end delay performance based on Passive Measurement. We assume that the nodes in a network domain are accessible, for example a network domain operated by a single network operator. The novel idea is to estimate the local per-hop delay distribution based on a hybrid (model- and measurement-based) approach. With this approach, the storage requirement for measurement data can be greatly alleviated and the overhead placed on each local node can be minimised, so maintaining the fast switching operation of a local switch or router. Per-hop delay distributions have been widely used to infer QoS at a single local node, but the end-to-end delay distribution is more appropriate when quantifying delays across an end-to-end path. Our approach is to capture every local node's delay distribution; the end-to-end delay distribution can then be obtained by convolving the estimated delay distributions. In this thesis, our algorithm is examined by comparing the proximity of the actual end-to-end delay distribution with the estimate obtained by our measurement method under various conditions, e.g. in the presence of Markovian or power-law traffic. Furthermore, the comparison between Active Measurement and our scheme is also studied. Network operators may find our scheme useful when measuring end-to-end delay performance. As stated earlier, our scheme has no intrusive effect. Furthermore, the measurement results at a local node can be re-used to deduce the end-to-end delay behaviour of other paths, as long as the local node is included in the path. Thus our scheme is more scalable than active probing.
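
    The convolution step is straightforward once the per-hop delay distributions are discretised on a common bin width; the sketch below uses placeholder distributions (and the usual assumption of independence between hops) rather than measured ones.

```python
import numpy as np

# Placeholder per-hop delay PMFs on a common bin width (say 0.1 ms per bin);
# in the thesis these would come from the hybrid per-node estimation.
hop_pmfs = [
    np.array([0.6, 0.3, 0.1]),           # hop 1
    np.array([0.5, 0.25, 0.15, 0.1]),    # hop 2
    np.array([0.7, 0.2, 0.1]),           # hop 3
]

e2e = np.array([1.0])
for pmf in hop_pmfs:
    e2e = np.convolve(e2e, pmf)    # delays add, so their PMFs convolve

e2e /= e2e.sum()                    # guard against rounding drift
print("P(end-to-end delay <= k bins):", np.cumsum(e2e))
```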

    Predicting Internet Bandwidth in Educational Institutions using Lagrange's Interpolation

    This paper addresses the problem of Internet bandwidth optimization and prediction in institutions of higher learning in Nigeria. The operation of the link-load balancer, which provides an efficient, cost-effective and easy-to-use solution to maximize utilization and availability of Internet access, is extensively discussed. This enables enterprises to lease two or three ISP links connecting the internal network to the Internet. The paper also proposes the application of Lagrange's method of interpolation for the prediction of Internet bandwidth in these institutions. The analysis provides a unique graphical solution relating the effective actual bandwidth (Mbps) to the corresponding acceptable number of Internet users ('000) in the institutions. The prediction allows us to view the actual Internet bandwidth and the acceptable number of Internet users as the population of users increases.
    Keywords: Internet Bandwidth, Optimization, Link-Load Balancer, Prediction, Maximized Utilization, Availability of Internet access
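
    A minimal sketch of the interpolation step, with made-up (users, bandwidth) samples rather than the paper's data: Lagrange interpolation fits the unique polynomial through the measured points and evaluates it at a new user count.

```python
import numpy as np
from scipy.interpolate import lagrange

# Illustrative data only: users (in thousands) vs. effective bandwidth (Mbps).
users_k = np.array([1.0, 2.0, 3.0, 4.0])
bandwidth_mbps = np.array([18.0, 34.0, 47.0, 58.0])

poly = lagrange(users_k, bandwidth_mbps)   # degree-3 interpolating polynomial
print(poly(2.5))                           # predicted bandwidth at 2,500 users
```

    High-degree Lagrange interpolation is numerically delicate and unreliable far outside the measured range, so predictions of this kind are best kept close to the observed user counts.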

    Dynamic bandwidth allocation in ATM networks

    This thesis investigates bandwidth allocation methodologies for transporting new, bursty traffic types emerging in ATM networks. Existing ATM traffic management solutions are not readily able to handle the congestion that inevitably results from the bursty traffic of these new services. This research addresses bandwidth allocation for bursty traffic by proposing and exploring the concept of dynamic bandwidth allocation and comparing it with traditional static bandwidth allocation schemes.
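
    The abstract does not spell out a particular allocation algorithm; the toy allocator below (all names and numbers invented) merely illustrates the dynamic idea of re-sizing an allocation from recent measurements, in contrast to a static scheme that reserves the declared peak rate for the lifetime of the connection.

```python
from collections import deque

class DynamicAllocator:
    """Toy dynamic bandwidth allocator: the allocation tracks a sliding
    window of measured rates plus a safety margin, instead of a fixed
    peak-rate reservation."""

    def __init__(self, window=10, margin=1.2, floor_mbps=1.0):
        self.samples = deque(maxlen=window)
        self.margin = margin
        self.floor = floor_mbps

    def observe(self, measured_rate_mbps):
        self.samples.append(measured_rate_mbps)

    def allocation(self):
        if not self.samples:
            return self.floor
        return max(self.floor, self.margin * max(self.samples))

alloc = DynamicAllocator()
for rate in [2.0, 5.5, 3.1, 0.4]:      # measured rates per interval (Mbps)
    alloc.observe(rate)
    print(alloc.allocation())
```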

    Internet Data Bandwidth Optimization and Prediction in Higher Learning Institutions Using Lagrange’s Interpolation: A Case of Lagos State University of Science and Technology

    This research work studies the performance of the Internet services of an institution of higher learning in Nigeria. Data were collated from Lagos State University of Science and Technology (LASUSTECH) as the case study for this research work. The problem of Internet bandwidth optimization in institutions of higher learning in Nigeria was extensively addressed in this paper. The operation of the link-load balancer, which provides an efficient, cost-effective and easy-to-use solution to maximize utilization and availability of Internet access, is discussed. In this research work, Lagrange's method of interpolation was used to predict effective Internet data bandwidth for a significantly increasing number of Internet users. The linear Lagrange's interpolation model (LILAGRINT model) was proposed for LASUSTECH. The predictions allow us to view the effective Internet data bandwidth against the corresponding acceptable number of Internet users as the number of users increases. The integrity of the model was examined, verified and validated at the ICT department of the institution. The LILAGRINT model was integrated into the management of ICT and tested. The results showed that the proposed LILAGRINT model is highly effective and innovative in the area of Internet data bandwidth prediction.
    Keywords: Internet Data Bandwidth, Optimization, Link-load balancer, Lagrange's interpolation, Predictions, Management of ICT
    DOI: 10.7176/CEIS/10-1-04
    Publication date: September 30th 202
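
    For the two-point (linear) case that the LILAGRINT model is built on, the Lagrange form reduces to the straight line through two measured samples (u_0, B_0) and (u_1, B_1), where u is the number of users and B the bandwidth (the symbols here are generic placeholders, not the paper's notation):

```latex
B(u) = B_0\,\frac{u - u_1}{u_0 - u_1} + B_1\,\frac{u - u_0}{u_1 - u_0}
```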

    Stochastic Dynamic Programming and Stochastic Fluid-Flow Models in the Design and Analysis of Web-Server Farms

    A Web-server farm is a specialized facility designed specifically for housing Web servers catering to one or more Internet-facing Web sites. In this dissertation, a stochastic dynamic programming technique is used to obtain the optimal admission control policy with different classes of customers, and stochastic fluid-flow models are used to compute the performance measures in the network. The two types of network traffic considered in this research are streaming (guaranteed bandwidth per connection) and elastic (shares available bandwidth equally among connections). We first obtain the optimal admission control policy using stochastic dynamic programming, in which, based on the number of requests of each type being served, a decision is made whether to allow or deny service to an incoming request. In this subproblem, we consider a fixed-bandwidth-capacity server, which allocates the requested bandwidth to the streaming requests and divides all of the remaining bandwidth equally among the elastic requests. The performance metric of interest in this case will be the blocking probability of streaming traffic, which will be computed in order to be able to provide Quality of Service (QoS) guarantees. Next, we obtain bounds on the expected waiting time in the system for elastic requests that enter the system. This will be done at the server level in such a way that the total available bandwidth for the requests is constant. Trace data will be converted to an ON-OFF source and fluid-flow models will be used for this analysis. The results are compared with both the mean waiting time obtained by simulating real data and the expected waiting time obtained using traditional queueing models. Finally, we consider the network of servers and routers within the Web farm, where data from servers flows and merges before being transmitted to the requesting users via the Internet. We compute the waiting time of the elastic requests at intermediate and edge nodes by obtaining the distribution of the outflow of the upstream node. This outflow distribution is obtained using a methodology based on minimizing the deviations from the constituent inflows. The analysis also helps us to compute waiting times at different bandwidth capacities, and hence to obtain a suitable bandwidth to promise or satisfy the QoS guarantees. This research helps in obtaining performance measures for different traffic classes at a Web-server farm so as to be able to promise or provide QoS guarantees, while at the same time helping to utilize the resources of the server farm efficiently, thereby reducing operational costs and increasing energy savings.
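
    A toy version of the admission-control problem, uniformized to discrete time with invented rates, rewards and a truncated elastic-class state space (none of which come from the dissertation), can be solved by straightforward value iteration:

```python
# Invented parameters: capacity, guaranteed bandwidth per streaming connection,
# arrival/service rates, and admission rewards for the two traffic classes.
C, B_S = 10.0, 2.0
LAM_S, LAM_E = 1.0, 2.0
MU_S, MU_E = 0.5, 1.0
R_S, R_E = 5.0, 1.0
MAX_E = 20                    # truncate the elastic class for a finite state space
GAMMA = 0.99                  # per-transition discount factor

MAX_S = int(C // B_S)
states = [(s, e) for s in range(MAX_S + 1) for e in range(MAX_E + 1)]
LAMBDA = LAM_S + LAM_E + MAX_S * MU_S + MAX_E * MU_E   # uniformization rate

def admit_value(st, cls, V):
    """Value of admitting a class-cls arrival in state st, or None if infeasible."""
    s, e = st
    if cls == "streaming":
        return None if (s + 1) * B_S > C else R_S + V[(s + 1, e)]
    return None if e + 1 > MAX_E else R_E + V[(s, e + 1)]

V = {st: 0.0 for st in states}
for _ in range(1000):                                  # value-iteration sweeps
    newV = {}
    for (s, e) in states:
        stay = V[(s, e)]
        v_s = admit_value((s, e), "streaming", V)
        v_e = admit_value((s, e), "elastic", V)
        total = LAM_S * (stay if v_s is None else max(v_s, stay))     # streaming arrival
        total += LAM_E * (stay if v_e is None else max(v_e, stay))    # elastic arrival
        total += s * MU_S * V[(s - 1, e)] if s else 0.0               # streaming departure
        total += e * MU_E * V[(s, e - 1)] if e else 0.0               # elastic departure
        total += (LAMBDA - LAM_S - LAM_E - s * MU_S - e * MU_E) * stay  # dummy self-loop
        newV[(s, e)] = GAMMA * total / LAMBDA
    V = newV

# Read off the policy: admit a streaming request iff it fits and admitting is
# worth at least as much as rejecting under the converged value function.
admit_streaming = {}
for st in states:
    v = admit_value(st, "streaming", V)
    admit_streaming[st] = v is not None and v >= V[st]
print(sum(admit_streaming.values()), "of", len(states), "states admit streaming requests")
```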

    Bandwidth allocation in ATM networks: heuristic approach

    Paper included in a KAKENHI (Grant-in-Aid for Scientific Research) report. Project number: 09680388, Grant-in-Aid for Scientific Research (C)(2), fiscal years H9-H10 (1997-1998). Principal investigator: Nemoto, Yoshiaki. Project title: A real-time fault detection method for large-scale information networks using information filtering.