606 research outputs found

    Approximation to a behavioral model for estimating traffic aggregation scenarios

    This article provides a comparison among different methods for estimating the aggregation of Internet traffic resulting from different users, network-access types and corresponding services. Some approximate models usually used as isolated methods are combined with a temporally scaled ON-OFF model with binomial approximations. The aggregation problem is solved using a new form of parameterization based on the composition of the source traffic according to the concrete characteristics of the users, the accesses and the services. This is a new concept, called CASUAL, included within an overall network planning methodology for the design and dimensioning of the Next Generation Internet.
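
    As a rough illustration of the binomial approximation for aggregated ON-OFF sources (not the paper's CASUAL parameterization, which composes heterogeneous user, access and service profiles), the sketch below estimates the probability that the aggregate rate of homogeneous ON-OFF sources exceeds a link capacity; all parameter values are hypothetical.

```python
from math import comb

def overflow_probability(n_sources, t_on, t_off, peak_rate, capacity):
    """P(aggregate rate of n independent ON-OFF sources exceeds the link capacity),
    using the binomial approximation: each source is ON with probability
    p = t_on / (t_on + t_off)."""
    p = t_on / (t_on + t_off)
    k_min = int(capacity // peak_rate) + 1   # fewest active sources that overload the link
    return sum(comb(n_sources, k) * p**k * (1.0 - p)**(n_sources - k)
               for k in range(k_min, n_sources + 1))

# Hypothetical scenario: 50 users, mean ON 0.4 s, mean OFF 1.6 s,
# 2 Mbit/s peak rate per user, 30 Mbit/s access link.
print(overflow_probability(50, 0.4, 1.6, 2.0, 30.0))
```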

    Packet Loss in Terrestrial Wireless and Hybrid Networks

    The presence of both a geostationary satellite link and a terrestrial local wireless link on the same path of a given network connection is becoming increasingly common, thanks to the popularity of the IEEE 802.11 protocol. The most common situation where a hybrid network comes into play is a Wi-Fi link at the network edge and a satellite link somewhere in the network core. Examples of scenarios where this can happen are ships or airplanes where Internet connection on board is provided through a Wi-Fi access point and a satellite link with a geostationary satellite; a small office located in a remote or isolated area without cabled Internet access; or a rescue team using a mobile ad hoc Wi-Fi network connected to the Internet or to a command centre through a mobile gateway using a satellite link. The serialisation of terrestrial and satellite wireless links is problematic from the point of view of a number of applications, be they based on video streaming, interactive audio or TCP. The reason is the combination of high latency, caused by the geostationary satellite link, and frequent, correlated packet losses caused by the local terrestrial wireless link. In fact, GEO satellites are placed in equatorial orbit at 36,000 km altitude, so the radio signal takes about 250 ms to travel up and down. Satellite systems exhibit low packet loss most of the time, with typical project constraints of a 10⁻⁸ bit error rate 99% of the time, which translates into a packet error rate of 10⁻⁴, except for a few days a year. Wi-Fi links, on the other hand, have quite different characteristics. While the delay introduced by the MAC level is on the order of milliseconds, and is consequently too small to affect most applications, their packet loss characteristics are generally far from negligible. In fact, multipath fading, interference and collisions affect most environments, causing correlated packet losses: this means that often more than one packet at a time is lost for a single fading event.
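
    The two figures quoted above can be reproduced with a few lines of arithmetic. The sketch below assumes 1500-byte packets and independent bit errors, which is how a 10⁻⁸ bit error rate translates into a packet error rate of roughly 10⁻⁴.

```python
SPEED_OF_LIGHT_KM_S = 3.0e5
GEO_ALTITUDE_KM = 36_000            # geostationary altitude quoted in the abstract

def geo_up_down_delay_s(altitude_km=GEO_ALTITUDE_KM):
    """Propagation delay for the radio signal to travel up to the satellite and back down."""
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S

def packet_error_rate(bit_error_rate, packet_bits):
    """Probability that at least one bit of the packet is corrupted, assuming independent bit errors."""
    return 1.0 - (1.0 - bit_error_rate) ** packet_bits

print(f"up/down propagation delay: {geo_up_down_delay_s() * 1e3:.0f} ms")      # ~240 ms
print(f"PER for a 1500-byte packet: {packet_error_rate(1e-8, 1500 * 8):.1e}")  # ~1.2e-4
```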

    ATM virtual connection performance modeling


    A study of self-similar traffic generation for ATM networks

    This thesis discusses the efficient and accurate generation of self-similar traffic for ATM networks. ATM networks have been developed to carry multiple service categories. Since the traffic on a number of existing networks is bursty, much research focuses on how to capture the characteristics of traffic to reduce the impact of burstiness. Conventional traffic models do not represent the characteristics of burstiness well, but self-similar traffic models provide a closer approximation. Self-similar traffic models have two fundamental properties, long-range dependence and infinite variance, which have been found in a large number of measurements of real traffic. Therefore, generation of self-similar traffic is vital for the accurate simulation of ATM networks. The main starting point for self-similar traffic generation is the production of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN). In this thesis six algorithms are brought together so that their efficiency and accuracy can be assessed. It is shown that the discrete FGN (dFGN) algorithm and the Weierstrass-Mandelbrot (WM) function are the best in terms of accuracy, while the random midpoint displacement (RMD) algorithm, the successive random addition (SRA) algorithm, and the WM function are superior in terms of efficiency. Three hybrid approaches are suggested to overcome the inefficiency or inaccuracy of the six algorithms. The combination of the dFGN and RMD algorithms was found to be the best in that it can generate accurate samples efficiently and on the fly. After generating FBM sample traces, a further transformation needs to be conducted, with either the marginal distribution model or the storage model, to produce self-similar traffic. The storage model is the better transformation because it provides a more rigorous mathematical derivation and interpretation of physical meaning. The suitability of selected Hurst estimators, namely the rescaled adjusted range (R/S) statistic, the variance-time (VT) plot, and Whittle's approximate maximum likelihood estimator (MLE), is also covered. Whittle's MLE is the best of these estimators; the R/S statistic can only be used as a reference, and the VT plot might misrepresent the actual Hurst value. An improved method for the generation of self-similar traces and their conversion to traffic has been proposed. This, combined with the identification of reliable methods for estimating the Hurst parameter, significantly advances the use of self-similar traffic models in ATM network simulation.
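
    As a minimal sketch of one of the generators compared in the thesis, the function below implements the random midpoint displacement (RMD) recursion for fractional Brownian motion. It is the textbook approximation (exact only for H = 0.5), and the parameters are illustrative, not the thesis's experimental settings; the increments give FGN that a marginal-distribution or storage model would then convert into traffic.

```python
import numpy as np

def rmd_fbm(n_levels, hurst, sigma=1.0, rng=None):
    """Random midpoint displacement approximation of fractional Brownian
    motion on [0, 1], sampled at 2**n_levels + 1 points (approximate for
    hurst != 0.5)."""
    rng = np.random.default_rng() if rng is None else rng
    n = 2 ** n_levels
    path = np.zeros(n + 1)
    path[n] = sigma * rng.standard_normal()
    step = n
    for level in range(1, n_levels + 1):
        half = step // 2
        # Displacement std dev shrinks by 2**(-hurst) per level, with the usual
        # (1 - 2**(2H - 2)) correction so increments keep the intended variance.
        sd = sigma * np.sqrt((1.0 - 2.0 ** (2 * hurst - 2)) / 2.0 ** (2 * level * hurst))
        mids = np.arange(half, n, step)
        path[mids] = (0.5 * (path[mids - half] + path[mids + half])
                      + sd * rng.standard_normal(mids.size))
        step = half
    return path

fbm = rmd_fbm(n_levels=10, hurst=0.8)   # Hurst 0.8: long-range dependent sample path
fgn = np.diff(fbm)                      # fractional Gaussian noise increments
```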

    Unreliable Retrial Queues in a Random Environment

    This dissertation investigates stability conditions and approximate steady-state performance measures for unreliable, single-server retrial queues operating in a randomly evolving environment. In such systems, arriving customers that find the server busy or failed join a retrial queue from which they attempt to regain access to the server at random intervals. Such models are useful for the performance evaluation of communications and computer networks which are characterized by time-varying arrival, service and failure rates. To model this time-varying behavior, we study systems whose parameters are modulated by a finite Markov process. Two distinct cases are analyzed. The first considers systems with Markov-modulated arrival, service, retrial, failure and repair rates, assuming all interevent and service times are exponentially distributed. The joint process of the orbit size, environment state, and server status is shown to be a tri-layered, level-dependent quasi-birth-and-death (LDQBD) process, and we provide a necessary and sufficient condition for the positive recurrence of LDQBDs using classical techniques. Moreover, we apply efficient numerical algorithms, designed to exploit the matrix-geometric structure of the model, to compute the approximate steady-state orbit size distribution and mean congestion and delay measures. The second case assumes that customers bring generally distributed service requirements while all other processes are identical to the first case. We show that the joint process of orbit size, environment state and server status is a level-dependent, M/G/1-type stochastic process. By employing regenerative theory, and exploiting the M/G/1-type structure, we derive a necessary and sufficient condition for stability of the system. Finally, for the exponential model, we illustrate how the main results may be used to select system parameters that control the mean time customers spend in orbit, subject to bound and stability constraints.
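
    A compact way to build intuition for this class of models is to simulate the modulated chain directly. The sketch below is a Gillespie-style simulation of a two-environment unreliable retrial queue with hypothetical rates; it assumes the customer in service joins the orbit when the server fails (the dissertation's exact failure discipline may differ) and it only estimates the time-average orbit size, not the LDQBD or M/G/1-type solutions derived in the dissertation.

```python
import random

# Hypothetical rates, indexed by environment state (0 or 1).
LAM   = [0.8, 1.5]    # arrival rate
MU    = [2.0, 2.0]    # service rate
THETA = [1.0, 1.0]    # retrial rate per orbiting customer
XI    = [0.1, 0.4]    # failure rate while serving
ALPHA = [1.0, 1.0]    # repair rate
Q     = [0.05, 0.05]  # environment switch rate out of state 0 / state 1

def simulate(horizon=100_000.0, seed=1):
    """Estimate the time-average orbit size by Gillespie-style simulation."""
    random.seed(seed)
    t, env, server, orbit = 0.0, 0, "idle", 0   # server: "idle", "busy" or "failed"
    area = 0.0                                   # integral of orbit size over time
    while t < horizon:
        rates = {"arrival": LAM[env], "switch": Q[env]}
        if server == "idle" and orbit > 0:
            rates["retrial"] = orbit * THETA[env]
        if server == "busy":
            rates["service"], rates["failure"] = MU[env], XI[env]
        if server == "failed":
            rates["repair"] = ALPHA[env]
        total = sum(rates.values())
        dt = random.expovariate(total)          # time to the next event
        area += orbit * dt
        t += dt
        u, event = random.uniform(0.0, total), None
        for event, r in rates.items():          # pick the event proportionally to its rate
            u -= r
            if u <= 0.0:
                break
        if event == "arrival":
            if server == "idle":
                server = "busy"
            else:
                orbit += 1                      # blocked arrival joins the orbit
        elif event == "retrial":
            server, orbit = "busy", orbit - 1
        elif event == "service":
            server = "idle"
        elif event == "failure":
            server, orbit = "failed", orbit + 1  # interrupted customer re-joins the orbit
        elif event == "repair":
            server = "idle"
        else:                                    # environment switch
            env = 1 - env
    return area / t

print("time-average orbit size:", simulate())
```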

    Stochastic performance analysis of Network Function Virtualisation in future internet

    This is the author accepted manuscript; the final version is available from the publisher via the DOI in this record. Network Function Virtualisation (NFV) has been considered a promising technology for the future Internet, able to increase network flexibility, accelerate service innovation and reduce Capital Expenditure (CAPEX) and Operational Expenditure (OPEX) costs by migrating network functions from dedicated network devices to commodity hardware. Recent studies reveal that although this migration of network functions gives network operation unprecedented flexibility and controllability, NFV-based architectures suffer from serious performance degradation compared with traditional service provisioning on dedicated devices. In order to achieve a comprehensive understanding of the service provisioning capability of NFV, this paper proposes a novel analytical model based on Stochastic Network Calculus (SNC) to quantitatively investigate the end-to-end performance bounds of NFV networks. To capture the dynamic and on-demand NFV features, both non-bursty traffic, e.g. the Poisson process, and bursty traffic, e.g. the Markov Modulated Poisson Process (MMPP), are jointly considered in the developed model to characterise the arriving traffic. To address the challenges of resource competition and end-to-end NFV chaining, the convolution associativity property and the leftover service technique of SNC are exploited to calculate the available resources of Virtual Network Function (VNF) nodes in the presence of multiple competing traffic flows, and to transform the complex NFV chain into an equivalent system for performance derivation and analysis. Both numerical analysis and extensive simulation experiments are conducted to validate the accuracy of the proposed analytical model. Results demonstrate that the analytical performance metrics match well those obtained from the simulation experiments and numerical analysis. In addition, the developed model can be used as a practical and cost-effective tool to investigate service chain design and resource allocation strategies in NFV networks. Engineering and Physical Sciences Research Council (EPSRC)
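
    The deterministic skeleton of the two ingredients named above, leftover service at each VNF node and min-plus convolution into a single end-to-end service curve, can be sketched as below. The paper's SNC analysis is probabilistic (MGF bounds for Poisson and MMPP arrivals), so this worst-case version with made-up numbers is only a stand-in for the actual derivation.

```python
def leftover_rate_latency(rate, latency, cross_sigma, cross_rho):
    """Rate-latency service left to the tagged flow at one VNF node under blind
    multiplexing with (sigma, rho)-bounded cross traffic."""
    assert rate > cross_rho, "node would be overloaded by its cross traffic"
    r_left = rate - cross_rho
    t_left = (rate * latency + cross_sigma) / r_left
    return r_left, t_left

def end_to_end_delay_bound(flow_sigma, flow_rho, nodes):
    """nodes: list of (rate, latency, cross_sigma, cross_rho), one per VNF."""
    leftovers = [leftover_rate_latency(*node) for node in nodes]
    # Min-plus convolution of rate-latency curves: rates take the minimum,
    # latencies add up, giving a single end-to-end service curve.
    r_e2e = min(r for r, _ in leftovers)
    t_e2e = sum(t for _, t in leftovers)
    assert r_e2e > flow_rho, "end-to-end service cannot sustain the flow"
    return t_e2e + flow_sigma / r_e2e   # classic delay bound for a (sigma, rho) flow

# Hypothetical 3-VNF chain: (rate Mbit/s, latency s, cross sigma Mbit, cross rho Mbit/s) per node.
chain = [(100.0, 0.002, 5.0, 40.0), (80.0, 0.001, 2.0, 20.0), (120.0, 0.003, 8.0, 60.0)]
print("end-to-end delay bound (s):", end_to_end_delay_bound(1.0, 10.0, chain))
```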

    An Adaptive Scheme for Admission Control in ATM Networks

    This paper presents a real-time front-end admission control scheme for ATM networks. A call management scheme is presented which uses the burstiness associated with traffic sources in a heterogeneous ATM environment to effect dynamic assignment of bandwidth. In the proposed scheme, call acceptance is based on an on-line evaluation of the upper bound on the cell loss probability, which is derived from the estimated distribution of the number of calls arriving. Using this scheme, the negotiated quality of service will be assured when there is no estimation error. The control mechanism is effective when the number of calls is large, and tolerates loose bandwidth enforcement and loose policing control. The proposed approach is very effective in the connection-oriented transport of ATM networks, where the decision to admit new traffic is based on the a priori knowledge of the state of the route taken by the traffic.
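
    As an illustration of bound-based admission control (not the paper's exact estimator, which works from the estimated distribution of the number of arriving calls), the sketch below substitutes a simple Gaussian tail approximation of the instantaneous aggregate rate and admits a call only if the resulting loss bound stays within the QoS target; all figures are hypothetical.

```python
from math import erf, sqrt

def clp_upper_bound(calls, capacity):
    """Gaussian-approximation estimate of the cell loss probability for a set of
    bursty calls, each described by (peak_rate, activity_probability)."""
    mean = sum(p * r for r, p in calls)
    var = sum(p * (1.0 - p) * r * r for r, p in calls)
    if var == 0.0:
        return 0.0 if mean <= capacity else 1.0
    z = (capacity - mean) / sqrt(var)
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))   # P(aggregate rate > capacity)

def admit(existing_calls, new_call, capacity, qos_target=1e-6):
    """Accept the new call only if the estimated loss bound stays within the QoS target."""
    return clp_upper_bound(existing_calls + [new_call], capacity) <= qos_target

# Hypothetical mix: 100 calls at 2 Mbit/s peak, 10% active, on a 50 Mbit/s link.
existing = [(2.0, 0.1)] * 100
print(admit(existing, (2.0, 0.1), capacity=50.0))   # True under these numbers
```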