
    On-board B-ISDN fast packet switching architectures. Phase 1: Study

    The broadband integrated services digital network (B-ISDN) is an emerging telecommunications technology that will meet most telecommunications networking needs from the mid-1990s to the early part of the next century. Satellite-based systems are well positioned to provide B-ISDN service through their inherent capabilities: point-to-multipoint and broadcast transmission, virtually unlimited connectivity between any two points within a beam coverage area, short deployment time for communications facilities, flexible and dynamic reallocation of space-segment capacity, and distance-insensitive cost. On-board processing satellites, particularly in a multiple spot beam environment, will provide enhanced connectivity, better performance, optimized access and transmission link design, and lower user service cost. The following are described: the user and network aspects of broadband services; the current development status of broadband services; various satellite network architectures, including system design issues; and various fast packet switch architectures and their detailed designs.

    From burstiness characterisation to traffic control strategy: a unified approach to integrated broadband networks

    The major challenge in the design of an integrated network is the integration and support of a wide variety of applications. To provide the requested performance guarantees, a traffic control strategy has to allocate network resources according to the characteristics of the input traffic. Specifically, the definition of traffic characterisation is central to network design. In this thesis, a traffic stream is characterised based on a virtual queue principle. This approach provides the necessary link between network resource allocation and traffic control. It is difficult to guarantee performance without prior knowledge of the worst behaviour in statistical multiplexing. Accordingly, we investigate the worst-case scenarios in a statistical multiplexer. We evaluate upper bounds on the probability of buffer overflow in a multiplexer and on the data loss of an input stream. It is found that in networks without traffic control, simply controlling the utilisation of a multiplexer does not improve the ability to guarantee performance; instead, the available buffer capacity and the degree of correlation among the input traffic dominate the loss performance. The leaky bucket mechanism has been proposed to protect ATM networks from performance degradation due to congestion. We study the leaky bucket mechanism as a regulation element that protects an input stream, evaluate its optimal parameter settings, and analyse its worst-case performance. To investigate its effectiveness, we analyse the delay performance of a leaky-bucket-regulated multiplexer. Numerical results show that the leaky bucket mechanism can provide well-behaved traffic with a guaranteed delay bound in the presence of misbehaving traffic. Using the leaky bucket mechanism, a general strategy based on burstiness characterisation, called the LB-Dynamic policy, is developed for packet scheduling. This traffic control strategy is closely tied to the allocation of both bandwidth and buffer space in each switching node. In addition, the LB-Dynamic policy monitors the allocated network resources and guarantees the network performance of each established connection, irrespective of the traffic intensity and arrival patterns of incoming packets. Simulation studies demonstrate that the LB-Dynamic policy is able to provide the requested service quality for heterogeneous traffic in integrated broadband networks.
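
    The leaky bucket regulation discussed above can be illustrated with a short sketch. This is a minimal token-bucket shaper under assumed parameters (a token rate and a bucket depth, named rate and depth here); it is not the thesis's notation or its optimal parameter settings, and non-conforming cells are simply reported rather than queued.

```python
# Minimal token-bucket (leaky bucket) regulator sketch.
# The parameters rate and depth are illustrative assumptions, not the thesis's settings.

class LeakyBucket:
    def __init__(self, rate, depth):
        self.rate = rate          # token replenishment rate (cells per second)
        self.depth = depth        # maximum burst size (tokens)
        self.tokens = depth       # start with a full bucket
        self.last = 0.0           # time of the previous update

    def conforms(self, arrival_time):
        """Return True if a cell arriving at arrival_time may pass immediately."""
        # Accumulate tokens for the elapsed interval, capped at the bucket depth.
        self.tokens = min(self.depth,
                          self.tokens + (arrival_time - self.last) * self.rate)
        self.last = arrival_time
        if self.tokens >= 1.0:
            self.tokens -= 1.0    # spend one token per conforming cell
            return True
        return False              # non-conforming: would be delayed or dropped


if __name__ == "__main__":
    lb = LeakyBucket(rate=100.0, depth=10)       # hypothetical settings
    arrivals = [i * 0.001 for i in range(50)]    # a 1000 cell/s burst of 50 cells
    passed = sum(lb.conforms(t) for t in arrivals)
    print(f"{passed} of {len(arrivals)} cells conform")
```

    In this toy run the bucket admits an initial burst up to its depth and then throttles the stream to roughly the token rate, which is the protective behaviour the abstract refers to.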

    A hybrid queueing model for fast broadband networking simulation

    This research investigates a fast simulation method for broadband telecommunication networks, such as ATM and IP networks. As a result of this research, a hybrid simulation model is proposed which combines analytical modelling and event-driven simulation modelling to speed up the overall simulation. The division of traffic into foreground and background, and the different treatment of each to reduce simulation time, is the major contribution reported in this thesis. Background traffic is present to ensure that proper buffering behaviour is included during the simulation experiments, but, unlike in traditional simulation techniques, only the foreground traffic of interest is simulated. To avoid the extra events on the event list, and the associated processing overhead, the novel technique investigated in this research removes the background traffic completely and adjusts the service time of the queues to compensate (in most cases, the service time seen by the foreground traffic increases). Removing the background traffic from the event-driven simulator drastically reduces the number of cell-processing events. Validation shows that, overall, the method works well, although simulations using it show some differences from experimental results on a testbed, mainly because of the assumptions that make the analytical model tractable. Hence, the analytical model needs to be adjusted. This is done by training a neural network to learn the relationship between the input traffic parameters and the difference between the proposed model and the testbed. Following this training, simulations can be run using the output of the neural network to adjust the analytical model for those particular traffic conditions. The approach is applied to cell-scale and burst-scale queueing to simulate an ATM switch, and it is also used to simulate an IP router. In all these applications, the method delivers a fast simulation as well as accurate results.
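
    One way to picture the service-time compensation described above is the residual-capacity sketch below: the background traffic is not simulated at all, and the foreground service time is divided by one minus an assumed background utilisation. The adjustment factor, the Poisson foreground arrivals, and the parameter values are illustrative assumptions, not the calibrated analytical model or the neural-network correction of the thesis.

```python
import random

def simulate_foreground(arrival_rate, service_time, bg_utilisation,
                        n_cells=10_000, seed=1):
    """Event-driven single-server queue for foreground cells only.

    Background traffic is not simulated; instead the foreground service time is
    inflated by the residual-capacity factor 1 / (1 - bg_utilisation).  This is
    a simplifying assumption for illustration, not the thesis's model.
    """
    rng = random.Random(seed)
    eff_service = service_time / (1.0 - bg_utilisation)  # compensate for removed background load

    clock, server_free_at, total_delay = 0.0, 0.0, 0.0
    for _ in range(n_cells):
        clock += rng.expovariate(arrival_rate)            # Poisson foreground arrivals
        start = max(clock, server_free_at)                # wait if the server is busy
        server_free_at = start + eff_service
        total_delay += server_free_at - clock             # waiting + service time
    return total_delay / n_cells


if __name__ == "__main__":
    # Hypothetical numbers: foreground at 500 cells/s, background using 60% of link capacity.
    print("mean foreground delay (s):",
          simulate_foreground(arrival_rate=500.0, service_time=0.0005, bg_utilisation=0.6))
```

    Because only foreground cells generate events, the event count is independent of how heavy the background load is, which is the source of the speed-up claimed in the abstract.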

    Transport Architectures for an Evolving Internet

    In the Internet architecture, transport protocols are the glue between an application’s needs and the network’s abilities. But as the Internet has evolved over the last 30 years, the implicit assumptions of these protocols have held less and less well. This can cause poor performance on newer networks (cellular networks, datacenters) and makes it challenging to roll out networking technologies that break markedly with the past. Working with collaborators at MIT, I have built two systems that explore an objective-driven, computer-generated approach to protocol design. My thesis is that making protocols a function of stated assumptions and objectives can improve application performance and free network technologies to evolve. Sprout, a transport protocol designed for videoconferencing over cellular networks, uses probabilistic inference to forecast network congestion in advance. On commercial cellular networks, Sprout gives 2 to 4 times the throughput and 7 to 9 times less delay than Skype, Apple FaceTime, and Google Hangouts. This work led to Remy, a tool that programmatically generates protocols for an uncertain multi-agent network. Remy’s computer-generated algorithms can achieve higher performance and greater fairness than some sophisticated human-designed schemes, including ones that put intelligence inside the network. The Remy tool can then be used to probe the difficulty of the congestion control problem itself: how easy is it to “learn” a network protocol to achieve desired goals, given a necessarily imperfect model of the networks where it ultimately will be deployed? We found weak evidence of a tradeoff between the breadth of the operating range of a computer-generated protocol and its performance, but also that a single computer-generated protocol was able to outperform existing schemes over a thousand-fold range of link rates.
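
    A rough flavour of the forecast-driven pacing described for Sprout can be given in a few lines. The Poisson delivery model, the 5th-percentile "cautious" forecast, and the 100 ms delay target below are assumptions chosen for illustration; the actual protocol's stochastic inference and window computation are considerably more elaborate.

```python
import math

def cautious_forecast(rate_estimate, horizon_s, percentile=0.05):
    """Lower-percentile forecast of deliverable packets over a horizon.

    Assumes packet deliveries are Poisson with mean rate_estimate (packets/s)
    and uses a normal approximation for the conservative (5th-percentile) count.
    """
    mean = rate_estimate * horizon_s
    std = math.sqrt(mean)
    z = -1.645 if percentile == 0.05 else 0.0   # 5th percentile of a standard normal
    return max(0.0, mean + z * std)

def packets_to_send(rate_estimate, inflight, delay_target_s=0.1):
    """Send only as many packets as the cautious forecast says the link can
    drain within the delay target, minus what is already in flight."""
    budget = cautious_forecast(rate_estimate, delay_target_s)
    return max(0, int(budget) - inflight)


if __name__ == "__main__":
    # Hypothetical cellular link estimated at 300 packets/s with 12 packets in flight.
    print("window to send now:", packets_to_send(rate_estimate=300.0, inflight=12))
```

    The design point being illustrated is that the sender trades a little throughput (by trusting only a pessimistic forecast) for a bound on self-inflicted queueing delay.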

    A study of self-similar traffic generation for ATM networks

    This thesis discusses the efficient and accurate generation of self-similar traffic for ATM networks. ATM networks have been developed to carry multiple service categories. Since the traffic on a number of existing networks is bursty, much research focuses on how to capture the characteristics of traffic to reduce the impact of burstiness. Conventional traffic models do not represent the characteristics of burstiness well, but self-similar traffic models provide a closer approximation. Self-similar traffic models have two fundamental properties, long-range dependence and infinite variance, which have been found in a large number of measurements of real traffic. Therefore, generation of self-similar traffic is vital for the accurate simulation of ATM networks. The main starting point for self-similar traffic generation is the production of fractional Brownian motion (FBM) or fractional Gaussian noise (FGN). In this thesis six algorithms are brought together so that their efficiency and accuracy can be assessed. It is shown that the discrete FGN (dFGN) algorithm and the Weierstrass-Mandelbrot (WM) function are the best in terms of accuracy, while the random midpoint displacement (RMD) algorithm, the successive random addition (SRA) algorithm, and the WM function are superior in terms of efficiency. Three hybrid approaches are suggested to overcome the inefficiency or inaccuracy of the six algorithms. The combination of the dFGN and RMD algorithms was found to be the best, in that it can generate accurate samples efficiently and on-the-fly. After generating FBM sample traces, a further transformation needs to be conducted, using either the marginal distribution model or the storage model, to produce self-similar traffic. The storage model is the better transformation because it provides a more rigorous mathematical derivation and interpretation of physical meaning. The suitability of selected Hurst estimators, namely the rescaled adjusted range (R/S) statistic, the variance-time (VT) plot, and Whittle's approximate maximum likelihood estimator (MLE), is also covered. Whittle's MLE is the best of these estimators, the R/S statistic can only be used as a reference, and the VT plot might misrepresent the actual Hurst value. An improved method for the generation of self-similar traces and their conversion to traffic has been proposed. This, combined with the identification of reliable estimation methods for the Hurst parameter, significantly advances the use of self-similar traffic models in ATM network simulation.
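
    As a concrete example of one of the generators compared above, the sketch below produces a fractional Brownian motion trace with the random midpoint displacement (RMD) algorithm, using the standard variance scaling for a unit-variance B(1). The number of recursion levels and the seed are arbitrary, and the thesis's hybrid dFGN/RMD scheme and the subsequent storage-model transformation are not reproduced here.

```python
import numpy as np

def fbm_rmd(hurst, levels, seed=0):
    """Fractional Brownian motion on [0, 1] via random midpoint displacement.

    Returns 2**levels + 1 samples.  Midpoint offsets at recursion level n have
    variance (1 - 2**(2H - 2)) / 2**(2*n*H), the usual RMD scaling when
    Var[B(1)] = 1.
    """
    rng = np.random.default_rng(seed)
    n = 2 ** levels
    b = np.zeros(n + 1)
    b[n] = rng.normal(0.0, 1.0)                       # B(1) ~ N(0, 1), B(0) = 0
    scale = 1.0 - 2.0 ** (2.0 * hurst - 2.0)
    step = n
    for level in range(1, levels + 1):
        std = np.sqrt(scale / 2.0 ** (2.0 * level * hurst))
        half = step // 2
        for i in range(half, n, step):
            # Midpoint = average of the two endpoints plus a Gaussian offset.
            b[i] = 0.5 * (b[i - half] + b[i + half]) + rng.normal(0.0, std)
        step = half
    return b


if __name__ == "__main__":
    trace = fbm_rmd(hurst=0.8, levels=10)
    increments = np.diff(trace)                       # fractional Gaussian noise
    print("B(1) sample:", trace[-1], "increment std:", increments.std())
```

    The increments of the returned trace form the approximate fractional Gaussian noise that would then be mapped to cell arrivals by one of the transformations mentioned in the abstract.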

    Medium access control mechanisms for high speed metropolitan area networks

    In this dissertation, novel Medium Access Control mechanisms for High Speed Metropolitan Area Networks are proposed and their performance is investigated in the presence of single and multiple priority classes of traffic. The proposed mechanisms are based on the Distributed Queue Dual Bus network, which has been adopted by the IEEE standardization committee as the 802.6 standard for Metropolitan Area Networks, and they address most of its performance limitations. First, the Rotating Slot Generator scheme is introduced, which uses the looped bus architecture that has been proposed for the 802.6 network. According to this scheme, the responsibility for generating slots moves periodically from station to station around the loop. In this way, the positions of the stations relative to the slot generator change continuously, and therefore there are no favorable locations on the busses. Then, two variations of a new bandwidth balancing mechanism, NSW_BWB and ITU_NSW, are introduced. Their main advantage is that their operation does not require the wastage of channel slots, and for this reason they converge very fast to the steady state, where the fair bandwidth allocation is achieved. Their performance and their ability to support multiple priority classes of traffic are thoroughly investigated. Analytic estimates for the stations' throughputs and average segment delays are provided. Moreover, a novel and very effective priority mechanism is introduced which can guarantee almost immediate access for high-priority traffic, regardless of the presence of lower-priority traffic. Its performance is thoroughly investigated and its ability to support real-time traffic, such as voice and video, is demonstrated. Finally, the performance in the presence of erasure nodes of the various mechanisms proposed in this dissertation is examined and compared to the corresponding performance of the most prominent existing mechanisms.
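
    For context, the standard 802.6 bandwidth balancing idea that these mechanisms refine can be sketched as follows: after every BWB_MOD segments it transmits, a station lets one otherwise-usable empty slot pass downstream. The single-priority, single-bus station below is a bare-bones abstraction of that standard behaviour, not the NSW_BWB or ITU_NSW variants analysed in the dissertation.

```python
class DqdbStation:
    """Single-priority DQDB station with standard bandwidth balancing.

    Heavily simplified: the distributed queue (request and countdown counters
    for downstream stations) is not modelled; only the busy bit of slots on the
    forward bus and the bandwidth balancing counter are.
    """
    def __init__(self, bwb_mod=8):
        self.bwb_mod = bwb_mod    # bandwidth balancing modulus
        self.bwb_counter = 0      # segments sent since the last deliberately skipped slot
        self.queue = 0            # segments waiting locally

    def on_slot(self, busy):
        """Handle one slot passing on the forward bus; return True if we fill it."""
        if busy or self.queue == 0:
            return False
        if self.bwb_counter == self.bwb_mod:
            self.bwb_counter = 0  # give this empty slot away to downstream stations
            return False
        self.queue -= 1
        self.bwb_counter += 1     # count our own transmission
        return True


if __name__ == "__main__":
    station = DqdbStation(bwb_mod=4)
    station.queue = 20
    used = sum(station.on_slot(busy=False) for _ in range(25))
    print(f"station filled {used} of 25 empty slots")   # skips every 5th opportunity
```

    The slots deliberately left empty are what the abstract calls wasted channel slots; the mechanisms proposed in the dissertation aim to achieve the same fair steady-state allocation without that wastage.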

    Teletraffic engineering and network planning
