53 research outputs found

    On-board congestion control for satellite packet switching networks

    It is desirable to incorporate packet switching capability on board future communication satellites. Because of the statistical nature of packet communication, incoming traffic fluctuates and may cause congestion. It is therefore necessary to incorporate a congestion control mechanism as part of the on-board processing to smooth and regulate the bursty traffic. Although congestion control has been studied extensively for both baseband and broadband terrestrial networks, those schemes are not feasible for space-based switching networks because of the unique characteristics of the satellite link. Here, we propose a new congestion control method for on-board satellite packet switching. The scheme takes into account the long propagation delay of the satellite link and takes advantage of the satellite's broadcast capability. It divides control between the ground terminals and the satellite, but assigns primary responsibility to the ground terminals and requires only minimal hardware resources on board the satellite.
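
The division of control described above can be illustrated with a toy simulation. All parameters, names, and rules here are invented for illustration and are not the paper's scheme: the satellite broadcasts a one-bit congestion flag, and ground terminals adapt their sending rate on feedback that arrives one propagation delay late.

```python
# Illustrative sketch only (not the paper's scheme): ground terminals
# adapt their rate from a congestion flag the satellite broadcasts;
# the feedback arrives one propagation delay late, as it would over a
# GEO link (~250 ms one way).
PROP_DELAY_SLOTS = 2  # hypothetical: time slots per one-way delay

def simulate(slots, capacity=10, threshold=15, initial_rate=8):
    queue, rate = 0, initial_rate
    flags = [False] * PROP_DELAY_SLOTS  # congestion flags in flight
    history = []
    for _ in range(slots):
        congested_seen = flags.pop(0)  # delayed broadcast arrives now
        # terminal: halve rate on congestion, otherwise probe upward
        rate = max(1, rate // 2) if congested_seen else min(rate + 1, 2 * capacity)
        # satellite: on-board queue drains at the link capacity
        queue = max(0, queue + rate - capacity)
        flags.append(queue > threshold)  # satellite broadcasts its state
        history.append(queue)
    return history
```

Run over a few hundred slots, the queue oscillates but stays bounded: the late feedback causes overshoot, which is exactly why the long propagation delay must be designed for rather than ignored.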

    Dynamic bandwidth scheduling and burst construction algorithm for downlink in (4G) mobile WiMAX networks

    Advanced wireless systems, also called fourth generation (4G) wireless systems, such as Mobile Worldwide Interoperability for Microwave Access (WiMAX), are developed to provide broadband wireless access in the true sense. It is therefore mandatory for such systems to provide Quality of Service (QoS) support for a wide range of applications. In these systems, wireless base stations are responsible for distributing the proper amount of bandwidth among different mobile users so as to satisfy each user's QoS requirements. The task of distributing bandwidth rests upon a scheduling algorithm, typically executed at the base station. 2G and 3G wireless systems are able to provide only voice, low data rate, and delay-insensitive services, such as Web browsing. This is due to the lack of development in digital modulation and multiple access schemes, the two major aspects of the physical layer of these systems. Digital modulation is used to combat the location-dependent channel errors introduced in the data transmitted by the base station over a wireless channel to a mobile station. Hence, the different locations of mobile stations in a cell coverage area require different modulation and coding schemes for error-free transmission. Link adaptation is a technique that makes the use of variable modulation and coding schemes possible, according to the varying locations of mobile stations; 4G systems use it to achieve error-free transmission. 2G and 3G systems cannot achieve error-free transmission in many cases because they offer significantly fewer (or no) choices of modulation and coding schemes for the different locations of mobile stations. In such cases, most of the time, the wireless channel is either error-prone or error-free for a mobile station.
Scheduling algorithms developed for 2G and 3G systems focused on meeting the long-term average rate requirements of users, which are satisfied at the expense of zero transmission for mobile users experiencing a bad, or error-prone, channel. This approach was adopted to make efficient use of wireless channel capacity, and it was the best approach for the majority of scheduling algorithms because delay-sensitive applications were not supported in such systems, so bounded delay was not a concern. Hence, the majority of the algorithms focused on meeting long-term average rate requirements while maximizing cell throughput. This made efficient use of wireless channel capacity at the expense of zero transmission for mobile users experiencing a bad channel, compromising delay performance. These approaches, however, are not suitable for 4G systems, as such systems support a wide range of applications, from delay-insensitive to highly delay-sensitive. Hence, in this thesis a dynamic bandwidth scheduling algorithm called Leaky Bucket Token Bank (LBTB) is proposed. The algorithm exploits advanced features of 4G systems, such as link adaptation and the multiple access scheme, to achieve long-term average rate requirements for delay-insensitive applications and bounded delay for delay-sensitive applications. The advanced features of 4G systems also bring new challenges. One arises from Orthogonal Frequency Division Multiple Access (OFDMA), the multiple access scheme deployed in 4G systems. In OFDMA, the scheduled data for different mobile stations is packed into bursts and mapped to a two-dimensional structure of time and frequency called the OFDMA frame. It has been observed that the way bursts are mapped to the OFDMA frame affects the wakeup time of the mobile stations receiving data, and therefore their power consumption. The wakeup time is the duration within the OFDMA frame for which a mobile station is active.
Since the OFDMA frame is a limited and precious radio resource, it must be used efficiently, which requires that its wastage be minimized. Hence, in this thesis a burst construction algorithm called Burst Construction for Fairness in Power (BCFP) is also proposed. The algorithm attempts to achieve fairness in the power consumption of different mobile stations by shaping their wakeup times, while also attempting to minimize the wastage of radio resources. To evaluate the jointly proposed algorithms (LBTB+BCFP), the proposed burst construction algorithm (BCFP) is also combined with two existing scheduling algorithms: Token Bank Fair Queuing (TBFQ) and Adaptive Token Bank Fair Queuing (ATBFQ). TBFQ was developed for 3G wireless networks, whereas ATBFQ is an extension of TBFQ developed for 4G wireless networks. The performance of the proposed algorithms jointly (LBTB+BCFP) is therefore compared with the combinations TBFQ+BCFP and ATBFQ+BCFP. We compare performance in terms of average queuing delay, average cell throughput, packet loss, fairness among different mobile users, fairness in average wakeup times (average power consumption), and the fraction of radio resources wasted. The performance of the proposed burst construction algorithm (BCFP) is also compared with that of the Round Robin algorithm in terms of fairness in average power consumption and the fraction of radio resources wasted, for a varying number of users.
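
The token-bank family the thesis builds on (TBFQ, ATBFQ, LBTB) shares one generic mechanism, which the sketch below illustrates. The class, its parameters, and its borrowing rule are invented for illustration and are not the LBTB algorithm itself: each flow earns tokens at its guaranteed rate, overflow beyond its bucket feeds a shared bank, and a flow short of tokens may borrow from that bank.

```python
# Hypothetical sketch of a token-bank scheduler (illustrative names and
# rules, not the thesis's LBTB algorithm).
class TokenBankScheduler:
    def __init__(self, rates, bucket_cap):
        self.rates = rates                    # per-flow guaranteed rates
        self.tokens = {f: 0 for f in rates}   # per-flow token buckets
        self.cap = bucket_cap                 # leaky-bucket depth
        self.bank = 0                         # shared token bank

    def tick(self):
        """Replenish each bucket; overflow is deposited in the bank."""
        for f, r in self.rates.items():
            self.tokens[f] += r
            surplus = self.tokens[f] - self.cap
            if surplus > 0:
                self.bank += surplus
                self.tokens[f] = self.cap

    def grant(self, flow, demand):
        """Serve a demand from the flow's own tokens, then the bank."""
        use_own = min(demand, self.tokens[flow])
        self.tokens[flow] -= use_own
        borrowed = min(demand - use_own, self.bank)
        self.bank -= borrowed
        return use_own + borrowed
```

The shared bank is what lets bursty, delay-sensitive flows momentarily exceed their guaranteed rate without stealing from the long-term average rates of the others.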

    Performance modeling and control of web servers

    This thesis deals with the task of modeling a web server and designing a mechanism that can prevent the web server from being overloaded. Four papers are presented. The first paper gives an M/G/1/K processor sharing model of a single web server. The model is validated against measurements and simulations on the commonly used web server Apache, and a description is given of how to calculate the necessary parameters in the model. The second paper introduces an admission control mechanism for the Apache web server based on a combination of queuing theory and control theory. The admission control mechanism is tested in the laboratory, implemented as a stand-alone application in front of the web server. The third paper continues the work from the second paper by discussing stability. This time, the admission control mechanism is implemented as a module within the Apache source code. Experiments show the stability and settling time of the controller. Finally, the fourth paper investigates the concept of service level agreements for a web site. The agreements allow a maximum response time and a minimal throughput to be set. The requests are sorted into classes, where each class is assigned a weight (representing the income for the web site owner). Then an optimization algorithm is applied so that the total profit for the web site during overload is maximized.
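
A useful property of the M/G/1/K processor-sharing model can be shown numerically: under processor sharing, the queue-length distribution is insensitive to the service-time distribution and coincides with that of M/M/1/K, so blocking probability and mean occupancy follow from the load ρ = λ·E[S] alone. A minimal sketch (function name is ours):

```python
# M/G/1/K processor sharing: by PS insensitivity the queue-length
# distribution equals the M/M/1/K one, p_n = rho^n (1-rho)/(1-rho^(K+1)).
def mgik_ps(rho, K):
    if rho == 1.0:
        probs = [1.0 / (K + 1)] * (K + 1)   # uniform at rho = 1
    else:
        norm = (1 - rho ** (K + 1)) / (1 - rho)
        probs = [rho ** n / norm for n in range(K + 1)]
    blocking = probs[K]                      # arrival sees a full system
    mean_n = sum(n * p for n, p in enumerate(probs))
    return blocking, mean_n
```

For example, at ρ = 0.5 and K = 10 the blocking probability is about 4.9·10⁻⁴, which is the kind of quantity the admission controller in the later papers regulates.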

    Congestion detection within multi-service TCP/IP networks using wavelets.

    Using passive observation within the multi-service TCP/IP networking domain, we have developed a methodology that associates the frequency composition of composite traffic signals with the packet transmission mechanisms of TCP. At the core of our design is the Discrete Wavelet Transform (DWT), used to temporally localise the frequency variations of a signal. Our design exploits transmission mechanisms (including Fast Retransmit/Fast Recovery, Congestion Avoidance, Slow Start, and Retransmission Timer Expiry with Exponential Backoff) that are activated in response to changes within this type of network environment. Manipulation of the DWT output, combined with the use of novel heuristics, permits shifts in the frequency spectrum of composite traffic signals to be directly associated with these mechanisms. Our methodology can be adapted to accommodate composite traffic signals that contain a substantial proportion of data originating from non-rate-adaptive sources often associated with Long Range Dependence and Self Similarity (e.g. Pareto sources). We demonstrate the methodology in two ways. Firstly, it is used to design a congestion indicator tool that can operate with network control mechanisms that dissipate congestion. Secondly, using a queue management algorithm (Random Early Detection) as a candidate protocol, we show how our methodology can be adapted to produce a performance-monitoring tool. Our approach provides a solution that has low operational and implementation intrusiveness with respect to existing network infrastructure. The methodology requires a single parameter (the arrival rate of traffic at a network node), which can be extracted from almost all network-forwarding devices; this simplifies implementation. Our study was performed within the context of fault management, with design requirements and constraints arising from an in-depth study of the Fault Management Systems (FMS) used by British Telecom on regional UK networks up to February 2000.
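
The DWT at the core of the design can be illustrated with a single level of the Haar transform: its detail coefficients temporally localise abrupt shifts in an arrival-rate signal, such as the rate drop TCP produces on a congestion event. The signal and threshold below are illustrative, not the thesis's heuristics.

```python
# One level of the Haar DWT: approximation = scaled pairwise sums,
# detail = scaled pairwise differences. A sharp rate change shows up
# as a large detail coefficient at the corresponding time index.
import math

def haar_dwt(signal):
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

# A flat arrival rate followed by a sudden drop (e.g. a TCP
# multiplicative decrease); the drop falls inside pair index 3.
rate = [10.0] * 7 + [5.0] * 9
_, d = haar_dwt(rate)
spikes = [i for i, c in enumerate(d) if abs(c) > 1.0]  # localised change
```

The spike appears only at the pair spanning the rate change, which is the temporal-localisation property a Fourier spectrum of the same signal would not give.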

    Implementation of charging schemes to transport and service level ATM networks

    Nowadays, telecommunications networks such as telephony networks, computer networks, and packet switched networks are each dedicated to only one or a few types of services. When a user wants to subscribe to various telecommunications services, he needs to be connected to several different types of networks, which raises the cost of connection and reduces the efficiency of network utilisation.

    Designing new network adaptation and ATM adaptation layers for interactive multimedia applications

    Multimedia services, audiovisual applications composed of a combination of discrete and continuous data streams, will be a major part of the traffic flowing in the next generation of high speed networks. The cornerstones for multimedia are Asynchronous Transfer Mode (ATM), foreseen as the technology for the future Broadband Integrated Services Digital Network (B-ISDN), and audio and video compression algorithms such as MPEG-2 that reduce applications' bandwidth requirements. Powerful desktop computers available today can seamlessly integrate network access and applications, and thus bring the new multimedia services to home and business users. Among these services, those based on multipoint capabilities are expected to play a major role.    Interactive multimedia applications, unlike traditional data transfer applications, have stringent simultaneous requirements in terms of loss and delay jitter due to the nature of audiovisual information. In addition, such stream-based applications deliver data at a variable rate, in particular if a constant quality is required.    ATM is able to integrate traffic of different natures within a single network, creating interactions of different types that translate into delay jitter and loss. Traditional protocol layers do not have the appropriate mechanisms to provide the required network quality of service (QoS) for such interactive variable bit rate (VBR) multimedia multipoint applications. This lack of functionality calls for the design of protocol layers with the appropriate functions to handle the stringent requirements of multimedia.    This thesis contributes to the solution of this problem by proposing new Network Adaptation and ATM Adaptation Layers for interactive VBR multimedia multipoint services.    The foundations on which to build these new multimedia protocol layers are twofold: the requirements of real-time multimedia applications and the nature of compressed audiovisual data.
On this basis, we present a set of design principles we consider mandatory for a generic Multimedia AAL (MAAL) capable of handling interactive VBR multimedia applications in point-to-point as well as multicast environments. These design principles are then used as a foundation to derive a first set of functions for the MAAL, namely: cell loss detection via sequence numbering, packet delineation, dummy cell insertion, and cell loss correction via RSE FEC techniques.    The proposed functions, partly based on theoretical studies, are implemented and evaluated in a simulated environment. Performance is evaluated from the network point of view using classic metrics such as cell and packet loss. We also study the behavior of the cell loss process in order to evaluate the efficiency to be expected from the proposed cell loss correction method, and we discuss the difficulties of mapping network QoS parameters to user QoS parameters for multimedia applications, especially for video information. In order to present a complete performance evaluation that is also meaningful to the end-user, we use the MPQM metric to map the obtained network performance results to the user level. We evaluate the impact that cell loss has on video, as well as the improvements achieved with the MAAL.    All performance results are compared to an equivalent implementation based on AAL5, as specified by the current ITU-T and ATM Forum standards.    An AAL has to be, by definition, generic. But to fully exploit the functionalities of the AAL layer, it is necessary to have a protocol layer that efficiently interfaces the network and the applications. This role is devoted to the Network Adaptation Layer.    The network adaptation layer (NAL) we propose aims to efficiently interface the applications to the underlying network so as to achieve reliable but low-overhead transmission of video streams.
Since this requires a priori knowledge of the information structure to be transmitted, we propose that the NAL be codec specific.    The NAL targets interactive multimedia applications. These applications share a set of common requirements independent of the encoding scheme used. This calls for the definition of a set of design principles that should be shared by any NAL, even if the implementation of the functions themselves is codec specific. On the basis of these design principles, we derive the common functions that NALs have to perform, which are mainly two: the segmentation and reassembly of data packets and the selective protection of data.    On this basis, we develop an MPEG-2 specific NAL. It provides perceptual syntactic information protection, the PSIP, which results in an intelligent, minimum-overhead protection of video information. The PSIP takes advantage of the hierarchical organization of compressed video data, common to the majority of compression algorithms, to perform a selective data protection based on the perceptual relevance of the syntactic information.    Transmission over the combined NAL-MAAL layers shows significant improvement in terms of CLR and perceptual quality compared to equivalent transmissions over AAL5 with the same overhead.    The usage of the MPQM as a performance metric, which is one of the main contributions of this thesis, leads to a very interesting observation. The experimental results show that for unexpectedly high CLRs, the average perceptual quality remains close to the original value. The economic potential of such an observation is very important. Given that the data flows are VBR, it is possible to improve network utilization by means of statistical multiplexing. It is therefore possible to reduce the cost per communication by increasing the number of connections, with a minimal loss in quality.
This conclusion could not have been derived without the combined usage of perceptual and network QoS metrics, which together unveil the economic potential of perceptually protected streams.    The proposed concepts are finally tested in a real environment, where a proof-of-concept implementation of the MAAL shows behavior close to the simulated results, thereby validating the proposed multimedia protocol layers.
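
The cell loss correction in the MAAL uses RSE (Reed-Solomon erasure) FEC. As a simpler stand-in for the same erasure-recovery idea, the sketch below adds one XOR parity cell per group, which can rebuild any single lost cell whose position sequence numbering has already located; it is an illustration of the principle, not the thesis's code.

```python
# Erasure recovery sketch: one XOR parity cell per group of data
# cells. Sequence numbering tells the receiver WHICH cell was lost;
# XOR-ing the survivors (including the parity cell) rebuilds it.
from functools import reduce

def xor_cells(cells):
    """Byte-wise XOR of equal-length cells."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*cells))

def recover(group, lost_index):
    """Rebuild the cell at lost_index from the surviving cells."""
    survivors = [c for i, c in enumerate(group) if i != lost_index]
    return xor_cells(survivors)

data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
group = data + [xor_cells(data)]       # append the parity cell
assert recover(group, 1) == b"\x10\x20"  # lost cell is reconstructed
```

Real RSE codes generalise this to recover several losses per group at the cost of several redundancy cells, which is what makes selective protection of only the perceptually important data worthwhile.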

    A flexible, abstract network optimisation framework and its application to telecommunications network design and configuration problems

    A flexible, generic network optimisation framework is described. The purpose of this framework is to reduce the effort required to solve particular network optimisation problems. The essential idea behind the framework is to develop a generic network optimisation problem to which many specific network optimisation problems can be mapped, together with a number of approaches for solving this generic problem. To solve a specific network design or configuration problem, the specific problem is mapped to the generic problem, one of the problem solvers is used to obtain a solution, and this solution is mapped back to the specific problem domain. Used in this way, the framework allows a network optimisation problem to be solved with less effort than modelling the problem and developing a bespoke algorithm to solve that model. The use of the framework is illustrated in two separate problems: the design of an enterprise network to accommodate voice and data traffic, and the configuration of a core DiffServ/MPLS network. In both cases, the framework enabled solutions to be found with less effort than a more direct approach would have required.
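
The map-solve-map-back idea can be sketched as follows. The names are invented for illustration, and the generic problem here is reduced to a minimum-cost spanning problem solved Kruskal-style; the framework's actual generic problem and solvers are richer, but the structure (generic representation, interchangeable solver, mapping layer) is the same.

```python
# Illustrative sketch of the framework's structure (names are ours).
class GenericNetworkProblem:
    """Generic form: choose links connecting all nodes at minimum cost."""
    def __init__(self, nodes, link_costs):
        self.nodes = nodes
        self.link_costs = link_costs  # {(u, v): cost}

def greedy_solver(problem):
    """One interchangeable solver: Kruskal-style minimum spanning tree."""
    parent = {n: n for n in problem.nodes}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    chosen = []
    for (u, v), _ in sorted(problem.link_costs.items(), key=lambda kv: kv[1]):
        ru, rv = find(u), find(v)
        if ru != rv:              # keep the link only if it joins components
            parent[ru] = rv
            chosen.append((u, v))
    return chosen

# Mapping layer: a specific problem (say, enterprise-network design)
# is translated into a GenericNetworkProblem, solved, and the chosen
# links are translated back into the specific domain's terms.
```

Swapping `greedy_solver` for another solver requires no change to the specific-problem mappings, which is where the claimed effort saving comes from.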

    Novel techniques in large scaleable ATM switches

    Bibliography: p. 172-178.
    This dissertation explores the research area of large scale ATM switches. The requirements for an ATM switch are determined by reviewing the ATM network architecture. These requirements lead to the discussion of an abstract ATM switch, which illustrates the components of an ATM switch that automatically scale with increasing switch size (the Input Modules and Output Modules) and those that do not (the Connection Admission Control and Switch Management systems, as well as the Cell Switch Fabric). An architecture is suggested which may result in scalable Switch Management and Connection Admission Control functions; however, the main thrust of the dissertation is confined to the cell switch fabric. The fundamental mathematical limits of ATM switches and buffer placement are presented next, emphasising the desirability of output buffering. This is followed by an overview of the possible routing strategies in a multi-stage interconnection network. A variety of space division switches are then considered, leading to a discussion of the hypercube fabric, a novel switching technique. The hypercube fabric achieves good performance with O(N(log₂N)²) scaling. The output module, resequencing, cell scheduling, and output buffering techniques are presented, leading to a complete description of the proposed ATM switch. Various traffic models are used to quantify the switch's performance, including a simple exponential inter-arrival time model, a locality of reference model, and a self-similar, bursty, multiplexed Variable Bit Rate (VBR) model. FIFO queueing is simple to implement in an ATM switch; however, more responsive queueing strategies can yield improved performance. An associative memory is presented which allows the separate queues in the ATM switch to be effectively combined into a single logical FIFO queue.
The associative memory is described in detail, and its feasibility is shown by laying out the integrated circuit masks and performing an analogue simulation of the IC's performance in SPICE3. Although optimisations to the original design were required, the feasibility of the approach is shown with a 15 ns write time and a 160 ns read time for a 32 row, 8 priority bit, 10 routing bit version of the memory. This is achieved with 2 µm technology; more advanced technologies may result in even better performance. The various traffic models and switch models are simulated in a number of runs. This shows the performance of the hypercube, which outperforms a Clos network of equivalent technology and approaches the performance of an ideal reference fabric. The associative memory yields a significant performance advantage in the hypercube network and a modest advantage in the Clos network. The performance of the switches is shown to degrade with increasing traffic density, increasing locality of reference, increasing variance in the cell rate, and increasing burst length. Interestingly, the fabrics show no real degradation in response to increasing self-similarity in the traffic. Lastly, the appendices present suggestions on how redundancy, reliability, and multicasting can be achieved in the hypercube fabric. An overview of integrated circuits is provided, along with a brief description of commercial ATM switching products. Finally, a road map to the simulation code is provided in the form of descriptions of the functionality found in all of the files within the source tree. This is intended to provide a starting point for anyone wishing to modify or extend the simulation system developed for this thesis.
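
Routing in a hypercube generally works by correcting one differing address bit per hop, so any route needs at most log₂N hops in an N-node fabric. The sketch below shows that general technique, not necessarily the dissertation's exact routing algorithm.

```python
# Dimension-order routing in a hypercube: each hop flips one bit in
# which the current node's address differs from the destination's.
def hypercube_route(src, dst, dims):
    path = [src]
    node = src
    for bit in range(dims):
        mask = 1 << bit
        if (node ^ dst) & mask:   # this address bit still differs
            node ^= mask          # traverse the link that corrects it
            path.append(node)
    return path
```

For example, in a 3-dimensional (8-node) cube, routing from node 000 to node 101 corrects bit 0 and then bit 2, visiting 000 → 001 → 101.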