198 research outputs found

    Estimation of the parameters of token-buckets in multi-hop environments

    Full text link
    Bandwidth verification in shaping scenarios receives much attention from both operators and clients because of its impact on Quality of Service (QoS). As a result, measuring shapers’ parameters, namely the Committed Information Rate (CIR), Peak Information Rate (PIR) and Maximum Burst Size (MBS), is a relevant issue when assessing QoS. In this paper, we present a novel algorithm, TBCheck, which accurately measures such parameters with minimal intrusiveness. These measurements are the cornerstone for the validation of Service Level Agreements (SLA) with multiple shaping elements along an end-to-end path. As a further outcome of this measurement method, we define a formal taxonomy of multi-hop shaping scenarios. A thorough performance evaluation covering this taxonomy shows the advantages of TBCheck over other state-of-the-art tools, yielding more accurate results even in the presence of cross-traffic. Additionally, our findings show that MBS estimation is infeasible when the link load is high, regardless of the measurement technique, because the token-bucket will always be empty. Consequently, we propose an estimation policy which maximizes accuracy by measuring CIR during busy hours, and PIR and MBS during off-peak hours. This work was partially supported by the Spanish Ministry of Economy and Competitiveness and the European Regional Development Fund under the project Tráfica (MINECO/FEDER TEC2015-69417-C2-1-R).
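
    The token-bucket behaviour this abstract relies on is easy to sketch: a back-to-back probe train sent at the peak rate first drains the bucket (its depth corresponds to MBS) and is then admitted only at the fill rate (CIR). The following Python sketch illustrates that mechanism under assumed parameter values; it is not the TBCheck algorithm itself, and all names and numbers are hypothetical.

        # Minimal single-token-bucket model; illustrative only, not TBCheck.
        class TokenBucket:
            def __init__(self, cir_bps, mbs_bytes):
                self.rate = cir_bps / 8.0      # token fill rate in bytes/s (CIR)
                self.depth = mbs_bytes         # bucket depth (MBS)
                self.tokens = mbs_bytes        # start with a full bucket
                self.last = 0.0                # time of the last update

            def conforms(self, t, size_bytes):
                """Return True if a packet of size_bytes arriving at time t conforms."""
                self.tokens = min(self.depth, self.tokens + (t - self.last) * self.rate)
                self.last = t
                if self.tokens >= size_bytes:
                    self.tokens -= size_bytes
                    return True
                return False

        # A probe train sent at the peak rate (PIR): the first packets pass by
        # draining the bucket (their count reflects MBS); afterwards conforming
        # packets are spaced at roughly CIR.
        pir_bps, cir_bps, size = 100e6, 20e6, 1500      # hypothetical values
        bucket = TokenBucket(cir_bps, mbs_bytes=30_000)
        for i in range(40):
            t = i * size * 8 / pir_bps                  # arrival times at PIR
            print(f"packet {i}: {'pass' if bucket.conforms(t, size) else 'drop'}")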

    Low-Latency Hard Real-Time Communication over Switched Ethernet

    Get PDF
    With the upsurge in demand for high-bandwidth networked real-time applications in cost-sensitive environments, a key issue is to take advantage of commodity components that offer many times the throughput of classical real-time solutions. The starting hypothesis of this dissertation was that, with fine-grained traffic shaping as the only means of node cooperation, it should be possible to achieve lower guaranteed delays and higher bandwidth utilization than with traditional approaches, even though Switched Ethernet does not support policing in the switches as other network architectures do. This thesis presents the application of traffic shaping to Switched Ethernet and validates the hypothesis. It shows, both theoretically and practically, how commodity Switched Ethernet technology can be used for low-latency hard real-time communication, and what operating-system support is needed for an efficient implementation.
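
    As a rough illustration of how per-flow shaping translates into a guaranteed delay at a switch output port, the sketch below computes a standard network-calculus style bound for leaky-bucket-constrained flows sharing a FIFO link. This is a generic textbook bound under assumed numbers, not the dissertation's own analysis.

        # Worst-case FIFO delay bound for shaped flows; illustrative sketch only.
        def fifo_delay_bound(bursts_bytes, rates_bps, link_bps, max_frame_bytes=1522):
            """Worst-case queuing delay (s) at a FIFO link of capacity link_bps
            serving flows shaped to (burst, rate) pairs, assuming the link is
            not overloaded."""
            assert sum(rates_bps) <= link_bps, "link would be overloaded"
            total_burst = sum(bursts_bytes)
            # All bursts may arrive at once; one maximum-size frame may already
            # be in service at the port.
            return (total_burst + max_frame_bytes) * 8.0 / link_bps

        # Ten flows, each shaped to a 3 kB burst at 5 Mbit/s, on a 100 Mbit/s port:
        print(fifo_delay_bound([3000] * 10, [5e6] * 10, 100e6))   # about 2.5 ms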

    From burstiness characterisation to traffic control strategy: a unified approach to integrated broadband networks

    Full text link
    The major challenge in the design of an integrated network is the integration and support of a wide variety of applications. To provide the requested performance guarantees, a traffic control strategy has to allocate network resources according to the characteristics of the input traffic. In particular, the chosen traffic characterisation is significant in network design. In this thesis, a traffic stream is characterised based on a virtual queue principle. This approach provides the necessary link between network resource allocation and traffic control. It is difficult to guarantee performance without prior knowledge of the worst-case behaviour under statistical multiplexing. Accordingly, we investigate worst-case scenarios in a statistical multiplexer. We evaluate upper bounds on the probability of buffer overflow in a multiplexer and on the data loss of an input stream. It is found that in networks without traffic control, simply controlling the utilisation of a multiplexer does not improve the ability to guarantee performance. Instead, the available buffer capacity and the degree of correlation among the input traffic have the dominant effect on loss performance. The leaky bucket mechanism has been proposed to protect ATM networks from performance degradation due to congestion. We study the leaky bucket mechanism as a regulation element that protects an input stream, evaluate its optimal parameter settings and analyse its worst-case performance. To investigate its effectiveness, we analyse the delay performance of a leaky-bucket-regulated multiplexer. Numerical results show that the leaky bucket mechanism can provide well-behaved traffic with a guaranteed delay bound in the presence of misbehaving traffic. Using the leaky bucket mechanism, a general strategy based on burstiness characterisation, called the LB-Dynamic policy, is developed for packet scheduling. This traffic control strategy is closely related to the allocation of both bandwidth and buffer in each switching node. In addition, the LB-Dynamic policy monitors the allocated network resources and guarantees the network performance of each established connection, irrespective of the traffic intensity and arrival patterns of incoming packets. Simulation studies demonstrate that the LB-Dynamic policy is able to provide the requested service quality for heterogeneous traffic in integrated broadband networks.
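
    A minimal sketch of the leaky bucket acting as a regulation element, as discussed above, is given below: non-conforming packets are delayed until enough tokens have accumulated, so a burst is smoothed to the sustained rate. The function name and parameter values are hypothetical, and the LB-Dynamic policy itself is not reproduced here.

        # Leaky-bucket regulator that delays (rather than drops) excess packets.
        def leaky_bucket_release_times(arrivals, sizes, rho_bps, sigma_bytes):
            """Given packet arrival times (s) and sizes (bytes), return the
            earliest times at which a (sigma, rho) leaky bucket releases them."""
            rho = rho_bps / 8.0                   # sustained rate in bytes/s
            tokens, last, out = float(sigma_bytes), 0.0, []
            for t, size in zip(arrivals, sizes):
                now = max(t, last)                # FIFO: never overtake the previous packet
                tokens = min(sigma_bytes, tokens + (now - last) * rho)
                if tokens >= size:
                    release = now                 # conforming: pass immediately
                else:
                    release = now + (size - tokens) / rho   # wait for the token deficit
                    tokens = float(size)
                tokens -= size
                last = release
                out.append(release)
            return out

        # A burst of five 1500-byte packets arriving together is smoothed to rho:
        print(leaky_bucket_release_times([0] * 5, [1500] * 5,
                                         rho_bps=1e6, sigma_bytes=2000))
        # -> [0, 0.008, 0.02, 0.032, 0.044]: after the burst allowance, packets
        #    are spaced 12 ms apart, i.e. 1500 bytes at 1 Mbit/s.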

    Telecommunications Networks

    Get PDF
    This book guides readers through the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation and quality of service. This book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering and Routing.

    Quality of service modeling and analysis for Carrier Ethernet

    Get PDF
    Today, Ethernet is moving into the mainstream, evolving into a carrier-grade technology. Termed Carrier Ethernet, it is expected to overcome most of the shortcomings of native Ethernet. It is envisioned to carry services end-to-end, serving corporate data networking and broadband access demands as well as backhauling wireless traffic. As the penetration of Ethernet increases, the offered Quality of Service (QoS) will become increasingly important and a distinguishing factor between different service providers. The challenge is to meet the QoS requirements of end applications, such as response times, throughput, delay and jitter, by managing the network resources at hand. Since Ethernet was not designed to operate in large public networks, it does not possess functionalities to address this issue. In this thesis we propose and analyze mechanisms which improve the QoS performance of Ethernet, enabling it to meet the demands of current and next-generation services and applications.

    Quality-of-service management in IP networks

    Get PDF
    Quality of Service (QoS) in Internet Protocol (IP) networks has been the subject of active research over the past two decades. Integrated Services (IntServ) and Differentiated Services (DiffServ) QoS architectures have emerged as proposed standards for resource allocation in IP networks. These two QoS architectures support the need for multiple traffic queuing systems to allow resource partitioning for the heterogeneous applications making use of the networks. There have been a number of specifications or proposals for the number of traffic queuing classes (Class of Service (CoS)) that will support integrated services in IP networks, but none has provided verification in the form of analytical or empirical investigation to prove that its specification or proposal is optimal. Despite the existence of the two standard QoS architectures and the large volume of research that has been carried out on IP QoS, its deployment still remains elusive in the Internet. This is not unconnected with the complexities associated with some aspects of the standard QoS architectures. [Continues.]
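
    To make the notion of multiple traffic queuing classes concrete, the sketch below implements strict-priority service over a handful of CoS queues. The EF/AF/best-effort layout is a common DiffServ convention used purely for illustration; it is not the specific class count investigated in the thesis.

        # Strict-priority scheduler over a few CoS queues; illustrative sketch only.
        from collections import deque

        CLASSES = ["EF", "AF", "BE"]          # highest to lowest priority (assumed layout)

        class PriorityScheduler:
            def __init__(self):
                self.queues = {c: deque() for c in CLASSES}

            def enqueue(self, cos, packet):
                self.queues[cos].append(packet)

            def dequeue(self):
                """Serve the highest-priority non-empty queue; None if all are empty."""
                for c in CLASSES:
                    if self.queues[c]:
                        return self.queues[c].popleft()
                return None

        sched = PriorityScheduler()
        sched.enqueue("BE", "bulk-1")
        sched.enqueue("EF", "voice-1")
        print(sched.dequeue())                # -> "voice-1": expedited traffic served first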

    Network environment for testing peer-to-peer streaming applications

    Get PDF
    Peer-to-Peer (P2P) streaming applications are an emerging trend in content distribution. A reliable network environment was needed to test their capabilities and performance limits, which this thesis focuses on. Furthermore, some experimental tests in the environment were performed with an application implemented in the Department of Communications Engineering (DCE) at Tampere University of Technology. For practical reasons, the testing environment was assembled in a teaching laboratory at the DCE premises. The environment was built using a centralized architecture, in which a Linux emulation node, the WANemulator, introduces realistic packet loss, delay and jitter into the network. After an extensive literature survey, an extension to Iproute2's tc utility, NetEm, was chosen to be responsible for the network link emulation at the WANemulator. The peers run inside VirtualBox images, which are used on the Linux computers to keep the laboratory suitable for teaching purposes. In addition to the network emulation, Linux traffic control mechanisms were used both at the WANemulator and in VirtualBox's virtual machines to limit the traffic rates of the peers. Used together, emulation and rate limitation resemble the statistical behaviour of the Internet quite closely. Virtualization overhead limited the maximum number of Virtual Machines (VMs) at each laboratory computer to two. Also, a peculiar feature in VirtualBox's bridge implementation reduced the network capabilities of the VMs. However, the bottleneck in the environment is the centralized architecture, in which all of the traffic is routed through the WANemulator. The environment proved reliable with the chosen streamed content and 160 peers, but by tuning the parameters of the WANemulator larger overlays might be achievable. In addition, distributed emulation should be possible with the environment, but it was not tested. The results from the experimental tests performed with the P2P streaming application showed the application to be functional under mobile network conditions. The designed network environment has been tested to work reliably, enables reasonable scalability and provides a better possibility to emulate the networking characteristics of the Internet than an ordinary local area network environment.
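
    The kind of link emulation and rate limiting described above is typically configured with Linux tc. The sketch below shows one common recipe (netem for delay, jitter and loss, with a tbf child qdisc for rate limiting); the interface name and all numeric values are hypothetical and do not reproduce the thesis' actual settings.

        # Illustrative tc configuration of the kind used on a WANemulator-style node.
        import subprocess

        IFACE = "eth0"                         # hypothetical emulation interface

        def run(cmd, check=True):
            print("+", cmd)
            subprocess.run(cmd.split(), check=check)   # requires root privileges

        # Remove any existing root qdisc (ignore the error if none exists).
        run(f"tc qdisc del dev {IFACE} root", check=False)

        # netem: 50 ms mean delay with 10 ms jitter and 0.5% packet loss.
        run(f"tc qdisc add dev {IFACE} root handle 1:0 netem delay 50ms 10ms loss 0.5%")

        # tbf attached below netem: limit the peer's rate to 2 Mbit/s.
        run(f"tc qdisc add dev {IFACE} parent 1:1 handle 10: "
            f"tbf rate 2mbit burst 32kbit latency 400ms")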

    End to End Inter-domain Quality of Service Provisioning

    Get PDF