2,032 research outputs found

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and the services delivered. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of providing priority to specific traffic types over coexisting services, either through explicit resource reservation or through traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (Diffserv). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on any specific traffic type; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to deliver a QoS-optimised experience to each Internet user and not just to those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with Best Effort Internet, traditional Diffserv and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
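
    As a rough illustration of the user-centric policy the abstract describes, the sketch below splits a single user's access capacity across whichever services that user is currently running, weighting delay-sensitive classes more heavily and capping, rather than starving, unresponsive bulk traffic. The class names, sensitivity weights and function are illustrative assumptions, not the CAPS algorithm itself.

```python
# Illustrative delay/loss sensitivity of each service class (assumed values,
# not taken from the thesis).
SENSITIVITY = {"voip": 3.0, "video": 2.0, "web": 1.0, "p2p": 0.25}

def user_service_shares(observed_bytes, user_capacity_bps):
    """Split one user's aggregate capacity across the services that user is
    actually running, weighting delay-sensitive traffic more heavily and
    capping, rather than starving, unresponsive bulk traffic such as P2P."""
    weights = {svc: SENSITIVITY.get(svc, 1.0)
               for svc, byte_count in observed_bytes.items() if byte_count > 0}
    total = sum(weights.values()) or 1.0
    return {svc: user_capacity_bps * w / total for svc, w in weights.items()}

# Two users with different activity profiles end up with different policies.
print(user_service_shares({"voip": 1e5, "p2p": 5e7}, 10e6))
print(user_service_shares({"video": 2e7, "web": 1e6}, 10e6))
```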

    Prediction-based techniques for the optimization of mobile networks

    Mobile cellular networks are complex systems whose behavior is characterized by the superposition of several random phenomena, most of which are related to human activities such as mobility, communications and network usage. However, when observed in their totality, the many individual components merge into more deterministic patterns, and trends start to become identifiable and predictable. In this thesis we analyze a recent branch of network optimization that is commonly referred to as anticipatory networking and that entails the combination of prediction solutions and network optimization schemes. The main intuition behind anticipatory networking is that knowing in advance what is going on in the network can help in understanding potentially severe problems and in mitigating their impact by applying solutions while they are still in their initial states. Conversely, network forecasts might also indicate a future improvement in the overall network condition (e.g., a load reduction or better signal quality reported by users). In such a case, resources can be assigned more sparingly, requiring users to rely on buffered information while waiting for the better conditions under which it will be more convenient to grant more resources. At the beginning of this thesis we survey the current anticipatory networking panorama and the many prediction and optimization solutions proposed so far. In the main body of the work, we propose our novel solutions to the problem, together with the tools and methodologies we designed to evaluate them and to perform a real-world evaluation of our schemes. By the end of this work it will be clear that anticipatory networking is not only a very promising theoretical framework, but is also feasible and can deliver substantial benefits to current and next-generation mobile networks. In fact, with both our theoretical and practical results we show evidence that more than one third of the resources can be saved, and that even larger gains can be achieved for data rate enhancements.
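
    A minimal sketch of the anticipatory-networking idea summarized above, under assumed inputs: a simple exponential-smoothing forecast of link quality drives whether the scheduler prefetches aggressively into the playout buffer or assigns resources sparingly and lets the user draw on buffered data. The forecasting method, function names and thresholds are illustrative, not the thesis's optimization schemes.

```python
def ewma_forecast(samples, alpha=0.3):
    """One-step-ahead forecast of link quality via exponential smoothing."""
    estimate = samples[0]
    for x in samples[1:]:
        estimate = alpha * x + (1 - alpha) * estimate
    return estimate

def grant_rate(predicted_mbps, buffer_s, playout_mbps, target_buffer_s=10):
    """Grant more airtime while conditions are predicted to be good and the
    buffer is short; otherwise just sustain playback from the buffer
    (thresholds are illustrative)."""
    if predicted_mbps > playout_mbps and buffer_s < target_buffer_s:
        return predicted_mbps                      # prefetch aggressively
    return min(playout_mbps, predicted_mbps)       # lean on the buffer

forecast = ewma_forecast([20, 18, 25, 30, 28])
print(forecast, grant_rate(forecast, buffer_s=4, playout_mbps=8))
```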

    Greediness control algorithm for multimedia streaming in wireless local area networks

    This work investigates the interaction between the application and transport layers while streaming multimedia in a residential Wireless Local Area Network (WLAN). Inconsistencies have been identified that can have a severe impact on the Quality of Experience (QoE) perceived by end users. This problem arises as a result of the streaming process's reliance on rate adaptation engines based on congestion avoidance mechanisms that try to obtain as much bandwidth as possible from the limited network resources. These upper transport layer mechanisms have no knowledge of the media which they are carrying and as a result treat all traffic equally. This lack of knowledge of the media carried and of the characteristics of the target devices results in fair bandwidth distribution at the transport layer but creates unfairness at the application layer. This unfairness mostly affects user-perceived quality when streaming high-quality multimedia. Essentially, bandwidth that is distributed fairly between competing video streams at the transport layer results in unfair application layer video quality distribution. Therefore, there is a need to allow application layer streaming solutions to tune the aggressiveness of transport layer congestion control mechanisms in order to create application layer QoE fairness between competing media streams by taking their device characteristics into account. This thesis proposes the Greediness Control Algorithm (GCA), an upper transport layer mechanism that eliminates quality inconsistencies caused by rate/congestion control mechanisms while streaming multimedia in wireless networks. GCA extends an existing solution (TCP Friendly Rate Control (TFRC)) by introducing two parameters that allow the streaming application to tune the aggressiveness of the rate estimation and, as a result, introduce a fair distribution of quality at the application layer. The thesis shows that this rate adaptation technique, combined with a scalable video format, allows increased overall system QoE. Extensive simulation analysis demonstrates that this form of rate adaptation increases the overall user QoE achieved across a number of devices operating within the same home WLAN.
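
    For concreteness, the sketch below combines the standard TFRC throughput equation (RFC 5348) with a single illustrative "greediness" factor that a streaming application could use to scale its claimed rate up or down according to device characteristics. GCA itself introduces two tuning parameters; the factor shown here and its placement are assumptions, not the thesis's actual formulation.

```python
from math import sqrt

def tfrc_rate(s, rtt, p, t_rto=None, b=1):
    """Allowed sending rate (bytes/s) from the TFRC/TCP throughput equation,
    for segment size s, round-trip time rtt, loss event rate p, and
    retransmission timeout t_rto (defaults to 4 * rtt)."""
    t_rto = t_rto if t_rto is not None else 4 * rtt
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2))
    return s / denom

def gca_like_rate(s, rtt, p, greediness=1.0):
    """Scale the TFRC estimate so a stream feeding a large display can claim
    more than its transport-layer 'fair' share and a small device less
    (illustrative knob, not the thesis's two GCA parameters)."""
    return greediness * tfrc_rate(s, rtt, p)

print(gca_like_rate(1460, 0.05, 0.01, greediness=0.8))   # small handheld
print(gca_like_rate(1460, 0.05, 0.01, greediness=1.2))   # large-screen device
```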

    Theories and Models for Internet Quality of Service

    We survey recent advances in theories and models for Internet Quality of Service (QoS). We start with the theory of network calculus, which lays the foundation for support of deterministic performance guarantees in networks, and illustrate its applications to integrated services, differentiated services, and streaming media playback delays. We also present mechanisms and architecture for scalable support of guaranteed services in the Internet, based on the concept of a stateless core. Methods for scalable control operations are also briefly discussed. We then turn our attention to statistical performance guarantees, and describe several new probabilistic results that can be used for a statistical dimensioning of differentiated services. Lastly, we review recent proposals and results in supporting performance guarantees in a best effort context. These include models for elastic throughput guarantees based on TCP performance modeling, techniques for some quality of service differentiation without access control, and methods that allow an application to control the performance it receives, in the absence of network support
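
    As a concrete example of the deterministic guarantees that network calculus provides, the standard leaky-bucket/rate-latency bound is reproduced below; the notation is the textbook one and is not specific to this survey.

```latex
% A flow constrained by the arrival curve \alpha(t) = \sigma + \rho t, served
% by a rate-latency service curve \beta(t) = R\,(t - T)^{+} with R \ge \rho,
% satisfies the following worst-case bounds:
\[
  \text{delay} \;\le\; T + \frac{\sigma}{R},
  \qquad
  \text{backlog} \;\le\; \sigma + \rho\, T .
\]
```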

    Data-Driven Intelligent Scheduling For Long Running Workloads In Large-Scale Datacenters

    Cloud computing is becoming a fundamental facility of society today. Large-scale public and private cloud datacenters, spanning millions of servers as warehouse-scale computers, support most of the business of Fortune-500 companies and serve billions of users around the world. Unfortunately, modern industry-wide average datacenter utilization is as low as 6% to 12%. Low utilization not only negatively impacts the operational and capital components of cost efficiency, but also becomes the scaling bottleneck due to the limits of electricity delivered by the nearby utility. It is therefore both critical and challenging to improve multi-resource efficiency for global datacenters. Additionally, with the great commercial success of diverse big data analytics services, enterprise datacenters are evolving to host heterogeneous computation workloads, including online web services, batch processing, machine learning, streaming computing, interactive query and graph computation, on shared clusters. Most of these are long-running workloads that leverage long-lived containers to execute tasks. We review datacenter resource scheduling work from the last 15 years. Most previous work is designed to maximize cluster efficiency for short-lived tasks in batch processing systems like Hadoop, and is not suitable for modern long-running workloads of Microservices, Spark, Flink, Pregel, Storm or TensorFlow-like systems. It is urgent to develop new, effective scheduling and resource allocation approaches to improve efficiency in large-scale enterprise datacenters. In this dissertation, we present the first work to define and identify the problems, challenges and scenarios of scheduling and resource management for diverse long-running workloads in modern datacenters. Such workloads rely on predictive scheduling techniques to perform reservation, auto-scaling, migration or rescheduling, which pushes us to pursue and explore more intelligent scheduling techniques based on adequate predictive knowledge. We specify what intelligent scheduling is, which abilities are necessary to achieve it, and how to leverage it to transform NP-hard online scheduling problems into solvable offline scheduling problems. We designed and implemented an intelligent cloud datacenter scheduler which automatically performs resource-to-performance modeling, predictive optimal reservation estimation, and QoS (interference)-aware predictive scheduling to maximize multi-dimensional resource efficiency (CPU, memory, network, disk I/O) while strictly guaranteeing service level agreements (SLAs) for long-running workloads. Finally, we introduce large-scale co-location techniques for executing long-running and other workloads on the shared global datacenter infrastructure of Alibaba Group, which effectively improve cluster utilization from 10% to an average of 50%. This goes far beyond scheduling alone, involving technique evolutions in IDC facilities, networks, physical datacenter topology, storage, server hardware, operating systems and containerization. We demonstrate its effectiveness through an analysis of the newest Alibaba public cluster trace from 2017, and we are the first to reveal a global view of the scenarios, challenges and status of Alibaba's large-scale global datacenters through data, including big promotion events like Double 11. Data-driven intelligent scheduling methodologies and effective infrastructure co-location techniques are critical and necessary to pursue maximized multi-resource efficiency in modern large-scale datacenters, especially for long-running workloads.
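
    A hypothetical sketch of the QoS (interference)-aware predictive placement idea described above: each candidate host is scored by its predicted bottleneck utilization across CPU, memory, network and disk, plus a penalty for co-located latency-critical tasks. The data layout, scoring terms and penalty are illustrative assumptions, not the dissertation's scheduler.

```python
def predicted_usage(host, container):
    """Predicted multi-dimensional usage if the container lands on this host."""
    return {d: host["used"][d] + container["predicted"][d]
            for d in ("cpu", "mem", "net", "disk")}

def place(container, hosts, interference_penalty=0.2):
    """Pick the feasible host with the lowest predicted bottleneck utilization,
    penalizing hosts already running latency-critical (LC) workloads."""
    best, best_score = None, float("inf")
    for h in hosts:
        usage = predicted_usage(h, container)
        if any(usage[d] > h["capacity"][d] for d in usage):      # hard capacity/SLA guardrail
            continue
        score = max(usage[d] / h["capacity"][d] for d in usage)  # bottleneck dimension
        score += interference_penalty * h.get("lc_tasks", 0)     # co-location risk
        if score < best_score:
            best, best_score = h, score
    return best

hosts = [
    {"name": "host-0", "lc_tasks": 2,
     "used": {"cpu": 20, "mem": 40, "net": 10, "disk": 5},
     "capacity": {"cpu": 64, "mem": 128, "net": 40, "disk": 100}},
    {"name": "host-1", "lc_tasks": 0,
     "used": {"cpu": 50, "mem": 30, "net": 5, "disk": 10},
     "capacity": {"cpu": 64, "mem": 128, "net": 40, "disk": 100}},
]
container = {"predicted": {"cpu": 8, "mem": 16, "net": 2, "disk": 1}}
print(place(container, hosts)["name"])
```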

    Resource Allocation for Cellular/WLAN Integrated Networks

    The next-generation wireless communications have been envisioned to be supported by heterogeneous networks using various wireless access technologies. The popular cellular networks and wireless local area networks (WLANs) present perfectly complementary characteristics in terms of service capacity, mobility support, and quality-of-service (QoS) provisioning. The cellular/WLAN interworking is thus an effective way to promote the evolution of wireless networks. As an essential aspect of the interworking, resource allocation is vital for efficient utilization of the overall resources. In particular, multi-service provisioning can be enhanced with cellular/WLAN interworking by taking advantage of the complementary network strengths and an overlay structure. Call assignment/reassignment strategies and admission control policies are effective resource allocation mechanisms for the cellular/WLAN integrated network. Initially, the incoming calls are distributed to the overlay cell or WLAN according to call assignment strategies, which are enhanced with admission control policies in the target network. Further, call reassignment can be enabled to dynamically transfer the traffic load between the overlay cell and WLAN via vertical handoff. By these means, the multi-service traffic load can be properly shared between the interworked systems. In this thesis, we investigate the load sharing problem for this heterogeneous wireless overlay network. Three load sharing schemes with different call assignment/reassignment strategies and admission control policies are proposed and analyzed. Effective analytical models are developed to evaluate the QoS performance and determine the call admission and assignment parameters. First, an admission control scheme with service-differentiated call assignment is studied to gain insights into the effects of load sharing on interworking effectiveness. Then, the admission scheme is extended by using randomized call assignment to enable distributed implementation. Also, we analyze the impact of user mobility and data traffic variability. Further, an enhanced call assignment strategy is developed to exploit the heavy-tailedness of data call size. Last, the study is extended to a multi-service scenario. The overall resource utilization and QoS satisfaction are improved substantially by taking into account the multi-service traffic characteristics, such as the delay-sensitivity of voice traffic, elasticity and heavy-tailedness of data traffic, and rate-adaptiveness of video streaming traffic.
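
    The sketch below illustrates, under assumed parameters, a randomized service-differentiated call assignment of the kind studied in the thesis: delay-sensitive voice calls prefer the cell, elastic data calls prefer the WLAN, and a blocked first choice overflows to the other network subject to a simple admission check. The probabilities, capacities and function names are illustrative, not the thesis's analytical model.

```python
import random

# Probability of first trying the WLAN, per service class (illustrative values).
ASSIGN_TO_WLAN_PROB = {"voice": 0.2, "data": 0.8}

def admit(network):
    """Simple admission control: accept only if spare channels remain."""
    return network["in_use"] < network["capacity"]

def assign_call(service, cell, wlan):
    """Randomized first-choice assignment with overflow to the other network."""
    prefer_wlan = random.random() < ASSIGN_TO_WLAN_PROB[service]
    first, second = (wlan, cell) if prefer_wlan else (cell, wlan)
    for net in (first, second):
        if admit(net):
            net["in_use"] += 1
            return net["name"]
    return "blocked"

cell = {"name": "cell", "capacity": 30, "in_use": 28}
wlan = {"name": "wlan", "capacity": 50, "in_use": 10}
print(assign_call("data", cell, wlan))
```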

    Video streaming over the internet using application layer multicast

    Multicast is a very important communication paradigm. However, the deployment of multicast at the IP layer has been very slow, due to development and deployment issues such as ISPs' lack of incentives to update routers and inter-operability problems among multicast routing protocols. Application Layer Multicast (ALM) is a good alternative, where participating peers organize themselves into a logical overlay network atop the physical links and data is "tunneled" between peers via unicast links. The distinctive difference between IP multicast and ALM is that in ALM, data replication and forwarding functionalities are performed by participating peers (a.k.a. end systems), rather than by the routers as in Internet Protocol (IP) multicast. This fundamental difference enables ALM to circumvent the development and deployment issues of IP multicast by exploiting the resources (e.g., CPU cycles, storage, and access bandwidth) at the edge of the network. Nevertheless, it also raises other challenges, as peers are not as stable as routers since they may join and depart the ongoing session at will. In this thesis, we address some of these challenges, summarized as follows. First, most current P2P or ALM streaming systems are equipped with a non-scalable membership management algorithm, greatly hindering their applicability to large-scale implementations over the Internet: they either rely on a central entity to handle group membership, or simply assume that all group members are visible to each other and use flooding as the main mechanism to disseminate membership-related updates to all participating group members. This implies that they are only applicable to small groups. Second, one of ALM's prominent features, flexibility, has not been fully exploited: moving the multicast functionalities from a lower layer (the IP layer) to a higher layer (the application layer) can greatly facilitate the integration of Quality-of-Service (QoS) support. The end-to-end philosophy states that it is better to leave those functionalities to higher layers because the heterogeneity among users' requirements can be handled much better by end users than by the network. However, QoS, and in particular reliability, has not been thoroughly addressed in existing ALM schemes. Third, admission control algorithms are essential to the success of any ALM system, due to the fact that in ALM, each peer acts as both a client and a server. On the other hand, the heterogeneity among peers, in terms of their computational power, storage capacity, and access bandwidth, further complicates the design of a good admission control scheme. Several contributions are made to address the aforementioned research challenges, outlined as follows. The first contribution is a gossip-based membership management algorithm that is able to collect and disseminate membership-related information under a high rate of churn, using relatively low communication overhead. The second contribution is a reliability-centric multicast tree construction algorithm that greatly enhances peers' perceived reliability. The third contribution is a QoS-aware tree construction algorithm that accommodates the heterogeneity among peers in terms of access bandwidth, network distance, and reliability. The last contribution is the identification of the admission control problem in overlay video streaming.
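
    As a rough illustration of the gossip-based membership management contribution, the sketch below has every peer keep a bounded partial view and push a random sample of it to one randomly chosen neighbour each round, so membership information spreads without a central entity or flooding. View sizes, fan-out and the push-only exchange are assumptions, not the thesis's algorithm.

```python
import random

VIEW_SIZE = 8      # bound on each peer's partial view
SAMPLE_SIZE = 4    # entries pushed per gossip exchange

def gossip_round(views):
    """One push-gossip round over a dict {peer_id: set(known peer_ids)}:
    every peer sends a random sample of its view (plus its own id) to one
    randomly chosen neighbour, which merges it and truncates back to
    VIEW_SIZE. No central membership server is involved."""
    for peer in list(views):
        view = views[peer]
        if not view:
            continue
        partner = random.choice(sorted(view))
        candidates = sorted(view | {peer})
        sample = set(random.sample(candidates, min(SAMPLE_SIZE, len(candidates))))
        merged = sorted((views[partner] | sample) - {partner})
        views[partner] = set(random.sample(merged, min(VIEW_SIZE, len(merged))))

# Tiny example: peer "a" only knows "b", but knowledge spreads over rounds.
views = {"a": {"b"}, "b": {"c"}, "c": {"a", "b"}}
for _ in range(3):
    gossip_round(views)
print(views)
```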

    ATOM: a distributed system for video retrieval via ATM networks

    The convergence of high speed networks, powerful personal computer processors and improved storage technology has led to the development of video-on-demand services to the desktop that provide interactive controls and deliver Client-selected video information on a Client-specified schedule. This dissertation presents the design of a video-on-demand system for Asynchronous Transfer Mode (ATM) networks, incorporating an optimised topology for the nodes in the system and an architecture for Quality of Service (QoS). The system is called ATOM, which stands for Asynchronous Transfer Mode Objects. Real-time video playback over a network consumes large bandwidth and requires strict bounds on delay and error in order to satisfy the visual and auditory needs of the user. Streamed video is a fundamentally different type of traffic from conventional IP (Internet Protocol) data, since files are viewed in real time rather than downloaded and then viewed. This streaming data must arrive at the Client decoder when needed or it loses its interactive value. Characteristics of multimedia data are investigated, including the use of compression to reduce the excessive bit rates and storage requirements of digital video. The suitability of MPEG-1 for video-on-demand is presented. Having considered the bandwidth, delay and error requirements of real-time video, the next step in designing the system is to evaluate current models of video-on-demand. The distributed nature of four such models is considered, focusing on how Clients discover Servers and locate videos. This evaluation eliminates a centralized approach in which Servers have no logical or physical connection to any other Servers in the network, and also introduces the concept of a selection strategy to find alternative Servers when Servers are fully loaded. During this investigation, it becomes clear that another entity (called a Broker) could provide a central repository for Server information. Clients have logical access to all videos on every Server simply by connecting to a Broker. The ATOM Model for distributed video-on-demand is then presented by way of a diagram of the topology showing the interconnection of Servers, Brokers and Clients; a description of each node in the system; a list of the connectivity rules; a description of the protocol; a description of the Server selection strategy; and the protocol if a Broker fails. A sample network is provided with an example of video selection, and design issues are raised and solved, including how nodes discover each other, a justification for using a mesh topology for the Broker connections, how Connection Admission Control (CAC) is achieved, how customer billing is achieved and how information security is maintained. A calculation of the number of Servers and Brokers required to service a particular number of Clients is presented. The advantages of ATOM are described. The underlying distributed connectivity is abstracted away from the Client. Redundant Server/Broker connections are eliminated and the total number of connections in the system is minimized by the rule stating that Clients and Servers may only connect to one Broker at a time. This reduces the total number of Switched Virtual Circuits (SVCs), which are a performance hindrance in ATM. ATOM can be easily scaled by adding more Servers, which increases the total system capacity in terms of storage and bandwidth. In order to transport video satisfactorily, a guaranteed end-to-end Quality of Service architecture must be in place.
    The design methodology for such an architecture is investigated, starting with a review of current QoS architectures in the literature, which highlights important definitions including a flow, a service contract and flow management. A flow is a single media source which traverses resource modules between Server and Client. The concept of a flow is important because it enables the identification of the areas requiring consideration when designing a QoS architecture. It is shown that ATOM adheres to the principles motivating the design of a QoS architecture, namely the Integration, Separation and Transparency principles. The issue of mapping human requirements to network QoS parameters is investigated and the action of a QoS framework is introduced, including several possible causes of QoS degradation. The design of the ATOM Quality of Service Architecture (AQOSA) is then presented. AQOSA consists of 11 modules which interact to provide end-to-end QoS guarantees for each stream. Several important results arise from the design. It is shown that intelligent choice of stored videos with respect to peak bandwidth can improve overall system capacity. The concept of disk striping over a disk array is introduced and a Data Placement Strategy is designed which eliminates disk hot spots (i.e. overuse of some disks whilst others lie idle). A novel parameter (the B-P Ratio) is presented which can be used by the Server to predict future bursts from each video stream. The use of Traffic Shaping to decrease the load on the network from each stream is presented. Having investigated four algorithms for rewind and fast-forward in the literature, a rewind and fast-forward algorithm is presented. The method produces a significant decrease in bandwidth, and the resultant stream is very constant, reducing the chance that the stream will add to network congestion. The C++ classes of the Server, Broker and Client are described, emphasizing the interaction between classes. The use of ATOM in the Virtual Private Network and the multimedia teaching laboratory is considered. Conclusions and recommendations for future work are presented. It is concluded that digital video applications require high-bandwidth, low-error, low-delay networks; a video-on-demand system to support large Client volumes must be distributed, not centralized; control and operation (transport) must be separated; the number of ATM Switched Virtual Circuits (SVCs) must be minimized; the increased connections caused by the Broker mesh are justified by the distributed information gain; and a Quality of Service solution must address end-to-end issues. It is recommended that a web front-end for Brokers be developed; that the system be tested in a wide-area ATM network; that the Broker protocol be tested by forcing failure of a Broker; and that a proprietary file format for disk striping be implemented.
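
    As a small illustration of how a data placement strategy might stripe video blocks over a disk array to avoid hot spots, the sketch below maps consecutive blocks round-robin across disks and staggers each title's starting disk. The block granularity and staggering rule are assumptions, not AQOSA's actual Data Placement Strategy.

```python
def stripe_blocks(num_blocks, num_disks, start_disk=0):
    """Map consecutive video blocks onto disks round-robin so that playback
    of any single title spreads its load evenly across the whole array."""
    return {blk: (start_disk + blk) % num_disks for blk in range(num_blocks)}

# Staggering each title's starting disk further decorrelates concurrent streams.
layout_title_a = stripe_blocks(num_blocks=12, num_disks=4, start_disk=0)
layout_title_b = stripe_blocks(num_blocks=12, num_disks=4, start_disk=1)
print(layout_title_a[0], layout_title_b[0])   # 0, 1
```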