42 research outputs found

    QuickCast: Fast and Efficient Inter-Datacenter Transfers using Forwarding Tree Cohorts

    Full text link
    Large inter-datacenter transfers are crucial for cloud service efficiency and are increasingly used by organizations that operate dedicated wide area networks between datacenters. Recent work uses multicast forwarding trees to reduce the bandwidth needs and improve completion times of point-to-multipoint transfers. Using a single forwarding tree per transfer, however, leads to poor performance because the slowest receiver dictates the completion time for all receivers. Using multiple forwarding trees per transfer alleviates this concern, since the average receiver can finish early; if done naively, however, bandwidth usage also increases, and it is a priori unclear how best to partition receivers, how to construct the multiple trees, and how to determine the rate and schedule of flows on these trees. This paper presents QuickCast, a first solution to these problems. Using simulations on real-world network topologies, we see that QuickCast can speed up the average receiver's completion time by as much as 10× while using only 1.04× more bandwidth; further, the completion time for all receivers also improves by as much as 1.6× at high loads. Comment: [Extended Version] Accepted for presentation at IEEE INFOCOM 2018, Honolulu, HI
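    The abstract does not spell out how receivers are grouped; as a rough illustration of the cohort idea (not QuickCast's actual algorithm), the sketch below partitions receivers by their bottleneck rate so that fast receivers are no longer held back by slow ones. All names, rates, and the fixed cohort count are assumptions made for this example.

    # Hypothetical sketch: group receivers of a point-to-multipoint transfer into
    # cohorts by bottleneck rate, so each cohort can be served over its own
    # forwarding tree. This illustrates the general idea, not QuickCast itself.
    def partition_receivers(bottleneck_rate, num_cohorts=2):
        """bottleneck_rate: dict mapping receiver -> achievable rate (e.g., Gbps)."""
        ordered = sorted(bottleneck_rate, key=bottleneck_rate.get, reverse=True)
        size = -(-len(ordered) // num_cohorts)  # ceiling division
        return [ordered[i:i + size] for i in range(0, len(ordered), size)]

    def completion_times(volume_gb, bottleneck_rate, cohorts):
        """Each cohort finishes when its slowest member does."""
        return [volume_gb / min(bottleneck_rate[r] for r in c) for c in cohorts]

    rates = {"r1": 10.0, "r2": 9.0, "r3": 1.0, "r4": 0.9}
    cohorts = partition_receivers(rates)
    print(cohorts)                                # [['r1', 'r2'], ['r3', 'r4']]
    print(completion_times(8.0, rates, cohorts))  # fast cohort no longer waits for the slow one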

    Videosisällön jakelu Internetin välityksellä (Video Content Delivery over the Internet)

    Get PDF
    Popularity of multimedia streaming services has created great demand for reliable and effective content delivery over unreliable networks such as the Internet. Currently, a significant part of Internet data traffic is generated by video streaming applications. Multimedia streaming services are often bandwidth-heavy and are sensitive to delays and other varying network conditions. To address the high demands of real-time multimedia streaming applications, specialized solutions called content delivery networks (CDNs) have emerged. A content delivery network consists of many geographically distributed replica servers, often deployed close to the end users. This study consists of two parts and a set of expert interviews. The first part explores the development of video technologies and their relation to network bandwidth requirements. The second part presents the content delivery mechanisms used for video distribution over the Internet. Lastly, interviews with selected experts were used to gain more relevant and realistic insights for the first two parts. The results offer a wide overview of findings related to content delivery, ranging from streaming techniques to quality of experience, and discuss how developments in video quality and compression could affect future networks and which content delivery models are most widely used in the modern Internet.
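    As a minimal illustration of the replica-server idea described above (not a mechanism taken from the thesis itself), a client can simply be directed to the replica with the lowest measured round-trip time; the hostnames and RTT values below are made up.

    # Hypothetical sketch of the core CDN idea: send each client to the replica
    # server with the lowest measured round-trip time.
    def pick_replica(rtt_ms):
        """rtt_ms: dict mapping replica hostname -> measured RTT in milliseconds."""
        return min(rtt_ms, key=rtt_ms.get)

    measured = {"edge-helsinki": 4.1, "edge-frankfurt": 27.5, "edge-virginia": 110.0}
    print(pick_replica(measured))  # 'edge-helsinki' for a user in Finland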

    Video Caching, Analytics and Delivery at the Wireless Edge: A Survey and Future Directions

    Get PDF
    Future wireless networks will provide high-bandwidth, low-latency, and ultra-reliable Internet connectivity to meet the requirements of different applications, ranging from mobile broadband to the Internet of Things. To this aim, mobile edge caching, computing, and communication (edge-C3) have emerged to bring network resources (i.e., bandwidth, storage, and computing) closer to end users. Edge-C3 improves network resource utilization as well as the quality of experience (QoE) of end users. Recently, several video-oriented mobile applications (e.g., live content sharing, gaming, and augmented reality) have leveraged edge-C3 in diverse scenarios involving video streaming in both the downlink and the uplink. Hence, a large number of recent works have studied the implications of video analysis and streaming through edge-C3. This article presents an in-depth survey of video edge-C3 challenges and state-of-the-art solutions in next-generation wireless and mobile networks. Specifically, it includes: a tutorial on video streaming in mobile networks (e.g., video encoding and adaptive bitrate streaming); an overview of mobile network architectures, enabling technologies, and applications for video edge-C3; video edge computing and analytics in uplink scenarios (e.g., architectures, analytics, and applications); and video edge caching, computing, and communication methods in downlink scenarios (e.g., collaborative, popularity-based, and context-aware). A new taxonomy for video edge-C3 is proposed, and the major contributions of recent studies are first highlighted and then systematically compared. Finally, several open problems and key challenges for future research are outlined.
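    One of the streaming techniques covered in the survey's tutorial part is adaptive bitrate streaming; the sketch below shows a minimal throughput-based rendition choice. The bitrate ladder and safety margin are illustrative assumptions, not values from the survey.

    # Hypothetical sketch of throughput-based adaptive bitrate (ABR) selection:
    # pick the highest rendition whose bitrate fits within a safety margin of the
    # measured throughput, falling back to the lowest rendition otherwise.
    BITRATE_LADDER_KBPS = [400, 1200, 2500, 5000, 8000]

    def choose_bitrate(throughput_kbps, margin=0.8):
        affordable = [b for b in BITRATE_LADDER_KBPS if b <= throughput_kbps * margin]
        return affordable[-1] if affordable else BITRATE_LADDER_KBPS[0]

    print(choose_bitrate(3000))   # 2400 kbps budget -> 1200 kbps rendition
    print(choose_bitrate(12000))  # 9600 kbps budget -> 8000 kbps rendition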

    Reducing Internet Latency: A Survey of Techniques and Their Merits

    Get PDF
    Bob Briscoe, Anna Brunstrom, Andreas Petlund, David Hayes, David Ros, Ing-Jyh Tsang, Stein Gjessing, Gorry Fairhurst, Carsten Griwodz, Michael Welzl. Peer reviewed. Preprint.

    Resource Management in Computing Systems

    Get PDF
    Resource management is an essential building block of any modern computer and communication network. In this thesis, the results of our research in the following two tracks are summarized in four papers. The first track includes three papers and covers modeling, prediction, and control for multi-tier computing systems. In the first paper, a NARX-based multi-step-ahead response-time predictor for single-server queueing systems is presented, which can be applied to CPU-constrained computing systems. The second paper introduces a NARX-based multi-step-ahead query response-time predictor for database servers. Both predictors can predict response-time dynamics over the whole operating range, particularly in high-load scenarios, without requiring changes to current protocols and operating systems. In the third paper, queueing theory is used to model the dynamics of a database server. Several heuristics are presented to tune the parameters of the proposed model to data measured from the database. Furthermore, an admission controller is presented, and its parameters are tuned to keep the response time of queries sent to the database below a predefined reference value. The second track includes one paper, covering the formulation and optimal solution of a content replication problem in telecom operators' content delivery networks (Telco-CDNs). The problem is formulated as an integer program that minimizes communication delay and cost subject to several constraints, such as a limited content replication budget and the limited storage size and downlink bandwidth of each regional content server. The solution to this problem provides a performance bound for any distributed content replication algorithm that addresses the same problem.
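    The abstract names the objective and constraints of the Telco-CDN replication problem but not the exact formulation; a generic integer-programming sketch of that kind of problem, in our own notation rather than the paper's, could read:

    \begin{align*}
    \min_{x} \quad & \sum_{s}\sum_{c} \lambda_{cs}\,\big(x_{cs}\, d_s + (1 - x_{cs})\, D\big) \;+\; \sum_{s}\sum_{c} x_{cs}\, k_{cs} \\
    \text{s.t.} \quad & \sum_{c} x_{cs}\, b_c \le B_s && \text{(storage of regional server } s\text{)} \\
    & \sum_{s}\sum_{c} x_{cs}\, k_{cs} \le K && \text{(replication budget)} \\
    & \sum_{c} \lambda_{cs}\, x_{cs}\, \beta_c \le W_s && \text{(downlink bandwidth of server } s\text{)} \\
    & x_{cs} \in \{0,1\},
    \end{align*}

    where $x_{cs}$ indicates that content $c$ is replicated at regional server $s$, $\lambda_{cs}$ is the regional request rate, $d_s$ and $D$ are the regional and origin delays, $k_{cs}$ is the replication cost, $b_c$ the content size, and $\beta_c$ its streaming bitrate.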

    Low Latency Low Loss Media Delivery Utilizing In-Network Packet Wash

    Get PDF
    This paper presents new techniques and mechanisms for carrying streams of layered video encoded with Scalable Video Coding (SVC) from servers to clients, utilizing the Packet Wash mechanism that is part of the Big Packet Protocol (BPP). BPP was designed to handle the transfer of packets for high-bandwidth, low-latency applications, aiming to overcome a number of issues current networks have with high-precision services. One of the most important advantages of BPP is that it allows the dynamic adaptation of packets during transmission. BPP uses Packet Wash to reduce the payload, and hence the size, of a packet by eliminating specific chunks. For video, this means cutting out specific segments of the transferred video rather than dropping packets, as happens with UDP-based transmission, or retransmitting packets, as happens with TCP. This chunk-elimination approach is well matched to SVC video, and these techniques and mechanisms are utilized and presented. An evaluation of the performance is provided, together with a comparison against UDP and TCP, the other common approaches for carrying media over IP. Our main contribution is the mapping of SVC video into BPP packets to provide low-latency, low-loss delivery, which yields better QoE than either UDP or TCP. This approach has proved to be an effective way to enhance the performance of video streaming applications, obtaining continuous delivery while maintaining guaranteed quality at the receiver. In this work we have successfully used an H.264 SVC encoded video for layered video transmission over BPP, and can demonstrate video delivery with low latency and low loss in limited-bandwidth environments.
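    As a minimal sketch of the chunk-elimination idea described above (an illustration of the concept, not the BPP wire format), an in-network node can trim enhancement-layer chunks from a packet until it fits the available budget while always keeping the SVC base layer; chunk sizes and the budget below are assumptions.

    # Hypothetical sketch of Packet Wash: instead of dropping the whole packet,
    # trim enhancement-layer chunks until the packet fits the available budget,
    # always keeping the SVC base layer (layer 0).
    def packet_wash(chunks, budget_bytes):
        """chunks: list of (layer_id, size_bytes); layer 0 is the SVC base layer,
        higher ids are enhancement layers. Returns the chunks that survive."""
        kept, used = [], 0
        for layer_id, size in sorted(chunks, key=lambda c: c[0]):  # base layer first
            if layer_id == 0 or used + size <= budget_bytes:
                kept.append((layer_id, size))
                used += size
        return kept

    packet = [(0, 300), (1, 400), (2, 500)]       # base + two enhancement layers
    print(packet_wash(packet, budget_bytes=800))  # [(0, 300), (1, 400)]; layer 2 washed out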

    Design and Evaluation of Distributed Algorithms for Placement of Network Services

    Get PDF
    Network services play an important role in the Internet today. They serve as data caches for websites, servers for multiplayer games, and relay nodes for Voice over IP (VoIP) conversations. While much research has focused on the design of such services, little attention has been paid to their actual placement. This placement can impact the quality of the service, especially if low latency is a requirement. These services can be located on nodes in the network itself, making these nodes supernodes. Typically, supernodes are selected in either a proprietary or ad hoc fashion, where a study of this placement is either unavailable or unnecessary. Previous research dealt with only pieces of the problem, such as finding the location of caches for a static topology, or selecting better routes for relays in VoIP. However, a comprehensive solution is needed for dynamic applications such as multiplayer games or P2P VoIP services. These applications adapt quickly and need solutions based on the immediate demands of the network. In this thesis we develop distributed algorithms to assign nodes the role of a supernode. This research first builds on prior work by modifying an existing assignment algorithm and implementing it in a distributed system called Supernode Placement in Overlay Topologies (SPOT). New algorithms are developed to assign nodes the supernode role. These algorithms are then evaluated in SPOT to demonstrate improved supernode assignment and scalability. Through a series of simulations, emulations, and experiments, insight is gained into the critical issues associated with allocating resources to perform the role of supernodes. Our contributions include distributed algorithms to assign nodes as supernodes, an open-source fully functional distributed supernode allocation system, an evaluation of the system in diverse networking environments, and a simulator called SPOTsim which demonstrates the scalability of the system to thousands of nodes. An example of an application deploying such a system is also presented along with the empirical results.
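    The thesis' own distributed algorithms are not reproduced here; as a hypothetical illustration of the latency-driven placement problem it studies, the sketch below greedily selects supernodes that minimize the average latency from every node to its nearest supernode. Node names and latencies are made up, and this is not the SPOT algorithm itself.

    # Hypothetical sketch: greedily pick the node that most reduces the average
    # latency from every node to its nearest supernode.
    def greedy_supernodes(latency, k):
        """latency: dict of dicts, latency[a][b] = delay between nodes a and b."""
        nodes, chosen = list(latency), []
        def avg_to_nearest(candidates):
            return sum(min(latency[n][s] for s in candidates) for n in nodes) / len(nodes)
        for _ in range(k):
            best = min((n for n in nodes if n not in chosen),
                       key=lambda n: avg_to_nearest(chosen + [n]))
            chosen.append(best)
        return chosen

    lat = {
        "a": {"a": 0, "b": 10, "c": 40},
        "b": {"a": 10, "b": 0, "c": 50},
        "c": {"a": 40, "b": 50, "c": 0},
    }
    print(greedy_supernodes(lat, k=1))  # ['a'] minimizes the average latency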