2 research outputs found

    Optimising Networks For Ultra-High Definition Video

    The increase in real-time ultra-high definition video services is a challenging issue for current network infrastructures. The high bitrate traffic generated by ultra-high definition content reduces the effectiveness of current live video distribution systems. Transcoders and application layer multicasting (ALM) can reduce traffic in a video delivery system, but both are limited by the static nature of their implementations. To overcome the restrictions of current static video delivery systems, an OpenFlow-based migration system is proposed. This system enables almost seamless migration of a transcoder or ALM node while delivering real-time ultra-high definition content. Further to this, a novel heuristic algorithm is presented to optimise the control of migration events and the choice of destination. The combination of the migration system and the heuristic algorithm provides an improved video delivery system, capable of migrating resources during operation with minimal disruption to clients.

    With the rise in popularity of consumer-based live streaming, it is necessary to develop and improve architectures that can support these new types of applications. Current architectures introduce a large delay to video streams, which presents issues for certain applications. To overcome this, an improved infrastructure for delivering real-time streams is also presented. The proposed system uses OpenFlow within a content delivery network (CDN) architecture to improve several aspects of current CDNs. Aside from the reduction in stream delay, other improvements include switch-level multicasting to reduce duplicate traffic and smart load balancing for server resources. Furthermore, a novel max-flow algorithm is also presented. This algorithm aims to optimise traffic within a system such as the proposed OpenFlow CDN, focusing on distributing traffic across the network in order to reduce the probability of blocking.
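    The abstract names but does not reproduce the max-flow algorithm, so the following is only a generic illustration of what a max-flow computation over a delivery network looks like: a standard Edmonds-Karp implementation in Python, run on a small hypothetical topology (an origin reaching a client edge over two candidate paths). The novel algorithm in the work itself presumably differs, since its stated goal is to spread traffic and reduce blocking probability rather than simply maximise a single flow.

    from collections import deque

    def max_flow(capacity, source, sink):
        """Edmonds-Karp max-flow: repeatedly find the shortest augmenting
        path (BFS) in the residual graph and push flow along it.
        capacity: dict-of-dicts, capacity[u][v] = link capacity u -> v."""
        # Residual capacities, initialised from the input graph,
        # with zero-capacity reverse edges added for flow cancellation.
        residual = {u: dict(vs) for u, vs in capacity.items()}
        for u, vs in capacity.items():
            for v in vs:
                residual.setdefault(v, {}).setdefault(u, 0)

        flow = 0
        while True:
            # BFS for an augmenting path with spare residual capacity.
            parent = {source: None}
            queue = deque([source])
            while queue and sink not in parent:
                u = queue.popleft()
                for v, cap in residual[u].items():
                    if cap > 0 and v not in parent:
                        parent[v] = u
                        queue.append(v)
            if sink not in parent:
                return flow  # no augmenting path left: flow is maximal

            # Find the bottleneck capacity along the discovered path.
            v, bottleneck = sink, float("inf")
            while parent[v] is not None:
                u = parent[v]
                bottleneck = min(bottleneck, residual[u][v])
                v = u

            # Push the bottleneck flow and update residual capacities.
            v = sink
            while parent[v] is not None:
                u = parent[v]
                residual[u][v] -= bottleneck
                residual[v][u] += bottleneck
                v = u
            flow += bottleneck

    # Hypothetical topology: origin 's' to client edge 't' via two
    # intermediate nodes; capacities are illustrative Mbit/s values.
    links = {
        "s": {"a": 100, "b": 50},
        "a": {"t": 80},
        "b": {"t": 60},
        "t": {},
    }
    print(max_flow(links, "s", "t"))  # 130 (80 via 'a' + 50 via 'b')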

    Content retrieval using cloud-based DNS

    Cloud-computing systems are rapidly gaining momentum, providing flexible alternatives to many services. We study the Domain Name System (DNS) service, used to convert host names to IP addresses, which has historically been provided by a client's Internet Service Provider (ISP). With the advent of cloud-based DNS providers such as Google and OpenDNS, clients are increasingly using these DNS systems for URL and other name resolution. Performance degradation with cloud-based DNS has been reported, especially when accessing content hosted on highly distributed CDNs like Akamai. In this work, we investigate this problem in depth using Akamai as the content provider and Google DNS as the cloud-based DNS system. We demonstrate that the problem is rooted in the disparity between the number and location of servers of the two providers, and develop a new technique for geolocating data centers of cloud providers. Additionally, we explore the design space of methods for cloud-based DNS systems to be effective. Client-side, cloud-side, and hybrid approaches are presented and compared, with the goal of achieving the best client-perceived performance. Our work yields valuable insight into Akamai's DNS system, revealing previously unknown features.
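    The abstract does not describe the measurement tooling, but the core comparison it reports can be sketched in a few lines: resolve the same CDN-hosted name through a cloud resolver and through the local ISP resolver, then compare the edge-server addresses each returns. Below is a minimal sketch using the dnspython library (an assumption; the paper's actual tooling is not given), with a hypothetical stand-in hostname rather than one taken from the study.

    # Resolve one CDN-hosted name via Google Public DNS and via the
    # system/ISP resolver, then compare the answers. Requires dnspython
    # (pip install dnspython).
    import dns.resolver

    HOSTNAME = "www.example-akamai-customer.com"  # hypothetical CDN-hosted name

    def resolve_via(nameserver, hostname):
        """Return the set of A-record addresses for hostname via one resolver.
        nameserver=None uses the system-configured (typically ISP) resolver."""
        resolver = dns.resolver.Resolver(configure=(nameserver is None))
        if nameserver is not None:
            resolver.nameservers = [nameserver]
        answer = resolver.resolve(hostname, "A")
        return {rr.address for rr in answer}

    cloud_ips = resolve_via("8.8.8.8", HOSTNAME)   # Google Public DNS
    local_ips = resolve_via(None, HOSTNAME)        # system/ISP resolver

    print("cloud resolver :", sorted(cloud_ips))
    print("local resolver :", sorted(local_ips))
    # Disjoint answer sets suggest the cloud resolver mapped the client
    # to a different (possibly more distant) CDN edge cluster.
    print("overlap:", cloud_ips & local_ips)

    Disjoint answer sets are the symptom the abstract attributes to the disparity between the two providers' server footprints: the CDN selects an edge cluster based on the resolver's vantage point rather than the client's actual location.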