24 research outputs found

    ISP-friendly Peer-assisted On-demand Streaming of Long Duration Content in BBC iPlayer

    In search of scalable solutions, CDNs are exploring P2P support. However, the benefits of peer assistance can be limited by several obstacle factors: ISP friendliness (requiring peers to be within the same ISP), bitrate stratification (the need to match peers with others needing a similar bitrate), and partial participation (some peers choosing not to redistribute content). This work relates the potential gains from peer assistance to the average number of users in a swarm and its capacity, and empirically studies the effects of these obstacle factors at scale using a month-long trace of over 2 million users in London accessing BBC shows online. Results indicate that even when P2P swarms are localised within ISPs, up to 88% of traffic can be saved. Surprisingly, bitrate stratification results in two large sub-swarms and does not significantly affect savings. However, partial participation and the need for a minimum swarm size do affect gains. We investigate improvements to the gains from increasing content availability through two well-studied techniques: content bundling (combining multiple items to increase availability) and historical caching of previously watched items. Bundling proves ineffective because the increased server traffic from larger bundles outweighs the availability benefits, but simple caching can considerably boost the traffic gains from peer assistance. Comment: In Proceedings of IEEE INFOCOM 201
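
    A back-of-the-envelope sketch of the kind of savings estimate the abstract describes (hypothetical function and synthetic session data, not the paper's trace or methodology): peers are grouped into per-ISP, per-bitrate swarms, a minimum swarm size is enforced, and only a fraction of peers uploads.

```python
import random

def estimate_savings(sessions, min_swarm_size=2, participation=1.0):
    """Estimate the fraction of traffic peers could serve instead of the CDN.

    sessions: list of (isp, bitrate, megabytes) tuples describing viewing
    sessions. Peers are grouped into swarms per (isp, bitrate); a swarm can
    only exchange data if enough of its peers agree to upload, and only a
    fraction `participation` of peers does so.
    """
    swarms = {}
    for isp, bitrate, size in sessions:
        swarms.setdefault((isp, bitrate), []).append(size)

    total = sum(size for _, _, size in sessions)
    saved = 0.0
    for members in swarms.values():
        uploaders = int(len(members) * participation)
        if uploaders >= min_swarm_size:
            # Best case: all but the largest session (assumed to bootstrap the
            # swarm from the CDN) are served entirely by other peers.
            saved += sum(sorted(members)[:-1])
    return saved / total if total else 0.0

# Synthetic example: 1000 sessions spread over 5 ISPs and 2 bitrates.
random.seed(0)
sessions = [(f"isp{random.randrange(5)}", random.choice([800, 1500]),
             random.uniform(100, 500)) for _ in range(1000)]
print(f"potential savings: {estimate_savings(sessions, participation=0.6):.0%}")
```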

    User-Centric Quality of Service Provisioning in IP Networks

    The Internet has become the preferred transport medium for almost every type of communication, continuing to grow both in terms of the number of users and of delivered services. Efforts have been made to ensure that time-sensitive applications receive sufficient resources and subsequently an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time; they are instead engaged in a multimedia-rich experience comprising many different concurrent services. Given the scalability problems raised by the diversity of users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of giving priority to specific traffic types over coexisting services, whether through explicit resource reservation or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (DiffServ). This use of static resource allocation and traffic-shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, highlighting the need for a QoS solution that reflects the user's services. The aim of this thesis is to investigate and propose a novel QoS architecture which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness. This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic types; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach is to provide a QoS-optimised experience to each Internet user and not just those using preferred services. Furthermore, unresponsive bandwidth-intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations, with the topologies used replicating the complexity and scale of real ISP infrastructures. The results show that, for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional DiffServ and Weighted-RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but, by avoiding static resource allocation, can adapt with the Internet user as their use of services changes. France Telecom
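
    As a rough illustration of per-user, demand-driven sharing of access capacity in the spirit described above (a minimal sketch with assumed service labels and a single bulk-traffic cap, not the scheduler specified in the thesis):

```python
def allocate_user_bandwidth(flows, capacity, bulk_services=("p2p",), bulk_cap=0.5):
    """Share one user's access capacity across their concurrent services.

    flows: dict of service name -> demanded rate in Mbit/s (labels assumed).
    Bulk, unresponsive services are capped at bulk_cap of capacity so that
    interactive services are not starved; interactive services share the rest
    in proportion to demand, and any capacity they leave unused is handed
    back to the bulk class.
    """
    bulk = {s: d for s, d in flows.items() if s in bulk_services}
    inter = {s: d for s, d in flows.items() if s not in bulk_services}

    bulk_demand = sum(bulk.values())
    bulk_share = min(bulk_demand, capacity * bulk_cap)
    inter_budget = capacity - bulk_share
    inter_demand = sum(inter.values())

    alloc = {s: min(d, inter_budget * d / inter_demand) if inter_demand else 0.0
             for s, d in inter.items()}

    bulk_budget = bulk_share + (inter_budget - sum(alloc.values()))
    for s, d in bulk.items():
        alloc[s] = min(d, bulk_budget * d / bulk_demand) if bulk_demand else 0.0
    return alloc

# One user's traffic profile: interactive services plus a greedy P2P download.
profile = {"voip": 0.1, "video": 4.0, "web": 1.0, "p2p": 20.0}
print(allocate_user_bandwidth(profile, capacity=10.0))
```

    The point of the sketch is that no service class has a fixed, globally configured priority: the split is recomputed from whatever the individual user happens to be running.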

    Mobile Peer-to-Peer Assisted Coded Streaming

    Net Neutrality

    This book is available as open access through the Bloomsbury Open Access programme and is available on www.bloomsburycollections.com. 'Chris Marsden maneuvers through the hype articulated by Network Neutrality advocates and opponents. He offers a clear-headed analysis of the high stakes in this debate about the Internet's future, and fearlessly refutes the misinformation and misconceptions that abound' (Professor Rob Freiden, Penn State University). Net Neutrality is a very heated and contested policy principle regarding access for content providers to the Internet end-user, and potential discrimination in that access where the end-user's ISP (or another ISP) blocks that access in part or in whole. The suggestion has been that the problem can be resolved either by introducing greater competition or by closely policing conditions for vertically integrated services, such as VoIP. However, that is not the whole story: ISPs as a whole have incentives to discriminate between content for matters such as network management of spam, to secure and maintain customer experience at current levels, and for economic benefit from new Quality of Service standards. This includes offering a 'priority lane' on the network for premium content types such as video and voice services. The author considers market developments and policy responses in Europe and the United States, draws conclusions and proposes regulatory recommendations.

    A framework for the dynamic management of Peer-to-Peer overlays

    Peer-to-Peer (P2P) applications have been associated with inefficient operation, interference with other network services and large operational costs for network providers. This thesis presents a framework which can help ISPs address these issues by means of intelligent management of peer behaviour. The proposed approach involves limited control of P2P overlays without interfering with the fundamental characteristics of peer autonomy and decentralised operation. At the core of the management framework lies the Active Virtual Peer (AVP). Essentially intelligent peers operated by the network providers, the AVPs interact with the overlay from within, minimising redundant or inefficient traffic, enhancing overlay stability and facilitating the efficient and balanced use of available peer and network resources. They offer an "insider's" view of the overlay and permit the management of P2P functions in a compatible and non-intrusive manner. AVPs can support multiple P2P protocols and coordinate to perform functions collectively. To account for the multi-faceted nature of P2P applications and allow the incorporation of modern techniques and protocols as they appear, the framework is based on a modular architecture. Core modules for overlay control and transit traffic minimisation are presented. Towards the latter, a number of suitable P2P content caching strategies are proposed. Using a purpose-built P2P network simulator and small-scale experiments, it is demonstrated that the introduction of AVPs inside the network can significantly reduce inter-AS traffic, minimise costly multi-hop flows, increase overlay stability and load-balancing, and offer improved peer transfer performance.
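
    The transit-traffic-minimisation idea can be illustrated with a small sketch of an edge cache an AVP might operate (illustrative LRU policy and a synthetic Zipf-like workload; the thesis evaluates its own set of caching strategies):

```python
import random
from collections import OrderedDict

class AVPCache:
    """Minimal LRU content cache of the kind an Active Virtual Peer could run
    at the edge of an ISP to keep popular pieces inside the provider's network."""

    def __init__(self, capacity_items):
        self.capacity = capacity_items
        self.store = OrderedDict()              # piece id -> size
        self.local_hits = self.remote_fetches = 0

    def request(self, piece, size=1):
        if piece in self.store:
            self.store.move_to_end(piece)       # refresh recency
            self.local_hits += 1
            return "served-inside-ISP"
        self.remote_fetches += 1                # crosses the inter-AS link
        self.store[piece] = size
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict least recently used
        return "fetched-from-remote-AS"

# Synthetic Zipf-like workload: a few pieces account for most requests.
random.seed(1)
catalogue = [f"piece{i}" for i in range(1000)]
weights = [1 / (rank + 1) for rank in range(1000)]
cache = AVPCache(capacity_items=100)
for piece in random.choices(catalogue, weights=weights, k=10_000):
    cache.request(piece)
print(f"local hits: {cache.local_hits}, inter-AS fetches: {cache.remote_fetches}")
```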

    Re-thinking crisis in the digital economy: a contemporary case study of the phonographic industries in Ireland.

    Many commentators and reports popularly place the record industry in an increasing state of crisis since the advent of digital copying and distribution. This thesis addresses how the interplay of technological, economic, legal and policy factors, particularly the copyright strand of intellectual property law, shapes the form and extent of the Internet's disruptive potential in the music industry. It points to significant continuities in the music industry in an environment where it is often regarded as experiencing turbulence and change, and in doing so challenges the form and extent of the crisis the music industry currently claims to be battling. The thesis questions the impact the Internet is having on the power or role of major music companies, their revenue streams, their relationships with other actors in the music industry chain and their final consumers. It further questions the extent to which the Internet has evolved to realise its disruptive potential on the organisation and structure of the record industry by democratising the channels of distribution. It also serves to illuminate the impact of the Internet on the role of more traditional intermediaries, particularly radio, in the circulation and promotion of music in the contemporary era. For its primary research material, the thesis draws on a series of thirty-nine interviews conducted with record industry management and personnel as well as key informants from the fields of music publishing, artist management, music retailing, radio, the music press, related industry bodies and policy fields, and other key commentators.

    QoE management of HTTP adaptive streaming services

    Hierarchical network topographical routing

    Within the last 10 years the content consumption model that underlies many of the assumptions about traffic aggregation within the Internet has changed: the previous pattern of short burst transfers followed by longer periods of inactivity, which allowed for statistical aggregation of traffic, has been increasingly replaced by continuous data transfer models. Approaching this issue from a clean-slate perspective, this work looks at the design of a network routing structure and supporting protocols for assisting in the delivery of large-scale content services. Rather than approaching a content support model through existing IP models, the work takes a fresh look at Internet routing through a hierarchical model in order to highlight the benefits that can be gained with a restructured Internet or through similar modifications to the existing IP model. The work is divided into three major sections: an investigation of the existing UK-based Internet structure as compared to the traditional Autonomous System (AS) Internet structural model; a localised hierarchical network topographical routing model; and intelligent distributed localised service models. The work begins by looking at the United Kingdom (UK) Internet structure as an example of a current-generation technical and economic model, with shared access to the last-mile connectivity and a large-scale wholesale network between Internet Service Providers (ISPs) and the end user. This model, combined with the Internet Protocol (IP) address allocation and the transparency of the wholesale network, results in an enforced inefficiency within the overall network, restricting the ability of ISPs to collaborate. From this model, a core/edge-separated hierarchical virtual-tree routing protocol based on the physical network topography (layers 2 and 3) is developed to remove this enforced inefficiency by allowing direct management and control at the lowest levels of the network. This model acts as the base layer for further distributed intelligent services, such as management and content delivery, to enable both ISPs and third parties to actively collaborate and provide content from the most efficient source.
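
    The virtual-tree routing idea can be sketched as follows: nodes are addressed by their position in the physical hierarchy, and a packet climbs to the lowest common ancestor of source and destination before descending, so physical locality directly bounds path length (hypothetical address labels; the actual protocol operates on the layer 2/3 topography):

```python
def route(src, dst):
    """Return the hops between two nodes addressed by their position in a
    hierarchy: climb to the lowest common ancestor, then descend to dst."""
    common = 0
    for a, b in zip(src, dst):
        if a != b:
            break
        common += 1
    up = [src[:i] for i in range(len(src) - 1, common - 1, -1)]   # climb
    down = [dst[:i] for i in range(common + 1, len(dst) + 1)]     # descend
    return up + down

# Hypothetical topography-based addresses: country / exchange / cabinet / line.
src = ("uk", "exchange3", "cabinet7", "line42")
dst = ("uk", "exchange3", "cabinet9", "line05")
for hop in route(src, dst):
    print("/".join(hop))
```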

    Profiling Large-scale Live Video Streaming and Distributed Applications

    Today, distributed applications run at data centre and Internet scales, from intensive data analysis, such as MapReduce, to the dynamic demands of a worldwide audience, such as YouTube. The network is essential to these applications at both scales. To provide adequate support, we must understand the full requirements of the applications, which are revealed by their workloads. In this thesis, we study distributed system applications at different scales to enrich this understanding. Large-scale Internet applications have been studied for years, such as social networking services (SNS), video on demand (VoD), and content delivery networks (CDN). An emerging type of video broadcasting on the Internet, featuring crowdsourced live video streaming, has garnered attention, allowing platforms such as Twitch to attract over 1 million concurrent users globally. To better understand Twitch, we collected real-time popularity data combined with metadata about the content and found that the broadcasters, rather than the content, drive its popularity. Unlike YouTube and Netflix, where content can be cached, video streaming on Twitch is generated instantly and needs to be delivered to users immediately to enable real-time interaction. Thus, we performed a large-scale measurement of Twitch's content location, revealing the global footprint of its infrastructure as well as discovering the dynamic stream hosting and client redirection strategies that help Twitch serve millions of users at scale. We next consider applications that run inside the data centre. Distributed computing applications rely heavily on the network due to data transmission needs and the scheduling of resources and tasks. One successful application, called Hadoop, has been widely deployed for Big Data processing. However, little work has been devoted to understanding its network. We found that Hadoop's behaviour is limited by the hardware resources and the processing jobs presented. Thus, after characterising the Hadoop traffic on our testbed with a set of benchmark jobs, we built a simulator to reproduce Hadoop's job traffic. With the simulator, users can investigate the connections between Hadoop traffic and network performance without additional hardware cost. Different network components can be added to investigate performance, such as network topologies, queue policies, and transport layer protocols. In this thesis, we extended the knowledge of networking by investigating two widely used applications in the data centre and at Internet scale. We (i) studied the most popular live video streaming platform, Twitch, as a new type of Internet-scale distributed application, revealing that broadcaster factors drive the popularity of such platforms; (ii) discovered the footprint of Twitch's streaming infrastructure and its dynamic stream hosting and client redirection strategies, providing an in-depth example of video streaming delivery occurring at Internet scale; (iii) investigated the traffic generated by a distributed application by characterising the traffic of Hadoop under various parameters; and (iv) with this knowledge, built a simulation tool so users can efficiently investigate the performance of different network components under a distributed application. Queen Mary University of London
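
    As a toy illustration of the kind of traffic pattern such a Hadoop simulator has to reproduce, the sketch below models the all-to-all shuffle between mappers and reducers (even partitioning is a simplifying assumption; the thesis characterises real benchmark jobs on a testbed):

```python
def shuffle_matrix(map_output_mb, n_reducers):
    """Toy model of MapReduce shuffle traffic: each mapper's intermediate
    output is partitioned (here evenly, a simplifying assumption) across all
    reducers, producing an all-to-all transfer matrix in megabytes."""
    return [[out / n_reducers for _ in range(n_reducers)]
            for out in map_output_mb]

# Four mappers with uneven intermediate output, three reducers.
matrix = shuffle_matrix([120, 80, 200, 60], n_reducers=3)
print(f"total shuffle traffic: {sum(map(sum, matrix)):.0f} MB")
for i, row in enumerate(matrix):
    print(f"mapper {i} -> reducers: {[round(x, 1) for x in row]}")
```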