
    A Comprehensive Analysis of Swarming-based Live Streaming to Leverage Client Heterogeneity

    Full text link
    Due to missing IP multicast support on an Internet scale, over-the-top media streams are delivered with the help of overlays, as used by content delivery networks and their peer-to-peer (P2P) extensions. In this context, mesh/pull-based swarming plays an important role, either as a pure streaming approach or in combination with tree/push mechanisms. However, the impact of realistic client populations with heterogeneous resources is not yet fully understood. In this technical report, we contribute to closing this gap by mathematically analysing the most basic scheduling mechanisms, latest deadline first (LDF) and earliest deadline first (EDF), in a continuous-time Markov chain framework and combining them into a simple, yet powerful, mixed strategy to leverage inherent differences in client resources. The main contributions are twofold: (1) a mathematical framework for swarming on random graphs is proposed, with a focus on LDF and EDF strategies in heterogeneous scenarios; (2) a mixed strategy, named SchedMix, is proposed that leverages peer heterogeneity. SchedMix is shown to outperform the other two strategies using different abstractions: a mean-field theoretic analysis of buffer probabilities, simulations of a stochastic model on random graphs, and a full-stack implementation of a P2P streaming system.
    Comment: Technical report and supplementary material to http://ieeexplore.ieee.org/document/7497234
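
    The scheduling abstractions above lend themselves to a compact illustration. The following minimal sketch (hypothetical names and parameters; the report's actual SchedMix policy may differ) contrasts EDF and LDF chunk selection and shows one plausible way to mix them probabilistically:

```python
import random

def edf_pick(missing_chunks, playback_pos):
    """Earliest deadline first: fetch the missing chunk whose playback
    deadline is nearest, i.e. the lowest chunk index ahead of playback."""
    return min(missing_chunks, key=lambda c: c - playback_pos)

def ldf_pick(missing_chunks, playback_pos):
    """Latest deadline first: fetch the missing chunk whose deadline is
    furthest away, leaving urgent chunks to peers that need them."""
    return max(missing_chunks, key=lambda c: c - playback_pos)

def schedmix_pick(missing_chunks, playback_pos, p_edf):
    """Mixed strategy: play EDF with probability p_edf, LDF otherwise.
    p_edf could be tuned per peer class, e.g. a constrained peer might
    favour EDF for its own deadlines, a resource-rich peer LDF."""
    strategy = edf_pick if random.random() < p_edf else ldf_pick
    return strategy(missing_chunks, playback_pos)

# Example: playback at chunk 10, chunks 12, 15, 19 still missing.
print(schedmix_pick([12, 15, 19], 10, p_edf=0.7))
```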

    Network coding meets multimedia: a review

    Get PDF
    While every network node only relays messages in a traditional communication system, the recent network coding (NC) paradigm proposes to implement simple in-network processing with packet combinations in the nodes. NC extends the concept of "encoding" a message beyond source coding (for compression) and channel coding (for protection against errors and losses). It has been shown to increase network throughput compared to traditional network implementations, to reduce delay, and to provide robustness to transmission errors and network dynamics. These features are so appealing for multimedia applications that they have spurred a large research effort towards the development of multimedia-specific NC techniques. This paper reviews the recent work in NC for multimedia applications and focuses on the techniques that fill the gap between NC theory and practical applications. It outlines the benefits of NC and presents the open challenges in this area. The paper initially focuses on multimedia-specific aspects of network coding, in particular delay, in-network error control, and media-specific error control. These aspects make it possible to handle varying network conditions as well as client heterogeneity, which are critical to the design and deployment of multimedia systems. After introducing these general concepts, the paper reviews in detail two applications that lend themselves naturally to NC via the cooperation and broadcast models, namely peer-to-peer multimedia streaming and wireless networking.
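
    The core NC idea of combining packets inside the network fits in a few lines. This toy sketch (illustrative only: XOR coding over two packets, the textbook butterfly example rather than any scheme from the paper) shows how one coded packet can serve two receivers holding different side information:

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Combine two equal-length packets into a single coded packet."""
    return bytes(x ^ y for x, y in zip(a, b))

# Butterfly intuition: a relay forwards the single coded packet a XOR b
# instead of forwarding a and b separately; each receiver decodes the
# packet it is missing using the one it already holds as side information.
a, b = b"hello", b"world"
coded = xor_packets(a, b)
assert xor_packets(coded, a) == b  # receiver holding a recovers b
assert xor_packets(coded, b) == a  # receiver holding b recovers a
```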

    Efficient Content Distribution With Managed Swarms

    Full text link
    Content distribution has become increasingly important as people have become more reliant on Internet services to provide large multimedia content. Efficiently distributing content is a complex and difficult problem: large content libraries are often distributed across many physical hosts, and each host has its own bandwidth and storage constraints. Peer-to-peer and peer-assisted download systems further complicate content distribution. By contributing their own bandwidth, end users can improve overall performance and reduce load on servers, but they have their own motivations and incentives that are not necessarily aligned with those of content distributors. Consequently, existing content distributors either opt to serve content exclusively from hosts under their direct control, and thus neglect the large pool of resources that end users can offer, or they allow end users to contribute bandwidth at the expense of complete control over available resources. This thesis introduces a new approach to content distribution that achieves high performance for distributing bulk content, based on managed swarms. Managed swarms efficiently allocate bandwidth from origin servers, in-network caches, and end users to achieve system-wide performance objectives. Managed swarming systems are characterized by the presence of a logically centralized coordinator that maintains a global view of the system and directs hosts toward an efficient use of bandwidth. The coordinator allocates bandwidth from each host based on empirical measurements of swarm behavior combined with a new model of swarm dynamics. The new model enables the coordinator to predict how swarms will respond to changes in bandwidth based on past measurements of their performance. In this thesis, we focus on the global objective of maximizing download bandwidth across end users in the system. To that end, we introduce two algorithms that the coordinator can use to compute efficient allocations of bandwidth for each host that result in high download speeds for clients. We have implemented a scalable coordinator that uses these algorithms to maximize system-wide aggregate bandwidth. The coordinator actively measures swarm dynamics and uses the data to calculate, for each host, a bandwidth allocation among the swarms competing for the host's bandwidth. Extensive simulations and a live deployment show that managed swarms significantly outperform centralized distribution services as well as completely decentralized peer-to-peer systems.
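
    As a rough illustration of the coordinator's role, the sketch below (hypothetical interface; the thesis's actual algorithms and swarm-dynamics model are more sophisticated) greedily splits one host's uplink across competing swarms according to a predicted marginal gain in aggregate download bandwidth:

```python
def allocate_uplink(host_capacity, swarms, marginal_gain, step=0.1):
    """Greedily split one host's uplink (e.g. in Mbit/s) across swarms,
    one step-sized slice at a time, always giving the next slice to the
    swarm whose predicted marginal gain in aggregate download bandwidth
    is currently highest. marginal_gain(swarm, already_allocated) stands
    in for the coordinator's empirical swarm-dynamics model."""
    alloc = {s: 0.0 for s in swarms}
    remaining = host_capacity
    while remaining > 1e-9:
        best = max(swarms, key=lambda s: marginal_gain(s, alloc[s]))
        slice_ = min(step, remaining)
        alloc[best] += slice_
        remaining -= slice_
    return alloc

# Hypothetical diminishing-returns model per swarm.
gains = {"swarm-a": 4.0, "swarm-b": 1.5}
print(allocate_uplink(10.0, list(gains), lambda s, x: gains[s] / (1.0 + x)))
```

    With a diminishing-returns gain model like the one in the example, this greedy slicing approximates a water-filling allocation across the competing swarms.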

    A Framework For Efficient Data Distribution In Peer-to-peer Networks.

    Get PDF
    Peer-to-peer (P2P) models are based on user altruism, wherein a user shares its content with other users in the pool and also has an interest in the content of the other nodes. Most P2P systems in their current form are not fair in terms of the content served by a peer and the service obtained from the swarm. Most systems suffer from the free rider's problem, where many high-uplink-capacity peers contribute much more than they should, while many others get a free ride when downloading the content. This leaves high-capacity nodes with very little or no motivation to contribute, and many times such resourceful nodes exit the swarm or do not participate at all. This scenario is unfavorable for P2P networks in general, where participation is essential: as the number of users in the swarm increases, the swarm becomes more robust and scalable. Other important issues in present-day P2P systems are below-optimal quality of service (QoS) in terms of download time, end-to-end latency, jitter rate, and uplink utilization, as well as excessive cross-ISP traffic and security and cheating threats. These problems motivate the present work. To this end, we present an efficient data distribution framework for P2P networks in the media streaming and file sharing domains. Experiments with our model, an alliance-based peering scheme for media streaming, show that such a scheme distributes data to the swarm members in a near-optimal way. Alliances are small groups of nodes that share data and other vital information for symbiotic association. We show that alliance formation is a loosely coupled and effective way to organize the peers, and that our model maps to a small-world network, which forms efficient overlay structures and is robust to network perturbations such as churn. We present a comparative simulation-based study of our model against CoolStreaming/DONet (a popular model) and give a quantitative performance evaluation. Simulation results show that our model scales well under varying workloads and conditions, delivers near-optimal levels of QoS, reduces cross-ISP traffic considerably, and in most cases performs at par with or even better than CoolStreaming/DONet. In the next phase of our work, we focused on the BitTorrent P2P model, as it is the most widely used file sharing protocol. Many studies in academia and industry have shown that although BitTorrent scales very well, it is far from optimal in terms of fairness to end users, download time, and uplink utilization. Furthermore, random peering and data distribution in such a model lead to suboptimal performance. Lately, a new breed of BitTorrent clients, such as BitTyrant, has mounted successful strategic attacks against BitTorrent: strategic peers configure the client software such that, for very little or no contribution, they obtain good download speeds. Such strategic nodes exploit the altruism in the swarm, consume resources at the expense of other honest nodes, and create an unfair swarm. More unfairness is generated in the swarm by the presence of nodes with heterogeneous bandwidth. We investigate and propose a new token-based anti-strategic policy that could be used in BitTorrent to minimize free-riding by strategic clients.
    We also propose other policies against strategic attacks, including a smart tracker that denies repeated peer-list requests from strategic clients, and blacklisting of misbehaving nodes that do not follow the protocol policies. These policies curb the strategic behavior of peers to a large extent and improve overall system performance. We also quantify and validate the benefits of using a bandwidth peer-matching policy. Our simulation results show that with the above proposed changes, uplink utilization and mean download time in a BitTorrent network improve considerably. The changes leave strategic clients with little or no incentive to behave greedily, which reduces free riding and creates a fairer swarm with very little computational overhead. Finally, we show that ours is a self-healing model in which user behavior changes from selfish to altruistic in the presence of the aforementioned policies.
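
    A token-based anti-free-riding policy of the kind proposed here can be sketched as a simple ledger (illustrative only; the thesis's actual token protocol, bootstrap grant, and enforcement rules may differ):

```python
class TokenLedger:
    """Peers earn tokens by uploading blocks and spend them when
    requesting blocks, so a client contributing nothing soon runs out
    of credit and can no longer free-ride."""

    def __init__(self, grant=10):
        self.grant = grant   # small starting balance so newcomers can bootstrap
        self.balance = {}    # peer_id -> current tokens

    def credit_upload(self, peer_id, blocks):
        self.balance[peer_id] = self.balance.get(peer_id, self.grant) + blocks

    def request_allowed(self, peer_id, blocks):
        bal = self.balance.get(peer_id, self.grant)
        if bal < blocks:
            return False  # deprioritise or refuse: not enough contribution
        self.balance[peer_id] = bal - blocks
        return True

ledger = TokenLedger()
ledger.credit_upload("peer-42", 5)
print(ledger.request_allowed("peer-42", 12))  # True: 10 grant + 5 earned
print(ledger.request_allowed("peer-42", 12))  # False: only 3 tokens left
```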

    Incentive-driven QoS in peer-to-peer overlays

    Get PDF
    A well-known problem in peer-to-peer overlays is that no single entity has control over the software, hardware, and configuration of peers. Thus, each peer can selfishly adapt its behaviour to maximise its benefit from the overlay. This thesis is concerned with the modelling and design of incentive mechanisms for QoS overlays: resource allocation protocols that provide strategic peers with participation incentives, while at the same time optimising the performance of the peer-to-peer distribution overlay. The contributions of this thesis are as follows. First, we present PledgeRoute, a novel contribution accounting system that can be used, along with a set of reciprocity policies, as an incentive mechanism to encourage peers to contribute resources even when users are not actively consuming overlay services. This mechanism uses a decentralised credit network, is resilient to Sybil attacks, and allows peers to achieve time- and space-deferred contribution reciprocity. Then, we present a novel, QoS-aware resource allocation model based on Vickrey auctions that uses PledgeRoute as a substrate. It acts as an incentive mechanism by providing efficient overlay construction, while at the same time allocating increasing service quality to those peers that contribute more to the network. The model is then applied to lag-sensitive chunk swarming, and some of its properties are explored for different peer delay distributions. When considering QoS overlays deployed over the best-effort Internet, the quality received by a client cannot be attributed entirely to either its serving peer or the intervening network between them. By drawing parallels between this situation and well-known hidden-action situations in microeconomics, we propose a novel scheme to ensure adherence to advertised QoS levels. We then apply it to delay-sensitive chunk distribution overlays and present the optimal contract payments required, along with a method for QoS contract enforcement through reciprocative strategies. We also present a probabilistic model for application-layer delay as a function of the prevailing network conditions. Finally, we address the incentives of managed overlays and the prediction of their behaviour. We propose two novel models of multihoming managed overlay incentives in which overlays can freely allocate their traffic flows between different ISPs. One is obtained by optimising an overlay utility function with desired properties, while the other is designed for data-driven least-squares fitting of the cross elasticity of demand. This last model is then used to solve for ISP profit maximisation.
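
    For readers unfamiliar with the auction primitive used above, a minimal sketch of a sealed-bid Vickrey allocation follows (generic textbook version, not PledgeRoute's actual protocol):

```python
def vickrey_allocate(bids):
    """Sealed-bid second-price (Vickrey) auction: the highest bidder wins
    but pays only the second-highest bid, which makes truthful bidding a
    dominant strategy. bids maps peer_id -> bid (e.g. pledged contribution)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

# Example: three peers compete for a service slot.
print(vickrey_allocate({"peerA": 5, "peerB": 8, "peerC": 3}))  # ('peerB', 5)
```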

    Connectivity and Data Transmission over Wireless Mobile Systems

    Get PDF
    We live in a world where wireless connectivity is pervasive and becoming ubiquitous. Numerous devices with varying capabilities and multiple interfaces surround us. Most home users use Wi-Fi routers, and a large portion of human-inhabited land is covered by cellular networks. As the number of these devices, and the services they provide, increases, our needs for bandwidth and interoperability grow as well. Although deploying additional infrastructure and future protocols may alleviate these problems, efficient use of the available resources remains important. We are interested in the problem of identifying the properties of a system able to operate using multiple interfaces, take advantage of user locations, identify the users that should be involved in routing, and set up a mechanism for information dissemination. The challenges we need to overcome arise from network complexity and heterogeneity, as well as the fact that these networks have no single owner or manager. In this thesis I focus on two cases: utilizing "in-situ" WiFi access points to enhance the connections of mobile users, and establishing "Virtual Access Points" in locations where no fixed roadside equipment is available. Both environments have attracted interest in numerous related works. In the first case the main effort is to take advantage of the available bandwidth, while in the second it is to provide delay-tolerant connectivity, possibly in the face of disasters. Our main contribution is to utilize a database that stores user locations in the system, and to provide ways to use that information to improve system effectiveness. This feature allows our system to remain effective in specific scenarios and tests where other approaches fail.
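
    The location-database idea can be illustrated with a minimal lookup sketch (hypothetical names and a plain Euclidean distance as a deliberate simplification; the thesis's system presumably stores richer state than bare coordinates):

```python
import math

def nearest_ap(user_pos, ap_positions):
    """Return the access point closest to a user, given a table of known
    (virtual) access point coordinates, so the system can decide which
    nodes should take part in routing. Positions are (x, y) pairs."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(ap_positions, key=lambda ap: dist(user_pos, ap_positions[ap]))

aps = {"ap-1": (48.850, 2.350), "vap-7": (48.860, 2.340)}
print(nearest_ap((48.856, 2.344), aps))  # vap-7
```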

    Data management in dynamic distributed computing environments

    Get PDF
    Data management in parallel computing systems is a broad and increasingly important research topic. As network speeds have surged, so too has the movement to transition storage and computation loads to wide-area network resources. The Grid, the Cloud, and Desktop Grids all represent different aspects of this movement towards highly-scalable, distributed, utility computing. This dissertation contends that a peer-to-peer (P2P) networking paradigm is a natural match for data sharing within and between these heterogeneous network architectures. Peer-to-peer features such as dynamic discovery, fault tolerance, scalability, and ad-hoc security infrastructures provide excellent mappings for many of the requirements of today’s distributed computing environment. In recent years, volunteer Desktop Grids have seen a growth in data throughput as application areas expand and new problem sets emerge. These increasing data needs require storage networks that can scale to meet future demand while also facilitating expansion into new data-intensive research areas. Current practice is to mirror data from centralized locations, a technique that is not practical for growing data sets, dynamic projects, or data-intensive applications. The fusion of Desktop and Service Grids provides an ideal use case for researching peer-to-peer data distribution strategies in a hybrid environment. Desktop Grids have a data management gap, while integration with Service Grids raises new challenges with regard to cross-platform design. The work undertaken here is two-fold: first, it explores how P2P techniques can be leveraged to meet the data management needs of Desktop Grids; second, it shows how the same distribution paradigm can provide migration paths for Service Grid data. The result of this research is a Peer-to-Peer Architecture for Data-Intensive Cycle Sharing (ADICS) that is capable not only of distributing volunteer computing data, but also of providing a transitional platform and storage space for migrating Service Grid jobs to Desktop Grid environments.

    Service Quality Assessment for Cloud-based Distributed Data Services

    Full text link
    The issue of less-than-100% reliability and trustworthiness of third-party-controlled cloud components (e.g., IaaS and SaaS components from different vendors) may lead to laxity in the QoS guarantees offered by a service-support system S to various applications. An example of S is a replicated data service that handles customer queries with fault-tolerance and performance goals. QoS laxity (i.e., SLA violations) may be inadvertent: say, due to the inability of system designers to model the impact of sub-system behaviors on deliverable QoS. Sometimes, QoS laxity may even be intentional: say, to reap revenue-oriented benefits by cheating on resource allocations and/or excessive statistical sharing of system resources (e.g., VM cycles, number of servers). Our goal is to assess how well the internal mechanisms of S are geared to offer a required level of service to the applications. We use computational models of S to determine the optimal feasible resource schedules and verify how close the actual system behavior is to a model-computed 'gold standard'. Our QoS assessment methods allow comparing different service vendors (possibly with different business policies) in terms of canonical properties such as elasticity, linearity, isolation, and fairness (analogous to a comparative rating of restaurants). Case studies of cloud-based distributed applications are described to illustrate our QoS assessment methods. Specific systems studied in the thesis are: i) replicated data services, where the servers may be hosted on multiple data centers for fault-tolerance and performance reasons; and ii) content delivery networks serving geographically distributed clients, where the content data caches may reside on different data centers. The methods studied in the thesis are useful in various contexts of QoS management and self-configuration in large-scale cloud-based distributed systems that are inherently complex due to size, diversity, and environment dynamicity.
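
    To make the canonical properties concrete, the sketch below computes two plausible assessment metrics: Jain's fairness index and the relative laxity of observed behavior against a model-computed gold standard (hypothetical formulations; the thesis may define these properties differently):

```python
def jains_fairness(xs):
    """Jain's fairness index: 1.0 when all clients receive equal service,
    tending to 1/n as the allocation grows maximally unfair."""
    n = len(xs)
    return sum(xs) ** 2 / (n * sum(x * x for x in xs))

def qos_laxity(actual, gold):
    """Per-metric relative shortfall of measured behavior against the
    model-computed gold standard; positive values indicate SLA laxity."""
    return {k: (gold[k] - actual[k]) / gold[k] for k in gold}

print(jains_fairness([10, 10, 10]))                       # 1.0 (perfectly fair)
print(qos_laxity({"tput_mbps": 80}, {"tput_mbps": 100}))  # {'tput_mbps': 0.2}
```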

    OpenCache: a content delivery platform for the modern internet

    Get PDF
    Since its inception, the World Wide Web has revolutionised the way we share information, keep in touch with each other, and consume content. In the latter case, it is now used by thousands of simultaneous users to consume video, and has surpassed physical media as the primary means of distribution. With the rise of on-demand services and, more recently, high-definition media, this popularity has not waned. To support this consumption, the underlying infrastructure has been forced to evolve at a rapid pace. This includes the technology and mechanisms that facilitate the transmission of video, which is now offered at varying levels of quality and resolution. Content delivery networks are often deployed to scale the distribution provision. These vary in nature and design, from third-party providers running entirely as a service to others, to in-house solutions owned by the content service providers themselves. However, recent innovations in networking and virtualisation, namely Software Defined Networking and Network Function Virtualisation, have paved the way for new content delivery infrastructure designs. In this thesis, we discuss the motivation behind OpenCache, a next-generation content delivery platform. We examine how we can leverage these emerging technologies to provide a more flexible and scalable solution to content delivery. This includes analysing the feasibility of novel redirection techniques, and how these compare to existing means. We also investigate the creation of a unified interface from which the platform can be precisely controlled, allowing new applications to be created that operate in harmony with the infrastructure provision. Developments in distributed virtualisation platforms also enable functionality to be spread throughout a network, influencing the design of OpenCache. Through a prototype implementation, we evaluate each of these facets in a number of different scenarios, made possible through deployment on large-scale testbeds.
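
    Of the redirection techniques mentioned, the classic application-layer variant is easy to sketch. The toy HTTP 302 redirector below (hypothetical cache address; OpenCache's SDN-based redirection rewrites flows in the network instead) illustrates the existing baseline such novel techniques are compared against:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

CACHE_NODE = "http://cache-1.example.net"  # hypothetical in-network cache

class Redirector(BaseHTTPRequestHandler):
    # Answer every request with a 302 pointing at the nearby cache,
    # preserving the requested path. Transparent SDN-based redirection
    # avoids this extra client round-trip entirely.
    def do_GET(self):
        self.send_response(302)
        self.send_header("Location", CACHE_NODE + self.path)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Redirector).serve_forever()
```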

    Leveraging The Multi-Disciplinary Approach to Countering Organised Crime

    Get PDF
    This paper provides a high-level evaluation of organised crime and the threats arising from online organised crime, within a multi-disciplinary perspective. It draws on a range of academic, industry and other materials to distinguish the key characteristics of online organised crime and to identify some of the multi-disciplinary resources which are available to counter it. Real-life case studies and other examples, together with the Tables in the Appendices, are used to demonstrate how contemporary online organised crime is profit-driven and has a strong commercial focus. The paper is accompanied by a series of Appendices and Glossaries and a comprehensive Reference list (provided within a separate document to facilitate cross-referencing with this paper) that includes suggestions for further reading and research. Section Three begins by demonstrating that there are many possible approaches which can be taken towards organised crime, which may at first appear confusing, contradictory or overwhelming. It notes that law enforcement is adopting a multi-disciplinary approach and working in partnership with other sectors, including the business sector, to counter the problem. Next, the paper attempts to separate the ‘fact from the fiction’ of organised crime, highlighting the pitfalls of relying on any single source (for instance, media reports or statistics) when analysing the subject. It identifies reliable sources of information about organised crime (for instance, the United Nations Convention on Transnational Organised Crime and several established academic sources) and aggregates some of the key organised crime characteristics from these sources within Tables 1 to 6 in Appendix A. Having established that, despite initial impressions, it is possible to obtain a consensus view about theoretical organised crime characteristics within carefully-defined parameters, the project aligns the theoretical criteria against real-life online organised crime case studies. This establishes that, although there are many similarities between terrestrial and online organised crime groups (OOCGs), the online groups also display characteristics which are unique to them, for instance a high dependence on the use of the Internet and transnational strategies. With regard to online involvement by ‘traditional’ organised crime groups such as the Mafia, the paper highlights that, although there is some indication in both the theoretical literature and the case studies that traditional organised crime groups are targeting the Internet, the evidence in the case studies suggests that their involvement is not a dominant feature at the moment. In Section Four, the paper assumes a non-technical IS perspective and describes some of the vulnerable elements within information technology, especially within the structures of the Internet and the Web, which all offenders, including OOCGs, are exploiting. It explains some of the reasons why these vulnerabilities exist and why they are attractive to offenders. In particular, it highlights the serious threat which crimeware, often sold and distributed by OOCGs, poses to the Web environment. In Section Five, the paper shifts to a business perspective, emphasising the importance of understanding online organised crime business models and mentioning the work of particular authors whose work in this field adopts a multi-disciplinary approach.
    The paper then uses Morphological Analysis (MA) to demonstrate how a multi-disciplinary approach to strategic analysis can utilise the skills and experience of IS/business professionals, as well as assisting them to manage the threat which OOCGs may pose to their business. The paper concludes with the observation, from academic and industry sources, that directly targeting the profit-making aspects of an online organised crime business may be one of the most effective responses to the problem.