
    APRE: A Replication Method for Unstructured P2P Networks

    We present APRE, a replication method for structureless Peer-to-Peer overlays. The goal of our method is to achieve real-time replication of content relative to demand, even for the most sparsely located objects. APRE adaptively expands or contracts the replica set of an object in order to improve the sharing process and achieve a balanced load distribution among the providers. To achieve this, it utilizes search knowledge to identify possible replication targets inside query-intensive areas of the overlay. We present detailed simulation results in which APRE exhibits both efficiency and robustness with respect to the number of requesters and the respective request rates. The scheme proves particularly useful in the event of flash crowds, adapting quickly to sudden surges in load.
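
    To make the expand/contract idea above concrete, the fragment below is a minimal sketch of demand-driven replica adaptation; the thresholds, the bookkeeping, and the push_replica/remove_self hooks are illustrative assumptions rather than APRE's actual protocol messages or parameters.

```python
# Minimal sketch of APRE-style adaptive expansion/contraction.
# Thresholds and hooks are hypothetical; the paper defines the real protocol.

class Replica:
    def __init__(self, node_id, high_load=0.8, low_load=0.2):
        self.node_id = node_id
        self.high_load = high_load   # utilization above which we expand
        self.low_load = low_load     # utilization below which we contract
        self.query_hits = {}         # neighbor id -> queries forwarded through it

    def record_query(self, from_neighbor):
        # "Search knowledge": count how many requests arrive via each neighbor,
        # so that expansion can target query-intensive areas of the overlay.
        self.query_hits[from_neighbor] = self.query_hits.get(from_neighbor, 0) + 1

    def adapt(self, utilization, push_replica, remove_self):
        # Called periodically with this replica's current utilization in [0, 1].
        if utilization > self.high_load and self.query_hits:
            # Expand: replicate toward the neighbor that forwards the most queries.
            push_replica(max(self.query_hits, key=self.query_hits.get))
        elif utilization < self.low_load:
            # Contract: this copy serves little demand, so retire it.
            remove_self(self.node_id)
```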

    A cost-efficient QoS-aware analytical model of future software content delivery networks

    Freelance, part-time, work-at-home, and other flexible jobs are changing the concept of the workplace and bringing information- and content-exchange problems to companies. Geographically spread corporations may use remote distribution of software and data to meet employees' demands by exploiting emerging delivery technologies. In this context, cost-efficient software distribution is crucial to allow business evolution and make IT infrastructures more agile. At the same time, container-based virtualization technology is shaping the new trends of software deployment and infrastructure design. We envision current and future enterprise IT management trends evolving towards container-based software delivery over Hybrid CDNs. This paper presents a novel cost-efficient, QoS-aware analytical model and a Hybrid CDN-P2P architecture for enterprise software distribution. The model allows delivery-cost minimization for a wide range of companies, from big multinationals to SMEs, using CDN-P2P distribution under various hypothetical industrial scenarios. Model constraints guarantee acceptable deployment times and keep the amount of exchanged content below the network bandwidth and storage limits in our scenarios. Key model parameters account for network bandwidth, storage limits and rental prices, which are determined empirically from the values offered by the commercial delivery networks KeyCDN, MaxCDN, CDN77 and BunnyCDN. This preliminary study indicates that MaxCDN offers the best cost-QoS trade-off. The model is implemented in the network simulation tool PeerSim and then applied to diverse testing scenarios by varying company types, the number and profile (technical or administrative) of employees, and the number and size of content requests. Hybrid simulation results show overall economic savings between 5% and 20%, compared to hiring resources only from a commercial CDN, while guaranteeing satisfactory QoS levels in terms of deployment times and number of served requests. This work was partially supported by the Generalitat de Catalunya under the SGR Program (2017-SGR-962) and the RIS3CAT DRAC Project (001-P-001723), and by the Ministry of Science and Innovation (Spain) under project EQC2019-005653-P.
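
    As a rough illustration of the kind of cost/QoS trade-off such a model captures, the toy sketch below scans CDN/P2P traffic splits and keeps the cheapest split that still meets a deployment-time deadline; the prices, bandwidths and the parallel-transfer assumption are placeholders, not the calibrated parameters or constraints of the paper's PeerSim-based model.

```python
# Toy sketch of a hybrid CDN-P2P delivery-cost trade-off.
# All prices, bandwidths and the time model are illustrative assumptions.

def delivery_cost(gb_total, cdn_fraction, cdn_price_per_gb, p2p_price_per_gb):
    """Total rental cost when cdn_fraction of the traffic is served by the CDN
    and the rest by employee peers."""
    cdn_gb = gb_total * cdn_fraction
    return cdn_gb * cdn_price_per_gb + (gb_total - cdn_gb) * p2p_price_per_gb

def deployment_time(gb_total, cdn_fraction, cdn_mbps, p2p_mbps):
    """Rough deployment time in hours, assuming both paths transfer in parallel
    (GB -> megabits via the factor 8000, then divided by Mbps and 3600 s/h)."""
    cdn_hours = (gb_total * cdn_fraction * 8000) / (cdn_mbps * 3600)
    p2p_hours = (gb_total * (1 - cdn_fraction) * 8000) / (p2p_mbps * 3600)
    return max(cdn_hours, p2p_hours)

def best_split(gb_total, max_hours, cdn_price, p2p_price, cdn_mbps, p2p_mbps):
    """Scan CDN/P2P splits and keep the cheapest one meeting the QoS deadline."""
    best = None
    for f in (i / 100 for i in range(101)):
        if deployment_time(gb_total, f, cdn_mbps, p2p_mbps) <= max_hours:
            cost = delivery_cost(gb_total, f, cdn_price, p2p_price)
            if best is None or cost < best[1]:
                best = (f, cost)
    return best  # (CDN fraction, cost) or None if no split meets the deadline
```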

    Distributing streaming media content using cooperative networking

    SEARCH, REPLICATION AND GROUPING FOR UNSTRUCTURED P2P NETWORKS

    In my dissertation, I present a suite of protocols that assist in efficient content location and distribution in unstructured Peer-to-Peer overlays. The basis of these schemes is their ability to learn from past interactions, increasing their performance with time. Peer-to-Peer (P2P) networks are gaining increasing attention from both the scientific community and the large Internet user community. Popular applications utilizing this new technology offer many attractive features to a growing number of users. P2P systems have two basic functions: content search and dissemination. Search (or lookup) protocols define how participants locate remotely maintained resources. In data dissemination, users transmit or receive content from single or multiple sites in the network. P2P applications traditionally operate in purely decentralized and highly dynamic environments. Unstructured systems represent a particularly interesting class of P2P networks. Peers form an overlay in an ad-hoc manner, without any guarantees regarding lookup performance or content availability. Resources are locally maintained, while participants have limited knowledge, usually confined to their immediate neighborhood in the overlay. My work aims at providing effective and bandwidth-efficient searching and data sharing. I present a suite of algorithms that provide peers in unstructured P2P overlays with the state necessary to efficiently locate, disseminate and replicate objects. The Adaptive Probabilistic Search (APS) scheme utilizes directed walkers to forward queries on a hop-by-hop basis. Peers store success probabilities for each of their neighbors in order to route efficiently towards object holders. AGNO performs implicit grouping of peers according to their demand and utilizes the state maintained by APS to route messages from content holders towards interested peers, without requiring any subscription process. Finally, the Adaptive Probabilistic REplication (APRE) scheme expands on the state that AGNO builds in order to replicate content inside query-intensive areas according to demand.
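
    As a rough sketch of the probabilistic forwarding idea behind APS, the fragment below keeps a per-neighbor, per-object index, chooses a walker's next hop with probability proportional to it, and reinforces it when the walker reports back; the initial value and the reward/penalty amounts are illustrative assumptions, not the dissertation's exact update rules.

```python
import random

class APSPeer:
    """Sketch of APS-style probabilistic forwarding; index values and
    reward/penalty amounts are illustrative assumptions."""

    def __init__(self, neighbors, init_index=30):
        # Per-neighbor, per-object index values: a higher value means a higher
        # probability of forwarding the walker to that neighbor.
        self.index = {n: {} for n in neighbors}
        self.init_index = init_index

    def pick_neighbor(self, obj):
        # Choose the next hop with probability proportional to its index value.
        neighbors = list(self.index)
        weights = [self.index[n].get(obj, self.init_index) for n in neighbors]
        return random.choices(neighbors, weights=weights, k=1)[0]

    def update(self, neighbor, obj, success, reward=10, penalty=5):
        # Reinforcement after the walker reports back: raise the index on a hit,
        # lower it (never below 1) on a miss, so later walkers steer toward
        # neighbors that have previously led to the object.
        current = self.index[neighbor].get(obj, self.init_index)
        self.index[neighbor][obj] = max(1, current + (reward if success else -penalty))
```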

    Content Distribution in P2P Systems

    The report provides a literature review of the state of the art in content distribution. The report's contributions are threefold. First, it gives more insight into traditional Content Distribution Networks (CDN), their requirements and open issues. Second, it discusses Peer-to-Peer (P2P) systems as a cheap and scalable alternative to CDN and extracts their design challenges. Finally, it evaluates the existing P2P systems dedicated to content distribution against the identified requirements and challenges.

    Content Replication and Placement Schemes for Wireless Mesh Networks

    Recently, Wireless Mesh Networks (WMNs) have attracted much interest from both academia and industry, due to their potential to provide alternative broadband wireless Internet connectivity. However, for reasons such as multi-hop forwarding and dynamic wireless link characteristics, the performance of current WMNs is rather low when clients are soliciting Web content. With the evolution of advanced mobile computing devices, it is anticipated that the demand for bandwidth-onerous popular content (especially multimedia content) in WMNs will increase dramatically in the near future. Content replication is a popular approach for outsourcing content on behalf of the origin content provider. This area has been well explored in the context of the wired Internet, but has received comparatively less attention from the research community when it comes to WMNs. A number of replica placement algorithms are designed specifically for the Internet, but they do not consider the special features of wireless networks, such as insufficient bandwidth, low server capacity, and contention for access to the wireless medium. This thesis studies the technical challenges encountered when transforming the traditional model of multi-hop WMNs from an access network into a content network. We advance the thesis that support from packet-relaying mesh routers acting as replica servers for popular content, such as media streaming, results in significant performance improvement. Such support from infrastructure mesh routers benefits from knowledge of the underlying network topology (i.e., information about the physical connections between network nodes is available at mesh routers). The utilization of cross-layer information from lower layers opens the door to developing efficient replication schemes that account for the specific features of WMNs (e.g., contention between nodes for the wireless medium and traffic interference). Moreover, such schemes can exploit the underutilized resources (e.g., storage and bandwidth) at mesh routers, enabling those infrastructure nodes to participate in content distribution and play the role of replica servers. The main contribution of this thesis is the design of two lightweight, distributed, and scalable object replication schemes for WMNs. The first scheme follows a hierarchical approach, while the second follows a flat one. The challenge is to replicate content as close as possible to the requesting clients, thus reducing the access latency per object, while minimizing the number of replicas. The two schemes aim to answer the questions of where and how many replicas should be placed in the WMN. In our schemes, we consider the underlying topology jointly with link-quality metrics to improve the quality of experience. We show through simulation tests that the schemes significantly enhance the performance of a WMN in terms of reducing access cost, bandwidth consumption and computation/communication cost.
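
    The "where and how many replicas" question can be approximated with a standard greedy facility-location heuristic, sketched below; the dist table is assumed to hold link-quality-aware path costs (e.g. ETX-weighted hop counts), and the thesis' hierarchical and flat schemes are considerably more elaborate than this.

```python
# Greedy replica placement sketch (k-median-style heuristic, not the thesis'
# actual hierarchical/flat schemes). dist[(a, b)] is an assumed precomputed
# link-quality-aware path cost between nodes a and b.

def access_cost(clients, replicas, dist):
    """Total cost when each client fetches from its closest replica."""
    return sum(min(dist[(c, r)] for r in replicas) for c in clients)

def place_replicas(routers, clients, dist, origin, max_replicas):
    """Starting from the origin server, greedily add the mesh router that
    reduces total access cost the most, until the replica budget is spent
    or no additional router helps."""
    replicas = {origin}
    while len(replicas) < max_replicas:
        best, best_cost = None, access_cost(clients, replicas, dist)
        for r in routers:
            if r in replicas:
                continue
            cost = access_cost(clients, replicas | {r}, dist)
            if cost < best_cost:
                best, best_cost = r, cost
        if best is None:
            break
        replicas.add(best)
    return replicas
```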

    System support for keyword-based search in structured Peer-to-Peer systems

    In this dissertation, we present protocols for building a distributed search infrastructure over structured Peer-to-Peer systems. Unlike existing search engines, which consist of large server farms managed by a centralized authority, our approach makes use of a distributed set of end-hosts built out of commodity hardware. These end-hosts cooperatively construct and maintain the search infrastructure. The main challenges in distributing such a system include node failures, churn, and data migration. Localities inherent in query patterns also cause load imbalances and hot spots that severely impair performance. Users of search systems want their results returned quickly, and in ranked order. Our main contribution is to show that a scalable, robust, and distributed search infrastructure can be built over existing Peer-to-Peer systems through the use of techniques that address these problems. We present a decentralized scheme for ranking search results without prohibitive network or storage overhead. We show that caching allows for efficient query evaluation and present a distributed data structure, called the View Tree, that enables efficient storage and retrieval of cached results. We also present a lightweight adaptive replication protocol, called LAR, that can adapt to different kinds of query streams and is extremely effective at eliminating hotspots. Finally, we present techniques for storing indexes reliably. Our approach is to use an adaptive partitioning protocol to store large indexes and to employ efficient redundancy techniques to handle failures. Through detailed analysis and experiments we show that our techniques are efficient and scalable, and that they make distributed search feasible.
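
    The hotspot-elimination idea behind adaptive replication protocols such as LAR can be pictured as simple load-triggered replication: an overloaded node pushes a copy of its hottest entry toward the node that forwards most of the offending queries. The interval, threshold and target choice below are illustrative assumptions, not LAR's actual mechanism.

```python
# Sketch of load-triggered replication in the spirit of LAR.
# Capacity, interval and target selection are illustrative assumptions.
from collections import Counter

class IndexNode:
    def __init__(self, node_id, capacity=100):
        self.node_id = node_id
        self.capacity = capacity   # queries per interval this node can absorb
        self.hits = Counter()      # keyword -> queries seen this interval
        self.upstream = Counter()  # (keyword, last-hop node) -> query count

    def on_query(self, keyword, from_node):
        self.hits[keyword] += 1
        self.upstream[(keyword, from_node)] += 1

    def end_of_interval(self, replicate):
        # If overloaded, push a replica of the hottest keyword's entries to the
        # node that forwards most of its queries, so later queries are absorbed
        # before they reach this hotspot.
        if sum(self.hits.values()) > self.capacity:
            hot_kw, _ = self.hits.most_common(1)[0]
            candidates = {n: c for (kw, n), c in self.upstream.items() if kw == hot_kw}
            if candidates:
                replicate(hot_kw, max(candidates, key=candidates.get))
        self.hits.clear()
        self.upstream.clear()
```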

    Scalable hosting of web applications

    Modern Web sites have evolved from simple monolithic systems to complex multi-tiered systems. In contrast to traditional Web sites, these sites do not simply deliver pre-written content but dynamically generate content using (one or more) multi-tiered Web applications. In this thesis, we address the question: how can multi-tiered Web applications be hosted in a scalable manner? Scaling up a Web application requires scaling its individual tiers. To this end, various research works have proposed techniques that employ replication or caching solutions at different tiers. However, most of these techniques aim to optimize the performance of individual tiers rather than the entire application. A key observation made in our research is that there exists no single "elixir" technique that performs best for all Web applications. Effective hosting of a Web application requires careful selection and deployment of several techniques at different tiers. To this end, we present several caching and replication strategies, such as GlobeCBC, GlobeDB and GlobeTP, to improve the scalability of different tiers of a Web application. While these techniques and systems improve the performance of the individual tiers (and eventually the application), an application's administrator is interested not only in the performance of its individual tiers but also in its end-to-end performance. To this end, we propose a resource provisioning approach that allows us to choose the best resource configuration for hosting a Web application such that its end-to-end response time is optimized with minimum usage of resources. The proposed approach is based on an analytical model for multi-tier systems, which allows us to derive expressions for estimating the mean end-to-end response time and its variance.
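
    As a back-of-the-envelope illustration of this kind of provisioning decision, the sketch below models each tier as an M/M/1 queue and searches for the smallest server configuration whose mean end-to-end response time meets a target; the queueing assumption and numbers are placeholders, and the thesis derives a more detailed model that also covers response-time variance.

```python
# Toy provisioning sketch: per-tier M/M/1 queues, exhaustive configuration scan.
# The queueing model and parameters are illustrative, not the thesis' model.
from itertools import product

def tier_response_time(arrival_rate, service_time, servers):
    """Mean response time of one tier, modeling each of its `servers` replicas
    as an M/M/1 queue that receives an equal share of the requests."""
    per_server_rate = arrival_rate / servers
    if per_server_rate * service_time >= 1.0:
        return float("inf")  # tier is saturated
    return service_time / (1.0 - per_server_rate * service_time)

def cheapest_config(arrival_rate, service_times, sla, max_per_tier=10):
    """Smallest total number of servers whose end-to-end response time meets the SLA."""
    best = None
    for config in product(range(1, max_per_tier + 1), repeat=len(service_times)):
        rt = sum(tier_response_time(arrival_rate, s, n)
                 for s, n in zip(service_times, config))
        if rt <= sla and (best is None or sum(config) < sum(best)):
            best = config
    return best  # e.g. (web, app, db) server counts, or None if the SLA is infeasible
```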