
    Transparent and scalable client-side server selection using netlets

    Replication of web content in the Internet has been found to improve the response time, performance and reliability offered by web services. When working with such distributed server systems, the location of servers with respect to client nodes affects the service response time perceived by clients, in addition to server load conditions, because of the characteristics of the network path segments through which client requests are routed. Hence, a number of researchers have advocated making server selection decisions at the client side of the network. In this paper, we present a transparent approach for client-side server selection in the Internet using Netlet services. Netlets are autonomous, nomadic mobile software components which persist and roam in the network independently, providing predefined network services. In this application, Netlet-based services embedded with intelligence to support server selection are deployed by servers close to potential client communities to set up dynamic service decision points within the network. An anycast address is used to identify the available distributed decision points in the network. Each service decision point transparently directs client requests to the best-performing server, based on its built-in intelligence supported by real-time measurements from probes sent by the Netlet to each server. It is shown that the resulting system provides a client-side server selection solution which is server-customisable, scalable and fault transparent.
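
    As an illustration of the probe-and-select idea described in this abstract, the sketch below shows a decision point measuring a rough round-trip time to each candidate replica with a TCP connect probe and picking the fastest one. The server addresses, probe method and selection metric are assumptions for illustration only; the paper's Netlet services carry out this role inside the network rather than as a standalone script.

        import socket
        import time

        # Hypothetical replica servers; the real system would learn these from
        # the servers that deploy the Netlet-based decision points.
        REPLICAS = [("replica1.example.org", 80), ("replica2.example.org", 80)]

        def probe_rtt(host, port, timeout=1.0):
            """Measure a rough round-trip time with a TCP connect probe."""
            start = time.monotonic()
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return time.monotonic() - start
            except OSError:
                return float("inf")  # unreachable servers are never selected

        def select_best_server(replicas):
            """Return the replica with the lowest measured probe RTT."""
            return min(replicas, key=lambda hp: probe_rtt(*hp))

        if __name__ == "__main__":
            best = select_best_server(REPLICAS)
            print("Redirecting client to", best)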

    A Highly Available Cluster of Web Servers with Increased Storage Capacity

    Proceedings of the Seventeenth Jornadas de Paralelismo of the Universidad de Castilla-La Mancha, held on 18, 19 and 20 September 2006 in Albacete. Web server scalability has traditionally been addressed by improving the software elements or increasing the hardware resources of the server machine. Another approach has been the use of distributed architectures. In such architectures, the file allocation strategy has usually been either full replication or full distribution. In previous work we have shown that partial replication offers a good balance between storage capacity and reliability: it offers much higher storage capacity, while reliability can be kept at a level equivalent to that of fully replicated solutions. In this paper we present the architectural details of Web cluster solutions adapted to partial replication. We also show that partial replication does not imply a performance penalty over classical fully replicated architectures. For evaluation purposes we have used a simulation model under the OMNeT++ framework, with mean service time as the performance comparison metric.
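
    The trade-off between full replication and full distribution can be sketched with a small file-allocation example: each file is mapped to k of the n cluster servers, so storage cost scales with roughly k/n of a fully replicated cluster while each file still survives up to k-1 server failures. The hashing scheme and parameter values below are assumptions for illustration and are not the allocation strategy evaluated in the paper.

        import hashlib

        N_SERVERS = 8        # cluster size (assumed)
        REPLICATION_K = 2    # copies per file: between full distribution (1)
                             # and full replication (N_SERVERS)

        def replica_set(filename, n_servers=N_SERVERS, k=REPLICATION_K):
            """Map a file to k distinct servers using a hash of its name."""
            h = int(hashlib.sha1(filename.encode()).hexdigest(), 16)
            first = h % n_servers
            return [(first + i) % n_servers for i in range(k)]

        if __name__ == "__main__":
            for f in ["index.html", "img/logo.png", "docs/report.pdf"]:
                print(f, "->", replica_set(f))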

    MSPlayer: Multi-Source and multi-Path LeverAged YoutubER

    Online video streaming on mobile devices has become extremely popular. YouTube, for example, reported that the share of its traffic streamed to mobile devices soared from 6% to more than 40% over the past two years. Moreover, people constantly seek to stream high-quality video for a better experience while often suffering from limited bandwidth. Thanks to the rapid deployment of content delivery networks (CDNs), popular videos are now replicated at different sites, and users can stream videos from nearby locations with low latency. As mobile devices are now equipped with multiple wireless interfaces (e.g., WiFi and 3G/4G), aggregating bandwidth for high-definition video streaming has become possible. We propose a client-based video streaming solution, MSPlayer, that takes advantage of multiple video sources as well as multiple network paths through different interfaces. MSPlayer reduces start-up latency and provides high-quality video streaming and robust data transport in mobile scenarios. We experimentally demonstrate our solution on a testbed and through the YouTube video service. Comment: accepted to ACM CoNEXT'1
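
    A minimal sketch of the multi-source, multi-path idea: the client keeps a throughput estimate per (interface, CDN source) pair and assigns the next video chunk to the pair expected to finish it earliest. The path names, chunk size and scheduling rule below are illustrative assumptions, not MSPlayer's actual scheduler.

        from dataclasses import dataclass

        @dataclass
        class Path:
            name: str                 # e.g. "WiFi -> CDN-A" (illustrative only)
            throughput_mbps: float    # current estimate from recent downloads
            queued_bits: float = 0.0  # bits scheduled but not yet received

        def schedule_chunk(paths, chunk_bits):
            """Assign the next chunk to the path with the earliest expected finish time."""
            def finish_time(p):
                return (p.queued_bits + chunk_bits) / (p.throughput_mbps * 1e6)
            best = min(paths, key=finish_time)
            best.queued_bits += chunk_bits
            return best

        if __name__ == "__main__":
            paths = [Path("WiFi/CDN-A", 20.0), Path("LTE/CDN-B", 8.0)]
            chunk_bits = 2 * 1024 * 1024 * 8   # a 2 MB video chunk
            for i in range(5):
                p = schedule_chunk(paths, chunk_bits)
                print("chunk", i, "->", p.name)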

    Tars: Timeliness-aware Adaptive Replica Selection for Key-Value Stores

    In current large-scale distributed key-value stores, a single end-user request may lead to key-value accesses across tens or hundreds of servers. The tail latency of these key-value accesses is crucial to the user experience and greatly impacts revenue. To cut the tail latency, it is crucial for clients to choose the fastest replica server, as far as possible, for each key-value access. Aware of the challenges posed by time-varying performance across servers and by herd behavior, an adaptive replica selection scheme, C3, was recently proposed. In C3, feedback from individual servers is incorporated into replica ranking to reflect the time-varying performance of servers, and a distributed rate control and backpressure mechanism is introduced. Despite C3's good performance, we reveal a timeliness issue in C3 which has a large impact on both the replica ranking and the rate control, and we propose the Tars (timeliness-aware adaptive replica selection) scheme. Following the same framework as C3, Tars improves the replica ranking by taking the timeliness of the feedback information into consideration, and also revises C3's rate control. Simulation results confirm that Tars outperforms C3. Comment: 10 pages, submitted to ICDCS 201
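
    The timeliness concern can be sketched as a replica score in which server feedback (queue length, service time) is discounted as it ages, so stale reports weigh less in the ranking. The formula and parameters below are purely illustrative assumptions and are not the published C3 or Tars scoring functions.

        import time

        def rank_score(queue_len, service_time_s, feedback_age_s,
                       staleness_half_life_s=0.5):
            """Illustrative replica score: lower is better.

            queue_len and service_time_s come from server feedback; the weight
            of that feedback decays with its age, so stale reports matter less.
            """
            freshness = 0.5 ** (feedback_age_s / staleness_half_life_s)
            estimated_wait = queue_len * service_time_s
            # Blend the feedback-based estimate with a pessimistic default
            # that dominates when the feedback is stale.
            pessimistic_default = 10 * service_time_s
            return freshness * estimated_wait + (1 - freshness) * pessimistic_default

        if __name__ == "__main__":
            now = time.time()
            replicas = {
                "A": (4, 0.002, now - 0.05),  # fresh feedback, longer queue
                "B": (1, 0.002, now - 2.0),   # short queue, but feedback is stale
            }
            scores = {r: rank_score(q, s, now - t) for r, (q, s, t) in replicas.items()}
            print(min(scores, key=scores.get), scores)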

    Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge

    Microservices architectures combine the use of fine-grained and independently scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud. Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on their connection to the microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is, however, difficult to decide on the placement of complete stateful microservices at one specific core or edge location without trading a latency reduction for some users against a latency increase for others. We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split with only minimal changes to a legacy microservices application. Locality awareness using network coordinates further enables service splits to be migrated automatically to follow the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
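
    A toy sketch of locality-aware selection with network coordinates: each deployment site of a microservice split carries a coordinate, and a request is routed to the split whose coordinate is closest to the client's. The coordinates and site names below are hypothetical assumptions; Koala's actual discovery and migration logic is more involved than this nearest-coordinate rule.

        import math

        # Hypothetical 2-D network coordinates (e.g. Vivaldi-style, in ms).
        SPLIT_SITES = {
            "core-dc": (0.0, 0.0),
            "edge-eu": (12.0, 3.0),
            "edge-us": (-40.0, 8.0),
        }

        def nearest_split(client_coord, sites=SPLIT_SITES):
            """Pick the split whose network coordinate is closest to the client."""
            return min(sites, key=lambda name: math.dist(client_coord, sites[name]))

        if __name__ == "__main__":
            print(nearest_split((10.0, 5.0)))   # a client near the EU edge site
            print(nearest_split((-35.0, 6.0)))  # a client near the US edge site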

    Peer to Peer Information Retrieval: An Overview

    Peer-to-peer technology is widely used for file sharing. In the past decade a number of prototype peer-to-peer information retrieval systems have been developed. Unfortunately, none of these have seen widespread real-world adoption and thus, in contrast with file sharing, information retrieval is still dominated by centralised solutions. In this paper we provide an overview of the key challenges for peer-to-peer information retrieval and the work done so far. We want to stimulate and inspire further research to overcome these challenges. This will open the door to the development and large-scale deployment of real-world peer-to-peer information retrieval systems that rival existing centralised client-server solutions in terms of scalability, performance, user satisfaction and freedom.