
    V-Cache: Towards Flexible Resource Provisioning for Multi-tier Applications in IaaS Clouds

    Abstract—Although the resource elasticity offered by Infrastructure-as-a-Service (IaaS) clouds opens up opportunities for elastic application performance, it also poses challenges to application management. Cluster applications, such as multi-tier websites, further complicate management, requiring not only accurate capacity planning but also proper partitioning of the resources into a number of virtual machines. Instead of burdening cloud users with complex management, we move the task of determining the optimal resource configuration for cluster applications to cloud providers. We find that a structural reorganization of multi-tier websites, adding a caching tier that runs on resources debited from the original resource budget, significantly boosts application performance and reduces resource usage. We propose V-Cache, a machine learning based approach to flexible provisioning of resources for multi-tier applications in clouds. V-Cache transparently places a caching proxy in front of the application. It uses a genetic algorithm to identify the incoming requests that benefit most from caching and dynamically resizes the cache space to accommodate these requests. We develop a reinforcement learning algorithm to optimally allocate the remaining capacity to the other tiers. We have implemented V-Cache on a VMware-based cloud testbed. Experiment results with the RUBiS and WikiBench benchmarks show that V-Cache outperforms a representative capacity management scheme and a cloud-cache based resource provisioning approach by at least 15% in performance, and achieves at least 11% and 21% savings on CPU and memory resources, respectively.
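    To make the genetic-algorithm idea concrete, here is a minimal sketch (not the authors' implementation) of selecting which request classes to cache under a fixed cache-space budget. The request-class sizes, benefit estimates, and all parameter values are illustrative assumptions.

```python
# Sketch: genetic selection of request classes to cache under a space budget.
# Everything here (sizes, benefits, budget, GA parameters) is hypothetical.
import random

random.seed(0)

# Hypothetical request classes: (size_mb, estimated_benefit) pairs from traces.
CLASSES = [(random.uniform(1, 50), random.uniform(0, 10)) for _ in range(20)]
BUDGET_MB = 200.0

def fitness(genome):
    """Total estimated benefit of the selected classes; infeasible genomes
    (over the space budget) score zero."""
    size = sum(s for (s, _), g in zip(CLASSES, genome) if g)
    gain = sum(b for (_, b), g in zip(CLASSES, genome) if g)
    return gain if size <= BUDGET_MB else 0.0

def evolve(pop_size=40, generations=100, mutation=0.05):
    pop = [[random.random() < 0.5 for _ in CLASSES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(CLASSES))   # one-point crossover
            child = a[:cut] + b[cut:]
            child = [not g if random.random() < mutation else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("cached classes:", [i for i, g in enumerate(best) if g],
      "fitness:", round(fitness(best), 2))
```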

    Improving I/O Performance using Cache as a Service on Cloud

    Caching is gaining popularity in the cloud world. It is one of the key technologies for bridging the performance gap across the memory hierarchy by exploiting spatial and temporal locality. In cloud systems, heavy I/O activity is associated with many applications, and this heavy I/O activity degrades performance; these are the applications that would benefit most from caching. We propose the use of a Cache as a Service (CaaS) model as a cost-efficient cache solution to the disk I/O problem. We have built a remote-memory based cache that is pluggable and file system independent to support various configurations. The cloud server process introduces a pricing model together with the elastic cache system. This increases the disk I/O performance of the IaaS and reduces the usage of physical machines. DOI: 10.17762/ijritcc2321-8169.150516
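    As an illustration of the read-through behaviour such a remote-memory cache provides, here is a minimal sketch. The class, its method names, and the dict standing in for the remote-memory store are assumptions for illustration, not the paper's actual interface.

```python
# Sketch: a pluggable read-through cache in front of a slow backing store.
# RemoteMemoryCache and its methods are hypothetical names, not the paper's API.
class RemoteMemoryCache:
    def __init__(self, remote_store, backing_read):
        self.remote = remote_store        # stands in for remote memory
        self.backing_read = backing_read  # slow disk / back-end read function

    def read(self, block_id):
        data = self.remote.get(block_id)
        if data is not None:
            return data                       # hit: served from remote memory
        data = self.backing_read(block_id)    # miss: fall back to disk
        self.remote[block_id] = data          # populate cache for future reads
        return data

# Usage: wrap a slow read path behind the cache; the second read is a hit.
disk = {42: b"block-42-bytes"}
cache = RemoteMemoryCache({}, lambda bid: disk[bid])
assert cache.read(42) == cache.read(42)
```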

    Cloud-based Content Distribution on a Budget

    To leverage the elastic nature of cloud computing, a solution provider must be able to accurately gauge demand for its offering. For applications that involve swarm-to-cloud interactions, gauging such demand is not straightforward. In this paper, we propose a general framework, analyze a mathematical model, and present a prototype implementation of a canonical swarm-to-cloud application, namely peer-assisted content delivery. Our system, called Cyclops, dynamically adjusts the off-cloud bandwidth consumed by content servers (which represents the bulk of the provider's cost) to feed a set of swarming clients, based on a feedback signal that gauges the real-time health of the swarm. Our extensive evaluation of Cyclops in a variety of settings, including controlled PlanetLab and live Internet experiments involving thousands of users, shows a significant reduction in content distribution costs (by as much as two orders of magnitude) compared to non-feedback-based swarming solutions, with minor impact on content delivery times.
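    The feedback loop described above can be sketched as a simple proportional controller. The choice of health signal (fraction of clients below a target download rate), the gain, and the bandwidth bounds are illustrative assumptions, not Cyclops' actual controller.

```python
# Sketch: adjust server (off-cloud) bandwidth from a swarm-health signal.
# Signal, gain, and bounds are hypothetical; units (MB/s) are illustrative.
def next_bandwidth(current_bw, unhealthy_fraction, target=0.05,
                   gain=2.0, min_bw=0.0, max_bw=100.0):
    """Proportional control: raise bandwidth when too many clients are slow,
    lower it when the swarm is self-sustaining."""
    error = unhealthy_fraction - target
    bw = current_bw + gain * error * max_bw
    return max(min_bw, min(max_bw, bw))

bw = 50.0
for unhealthy in [0.30, 0.12, 0.04, 0.01]:  # swarm health improving over time
    bw = next_bandwidth(bw, unhealthy)
    print(f"unhealthy={unhealthy:.2f} -> server bandwidth {bw:.1f} MB/s")
```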

    Split and Migrate: Resource-Driven Placement and Discovery of Microservices at the Edge

    Microservices architectures combine the use of fine-grained and independently-scalable services with lightweight communication protocols, such as REST calls over HTTP. Microservices bring flexibility to the development and deployment of application back-ends in the cloud. Applications such as collaborative editing tools require frequent interactions between the front-end running on users' machines and a back-end formed of multiple microservices. User-perceived latencies depend on their connection to microservices, but also on the interaction patterns between these services and their databases. Placing services at the edge of the network, closer to the users, is necessary to reduce user-perceived latencies. It is, however, difficult to decide on the placement of complete stateful microservices at one specific core or edge location without trading a latency reduction for some users against a latency increase for others. We present how to dynamically deploy microservices on a combination of core and edge resources to systematically reduce user-perceived latencies. Our approach enables the splitting of stateful microservices and the placement of the resulting splits on appropriate core and edge sites. Koala, a decentralized and resource-driven service discovery middleware, enables REST calls to reach and use the appropriate split, with only minimal changes to a legacy microservices application. Locality awareness using network coordinates further enables service splits to be migrated automatically so that they follow the location of the users. We confirm the effectiveness of our approach with a full prototype and an application to ShareLatex, a microservices-based collaborative editing application.
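    As a rough illustration of locality-aware selection with network coordinates, the sketch below picks the service split whose coordinate is closest to the client's. The data layout, names, and coordinates are assumptions for illustration, not Koala's actual API.

```python
# Sketch: pick the split with the nearest network coordinate (Vivaldi-style),
# approximating lowest round-trip latency. All names and values are hypothetical.
import math

def nearest_split(client_coord, splits):
    """Return the split whose coordinate is closest to the client's."""
    return min(splits,
               key=lambda s: math.dist(client_coord, s["coord"]))

splits = [
    {"name": "core-eu",  "coord": (0.0, 0.0)},
    {"name": "edge-par", "coord": (1.0, 0.5)},
    {"name": "edge-ber", "coord": (3.0, 2.0)},
]
print(nearest_split((1.2, 0.4), splits)["name"])  # -> edge-par
```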