6 research outputs found

    A Comparison of PVFS2 and GlusterFS in a Cloud Computing Environment

    Get PDF
    This work presents a comparative study of the performance of two distributed file systems, the Parallel Virtual File System 2 (PVFS2) and GlusterFS, in a cloud computing environment. The comparison is based on measuring the throughput of read and write operations issued from a client to a set of virtualized data servers. The tabulated results indicate that PVFS2 sustains a constant throughput, close to the available bandwidth, for all file sizes, whereas GlusterFS sustains high throughput for 1 MB files but drops significantly for larger files, in which cases it performs worse than PVFS2.
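    As a rough illustration of the kind of throughput test described in this abstract, the sketch below times sequential writes and reads against a distributed file system assumed to be mounted at a hypothetical path /mnt/dfs. The mount point, file sizes, and block size are illustrative assumptions, not the authors' actual benchmark setup.

```python
import os
import time

# Hypothetical mount point where the distributed file system under test
# (e.g. PVFS2 or GlusterFS) is assumed to be mounted by the administrator.
MOUNT_POINT = "/mnt/dfs"
FILE_SIZES_MB = [1, 16, 64, 256, 1024]  # assumed test sizes, not the paper's exact set
BLOCK = 1024 * 1024                     # read/write in 1 MiB blocks


def measure_throughput(size_mb: int) -> tuple[float, float]:
    """Return (write_MBps, read_MBps) for one sequential test file."""
    path = os.path.join(MOUNT_POINT, f"bench_{size_mb}mb.dat")
    payload = os.urandom(BLOCK)

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # ensure the data actually reaches the data servers
    write_mbps = size_mb / (time.perf_counter() - start)

    # Note: a rigorous benchmark would drop the client page cache before reading,
    # otherwise the read phase may be served locally instead of by the servers.
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(BLOCK):
            pass
    read_mbps = size_mb / (time.perf_counter() - start)

    os.remove(path)
    return write_mbps, read_mbps


if __name__ == "__main__":
    for size in FILE_SIZES_MB:
        w, r = measure_throughput(size)
        print(f"{size:>5} MB  write {w:7.1f} MB/s  read {r:7.1f} MB/s")
```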

    MemOpLight: Leveraging application feedback to improve container memory consolidation

    Get PDF
    The container mechanism amortizes costs by consolidating several servers onto the same machine while keeping them mutually isolated. Specifically, to ensure performance isolation, Linux relies on memory limits. These limits are static, despite the fact that application needs are dynamic; this results in poor performance. To solve this issue, MemOpLight uses dynamic application feedback to rebalance physical memory allocation between containers, focusing on under-performing ones. This paper presents the issues, explains the design of MemOpLight, and validates it experimentally. Our approach increases total satisfaction by 13% compared to the default.
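    The sketch below illustrates the general idea of feedback-driven rebalancing of per-container memory limits via the cgroup v2 interface. It is not MemOpLight's implementation; the satisfaction scores, the 64 MiB step, and the rebalancing rule are assumptions made for this example.

```python
import pathlib

# Illustrative sketch, in the spirit of the abstract: shift memory limits from
# the best-performing container to the worst-performing one.  NOT MemOpLight's
# actual code.  Assumes cgroup v2 mounted at /sys/fs/cgroup, one cgroup per
# container, and an application-provided satisfaction score per container.
CGROUP_ROOT = pathlib.Path("/sys/fs/cgroup")
STEP = 64 * 1024 * 1024  # reallocate memory in 64 MiB steps (arbitrary choice)


def read_limit(container: str) -> int:
    raw = (CGROUP_ROOT / container / "memory.max").read_text().strip()
    return 2**63 if raw == "max" else int(raw)


def write_limit(container: str, limit: int) -> None:
    (CGROUP_ROOT / container / "memory.max").write_text(str(limit))


def rebalance(satisfaction: dict[str, float]) -> None:
    """Move one STEP of memory from the most to the least satisfied container.

    `satisfaction` maps cgroup names to an application-reported performance
    score in [0, 1]; how that feedback is produced is application specific and
    is the hypothetical part of this sketch.
    """
    donor = max(satisfaction, key=satisfaction.get)
    needy = min(satisfaction, key=satisfaction.get)
    if donor == needy or satisfaction[donor] - satisfaction[needy] < 0.05:
        return  # nothing worth rebalancing
    donor_limit = read_limit(donor)
    if donor_limit > STEP:
        write_limit(donor, donor_limit - STEP)
        write_limit(needy, read_limit(needy) + STEP)
```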

    Coordinated VM Resizing and Server Tuning: Throughput, Power Efficiency and Scalability

    Full text link

    Methods to enhance content distribution for very large scale online communities

    Get PDF
    The Internet has experienced exponential growth in recent years, and its number of users, far from declining, keeps growing. Popular Web 2.0 services such as Facebook, YouTube, and Twitter gather millions of users and employ vast infrastructures deployed worldwide. These infrastructures have become huge in order to support such a massive number of users. This growth in infrastructure size has brought new problems regarding scalability, power consumption, cooling, hardware lifetime, underutilization, investment recovery, and so on. Owning this kind of infrastructure is not always affordable or convenient, which can be a major handicap for projects with a modest budget whose success depends on reaching a large audience. However, current technologies make it possible to deploy vast infrastructures at reduced cost: peer-to-peer networks and cloud computing. Peer-to-peer systems let users contribute their own resources to distributed infrastructures. These systems have proven to be a valuable option, capable of distributing vast amounts of data to large audiences with a minimal starting infrastructure. Nevertheless, aspects such as content availability cannot be controlled in these systems, whereas classic server infrastructures can improve this aspect. Recently, the cloud has emerged as a promising paradigm for hosting horizontally scalable Web systems. The cloud offers elastic capabilities that save costs by adapting the number of resources to the incoming demand. Additionally, the cloud makes accessible a vast amount of resources that may be employed during peak workloads. However, determining the amount of resources to use remains a challenge. In this thesis, we describe a hierarchical architecture that combines peer-to-peer and elastic server infrastructures in order to enhance content distribution. The peer-to-peer infrastructure brings a scalable solution that reduces the workload on the servers, while the server infrastructure assures availability and reduces costs by varying its size when necessary. We propose a distributed collaborative caching infrastructure that employs a cluster-based, locality-aware, self-organizing P2P system. This system leverages collaborative data classification in order to improve content locality. Our evaluation demonstrates that increasing data locality improves data search while reducing traffic. We explore the use of elastic server infrastructures by addressing three issues: system sizing, data grouping, and content distribution. We propose novel multi-model techniques for hierarchical workload prediction. These predictions are employed to determine the system size and the request distribution policies. Additionally, we propose novel adaptive control techniques that identify inaccurate models and redefine them. Our evaluation using traces extracted from real systems indicates that using a hierarchy of multiple models increases prediction accuracy. This hierarchy, in conjunction with our adaptive control techniques, increases accuracy during unexpected workload variations. Finally, we demonstrate that locality-aware request distribution policies can take advantage of prediction models to adapt content distribution independently of the system size.
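    As a toy illustration of the multi-model prediction idea used for system sizing, the sketch below keeps a running error for two simple workload models, predicts with the currently better one, and converts the prediction into a server count. The model set, the window length, and the per-server capacity are assumptions for the example, not the thesis's actual techniques.

```python
import math
from collections import deque

WINDOW = 12                  # number of recent workload samples kept (assumed)
REQS_PER_SERVER = 500.0      # assumed capacity of one server, in requests/s


def predict_last_value(history: deque) -> float:
    # Model 1: the next load equals the most recent observation.
    return history[-1]


def predict_linear_trend(history: deque) -> float:
    # Model 2: extrapolate the average slope over the window.
    if len(history) < 2:
        return history[-1]
    return history[-1] + (history[-1] - history[0]) / (len(history) - 1)


MODELS = [predict_last_value, predict_linear_trend]


class AdaptivePredictor:
    """Keeps a running error per model and predicts with the best one."""

    def __init__(self) -> None:
        self.history: deque = deque(maxlen=WINDOW)
        self.errors = [0.0] * len(MODELS)
        self.pending = [0.0] * len(MODELS)   # last prediction made by each model

    def observe(self, load: float) -> None:
        if self.history:
            for i, pred in enumerate(self.pending):
                # Exponentially decayed absolute error, so inaccurate models fade out.
                self.errors[i] = 0.8 * self.errors[i] + 0.2 * abs(pred - load)
        self.history.append(load)
        self.pending = [m(self.history) for m in MODELS]

    def servers_needed(self) -> int:
        best = min(range(len(MODELS)), key=lambda i: self.errors[i])
        return max(1, math.ceil(self.pending[best] / REQS_PER_SERVER))
```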

    Towards auto-scaling in the cloud: online resource allocation techniques

    Get PDF
    Cloud computing provides easy access to computing resources: customers can acquire and release resources at any time. However, it is not trivial to determine when and how many resources to allocate. Many applications running in the cloud face workload changes that affect their resource demand. The first thought is to plan capacity either for the average load or for the peak load. In the first case less cost is incurred, but performance suffers whenever the peak load occurs. The second case wastes money, since resources remain underutilized most of the time. There is therefore a need for more sophisticated resource provisioning techniques that can automatically scale an application's resources according to workload demand and performance constraints. Large cloud providers such as Amazon, Microsoft, and RightScale offer auto-scaling services. However, without proper configuration and testing, such services can do more harm than good. In this work I investigate application-specific online resource allocation techniques that dynamically adapt to the incoming workload, minimize the cost of virtual resources, and meet user-specified performance objectives.
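    For contrast with the application-specific techniques investigated in this work, the sketch below shows a minimal threshold-based auto-scaling loop of the kind that provider auto-scaling services typically implement and that needs careful tuning. The thresholds, the cooldown, and the get_avg_utilization/set_instance_count hooks are hypothetical placeholders, not any provider's API.

```python
import time

SCALE_OUT_THRESHOLD = 0.75   # assumed: add a VM when average utilization exceeds 75%
SCALE_IN_THRESHOLD = 0.30    # assumed: remove a VM when it drops below 30%
COOLDOWN_S = 120             # wait between scaling actions to avoid oscillation
MIN_VMS, MAX_VMS = 1, 20


def autoscale_loop(get_avg_utilization, set_instance_count, interval_s: float = 15.0):
    """Simple reactive auto-scaler; both callbacks are placeholders supplied by the user."""
    vms = MIN_VMS
    last_action = 0.0
    while True:
        util = get_avg_utilization()          # e.g. average CPU utilization in [0, 1]
        now = time.monotonic()
        if now - last_action >= COOLDOWN_S:
            if util > SCALE_OUT_THRESHOLD and vms < MAX_VMS:
                vms += 1
                set_instance_count(vms)       # scale out by one instance
                last_action = now
            elif util < SCALE_IN_THRESHOLD and vms > MIN_VMS:
                vms -= 1
                set_instance_count(vms)       # scale in by one instance
                last_action = now
        time.sleep(interval_s)
```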