    Investigating Cloud Resource Management Mechanisms

    No full text
    Driven by the rapid growth in demand for efficient and economical computational power, cloud computing has led the world into a new era. It delivers computing resources as services, whereby shared resources are provided to cloud users over the network to offer dynamic, flexible resource provisioning for reliable and guaranteed services under a pay-as-you-use pricing model. Since multiple cloud users can request cloud resources simultaneously, cloud resource management mechanisms must operate efficiently to satisfy the demands of cloud users. Investigating cloud resource management mechanisms that achieve resource efficiency is therefore a key element that benefits both cloud providers and users. In this thesis, we present cloud resource management mechanisms for two different cloud infrastructures: the virtual machine-based (VM-based) and the application-based infrastructure. The VM-based infrastructure provides multi-tenancy for cloud users at the VM level, i.e. each cloud user directly controls their VMs in the cloud environment. The application-based infrastructure provides multi-tenancy at the application level; in other words, each cloud user directly controls their applications in the cloud environment. For the VM-based infrastructure, we introduce two heuristic metrics to capture the multi-dimensional characteristics of logical machines. Using a multivariate probabilistic model, we develop an algorithm to improve resource utilisation in the VM-based infrastructure. We then design and implement an application-based infrastructure called the Elastic Application Container system (EAC system) to support multi-tenant cloud use. Based on the characteristics of the application-based and the VM-based infrastructures, we develop auto-scaling algorithms that can automatically scale cloud resources in the EAC system. Overall, the mechanisms proposed in this thesis aim to improve cloud resource utilisation in the VM-based infrastructure and to provide suitable cloud resource provisioning mechanisms for the application-based infrastructure.
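
    The abstract does not give the thesis's heuristic metrics or probabilistic model in detail, so the sketch below only illustrates the general idea of multi-dimensional, utilisation-aware VM placement: a host is chosen for a VM by comparing its residual CPU and memory capacity against the VM's demand. The dot-product score, the function names, and the data are hypothetical.

    # Illustrative sketch only: scores each feasible host by how well its residual
    # (CPU, memory) capacity aligns with the VM's demand, then picks the best one.
    def placement_score(host_free, vm_demand):
        """Hypothetical dot-product heuristic: higher means a better fit."""
        return sum(f * d for f, d in zip(host_free, vm_demand))

    def place_vm(hosts_free, vm_demand):
        """Return the feasible host with the highest score, or None if none fits."""
        feasible = [h for h, free in hosts_free.items()
                    if all(f >= d for f, d in zip(free, vm_demand))]
        if not feasible:
            return None  # no host can fit the VM; a real system would scale out
        return max(feasible, key=lambda h: placement_score(hosts_free[h], vm_demand))

    # Example: free (CPU cores, GiB RAM) per host, and a VM requesting (2, 4)
    hosts = {"host-a": (4, 8), "host-b": (8, 2)}
    print(place_vm(hosts, (2, 4)))  # -> host-a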

    Partage efficace des ressources de calcul dans le nuage informatique (Efficient Sharing of Computing Resources in Cloud Computing)

    Get PDF
    Cloud computing has emerged as a new paradigm capable of managing large-scale computing infrastructure. However, most existing cloud infrastructures are not exploited efficiently, and resource over-provisioning is an emerging problem. Because the requirements of virtual resources vary over time, physical platforms may be used inadequately, which incurs additional operational costs. Migration techniques have been proposed to improve the utilisation of physical resources, for example by consolidating virtual resources onto physical resources. Prior work, driven by energy and load-balancing objectives, is often limited to a single migration technique. This thesis presents an optimised virtual machine migration model based on the current and future state of physical resource utilisation, while considering the different migration techniques. The future utilisation is obtained through workload prediction. Our model aims to minimise the operational cost of sharing the underlying infrastructure. The experiments conducted in this thesis show that our model achieves better sharing of physical resources, reducing the operational cost by 16% compared with an existing solution. The experiments also show that integrating different migration techniques into the same model yields a global optimisation compared with integrating a single technique.
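
    Since the abstract does not reproduce the optimisation model itself, the following is only a minimal sketch of the underlying idea: use a workload forecast to pick the VM migration that minimises a combined cost of migration overhead and the target host's predicted future load. The forecast (a simple mean), the cost weights, the threshold, and all identifiers are assumptions for illustration.

    # Illustrative sketch only: greedy choice of a single VM migration based on
    # predicted future utilisation, not the thesis's actual optimisation model.
    def predicted_load(samples):
        """Placeholder forecast: mean of recent utilisation samples."""
        return sum(samples) / len(samples)

    def cheapest_migration(source_host, hosts_history, vm_sizes, threshold=0.8):
        """Pick (vm, target) moving a VM off source_host at minimum cost while
        keeping the target's predicted load under the threshold."""
        best, best_cost = None, float("inf")
        for vm, size in vm_sizes.items():
            for target, samples in hosts_history.items():
                if target == source_host:
                    continue
                future = predicted_load(samples) + size
                if future > threshold:
                    continue  # would overload the target in the predicted horizon
                cost = 0.1 * size + future  # hypothetical: transfer cost + resulting load
                if cost < best_cost:
                    best, best_cost = (vm, target), cost
        return best

    # Example: decide which VM to move off "h1" given predicted loads on h2 and h3
    print(cheapest_migration("h1",
                             hosts_history={"h1": [0.9, 0.95], "h2": [0.5, 0.55], "h3": [0.3, 0.35]},
                             vm_sizes={"vm-a": 0.2, "vm-b": 0.4}))  # -> ('vm-a', 'h3')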

    Geo-distributed Edge and Cloud Resource Management for Low-latency Stream Processing

    Get PDF
    The proliferation of Internet-of-Things (IoT) devices is rapidly increasing the demand for efficient processing of low-latency stream data generated close to the edge of the network. Edge computing provides a layer of infrastructure that fills the latency gap between IoT devices and the back-end cloud computing infrastructure. A large number of IoT applications require continuous processing of data streams in real time. Edge-computing-based stream processing techniques that carefully consider the heterogeneity of the computing and network resources available in the geo-distributed infrastructure provide significant benefits in optimizing the throughput and end-to-end latency of the data streams. Managing geo-distributed resources operated by individual service providers raises new challenges in terms of effective global resource sharing and achieving global efficiency in the resource allocation process. In this dissertation, we present a distributed stream processing framework that optimizes the performance of stream processing applications through careful allocation of the computing and network resources available at the edge of the network. The proposed approach differentiates itself from the state of the art through its careful consideration of data locality and resource constraints during physical plan generation and operator placement for the stream queries. Additionally, it considers the co-flow dependencies that exist between data streams to optimize network resource allocation through an application-level rate control mechanism. The proposed framework incorporates resilience through a cost-aware partial active replication strategy that minimizes the recovery cost when applications incur failures. The framework employs a reinforcement learning-based online learning model for dynamically determining the level of parallelism to adapt to changing workload conditions. The second part of this dissertation proposes a novel model for allocating computing resources in edge and cloud computing environments. In edge computing environments, it allows service providers to establish resource-sharing contracts with infrastructure providers a priori in a latency-aware manner. In geo-distributed cloud environments, it allows cloud service providers to establish resource-sharing contracts with individual datacenters a priori for defined time intervals in a cost-aware manner. Based on these mechanisms, we develop a decentralized implementation of the contract-based resource allocation model for geo-distributed resources using Smart Contracts in Ethereum.
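
    The dissertation's learning model is not specified in the abstract; under that caveat, the sketch below shows how a tabular Q-learning controller could choose parallelism adjustments (scale in, hold, scale out) for an operator from a discretised load state. The state encoding, reward signal, and hyperparameters are all hypothetical.

    # Illustrative sketch only: a generic tabular Q-learning controller for operator
    # parallelism, not the dissertation's actual reinforcement learning model.
    import random
    from collections import defaultdict

    ACTIONS = (-1, 0, 1)  # remove a replica, hold, add a replica

    class ParallelismController:
        def __init__(self, alpha=0.5, gamma=0.9, epsilon=0.1):
            self.q = defaultdict(float)  # Q-value table keyed by (state, action)
            self.alpha, self.gamma, self.eps = alpha, gamma, epsilon

        def act(self, state):
            if random.random() < self.eps:       # explore occasionally
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: self.q[(state, a)])  # exploit

        def learn(self, state, action, reward, next_state):
            best_next = max(self.q[(next_state, a)] for a in ACTIONS)
            target = reward + self.gamma * best_next
            self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

    # Usage: the state could be a discretised (queue length, CPU) pair and the reward
    # could penalise end-to-end latency and replica count (both hypothetical choices).
    ctl = ParallelismController()
    action = ctl.act(state=("high_queue", "high_cpu"))
    ctl.learn(("high_queue", "high_cpu"), action, reward=-1.0, next_state=("low_queue", "med_cpu"))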