
    Resource management in a containerized cloud : status and challenges

    Cloud computing relies heavily on virtualization: virtual resources, typically in the form of virtual machines, are leased to the consumer. Efficient management of these virtual resources is of great importance, as it directly impacts both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, owing to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises of how these strategies can be adapted for the management of a containerized cloud. Moreover, the cloud is no longer limited to centrally hosted data center infrastructure. New deployment models, such as fog and mobile edge computing, have matured and bring the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art in resource management within the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing model. Furthermore, we identify several challenges and possible opportunities for future research.
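    Container allocation of the kind discussed above is often cast as a bin-packing problem: each container requests resources and the scheduler must place it on a node with enough free capacity. The sketch below is a minimal first-fit-decreasing placement heuristic, offered only as an illustration of this class of strategy; it is not taken from the survey, and the two-resource (CPU/memory) model and all names are assumptions.

    ```python
    # Illustrative first-fit-decreasing placement: containers request CPU and
    # memory, nodes advertise free capacity. Names and values are hypothetical.

    def first_fit_decreasing(containers, nodes):
        """containers: list of (name, cpu, mem); nodes: dict name -> [free_cpu, free_mem]."""
        placement = {}
        # Place the most demanding containers first to reduce fragmentation.
        for name, cpu, mem in sorted(containers, key=lambda c: c[1] + c[2], reverse=True):
            for node, free in nodes.items():
                if free[0] >= cpu and free[1] >= mem:
                    free[0] -= cpu
                    free[1] -= mem
                    placement[name] = node
                    break
            else:
                placement[name] = None  # no node can currently host this container
        return placement

    nodes = {"node-a": [4.0, 8.0], "node-b": [2.0, 4.0]}   # free CPU cores, GiB
    containers = [("web", 1.0, 2.0), ("db", 2.0, 4.0), ("cache", 0.5, 1.0)]
    print(first_fit_decreasing(containers, nodes))
    ```

    Real schedulers (and the strategies surveyed in the paper) add further concerns such as migration cost, affinity constraints, and energy awareness on top of this basic placement step.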

    TOWARDS AN EFFICIENT MULTI-CLOUD OBSERVABILITY FRAMEWORK OF CONTAINERIZED MICROSERVICES IN KUBERNETES PLATFORM

    A recent trend in software development adopts the paradigm of distributed microservices architecture (MA). Kubernetes, a container-based virtualization platform, has become the de facto environment in which to run MA applications. Organizations may choose to run microservices at several cloud providers to optimize cost and satisfy security concerns. This increases complexity, because the performance characteristics of distributed MA systems must be observed across providers. Following a decision guidance model (DGM) approach, this research proposes a decentralized and scalable framework to monitor containerized microservices that run on the same or on distributed Kubernetes clusters. The framework introduces efficient techniques to gather, distribute, and analyze the observed runtime telemetry data. It offers extensible, cloud-agnostic modules that exchange data through a multiplexing, reactive, and non-blocking data streaming approach. An experiment observing sample microservices deployed across different cloud platforms was used to evaluate the efficacy and usefulness of the framework. The proposed framework offers development and operations (DevOps) practitioners an innovative way to observe services across different Kubernetes platforms. It could also serve as a reference architecture for researchers to guide further design options and analysis techniques.
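    To make the idea of a multiplexing, non-blocking telemetry pipeline concrete, the sketch below merges simulated metric streams from several clusters into a single asyncio queue consumed by one analyzer. The cluster names, metric fields, and collection logic are assumptions for illustration only and do not reflect the framework's actual modules.

    ```python
    import asyncio
    import random
    import time

    # Hypothetical per-cluster collectors: in a real deployment each coroutine
    # would scrape a metrics endpoint exposed by a Kubernetes cluster; here we
    # simply simulate CPU readings.
    async def collect(cluster: str, queue: asyncio.Queue, interval: float = 0.2):
        while True:
            sample = {"cluster": cluster, "ts": time.time(),
                      "cpu_pct": random.uniform(5, 95)}   # simulated telemetry
            await queue.put(sample)                       # non-blocking hand-off
            await asyncio.sleep(interval)

    async def analyze(queue: asyncio.Queue, n_samples: int = 10):
        """Single consumer that analyzes the multiplexed stream."""
        for _ in range(n_samples):
            sample = await queue.get()
            flag = "HIGH" if sample["cpu_pct"] > 80 else "ok"
            print(f'{sample["cluster"]:>12}  cpu={sample["cpu_pct"]:5.1f}%  {flag}')

    async def main():
        queue: asyncio.Queue = asyncio.Queue()
        producers = [asyncio.create_task(collect(c, queue))
                     for c in ("aws-cluster", "gcp-cluster", "azure-cluster")]
        await analyze(queue)
        for p in producers:
            p.cancel()

    asyncio.run(main())
    ```

    The single shared queue plays the role of the multiplexer: producers never block each other, and analysis components can be added or swapped without touching the collectors.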

    A WOA-based optimization approach for task scheduling in cloud Computing systems

    Task scheduling in cloud computing directly affects the resource usage and operational cost of a system. To improve the efficiency of task execution in a cloud, various metaheuristic algorithms, as well as their variations, have been proposed to optimize the scheduling. In this work, we apply the whale optimization algorithm (WOA), a recent metaheuristic, to cloud task scheduling for the first time, using a multi-objective optimization model that aims to improve the performance of a cloud system with given computing resources. On that basis, we propose an improved approach called IWC (Improved WOA for Cloud task scheduling) to further strengthen the method's ability to search for optimal solutions. We present the detailed implementation of IWC, and our simulation-based experiments show that IWC achieves better convergence speed and accuracy in searching for optimal task scheduling plans than current metaheuristic algorithms. Moreover, it also achieves better system resource utilization for both small- and large-scale task sets.
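    For readers unfamiliar with WOA, the sketch below shows a simplified, single-objective version applied to task-to-VM assignment: each whale holds a continuous position vector that is rounded into an assignment, and fitness is the makespan. It follows the standard encircling/spiral/exploration update rules of WOA, but it is not the paper's IWC variant, and all parameter values are illustrative.

    ```python
    import math
    import random

    def woa_schedule(task_lengths, vm_speeds, n_whales=30, n_iters=200, seed=0):
        """Simplified single-objective WOA for task-to-VM assignment.

        A whale is a continuous vector of length n_tasks; rounding each entry
        (mod n_vms) yields a discrete assignment. Fitness is the makespan,
        i.e. the completion time of the most loaded VM, which we minimize.
        """
        rng = random.Random(seed)
        n_tasks, n_vms = len(task_lengths), len(vm_speeds)

        def fitness(pos):
            loads = [0.0] * n_vms
            for t, x in enumerate(pos):
                vm = int(round(x)) % n_vms
                loads[vm] += task_lengths[t] / vm_speeds[vm]
            return max(loads)

        whales = [[rng.uniform(0, n_vms - 1) for _ in range(n_tasks)]
                  for _ in range(n_whales)]
        best = min(whales, key=fitness)[:]

        for it in range(n_iters):
            a = 2.0 - 2.0 * it / n_iters               # decreases linearly 2 -> 0
            for w in whales:
                A = 2 * a * rng.random() - a
                C = 2 * rng.random()
                p = rng.random()
                if p < 0.5 and abs(A) < 1:             # exploitation: encircle best
                    ref = best
                elif p < 0.5:                          # exploration: random whale
                    ref = rng.choice(whales)
                else:                                  # bubble-net spiral around best
                    ref = best
                l = rng.uniform(-1, 1)
                for d in range(n_tasks):
                    if p < 0.5:
                        D = abs(C * ref[d] - w[d])
                        w[d] = ref[d] - A * D
                    else:
                        D = abs(ref[d] - w[d])
                        w[d] = D * math.exp(l) * math.cos(2 * math.pi * l) + ref[d]
                    w[d] = min(max(w[d], 0.0), n_vms - 1)   # clamp to valid VM range
            cand = min(whales, key=fitness)
            if fitness(cand) < fitness(best):
                best = cand[:]

        return [int(round(x)) % n_vms for x in best], fitness(best)

    # Example: 20 tasks of increasing length on 4 VMs with different speeds.
    assignment, makespan = woa_schedule(list(range(5, 25)), [1.0, 1.5, 2.0, 2.5])
    print(assignment, round(makespan, 2))
    ```

    A multi-objective version, as in the paper, would replace the scalar makespan with a combined or Pareto-based measure (e.g. makespan plus resource-utilization terms).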

    Towards characterization of edge-cloud continuum

    The Internet of Things (IoT) and cloud computing are two technological paradigms that have reached widespread adoption in recent years. They are complementary: IoT applications often rely on the computational resources of the cloud to process the data generated by IoT devices. The highly distributed nature of IoT applications and the vast amounts of data involved have led to significant parts of computation being moved from the centralized cloud to the edge of the network, giving rise to hybrid paradigms such as edge-cloud computing and fog computing. Recent advances in IoT hardware, combined with the continued increase in complexity and variability of the edge-cloud environment, have led to the emergence of a new vision, the edge-cloud continuum: the next step in the integration of the IoT and the cloud, in which software components can move seamlessly between levels of the computational hierarchy. However, as this concept is very new, there is still no established view of what exactly it entails. Several views on the future edge-cloud continuum have been proposed, each with its own set of requirements and expected characteristics. To move the discussion forward, these views need to be put into a coherent picture. In this paper, we review and generalize the existing literature on the edge-cloud continuum, point out its expected features, and discuss the challenges that must be addressed to bring about this envisioned environment for the next generation of smart distributed applications.