4 research outputs found

    Serverless Computing and Scheduling Tasks on Cloud: A Review

    Recently, the emergence of Function-as-a-Service (FaaS) has attracted increasing attention from researchers. FaaS, also known as serverless computing, is a cloud computing model in which code execution is triggered in response to specific events. In this paper, we discuss various proposals related to scheduling tasks in clouds. These proposals are categorized according to their objective functions, namely minimizing execution time, minimizing execution cost, or multiple objectives (time and cost). The dependency relationships between tasks play a vital role in determining the efficiency of the scheduling approach, and these dependencies may result in resource underutilization. FaaS is expected to have a significant impact on the process of scheduling tasks. The underutilization problem can be reduced by adopting a hybrid approach that combines the benefits of both FaaS and Infrastructure-as-a-Service (IaaS): small tasks are run remotely as serverless functions, so the scheduler can focus on the large tasks alone. This increases resource utilization because the small tasks are no longer considered during scheduling. An extension of the restricted time limit imposed by cloud vendors would allow the complete workflow to run on the serverless architecture, avoiding the scheduling problem altogether.
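    A minimal sketch of the hybrid FaaS/IaaS idea summarised in this abstract: tasks whose estimated runtime falls under an assumed vendor time limit are offloaded to serverless functions, while only the remaining large tasks are passed to the IaaS scheduler. The task fields, the 300-second limit, and the example workflow are illustrative assumptions, not details from the paper.

```python
# Sketch: partition a workflow so small tasks go to FaaS and only
# large tasks enter the IaaS scheduling process. All names and the
# time limit below are hypothetical.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    est_runtime_s: float   # estimated execution time in seconds

FAAS_TIME_LIMIT_S = 300.0  # assumed vendor limit on function duration

def partition_tasks(tasks):
    """Split tasks into those offloaded to FaaS and those scheduled on IaaS VMs."""
    faas, iaas = [], []
    for t in tasks:
        (faas if t.est_runtime_s <= FAAS_TIME_LIMIT_S else iaas).append(t)
    return faas, iaas

if __name__ == "__main__":
    workflow = [Task("preprocess", 45), Task("train_model", 5400), Task("notify", 2)]
    small, large = partition_tasks(workflow)
    print("FaaS:", [t.name for t in small])   # small tasks run remotely
    print("IaaS:", [t.name for t in large])   # only these enter the scheduler
```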

    Fine-grained, adaptive resource sharing for real pay-per-use pricing in clouds

    Cloud computing is characterized by essentially pay-per-use pricing with elasticity. Typically, the granularity of usage for such pricing is at the virtual machine (VM) level in IaaS clouds, e.g., a multiple of machine hours. The elasticity and cost effectiveness of these clouds are achieved primarily through the exploitation of resource virtualization and sharing. However, a majority of applications running on VMs in clouds struggle to fully utilize the resources allocated to them. Since co-location granularity is strictly restricted to the VM level and resources allocated to VMs are space-shared, the unused resources tend to be wasted while users are still charged for that wastage. In this paper, we address the problem of fine-grained and adaptive resource sharing for real pay-per-use pricing. To this end, we devise a resource management mechanism as a cost-efficiency solution for both users and providers of clouds. The mechanism consists of a container-based resource allocator and a pricing scheme based on real usage. We demonstrate the efficacy of this mechanism via experiments in Amazon EC2, using two typical cloud workloads, web services and database services, and a compute-intensive high-energy-physics application. Our results show that the mechanism can achieve near-optimal cost efficiency.
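    A minimal sketch contrasting conventional VM-hour billing with a charge computed from measured container usage, in the spirit of the real-usage-based pricing the abstract describes. The rates and the usage figures are illustrative assumptions, not the paper's actual pricing model.

```python
# Sketch: VM-hour billing vs. fine-grained usage-based billing.
# All rates and the example usage record are hypothetical.
def vm_hour_charge(hours_allocated: float, rate_per_hour: float = 0.10) -> float:
    """Conventional IaaS billing: pay for the whole allocation period."""
    return hours_allocated * rate_per_hour

def usage_based_charge(cpu_core_seconds: float, mem_gb_seconds: float,
                       cpu_rate: float = 2.0e-5, mem_rate: float = 3.0e-6) -> float:
    """Fine-grained billing: pay only for resources the container actually used."""
    return cpu_core_seconds * cpu_rate + mem_gb_seconds * mem_rate

if __name__ == "__main__":
    # e.g. a web service allocated a VM for 10 hours but busy only ~20% of the time
    print(f"VM-hour charge:     ${vm_hour_charge(10):.4f}")
    print(f"Usage-based charge: ${usage_based_charge(10*3600*0.2, 10*3600*0.5):.4f}")
```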

    Resource management in a containerized cloud: status and challenges

    Cloud computing relies heavily on virtualization, as virtual resources are typically leased to the consumer, for example as virtual machines. Efficient management of these virtual resources is of great importance, as it has a direct impact on both the scalability and the operational costs of the cloud environment. Recently, containers have been gaining popularity as a virtualization technology, due to their minimal overhead compared to traditional virtual machines and the portability they offer. Traditional resource management strategies, however, are typically designed for the allocation and migration of virtual machines, so the question arises how these strategies can be adapted for the management of a containerized cloud. Apart from this, the cloud is no longer limited to centrally hosted data center infrastructure. New deployment models, such as fog and mobile edge computing, have matured, bringing the cloud closer to the end user. These models could also benefit from container technology, as the newly introduced devices often have limited hardware resources. In this survey, we provide an overview of the current state of the art in resource management across the broad sense of cloud computing, complementary to existing surveys in the literature. We investigate how research is adapting to recent evolutions within the cloud, namely the adoption of container technology and the introduction of the fog computing conceptual model. Furthermore, we identify several challenges and possible opportunities for future research.
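    A minimal sketch of a classic first-fit allocation heuristic applied at container granularity, to illustrate the kind of VM-oriented placement strategy the survey discusses adapting to containerized (and resource-constrained edge) environments. Node capacities, container requests, and all names are illustrative assumptions.

```python
# Sketch: first-fit placement of containers onto nodes by spare CPU and memory.
# A hypothetical example, not a strategy taken from the survey itself.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    cpu_free: float              # free CPU cores
    mem_free: float              # free memory in GiB
    placed: list = field(default_factory=list)

def first_fit(containers, nodes):
    """Place each container on the first node with enough spare CPU and memory."""
    unplaced = []
    for name, cpu, mem in containers:
        for node in nodes:
            if node.cpu_free >= cpu and node.mem_free >= mem:
                node.cpu_free -= cpu
                node.mem_free -= mem
                node.placed.append(name)
                break
        else:
            unplaced.append(name)  # would trigger scale-out or rescheduling
    return unplaced

if __name__ == "__main__":
    nodes = [Node("edge-1", cpu_free=2, mem_free=4), Node("dc-1", cpu_free=16, mem_free=64)]
    leftover = first_fit([("web", 0.5, 0.5), ("db", 4, 8), ("cache", 1, 2)], nodes)
    for n in nodes:
        print(n.name, n.placed)
    print("unplaced:", leftover)
```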