755 research outputs found

    Rise of the Planet of Serverless Computing: A Systematic Review

    Serverless computing is an emerging cloud computing paradigm being adopted to develop a wide range of software applications. It allows developers to focus on application logic at the granularity of functions, thereby freeing them from tedious and error-prone infrastructure management. Meanwhile, its unique characteristics pose new challenges to the development and deployment of serverless-based applications, and substantial research effort has been devoted to tackling them. This paper provides a comprehensive literature review to characterize the current state of serverless computing research. Specifically, it covers 164 papers on 17 research directions of serverless computing, including performance optimization, programming frameworks, application migration, multi-cloud development, and testing and debugging. It also derives research trends, focus areas, and commonly used platforms for serverless computing, as well as promising research opportunities.

    Reliable Provisioning of Spot Instances for Compute-intensive Applications

    Cloud computing providers now offer their unused resources for leasing in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. Despite the apparent economic advantage, the intermittent nature of biddable resources means that application execution times may be prolonged, or applications may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications quickly and economically. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning policy is proposed. Our solution employs price and runtime estimation mechanisms, as well as three fault tolerance techniques, namely checkpointing, task duplication, and migration. We evaluate our strategies using trace-driven simulations, which take as input real price variation traces as well as an application trace from the Parallel Workload Archive. Our results demonstrate the effectiveness of executing applications on spot instances, respecting QoS constraints, despite occasional failures.
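    The spot-market mechanics described above (a VM that runs only while the current price stays below the user's bid, with checkpointing to bound lost work on revocation) can be illustrated with a toy trace-driven simulation. This is a minimal sketch, not the paper's simulator; the function name, the hourly time step, and the checkpoint interval are illustrative assumptions.

    ```python
    def run_with_checkpoints(prices, bid, task_hours, ckpt_interval=1):
        """Simulate a compute job on a single spot instance.

        prices: hourly spot-price trace; the VM runs while price <= bid.
        On revocation (price > bid), work since the last checkpoint is lost.
        Returns wall-clock hours until the job finishes, or None if the
        trace ends first. (Illustrative sketch, not the paper's policy.)
        """
        done = 0.0      # hours of completed, checkpointed work
        progress = 0.0  # work accumulated since the last checkpoint
        for hour, price in enumerate(prices, start=1):
            if price <= bid:
                progress += 1.0
                if progress >= ckpt_interval:  # persist a checkpoint
                    done += progress
                    progress = 0.0
                if done + progress >= task_hours:
                    return hour
            else:
                progress = 0.0  # revoked: unsaved progress is lost
        return None
    ```

    With a 4-hour trace `[0.1, 0.1, 0.3, 0.1]`, a bid of 0.2, and a 3-hour job, the instance is revoked in hour 3 and the job completes in hour 4; frequent checkpoints trade storage overhead for less recomputation after each revocation.
    
    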

    Overlay networks for smart grids


    Cloud Instance Management and Resource Prediction For Computation-as-a-Service Platforms

    Computation-as-a-Service (CaaS) offerings have gained traction in the last few years due to their effectiveness in balancing between the scalability of Software-as-a-Service and the customisation possibilities of Infrastructure-as-a-Service platforms. To function effectively, a CaaS platform must have three key properties: (i) reactive assignment of individual processing tasks to available cloud instances (compute units) according to availability and predetermined time-to-completion (TTC) constraints; (ii) accurate resource prediction; (iii) efficient control of the number of cloud instances servicing workloads, in order to balance completing workloads in a timely fashion against resource utilization costs. In this paper, we propose three approaches that satisfy these properties (respectively): (i) a service rate allocation mechanism based on proportional fairness and TTC constraints; (ii) Kalman-filter estimates for resource prediction; and (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the resource management mechanism in the Transmission Control Protocol) for controlling the number of compute units servicing workloads. The integration of our three proposals into a single CaaS platform is shown to provide more than a 27% reduction in Amazon EC2 spot instance cost against methods based on reactive resource prediction, and a 38% to 60% reduction in billing cost against the current state-of-the-art in CaaS platforms (Amazon Lambda and Autoscale).
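    The AIMD control loop mentioned above has a simple core: grow the fleet by a fixed increment while backlog exceeds a target, and cut it by a multiplicative factor otherwise. The sketch below shows that rule in isolation; the function name, the backlog-vs-target trigger, and the parameter values are my assumptions, not the paper's exact controller.

    ```python
    def aimd_step(units, backlog, target, alpha=1, beta=0.5, min_units=1):
        """One AIMD step for sizing a pool of compute units.

        Additive increase (+alpha) while the workload backlog exceeds the
        target; multiplicative decrease (*beta) once it falls below, with a
        floor of min_units. (Illustrative sketch of the AIMD idea only.)
        """
        if backlog > target:
            return units + alpha              # additive increase
        return max(min_units, int(units * beta))  # multiplicative decrease
    ```

    As in TCP congestion control, the multiplicative decrease makes over-provisioning cheap to unwind quickly, while the additive increase probes capacity conservatively, which is what keeps billing cost low without starving queued workloads.
    
    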

    Challenges to support edge-as-a-service

    A new era in telecommunications is emerging. Virtualized networking functions and resources will offer network operators a way to shift the balance of expenditure from capital to operational, opening up networks to new and innovative services. This article introduces the concept of edge as a service (EaaS), a means of harnessing the flexibility of virtualized network functions and resources to enable network operators to break the tightly coupled relationship they have with their infrastructure and to enable more effective ways of generating revenue. To achieve this vision, we envisage a virtualized service access interface that can be used to programmatically alter the access network functions and resources available to service providers in an elastic fashion. EaaS faces many technically and economically difficult challenges that must be addressed before it can become a reality; the main challenges are summarized in this article.

    Centralized Cloud Service Providers in Improving Resource Allocation and Data Integrity by 4G IoT Paradigm

    Due to the expansion of the Internet of Things (IoT) and of extensive wireless and 4G networks, rising demands for computation and data communication call for the emergent edge computing (EC) model. By shifting functions and services from the cloud into the user's proximity, EC can offer robust networking, storage, and transmission capability. Resource scheduling in EC, which is crucial to the success of an EC system, has gained considerable attention. This manuscript introduces a new lightning attachment algorithm based resource scheduling scheme with data integrity (LAARSS-DI) for the 4G IoT environment, designed to proficiently handle and allocate resources. The LAARSS-DI technique relies on the standard lightning attachment algorithm (LAA), in which lightning is modeled from the overall amount of charge stored in the cloud, leading to a rise in electrical intensity. The LAARSS-DI technique then designs an objective function for reducing the cost involved in the scheduling process, particularly for the 4G IoT environment. A series of experimental analyses is carried out and the outcomes are inspected under several aspects. The comparison study shows the improved performance of the LAARSS-DI technique over existing approaches.