51 research outputs found

    RHAS: robust hybrid auto-scaling for web applications in cloud computing

    Cloud Instance Management and Resource Prediction For Computation-as-a-Service Platforms

    Computation-as-a-Service (CaaS) offerings have gained traction in the last few years due to their effectiveness in balancing the scalability of Software-as-a-Service with the customisation possibilities of Infrastructure-as-a-Service platforms. To function effectively, a CaaS platform must have three key properties: (i) reactive assignment of individual processing tasks to available cloud instances (compute units) according to availability and predetermined time-to-completion (TTC) constraints; (ii) accurate resource prediction; (iii) efficient control of the number of cloud instances servicing workloads, in order to balance timely workload completion against resource utilization costs. In this paper, we propose three approaches that satisfy these properties (respectively): (i) a service rate allocation mechanism based on proportional fairness and TTC constraints; (ii) Kalman-filter estimates for resource prediction; and (iii) the use of additive increase multiplicative decrease (AIMD) algorithms (well known as the congestion-control mechanism of the Transmission Control Protocol) for controlling the number of compute units servicing workloads. The integration of our three proposals into a single CaaS platform is shown to provide more than a 27% reduction in Amazon EC2 spot-instance cost against methods based on reactive resource prediction, and a 38% to 60% reduction in billing cost against the current state of the art in CaaS platforms (Amazon Lambda and Autoscale).
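    The Kalman-filter resource prediction and AIMD control of the instance count described in this abstract can be illustrated with a minimal sketch. The class, function, and every parameter value below (noise variances, utilisation thresholds, step sizes) are illustrative assumptions, not details taken from the paper.

    # Minimal sketch (not the paper's implementation): a scalar Kalman filter that
    # tracks per-workload resource demand, and an AIMD rule that adjusts the
    # number of compute units. All parameters are placeholder assumptions.

    class ScalarKalmanFilter:
        """Tracks a single resource metric (e.g. required compute rate)."""

        def __init__(self, initial_estimate=0.0, process_var=1e-2, measurement_var=1e-1):
            self.x = initial_estimate   # state estimate
            self.p = 1.0                # estimate covariance
            self.q = process_var        # process noise variance (assumed)
            self.r = measurement_var    # measurement noise variance (assumed)

        def update(self, measurement):
            # Predict step (random-walk model), then correct with the measurement.
            self.p += self.q
            k = self.p / (self.p + self.r)      # Kalman gain
            self.x += k * (measurement - self.x)
            self.p *= (1.0 - k)
            return self.x

    def aimd_instance_count(current, utilisation, high=0.8, low=0.4,
                            additive_step=1, multiplicative_factor=0.5, minimum=1):
        """Additive increase / multiplicative decrease on the number of compute units.

        Thresholds and step sizes are illustrative, not values from the paper.
        """
        if utilisation > high:
            return current + additive_step                              # additive increase
        if utilisation < low:
            return max(minimum, int(current * multiplicative_factor))   # multiplicative decrease
        return current

    if __name__ == "__main__":
        # Toy control loop: predict demand, then adapt the instance pool.
        kf = ScalarKalmanFilter(initial_estimate=10.0)
        instances = 4
        for observed_demand, utilisation in [(12.0, 0.85), (11.5, 0.82), (9.0, 0.35)]:
            predicted = kf.update(observed_demand)
            instances = aimd_instance_count(instances, utilisation)
            print(f"predicted demand={predicted:.2f}, instances={instances}")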

    StratusLab Cloud Distribution

    Cloud technologies provide many benefits for scientific and engineering applications, such as customised execution environments, near-instantaneous provisioning, elasticity, and the ability to run user-level services. However, a rapid, wholesale shift to using public, commercial cloud services is unlikely because of capital investments in existing resources and data management issues. To take full advantage of cloud technologies in the short term, institutes and companies must be able to deploy their own cloud infrastructures. The StratusLab project provides a complete, open-source cloud distribution that permits them to do this. The StratusLab services include the computing, storage, and networking services required for an Infrastructure as a Service (IaaS) cloud. It also includes high-level services such as the Marketplace, which facilitates the sharing of machine images, and Claudia, which allows the deployment and management of complete software systems.

    A Cloud Computing Capability Model for Large-Scale Semantic Annotation

    Semantic technologies are designed to facilitate context-awareness for web content, enabling machines to understand and process it. However, this goal faces several challenges, such as the disparate nature of existing solutions and a lack of scalability at web scale. Taking a holistic perspective on semantic annotation of web content, this paper focuses on leveraging cloud computing to address these challenges. To achieve this, a set of requirements for holistic semantic annotation on the web is defined and mapped to the cloud computing mechanisms that can facilitate them. The technical specification of each requirement is critically reviewed and examined against each cloud computing mechanism, in relation to its technical functionality; a mapping is established where a mechanism's functionality provides a means of implementing the requirement's technical specification. The result is a cloud computing capability model for holistic semantic annotation, which presents an approach to delivering large-scale semantic annotation on the web via a cloud platform.
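    As an illustration of the requirement-to-mechanism mapping described above, the capability model can be viewed as a simple lookup from annotation requirements to cloud computing mechanisms. The requirement and mechanism names below are hypothetical examples chosen for the sketch, not those defined in the paper.

    # Illustrative sketch only: a requirement-to-mechanism mapping as plain data.
    # All entries are hypothetical examples, not the paper's capability model.

    CAPABILITY_MODEL = {
        "scalable annotation processing": ["horizontal scaling", "load balancer"],
        "shared ontology storage":        ["cloud storage device", "resource replication"],
        "on-demand annotation services":  ["virtual server", "automated scaling listener"],
    }

    def mechanisms_for(requirement):
        """Return the cloud mechanisms mapped to a given annotation requirement."""
        return CAPABILITY_MODEL.get(requirement, [])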
