
    On a Catalogue of Metrics for Evaluating Commercial Cloud Services

    Given the continually increasing number of commercial Cloud services in the market, the evaluation of different services plays a significant role in cost-benefit analysis and decision making when choosing Cloud Computing. In particular, employing suitable metrics is essential in evaluation implementations. However, to the best of our knowledge, there is no systematic discussion of metrics for evaluating Cloud services. Using the method of Systematic Literature Review (SLR), we have collected the de facto metrics adopted in existing Cloud services evaluation work. The collected metrics were arranged according to the different Cloud service features to be evaluated, which essentially constitutes an evaluation metrics catalogue, as shown in this paper. This metrics catalogue can be used to facilitate future practice and research in the area of Cloud services evaluation. Moreover, considering that metrics selection is a prerequisite of benchmark selection in evaluation implementations, this work also supplements the existing research in benchmarking commercial Cloud services. Comment: 10 pages, Proceedings of the 13th ACM/IEEE International Conference on Grid Computing (Grid 2012), pp. 164-173, Beijing, China, September 20-23, 2012

    EC2LAB: SAAS USING AMAZON ELASTIC CLOUD COMPUTE

    Cloud computing is gaining popularity as it provides an infinite pool of hardware and software resources on demand. The Infrastructure-as-a-Service (IaaS) layer provides the physical resources and relieves users of the tedious and time-consuming task of procuring and setting up servers and storage. This project harnesses the capability of the Amazon IaaS layer. The Software-as-a-Service (SaaS) application, which is built on top of the Amazon IaaS layer, helps users easily handle and connect with Amazon's Elastic Cloud Compute (EC2) instances.
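    A SaaS front end of this kind typically translates a user's simple choice into the parameters the EC2 API expects. The sketch below is a minimal, hypothetical illustration of that translation step; the flavor names and AMI id are placeholders, not values from the project, and in practice the resulting dict would be passed to an EC2 client call such as boto3's run_instances.

```python
# Hypothetical flavor catalogue a SaaS front end might expose to users.
# Instance types are illustrative, not taken from the EC2Lab project.
FLAVORS = {
    "small": {"InstanceType": "t2.micro"},
    "large": {"InstanceType": "m5.large"},
}

def build_launch_request(flavor: str, ami_id: str, count: int = 1) -> dict:
    """Translate a user-friendly flavor choice into EC2 RunInstances parameters."""
    if flavor not in FLAVORS:
        raise ValueError(f"unknown flavor: {flavor}")
    return {
        "ImageId": ami_id,       # AMI to boot; supplied by the caller
        "MinCount": count,       # EC2 requires both min and max counts
        "MaxCount": count,
        **FLAVORS[flavor],       # expands to the InstanceType field
    }
```

    Keeping this mapping on the SaaS side is what shields users from the raw IaaS API: they pick "small" or "large" and never see the EC2 parameter names.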

    Adaptive Load Balancing Using RR and ALB: Resource Provisioning in Cloud

    In the Cloud Computing context, load balancing is a significant challenge. With the rising number of cloud-based technology users and their need for a broad range of services, using resources effectively in a cloud environment, which is what load balancing refers to, has become a major obstacle. Load balancing is crucial in storage systems for increasing network capacity and speeding up response times. The main goal is to present a new load-balancing mechanism that can balance incoming requests from users across the globe who request data from remote data sources in different regions. This method combines effective scheduling with cloud-based techniques. A dynamic load balancing method was developed to ensure that cloud environments can respond rapidly, in addition to running cloud resources efficiently and speeding up job processing times. Elastic load balancing automatically splits an application's incoming traffic across a number of targets, including Amazon EC2 instances, network addresses, and other entities. Elastic load balancing offers three distinct classes of load balancer, each providing high availability, intelligent scaling, and robust security to guarantee the error-free functioning of your applications. Application load balancing and round robin are the two load-balancing mechanisms in the database cloud that are the focus of this research study.
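    The two mechanisms the study compares differ in how they pick a target: round robin cycles through targets in a fixed order regardless of load, while an adaptive (ALB-style) policy considers how busy each target currently is. A minimal sketch of both policies, assuming in-memory request counters rather than the paper's actual cloud setup:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Round robin: hand out targets in a fixed rotation, ignoring load."""
    def __init__(self, targets):
        self._targets = cycle(targets)

    def pick(self):
        return next(self._targets)

class LeastLoadBalancer:
    """Adaptive policy: send each request to the least-busy target."""
    def __init__(self, targets):
        self.active = {t: 0 for t in targets}  # active request count per target

    def pick(self):
        target = min(self.active, key=self.active.get)
        self.active[target] += 1
        return target

    def release(self, target):
        """Call when a request finishes so the target's load drops."""
        self.active[target] -= 1
```

    Round robin is simpler and fair under uniform request costs; the adaptive policy adjusts when some requests (or targets) are slower, which is why it tends to shorten response times under skewed load.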

    Enhanced Model To Minimize Future Downtime Case Study Of Malaysia Cloud Providers Towards Near-Zero Downtime

    In providing tremendous access to the data and computing power of thousands of commodity servers, large-scale cloud systems must address a new challenge: they must detect and recover from a growing number of failures, in both hardware and software components. The growing complexity of technology scaling, manufacturing, design logic, usage, and operating environments increases the occurrence of failures. Unfortunately, downtime handling has proven to be problematic in today’s cloud systems. The downtime recovery path is often complex, under-specified, and tested less frequently than the normal path. As indicated by recent cloud outage incidents, existing large-scale cloud systems are still fragile and error-prone. The purpose of this study is to identify the issues causing cloud downtime, to investigate the recovery ability of the database during cloud downtime, and to propose an enhanced model that can be used to minimize future downtime.

    Comparison of Cloud vs. Tape Backup Performance and Costs with Oracle Database

    The current practice of backing up data is based on using backup tapes and remote locations for storing data. Nowadays, with the advent of cloud computing, a new concept of database backup emerges. The paper presents the possibility of making backup copies of data in the cloud. We mainly focus on the performance and economic issues of making backups in the cloud in comparison to traditional backups. We tested the performance and overall costs of making backup copies of data in an Oracle database using the Amazon S3 and EC2 cloud services. The cost estimation was performed on the basis of the prices published on the Amazon S3 and Amazon EC2 sites.
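    A cost estimation of this kind usually reduces to simple arithmetic over the published per-unit prices: storage volume times the per-GB storage rate, plus data transfer, plus any compute hours used to run the backup. The sketch below illustrates the shape of such an estimate; the default prices are illustrative placeholders, not the figures used in the paper, and real Amazon pricing is tiered and changes over time.

```python
def monthly_backup_cost(backup_gb: float,
                        s3_price_per_gb: float = 0.023,
                        transfer_price_per_gb: float = 0.0,
                        ec2_hours: float = 0.0,
                        ec2_price_per_hour: float = 0.0464) -> float:
    """Rough monthly cloud-backup cost: storage + transfer + compute.

    All prices are illustrative defaults, not actual AWS rates.
    """
    storage = backup_gb * s3_price_per_gb       # S3 storage charge
    transfer = backup_gb * transfer_price_per_gb  # data transfer charge
    compute = ec2_hours * ec2_price_per_hour    # EC2 time to run the backup
    return storage + transfer + compute
```

    The comparison to tape then comes down to weighing this recurring monthly charge against the up-front cost of tape hardware and off-site storage, which is the trade-off the paper quantifies.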

    A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte-scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably, and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel, and it also helps meet the scale-out analysis needs of modern translational genomics research.

    High-Performance Cloud Computing: A View of Scientific Applications

    Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure supports multiple programming paradigms that let Aneka address a variety of different scenarios: from finance applications to computational science. As examples of scientific computing in the Cloud, we present a preliminary case study on using Aneka for the classification of gene expression data and the execution of an fMRI brain imaging workflow. Comment: 13 pages, 9 figures, conference paper

    An architecture for secure searchable cloud storage

    Cloud Computing is a relatively new and appealing concept; however, users may not fully trust Cloud Providers with their data and can be reluctant to store their files on Cloud Storage Services. The problem is that Cloud Providers allow users to store their information on the provider's infrastructure in compliance with their terms and conditions; however, all security is handled by the provider, and the details of how this is done are generally not disclosed. This thesis describes a solution that allows users to securely store data on a public cloud, while also providing a mechanism to allow searchability through their encrypted data. Users are able to submit encrypted keyword queries and, through a symmetric searchable encryption scheme, the system retrieves a list of files containing such keywords from the cloud storage medium.
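    The core idea behind such a scheme is that the client derives a deterministic token from its secret key and a keyword, and the server indexes files by those tokens; the server can match queries without ever learning the plaintext keywords. A minimal sketch of that mechanism using HMAC tokens (this is a generic illustration of symmetric searchable encryption, not the specific construction used in the thesis; a real scheme also addresses access-pattern leakage):

```python
import hashlib
import hmac

def keyword_token(key: bytes, keyword: str) -> str:
    """Client-side: deterministic token for a keyword under a secret key.

    The server only ever sees this token, never the keyword itself.
    """
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

class SearchableIndex:
    """Server-side: maps opaque keyword tokens to file identifiers."""
    def __init__(self):
        self._index: dict[str, set[str]] = {}

    def add(self, token: str, file_id: str) -> None:
        """Register (at upload time) that a file contains a tokenized keyword."""
        self._index.setdefault(token, set()).add(file_id)

    def search(self, token: str) -> list[str]:
        """Return the ids of files whose token set contains the query token."""
        return sorted(self._index.get(token, set()))
```

    Because the same keyword under the same key always yields the same token, equality search works on the server; because HMAC is keyed, the provider cannot reverse tokens back to keywords without the client's secret key.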