
    Cloud-based Image Processing System with Priority-based Data Distribution Mechanism

    Most users process short tasks with MapReduce; that is, most tasks handled by the Map and Reduce functions require low response time. Few users currently apply MapReduce to 2D-to-3D image processing, which is highly complex and requires long execution times. In our view, however, MapReduce is well suited to applications of high complexity and heavy computation. This paper implements MapReduce in an integrated 2D-to-3D multi-user system, in which Map performs the computation-intensive image processing procedures and Reduce integrates the intermediate data produced by Map into the final output. Unlike short tasks, when several users compete simultaneously for MapReduce resources in 2D-to-3D applications, data waiting to be processed by Map is delayed by the current user, and Reduce must wait for the completion of all Map tasks before generating the final result. We therefore propose a novel scheduling scheme, the Dynamic Switch of Reduce Function (DSRF) algorithm, which lets Reduce switch dynamically to the next task according to the achieved percentage of Map tasks, reducing Reduce's idle time. Using Hadoop to implement our MapReduce platform, we compare traditional Hadoop with the proposed scheme. The experimental results show that the proposed scheduling scheme efficiently improves MapReduce performance for 2D-to-3D applications.
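
    As a rough illustration of the scheduling idea described in this abstract, the Python sketch below lets a Reduce worker switch among jobs based on each job's achieved Map percentage instead of idling on a single job. The job objects and their methods (map_progress, has_intermediate_data, reduce_next_partition, finalize_output) are hypothetical placeholders, not the paper's DSRF implementation.

        import time

        def dsrf_style_reduce_loop(jobs, poll_interval=0.5):
            # Instead of letting the Reduce worker idle until one job's Map
            # tasks all finish, repeatedly pick the pending job with the
            # highest achieved Map percentage and consume its output.
            pending = set(jobs)
            while pending:
                job = max(pending, key=lambda j: j.map_progress())
                if job.has_intermediate_data():
                    # Fold one chunk of Map output into the job's partial result.
                    job.reduce_next_partition()
                else:
                    # No Map output ready anywhere; back off briefly.
                    time.sleep(poll_interval)
                if job.map_progress() >= 1.0 and not job.has_intermediate_data():
                    # All Map output consumed; emit the final result (e.g., the 3D frame).
                    job.finalize_output()
                    pending.discard(job)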

    Checkpointing as a Service in Heterogeneous Cloud Environments

    A non-invasive, cloud-agnostic approach is demonstrated for extending existing cloud platforms to include checkpoint-restart capability. Most cloud platforms currently rely on each application to provide its own fault tolerance. A uniform mechanism within the cloud itself serves two purposes: (a) direct support for long-running jobs, which would otherwise require a custom fault-tolerant mechanism for each application; and (b) the administrative capability to manage an over-subscribed cloud by temporarily swapping out jobs when higher priority jobs arrive. An advantage of this uniform approach is that it also supports parallel and distributed computations, over both TCP and InfiniBand, thus allowing traditional HPC applications to take advantage of an existing cloud infrastructure. Additionally, an integrated health-monitoring mechanism detects when long-running jobs either fail or incur exceptionally low performance, perhaps due to resource starvation, and proactively suspends the job. The cloud-agnostic feature is demonstrated by applying the implementation to two very different cloud platforms: Snooze and OpenStack. The use of a cloud-agnostic architecture also enables, for the first time, migration of applications from one cloud platform to another.
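
    A minimal sketch of the health-monitoring idea, assuming a generic checkpoint-restart service rather than the paper's actual implementation: the monitor restarts a failed job from its last checkpoint, and checkpoints then suspends a job whose progress rate collapses (for example, under resource starvation). All job methods shown are hypothetical placeholders.

        import time

        def monitor_long_running_job(job, min_progress_rate, check_every=60):
            while not job.finished():
                time.sleep(check_every)
                if job.failed():
                    # Direct fault-tolerance support for long-running jobs.
                    job.restart_from_last_checkpoint()
                elif job.progress_rate() < min_progress_rate:
                    # Persist the job's state, then free its resources so
                    # higher-priority work can run; resume it later.
                    job.checkpoint()
                    job.suspend()
                    break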

    A Game-theoretic Framework for Revenue Sharing in Edge-Cloud Computing System

    We introduce a game-theoretic framework to explore revenue sharing in an Edge-Cloud computing system, in which computing service providers at the edge of the Internet (edge providers) and computing service providers in the cloud (cloud providers) co-exist and collectively provide computing resources to clients (e.g., end users or applications) at the edge. Unlike traditional cloud computing, the providers in an Edge-Cloud system are independent and self-interested. To achieve high system-level efficiency, the system manager adopts a task distribution mechanism to maximize the total revenue received from clients, and a revenue sharing mechanism to split the received revenue among computing servers (and hence service providers). Under these system-level mechanisms, service providers attempt to game the system to maximize their own utilities by strategically allocating their resources (e.g., computing servers). Our framework models the competition among providers in an Edge-Cloud system as a non-cooperative game. Our simulations and experiments on an emulation system show the existence of a Nash equilibrium in such a game. We find that revenue sharing mechanisms have a significant impact on system-level efficiency at Nash equilibria, and, surprisingly, a revenue sharing mechanism based directly on actual contributions can result in significantly worse system efficiency than the Shapley value sharing mechanism and the Ortmann proportional sharing mechanism. Our framework provides an effective economic approach to understanding and designing efficient Edge-Cloud computing systems.
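
    For illustration, the Shapley value mentioned above splits revenue by averaging each provider's marginal contribution over all coalitions it could join. The Python sketch below assumes a hypothetical characteristic function revenue(coalition) that returns the revenue a subset of providers could earn on its own; it is not the authors' code, only a standard Shapley computation.

        from itertools import combinations
        from math import factorial

        def shapley_shares(providers, revenue):
            # revenue(coalition) -> total revenue that set of providers earns alone.
            n = len(providers)
            shares = {p: 0.0 for p in providers}
            for p in providers:
                others = [q for q in providers if q != p]
                for k in range(n):
                    for coalition in combinations(others, k):
                        s = set(coalition)
                        # Probability weight of p joining exactly this coalition.
                        weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                        # Marginal revenue provider p adds when it joins.
                        shares[p] += weight * (revenue(s | {p}) - revenue(s))
            return shares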

    HTC Scientific Computing in a Distributed Cloud Environment

    This paper describes the use of a distributed cloud computing system for high-throughput computing (HTC) scientific applications. The distributed cloud computing system is composed of a number of separate Infrastructure-as-a-Service (IaaS) clouds that are utilized as a unified infrastructure. The distributed cloud has been in production-quality operation for two years, completing approximately 500,000 jobs; a typical workload has 500 simultaneous embarrassingly parallel jobs that run for approximately 12 hours. We review the design and implementation of the system, which is based on pre-existing components along with a number of custom components. We discuss the operation of the system and describe our plans for expansion to more sites and increased computing capacity.
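
    As a toy illustration of presenting several independent IaaS clouds as one unified pool for embarrassingly parallel jobs, the sketch below places each job on the cloud with the most idle capacity. The cloud objects and their free_slots/submit methods are hypothetical placeholders, not the paper's actual components.

        def dispatch_jobs(jobs, clouds):
            # Each job is independent, so placement can be greedy: send the
            # job to whichever cloud currently has the most idle capacity.
            for job in jobs:
                target = max(clouds, key=lambda c: c.free_slots())
                target.submit(job)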