    A Process Migration Approach to Energy-efficient Computation in a cluster of Servers

    Get PDF
    Application processes have to be performed efficiently on servers in a cluster with respect to not only performance but also energy consumption. In this paper, we propose a process migration (MG) approach to performing application processes on servers in a cluster in an energy-efficient manner. First, a client issues an application process to a server in the cluster. A process running on its current server migrates to another server if that server is expected to consume less electric energy to perform the process than the current server does and the deadline constraint on the process is still satisfied. In the evaluation, the total energy consumption of the servers is shown to be smaller, and the average execution time of each process shorter, in the MG algorithm than in the round-robin and random algorithms.
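    The migration rule stated in the abstract (move a process only when the target server is expected to use less energy and can still meet the process deadline) can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the energy and execution-time estimators, the Server and Process fields, and the pick_target helper are all assumptions made for the example.

        from dataclasses import dataclass

        # Hypothetical cost models; the paper's actual energy and execution-time
        # estimators are not given in the abstract.
        @dataclass
        class Server:
            name: str
            power_per_unit: float   # assumed energy cost per unit of work
            speed: float            # assumed work units processed per second

        @dataclass
        class Process:
            remaining_work: float   # work units still to be executed
            deadline: float         # seconds from now by which it must finish

        def expected_energy(server: Server, proc: Process) -> float:
            # Assumed estimator: energy grows linearly with remaining work.
            return proc.remaining_work * server.power_per_unit

        def expected_finish_time(server: Server, proc: Process) -> float:
            # Assumed estimator: time is remaining work divided by server speed.
            return proc.remaining_work / server.speed

        def should_migrate(current: Server, candidate: Server, proc: Process) -> bool:
            # Rule from the abstract: migrate only if the candidate is expected to
            # use less energy AND the process still meets its deadline there.
            saves_energy = expected_energy(candidate, proc) < expected_energy(current, proc)
            meets_deadline = expected_finish_time(candidate, proc) <= proc.deadline
            return saves_energy and meets_deadline

        def pick_target(current: Server, cluster: list, proc: Process):
            # Hypothetical helper: choose the admissible server with the lowest
            # expected energy, or None if no migration is worthwhile.
            candidates = [s for s in cluster
                          if s is not current and should_migrate(current, s, proc)]
            return min(candidates, key=lambda s: expected_energy(s, proc)) if candidates else None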

    Little Boxes: A Dynamic Optimization Approach for Enhanced Cloud Infrastructures

    Full text link
    The increasing demand for diverse mobile applications with varying Quality of Service (QoS) requirements meets the increasing elasticity of on-demand resource provisioning in virtualized cloud computing infrastructures. This paper provides a dynamic optimization approach for enhanced cloud infrastructures, based on the concept of cloudlets located at hotspot areas throughout a metropolitan area. In conjunction, we consider classical remote data centers that are rigid with respect to QoS but provide nearly abundant computation resources. Given fluctuating user demands, we optimize the cloudlet placement over a finite time horizon from a cloud infrastructure provider's perspective. By means of a custom-tailored heuristic approach, we are able to reduce the computational effort compared to the exact approach by at least three orders of magnitude, while maintaining a high solution quality with a moderate cost increase of 5.8% or less.
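    As an illustration of what placing cloudlets over a finite time horizon can look like, the sketch below greedily opens a cloudlet at a hotspot in each time slot whenever serving the forecast demand locally is cheaper than offloading it to the remote data center. It is not the paper's exact model or heuristic: the per-slot demand forecast and the cloudlet_cost, remote_cost_per_unit, and capacity parameters are all assumed for the example, and QoS constraints are omitted.

        # Illustrative greedy placement over a finite horizon; every parameter
        # below is an assumption made for the example, not the paper's model.
        def place_cloudlets(demand, cloudlet_cost, remote_cost_per_unit, capacity):
            # demand: list of time slots, each mapping hotspot -> forecast load.
            # Returns, per slot, the set of hotspots at which a cloudlet is opened.
            placements = []
            for slot in demand:
                opened = set()
                for hotspot, load in slot.items():
                    served_locally = min(load, capacity)
                    # Open a cloudlet only if it is cheaper than offloading the
                    # same load to the remote data center for this slot.
                    if cloudlet_cost < served_locally * remote_cost_per_unit:
                        opened.add(hotspot)
                placements.append(opened)
            return placements

        # Example with made-up demand figures for two time slots.
        demand = [{"station": 40, "stadium": 5}, {"station": 10, "stadium": 60}]
        print(place_cloudlets(demand, cloudlet_cost=20.0, remote_cost_per_unit=1.0, capacity=50))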

    A Taxonomy for Management and Optimization of Multiple Resources in Edge Computing

    Full text link
    Edge computing is promoted to meet the increasing performance needs of data-driven services by using computational and storage resources close to the end devices, at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficient use of resources at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify gaps in the existing research. Among several research gaps, we find that research is less prevalent on data, storage, and energy as resources, and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impact of mobility and of collaboration schemes requiring incentives is expected to differ in edge architectures compared to classic cloud solutions. Finally, we find that fewer works are dedicated to the study of non-functional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services. Comment: Accepted in the Special Issue on Mobile Edge Computing of the Wireless Communications and Mobile Computing journal.
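    The four survey perspectives can be pictured as a small classification record. The sketch below encodes only the categories that the abstract itself names (resource types such as computation, communication, data, storage, and energy; objectives such as estimation, discovery, and sharing; locations end device, edge device, and cloud) and leaves "resource use" free-form, since its values are not enumerated there. The example entry at the end is hypothetical.

        from dataclasses import dataclass
        from enum import Enum

        # Only categories named in the abstract are encoded here.
        class ResourceType(Enum):
            COMPUTATION = "computation"
            COMMUNICATION = "communication"
            DATA = "data"
            STORAGE = "storage"
            ENERGY = "energy"

        class Objective(Enum):
            ESTIMATION = "estimation"
            DISCOVERY = "discovery"
            SHARING = "sharing"

        class Location(Enum):
            END_DEVICE = "end device"
            EDGE_DEVICE = "edge device"
            CLOUD = "cloud"

        @dataclass
        class SurveyedWork:
            title: str
            resource_type: ResourceType
            objective: Objective
            location: Location
            resource_use: str  # free-form; not enumerated in the abstract

        # Hypothetical example of classifying one paper under the taxonomy.
        example = SurveyedWork(
            title="(hypothetical) energy-aware offloading scheme",
            resource_type=ResourceType.ENERGY,
            objective=Objective.SHARING,
            location=Location.EDGE_DEVICE,
            resource_use="task offloading",
        )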

    Checkpointing as a Service in Heterogeneous Cloud Environments

    Get PDF
    A non-invasive, cloud-agnostic approach is demonstrated for extending existing cloud platforms to include checkpoint-restart capability. Most cloud platforms currently rely on each application to provide its own fault tolerance. A uniform mechanism within the cloud itself serves two purposes: (a) direct support for long-running jobs, which would otherwise require a custom fault-tolerance mechanism for each application; and (b) the administrative capability to manage an over-subscribed cloud by temporarily swapping out jobs when higher-priority jobs arrive. An advantage of this uniform approach is that it also supports parallel and distributed computations, over both TCP and InfiniBand, thus allowing traditional HPC applications to take advantage of an existing cloud infrastructure. Additionally, an integrated health-monitoring mechanism detects when long-running jobs either fail or incur exceptionally low performance, perhaps due to resource starvation, and proactively suspends the job. The cloud-agnostic feature is demonstrated by applying the implementation to two very different cloud platforms: Snooze and OpenStack. The use of a cloud-agnostic architecture also enables, for the first time, migration of applications from one cloud platform to another. Comment: 20 pages, 11 figures, appears in CCGrid, 201
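    The health-monitoring behavior described in the abstract (detect failed or under-performing long-running jobs and proactively suspend them so they can later be restarted from a checkpoint) might look roughly like the loop below. The hooks job_has_failed, job_progress_rate, and checkpoint_and_suspend are placeholders invented for this sketch; the paper's actual mechanism and its integration with Snooze or OpenStack are not detailed in the abstract.

        import time

        # Placeholder hooks invented for this sketch; a real deployment would wire
        # them to the underlying cloud platform and checkpointing service.
        def job_has_failed(job) -> bool:
            return job.get("failed", False)

        def job_progress_rate(job) -> float:
            # e.g. work completed per second since the last poll
            return job.get("progress_rate", 1.0)

        def checkpoint_and_suspend(job) -> None:
            print(f"checkpointing and suspending {job['name']}")

        def monitor(jobs, min_progress_rate, poll_interval=60):
            # Suspend a long-running job when it fails or its performance drops
            # below a threshold (e.g. due to resource starvation), so it can be
            # restarted later from its checkpoint.
            while jobs:
                for job in list(jobs):
                    if job_has_failed(job) or job_progress_rate(job) < min_progress_rate:
                        checkpoint_and_suspend(job)
                        jobs.remove(job)
                time.sleep(poll_interval)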