10 research outputs found

    Information logistics and fog computing: The DITAS approach

    Get PDF
    Data-intensive applications are usually developed on Cloud resources, whose service delivery model helps build reliable and scalable solutions. However, especially in the context of Internet of Things-based applications, Cloud Computing comes with some limitations: data generated at the edge of the network are processed at the core of the network, producing security, privacy, and latency issues. On the other hand, Fog Computing is emerging as an extension of Cloud Computing, where resources located at the edge of the network are used in combination with cloud services. The goal of this paper is to present the approach adopted in the recently started DITAS project: the design of a Cloud platform that optimizes the development of data-intensive applications by providing information logistics tools able to deliver information and computation resources at the right time, in the right place, and with the right quality. Applications developed with DITAS tools live in a Fog Computing environment, where data move from the cloud to the edge and vice versa to provide secure, reliable, and scalable solutions with excellent performance.
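
    The core idea of information logistics here is deciding, per request, whether data and computation should live at the edge or in the cloud. The sketch below is a minimal, hypothetical placement rule written in Python; the thresholds, field names, and the place() function are illustrative assumptions, not part of the DITAS platform or its API.

        # Illustrative only: a hypothetical edge-vs-cloud placement policy,
        # not the DITAS platform's actual decision logic.
        from dataclasses import dataclass

        @dataclass
        class DataRequest:
            latency_budget_ms: float   # maximum acceptable response latency
            privacy_sensitive: bool    # data must not leave its origin site
            cpu_demand: float          # normalized compute demand (0..1)

        # Assumed round-trip latency and capacity; real deployments would measure these.
        CLOUD_LATENCY_MS = 120.0
        EDGE_CPU_CAPACITY = 0.3        # edge nodes offer limited compute

        def place(request: DataRequest) -> str:
            """Decide whether a request should be served at the edge or in the cloud."""
            if request.privacy_sensitive:
                return "edge"          # keep sensitive data close to its source
            if request.latency_budget_ms < CLOUD_LATENCY_MS:
                return "edge"          # a cloud round trip would miss the deadline
            if request.cpu_demand > EDGE_CPU_CAPACITY:
                return "cloud"         # heavy computation exceeds edge capacity
            return "edge"

        print(place(DataRequest(latency_budget_ms=50, privacy_sensitive=False, cpu_demand=0.1)))   # edge
        print(place(DataRequest(latency_budget_ms=500, privacy_sensitive=False, cpu_demand=0.8)))  # cloud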

    Usage centric green performance indicators

    Full text link
    Energy efficiency of data centers is gaining importance as energy consumption and carbon footprint awareness are rising. Green Performance Indicators (GPIs) provide measurable means to assess the energy efficiency of a resource or system. Most of the metrics commonly used today measure the energy efficiency potential of a resource, system, or application, rather than the energy efficiency of its actual usage. In this paper, we argue that the way resources and systems are actually used in a given data center configuration is at least as important as the efficiency potential of the raw resources or systems. Hence, for data center energy efficiency, we suggest both selecting energy-efficient components (as is done today) and optimizing the actual usage of the components and systems in the data center. To achieve the latter, optimization of usage-centric GPI metrics should be employed and targeted as a primary green goal. In this paper we identify and present usage-centric metrics that should be monitored and optimized to improve energy efficiency and hence reduce the data center's carbon footprint.
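
    The distinction drawn above between efficiency potential and actual usage can be made concrete with a back-of-the-envelope calculation. The Python sketch below contrasts a nameplate-style indicator with a usage-centric one; the function names and figures are invented for illustration and are not metrics defined in the paper.

        # Illustrative only: hypothetical indicators, not metrics defined in the paper.

        def potential_efficiency(peak_throughput_ops_s: float, rated_power_w: float) -> float:
            """Efficiency potential of the raw resource: peak operations per joule at rated power."""
            return peak_throughput_ops_s / rated_power_w

        def usage_efficiency(useful_ops_done: float, energy_consumed_j: float) -> float:
            """Usage-centric indicator: useful operations actually completed per joule actually consumed."""
            return useful_ops_done / energy_consumed_j

        # A server rated at 10,000 ops/s and 400 W looks efficient on paper...
        print(potential_efficiency(10_000, 400))        # 25.0 ops per joule of potential

        # ...but over an hour in which it idled most of the time it did far less useful work.
        energy_j = 300 * 3600           # average draw of 300 W for one hour
        useful_ops = 1_000 * 3600       # only 1,000 ops/s of genuinely useful work
        print(usage_efficiency(useful_ops, energy_j))   # ~3.3 ops per joule in actual usage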

    When Consensus Meets Self-stabilization

    No full text
    This paper presents a self-stabilizing failure detector, asynchronous consensus, and replicated state-machine algorithm suite, the components of which can be started in an arbitrary state and converge to act as a virtual state machine. Self-stabilizing algorithms can cope with transient faults. Transient faults can alter the system state to an arbitrary state and hence cause a temporary violation of the safety property of the consensus. New requirements for consensus that fit the on-going nature of self-stabilizing algorithms are presented. The wait-free consensus (and replicated state-machine) algorithm presented is a classic combination of a failure detector and a (memory-bounded) rotating-coordinator consensus that satisfies both eventual safety and eventual liveness. Several new techniques and paradigms are introduced. The bounded-memory failure detector abstracts away synchronization assumptions using bounded heartbeat counters combined with a balance-unbalance mechanism. The practically-infinite paradigm is introduced in the scope of self-stabilization, where an execution of, say, 2^64 sequential steps is regarded as (practically) infinite. Finally, we present the first self-stabilizing wait-free reset mechanism that ensures eventual safety and can be used in other scopes.
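
    To make the heartbeat idea concrete, the toy Python sketch below shows a failure detector that suspects a peer once its heartbeat has not changed for a threshold of local steps, alongside the notion of a counter so large it is practically infinite. It illustrates the general technique only and is not the paper's bounded balance-unbalance construction.

        # Illustrative only: a toy heartbeat failure detector, not the paper's algorithm.

        PRACTICALLY_INFINITE = 2**64   # a counter this size will not wrap in any realistic execution

        class HeartbeatDetector:
            """Track the last heartbeat seen from each peer and the local step at which it changed;
            suspect a peer whose heartbeat has been unchanged for more than `threshold` steps."""

            def __init__(self, peers, threshold):
                self.threshold = threshold
                self.local_step = 0
                self.last_seen = {p: (0, 0) for p in peers}   # peer -> (heartbeat, step of last change)

            def on_heartbeat(self, peer, heartbeat):
                last_hb, _ = self.last_seen[peer]
                if heartbeat != last_hb:                       # progress observed from this peer
                    self.last_seen[peer] = (heartbeat, self.local_step)

            def tick(self):
                self.local_step += 1

            def suspects(self):
                return {p for p, (_, step) in self.last_seen.items()
                        if self.local_step - step > self.threshold}

        d = HeartbeatDetector(peers=["p1", "p2"], threshold=3)
        for _ in range(5):
            d.tick()
            d.on_heartbeat("p1", d.local_step)   # p1 keeps sending fresh heartbeats; p2 stays silent
        print(d.suspects())                      # {'p2'}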

    Leveraging disk drive acoustic modes for power management

    No full text
    Reduction of disk drive power consumption is a challenging task, particularly since the most prevalent way of achieving it, powering down idle disks, has many undesirable side effects. Some hard disk drives support acoustic modes, meaning they can be configured to reduce the acceleration and velocity of the disk head. This reduces instantaneous power consumption but sacrifices performance. As a result, input/output (I/O) operations run longer at reduced power. This is useful for power capping since it causes a significant reduction in the peak power consumption of the disks. We conducted experiments on several disk drives that support acoustic management. Most of these disk drives support only two modes: quiet and normal. We ran different I/O workloads, including SPC-1 to simulate a real-world online transaction processing workload. We found that the reduction in peak power can reach up to 23% when using quiet mode. We show that for some workloads this translates into a reduction of 12.5% in overall energy consumption. In other workloads we encountered the opposite phenomenon: an increase of more than 6% in the overall energy consumption.
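
    The opposing outcomes reported above follow directly from energy being average power multiplied by runtime: quiet mode lowers instantaneous power but stretches I/O time, so total energy can fall or rise depending on the slowdown. The Python sketch below works through that arithmetic with made-up round numbers, not measurements from the paper.

        # Illustrative arithmetic only: the figures are invented round numbers,
        # not measurements reported in the paper.

        def energy_j(avg_power_w: float, duration_s: float) -> float:
            """Energy is average power multiplied by how long the workload runs."""
            return avg_power_w * duration_s

        normal = energy_j(avg_power_w=10.0, duration_s=100.0)               # baseline: 1000 J

        # Quiet mode: suppose power drops by 20% but the same I/O takes longer to finish.
        quiet_mild_slowdown = energy_j(avg_power_w=8.0, duration_s=110.0)   # 880 J  -> net saving
        quiet_heavy_slowdown = energy_j(avg_power_w=8.0, duration_s=135.0)  # 1080 J -> net increase

        print(normal, quiet_mild_slowdown, quiet_heavy_slowdown)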
