3E: Energy-Efficient Elastic Scheduling for Independent Tasks in Heterogeneous Computing Systems
Reducing energy consumption is a major design constraint for modern heterogeneous computing systems, needed to minimize electricity costs, improve system reliability, and protect the environment. Conventional energy-efficient scheduling strategies for these systems do not sufficiently exploit the system's elasticity and adaptability for maximum energy savings, nor do they simultaneously take user expected finish times into account. In this paper, we develop a novel scheduling strategy, energy-efficient elastic (3E) scheduling, for aperiodic, independent, non-real-time tasks with user expected finish times on DVFS-enabled heterogeneous computing systems. The 3E strategy adjusts processors' supply voltages and frequencies according to the system workload and makes trade-offs between energy consumption and user expected finish times. Compared with other energy-efficient strategies, 3E significantly improves scheduling quality and effectively enhances system elasticity.
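To make the voltage/frequency trade-off concrete, below is a minimal sketch (not the authors' 3E algorithm): it picks, from a hypothetical DVFS table, the lowest-energy level that still meets a user expected finish time, under the common dynamic-power model P ≈ C·V²·f. All level values and constants are illustrative assumptions.

```python
# Minimal sketch (not the authors' 3E algorithm): pick, from a hypothetical DVFS
# table, the lowest-energy (voltage, frequency) level that still meets the user
# expected finish time, under the common dynamic-power model P ~ C * V^2 * f.

DVFS_LEVELS = [                 # (supply voltage in V, frequency in GHz), assumed
    (0.9, 1.0),
    (1.0, 1.5),
    (1.1, 2.0),
    (1.2, 2.6),
]
CAPACITANCE = 1.0               # effective switched capacitance (normalized)

def pick_level(task_cycles: float, expected_finish_s: float):
    """Return the (V, f, energy) tuple with minimum energy that meets the deadline."""
    best = None
    for volt, freq_ghz in DVFS_LEVELS:
        exec_time = task_cycles / (freq_ghz * 1e9)       # seconds
        if exec_time > expected_finish_s:
            continue                                      # would miss the expected finish time
        energy = CAPACITANCE * volt ** 2 * (freq_ghz * 1e9) * exec_time
        if best is None or energy < best[2]:
            best = (volt, freq_ghz, energy)
    return best

# Example: a 3e9-cycle task with a 2 s user expected finish time
print(pick_level(3e9, expected_finish_s=2.0))            # -> (1.0, 1.5, ...)
```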
Convolutional Networks for Fast, Energy-Efficient Neuromorphic Computing
Deep networks are now able to achieve human-level performance on a broad
spectrum of recognition tasks. Independently, neuromorphic computing has now
demonstrated unprecedented energy-efficiency through a new chip architecture
based on spiking neurons, low precision synapses, and a scalable communication
network. Here, we demonstrate that neuromorphic computing, despite its novel
architectural primitives, can implement deep convolutional networks that i)
approach state-of-the-art classification accuracy across 8 standard datasets,
encompassing vision and speech, ii) perform inference while preserving the
hardware's underlying energy-efficiency and high throughput, running on the
aforementioned datasets at between 1200 and 2600 frames per second and using
between 25 and 275 mW (effectively > 6000 frames / sec / W) and iii) can be
specified and trained using backpropagation with the same ease-of-use as
contemporary deep learning. For the first time, the algorithmic power of deep
learning can be merged with the efficiency of neuromorphic processors, bringing
the promise of embedded, intelligent, brain-inspired computing one step closer.
Comment: 7 pages, 6 figures
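As a quick sanity check on the efficiency figure quoted above, throughput per power is simply frames per second divided by watts; the (fps, mW) pairings below are assumptions for illustration only, since the abstract reports ranges rather than per-dataset pairs.

```python
# Sanity check of the reported efficiency figure: throughput per power is just
# frames per second divided by watts. The (fps, mW) pairings below are only
# illustrative; the abstract gives ranges, not per-dataset pairs.
def frames_per_sec_per_watt(fps: float, milliwatts: float) -> float:
    return fps / (milliwatts / 1000.0)

print(frames_per_sec_per_watt(1200, 200))   # 6000.0 frames / sec / W
print(frames_per_sec_per_watt(2600, 275))   # ~9454.5 frames / sec / W
```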
Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges
Cloud computing is offering utility-oriented IT services to users worldwide.
Based on a pay-as-you-go model, it enables hosting of pervasive applications
from consumer, scientific, and business domains. However, data centers hosting
Cloud applications consume huge amounts of energy, contributing to high
operational costs and a large environmental carbon footprint. Therefore, we need
Green Cloud computing solutions that can not only save energy for the
environment but also reduce operational costs. This paper presents vision,
challenges, and architectural elements for energy-efficient management of Cloud
computing environments. We focus on the development of dynamic resource
provisioning and allocation algorithms that consider the synergy between
various data center infrastructures (i.e., the hardware, power units, cooling
and software), and holistically work to boost data center energy efficiency and
performance. In particular, this paper proposes (a) architectural principles
for energy-efficient management of Clouds; (b) energy-efficient resource
allocation policies and scheduling algorithms considering quality-of-service
expectations and device power usage characteristics; and (c) a novel software
technology for energy-efficient management of Clouds. We have validated our
approach by conducting a set of rigorous performance evaluation studies using
the CloudSim toolkit. The results demonstrate that the Cloud computing model
has immense potential, offering significant gains in response time and cost
savings under dynamic workload scenarios.
Comment: 12 pages, 5 figures, Proceedings of the 2010 International Conference
on Parallel and Distributed Processing Techniques and Applications (PDPTA
2010), Las Vegas, USA, July 12-15, 2010
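As one concrete illustration of energy-aware allocation (a sketch under assumed host parameters, not the paper's policy), a placer can favor hosts that are already powered on by charging newly awakened hosts their idle power under the common linear power-versus-utilization model.

```python
# Minimal sketch (not the paper's policy) of energy-aware VM placement: hosts
# follow the common linear model P(u) = P_idle + (P_max - P_idle) * u when on,
# and draw ~0 W when off, so the placer naturally consolidates load onto hosts
# that are already running. Host and VM figures below are assumed.

P_IDLE, P_MAX = 100.0, 250.0      # watts per host (assumed)

def host_power(load: float) -> float:
    if load == 0.0:
        return 0.0                # an empty host is assumed to be switched off
    return P_IDLE + (P_MAX - P_IDLE) * load

def place_vm(hosts, vm_load):
    """Place a VM on the host whose power increase is smallest and that fits it."""
    best, best_delta = None, float("inf")
    for host in hosts:
        new_load = host["load"] + vm_load
        if new_load > 1.0:        # simple QoS guard: do not overcommit CPU
            continue
        delta = host_power(new_load) - host_power(host["load"])
        if delta < best_delta:
            best, best_delta = host, delta
    if best is not None:
        best["load"] += vm_load
    return best

hosts = [{"name": "h1", "load": 0.5}, {"name": "h2", "load": 0.0}]
print(place_vm(hosts, 0.3)["name"])   # "h1": reusing a running host is cheaper
```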
EPOBF: Energy Efficient Allocation of Virtual Machines in High Performance Computing Cloud
Cloud computing has become increasingly popular for provisioning computing
resources under the virtual machine (VM) abstraction to high-performance
computing (HPC) users who run their applications. An HPC cloud is such a cloud
computing environment. One of the challenges of energy-efficient resource
allocation for VMs in an HPC cloud is the tradeoff between minimizing the total
energy consumption of physical machines (PMs) and satisfying Quality of Service
(e.g., performance). On one hand, cloud providers want to maximize their profit
by reducing the power cost (e.g., by using the smallest number of running PMs).
On the other hand, cloud customers (users) want the highest performance for
their applications. In this paper, we focus on the scenario in which the
scheduler has no global information about future user jobs and applications.
Users request short-term resources at fixed start times and for non-interrupted
durations. We then propose a new allocation heuristic, named Energy-aware and
Performance-per-watt-oriented Best-fit (EPOBF), that uses a performance-per-watt
metric (e.g., maximum MIPS per Watt) to choose the most energy-efficient PM for
each VM. Using information from Feitelson's Parallel Workload Archive to model
HPC jobs, we compare the proposed EPOBF to state-of-the-art heuristics on
heterogeneous PMs (each PM has a multicore CPU). Simulations show that EPOBF
can significantly reduce total energy consumption in comparison with
state-of-the-art allocation heuristics.
Comment: 10 pages, in Proceedings of the International Conference on Advanced
Computing and Applications, Journal of Science and Technology, Vietnamese
Academy of Science and Technology, ISSN 0866-708X, Vol. 51, No. 4B, 201
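The selection rule described above can be sketched as follows; this is an illustrative reading of the performance-per-watt idea with assumed PM parameters, not the authors' exact EPOBF implementation.

```python
# Illustrative reading of the performance-per-watt rule (not the authors' exact
# EPOBF implementation): among PMs with enough free capacity for the VM, pick
# the PM with the highest MIPS per Watt. PM parameters are assumed values.

pms = [
    {"name": "pm1", "mips": 20000, "watts": 200, "free_mips": 8000},
    {"name": "pm2", "mips": 30000, "watts": 400, "free_mips": 15000},
    {"name": "pm3", "mips": 24000, "watts": 220, "free_mips": 5000},
]

def performance_per_watt_choice(pms, vm_mips):
    """Map a VM to the feasible PM with the best MIPS-per-Watt ratio."""
    candidates = [pm for pm in pms if pm["free_mips"] >= vm_mips]
    if not candidates:
        return None
    best = max(candidates, key=lambda pm: pm["mips"] / pm["watts"])
    best["free_mips"] -= vm_mips
    return best

print(performance_per_watt_choice(pms, vm_mips=6000)["name"])   # "pm1" (100 MIPS/W)
```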
LEGaTO: first steps towards energy-efficient toolset for heterogeneous computing
LEGaTO is a three-year EU H2020 project which started in December 2017. The LEGaTO project will leverage task-based programming models to provide a software ecosystem for Made-in-Europe heterogeneous hardware composed of CPUs, GPUs, FPGAs and dataflow engines. The aim is to attain one order of magnitude energy savings from the edge to the converged cloud/HPC.
Optimal Resource Allocation in Ultra-low Power Fog-computing SWIPT-based Networks
In this paper, we consider a fog computing system consisting of a
multi-antenna access point (AP), an ultra-low power (ULP) single-antenna device
and a fog server. The ULP device is assumed to be capable of both energy
harvesting (EH) and information decoding (ID) using a time-switching
simultaneous wireless information and power transfer (SWIPT) scheme. The ULP
device uses the harvested energy for ID and for either local computing or
offloading the computation to the fog server, depending on which strategy is
more energy efficient. In this scenario, we optimize the time slots devoted to
EH, ID and local computation as well as the time slot and power required for
the offloading to minimize the energy cost of the ULP device. Numerical results
are provided to study the effectiveness of the optimized fog computing system
and the relevant challenges.
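A minimal sketch of the local-versus-offload energy comparison that underlies such an optimization is given below, using a generic CMOS computation-energy model and a Shannon-rate transmission time; all constants and parameter values are assumed, and this is not the paper's exact formulation.

```python
# Illustrative local-vs-offload comparison (not the paper's exact formulation):
# local energy uses the common CMOS model E = kappa * cycles * f^2; offload
# energy is the transmit power times the airtime given a Shannon-capacity rate.
# All constants and parameter values below are assumed.
import math

KAPPA = 1e-27          # effective switched-capacitance coefficient (assumed)
BANDWIDTH = 1e6        # Hz, assumed channel bandwidth

def local_energy(cycles: float, cpu_freq_hz: float) -> float:
    return KAPPA * cycles * cpu_freq_hz ** 2

def offload_energy(bits: float, tx_power_w: float, channel_gain: float,
                   noise_power_w: float) -> float:
    rate = BANDWIDTH * math.log2(1.0 + tx_power_w * channel_gain / noise_power_w)
    return tx_power_w * bits / rate          # energy = power * transmission time

cycles, bits = 1e8, 2e5
e_local = local_energy(cycles, cpu_freq_hz=5e8)
e_offload = offload_energy(bits, tx_power_w=1e-3, channel_gain=1e-6, noise_power_w=1e-10)
print("offload" if e_offload < e_local else "local", e_local, e_offload)
```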
