Modelling Energy Consumption based on Resource Utilization
Power management is an expensive and important issue for large computational
infrastructures such as datacenters, large clusters, and computational grids.
However, measuring the energy consumption of scalable systems may be
impractical due to the cost and complexity of deploying power metering devices
on a large number of machines. In this paper, we propose using information about
resource utilization (e.g. processor, memory, disk operations, and network
traffic) as proxies for estimating power consumption. We employ machine
learning techniques to estimate power consumption from such information, which
is provided by common operating systems. Experiments with linear regression,
regression tree, and multilayer perceptron on data from different hardware
resulted in a model with 99.94% accuracy and an error of 6.32 watts in the
best case.
Comment: Submitted to Journal of Supercomputing on 14th June, 201
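The linear-regression variant of such a power model can be sketched as follows. This is a minimal illustration with synthetic data; the feature names, the idle baseline, and the purely linear form are assumptions for illustration (the paper also evaluates regression trees and multilayer perceptrons).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Synthetic utilization traces: cpu (%), memory (%), disk IOPS, net KB/s.
# Real traces would come from OS-level counters, as the abstract describes.
X = rng.uniform(0, 100, size=(n, 4))
# Synthetic "ground truth" power: idle baseline plus per-resource terms and noise
true_w = np.array([0.9, 0.15, 0.05, 0.02])
idle = 60.0
y = idle + X @ true_w + rng.normal(0, 2.0, n)

# Fit ordinary least squares with an intercept column
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
mae = np.mean(np.abs(pred - y))  # mean absolute error in watts
```

The intercept recovers the idle power draw, and the remaining coefficients give the marginal watts per unit of each resource's utilization.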
Power Modelling for Heterogeneous Cloud-Edge Data Centers
Existing power modelling research focuses not on the method used for
developing models but rather on the model itself. This paper aims to develop a
method for deploying power models on emerging processors that will be used, for
example, in cloud-edge data centers. Our research first develops a hardware
counter selection method that appropriately selects counters most correlated to
power on ARM and Intel processors. Then, we propose a two-stage power model
that works across multiple architectures. The key results are: (i) the
automated hardware performance counter selection method achieves comparable
selection to the manual selection methods reported in the literature, and (ii)
the two-stage power model predicts dynamic power more accurately on both ARM
and Intel processors than classic power models.
Comment: 10 pages, 10 figures, conference
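A correlation-based counter selection step of the kind described above can be sketched as follows. The counter names and rankings here are invented for illustration; the paper's method operates on real ARM and Intel performance counters.

```python
import numpy as np

def select_counters(counters: dict, power: np.ndarray, k: int = 2):
    """Rank counters by absolute Pearson correlation with measured
    power and keep the top-k most correlated ones."""
    scores = {
        name: abs(np.corrcoef(vals, power)[0, 1])
        for name, vals in counters.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

rng = np.random.default_rng(1)
power = rng.uniform(10, 50, 200)
counters = {
    "instructions": 3.0 * power + rng.normal(0, 1, 200),   # strongly correlated
    "cache_misses": 0.5 * power + rng.normal(0, 20, 200),  # weakly correlated
    "branch_misses": rng.normal(0, 1, 200),                # uncorrelated
}
top = select_counters(counters, power, k=2)
```

Automating this ranking replaces the manual, per-architecture counter selection that the abstract says dominates the existing literature.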
On the Estimation of Nonrandom Signal Coefficients from Jittered Samples
This paper examines the problem of estimating the parameters of a bandlimited
signal from samples corrupted by random jitter (timing noise) and additive
i.i.d. Gaussian noise, where the signal lies in the span of a finite basis. For
this classical estimation problem, the Cramér-Rao lower bound (CRB) is
computed, and an Expectation-Maximization (EM) algorithm approximating the
maximum likelihood (ML) estimator is developed. Simulations are performed to
study the convergence properties of the EM algorithm and compare the
performance against both the CRB and a basic linear estimator. These
simulations demonstrate that post-processing the jittered samples with the
proposed EM algorithm allows greater jitter to be tolerated, potentially
reducing on-chip ADC power consumption substantially.
Comment: 11 pages, 8 figures
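The EM idea can be sketched in miniature: treat the per-sample jitter as a hidden variable, approximate its posterior on a discrete grid (E-step), and re-fit the basis coefficients by weighted least squares (M-step). The cosine basis, grid resolution, and noise levels below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)
K, N = 3, 64
t = np.linspace(0, 1, N, endpoint=False)

def basis(tt):
    """K cosine basis functions evaluated at times tt, stacked as columns."""
    return np.stack([np.cos(2 * np.pi * k * tt) for k in range(1, K + 1)], axis=-1)

c_true = np.array([1.0, -0.5, 0.25])
sig_z, sig_w = 0.01, 0.05                        # jitter and additive noise std
z = rng.normal(0, sig_z, N)
y = basis(t + z) @ c_true + rng.normal(0, sig_w, N)

grid = np.linspace(-3 * sig_z, 3 * sig_z, 21)     # discretized jitter values
prior = np.exp(-grid**2 / (2 * sig_z**2))
prior /= prior.sum()
Phi = np.stack([basis(t + g) for g in grid])      # shape (G, N, K)

c = np.linalg.lstsq(basis(t), y, rcond=None)[0]   # jitter-blind initialization
for _ in range(20):
    # E-step: posterior weight of each grid jitter value, per sample
    resid = y[None, :] - np.einsum('gnk,k->gn', Phi, c)
    logp = -resid**2 / (2 * sig_w**2) + np.log(prior)[:, None]
    w = np.exp(logp - logp.max(axis=0))
    w /= w.sum(axis=0)
    # M-step: weighted least squares over all (grid point, sample) pairs
    AtA = np.einsum('gn,gnk,gnl->kl', w, Phi, Phi)
    Aty = np.einsum('gn,gnk,n->k', w, Phi, y)
    c = np.linalg.solve(AtA, Aty)
```

Each iteration increases the likelihood, refining the coefficient estimate beyond what the jitter-blind least-squares fit provides.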
Feedback and time are essential for the optimal control of computing systems
The performance, reliability, cost, size and energy usage of computing systems can be improved by one or more orders of magnitude through the systematic use of modern control and optimization methods. Computing systems rely on feedback algorithms to schedule tasks, data and resources, but the models used to design these algorithms are validated using open-loop metrics. By using closed-loop metrics instead, such as the gap metric developed in the control community, it should be possible to develop improved scheduling algorithms and computing systems that have not been over-engineered. Furthermore, scheduling problems are most naturally formulated as constraint satisfaction or mathematical optimization problems, but these are seldom implemented using state-of-the-art numerical methods, nor do they explicitly take into account the fact that the scheduling problem itself takes time to solve. This paper makes the case that recent results in real-time model predictive control, where optimization problems are solved in order to control a process that evolves in time, are likely to form the basis of the scheduling algorithms of the future. We therefore outline some of the research problems and opportunities that could arise from explicitly considering feedback and time when designing optimal scheduling algorithms for computing systems.
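The receding-horizon pattern the abstract points to can be sketched on a toy scalar system: at each step, solve a finite-horizon optimization, apply only the first decision, then re-solve with fresh feedback. The plant, costs, and exhaustive search below are illustrative assumptions, not a real scheduling model.

```python
import numpy as np
from itertools import product

def mpc_step(x, horizon=3, inputs=(-1.0, 0.0, 1.0)):
    """Exhaustively search input sequences over the horizon and
    return the first input of the cheapest sequence."""
    best_u, best_cost = None, np.inf
    for seq in product(inputs, repeat=horizon):
        xi, cost = x, 0.0
        for u in seq:
            xi = 0.9 * xi + u            # simple stable scalar plant
            cost += xi**2 + 0.1 * u**2   # track the origin, penalize effort
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

x = 5.0
for _ in range(10):
    x = 0.9 * x + mpc_step(x)            # closed loop: re-solve every step
```

Applying only the first decision and re-solving each step is exactly the feedback structure that the open-loop validation criticized in the abstract fails to capture; a real-time variant must also budget for the solve time itself.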
Semantic Compression for Edge-Assisted Systems
A novel semantic approach to data selection and compression is presented for
the dynamic adaptation of IoT data processing and transmission within "wireless
islands", where a set of sensing devices (sensors) are interconnected through
one-hop wireless links to a computational resource via a local access point.
The core of the proposed technique is a cooperative framework where local
classifiers at the mobile nodes are dynamically crafted and updated based on
the current state of the observed system, the global processing objective and
the characteristics of the sensors and data streams. The edge processor plays a
key role by establishing a link between content and operations within the
distributed system. The local classifiers are designed to filter the data
streams and provide only the needed information to the global classifier at the
edge processor, thus minimizing bandwidth usage. However, the higher the
accuracy of these local classifiers, the greater the energy required to run
them at the individual sensors. A formulation of the optimization problem for
the dynamic construction of the classifiers under bandwidth and energy
constraints is proposed and demonstrated on a synthetic example.
Comment: Presented at the Information Theory and Applications Workshop (ITA), February 17, 201
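The filtering role of the local classifiers can be sketched as follows: a sensor decides confident samples locally and forwards only uncertain ones to the edge, trading bandwidth against local accuracy. The toy logistic classifier and threshold are assumptions for illustration.

```python
import numpy as np

def local_filter(x, threshold=0.8):
    """Return (decision, forward) for a 1-D toy classifier.

    Confidence is a logistic score; confident samples are decided
    locally, uncertain ones are forwarded to the edge classifier."""
    p = 1.0 / (1.0 + np.exp(-4.0 * x))       # P(class 1 | x)
    conf = max(p, 1.0 - p)
    if conf >= threshold:
        return (int(p >= 0.5), False)        # decide locally, save bandwidth
    return (None, True)                      # forward raw sample to the edge

rng = np.random.default_rng(3)
xs = rng.normal(0, 1, 1000)
forwarded = sum(local_filter(x)[1] for x in xs)
rate = forwarded / len(xs)                   # fraction of samples transmitted
```

Raising the threshold forwards more samples (more bandwidth, better global accuracy); lowering it decides more locally (less bandwidth, more local energy per decision), which is the trade-off the proposed optimization balances.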
Algorithm and Hardware Design of Discrete-Time Spiking Neural Networks Based on Back Propagation with Binary Activations
We present a new back-propagation-based training algorithm for discrete-time
spiking neural networks (SNNs). Inspired by recent deep learning algorithms for
binarized neural networks, binary activation with a straight-through gradient
estimator is used to model the leaky integrate-and-fire spiking neuron,
overcoming the difficulty of training SNNs with back propagation. Two SNN
algorithms are proposed: (1) SNN with discontinuous integration, which is
suitable for rate-coded input spikes, and (2) SNN with continuous integration,
which is more general and can handle input spikes with temporal information.
Neuromorphic hardware designed in 40 nm CMOS exploits spike sparsity and
demonstrates high classification accuracy (>98% on MNIST) at low energy
(48.4-773 nJ/image).
Comment: 2017 IEEE Biomedical Circuits and Systems (BioCAS)
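The straight-through estimator at the heart of the training algorithm can be sketched as a pair of forward/backward rules: the forward pass thresholds the membrane potential into a binary spike, while the backward pass treats the threshold as (gated) identity. The shapes and gating window below are illustrative, not the paper's hardware design.

```python
import numpy as np

def binary_activation_forward(v):
    """Spike (1.0) if the membrane potential reaches threshold, else 0.0."""
    return (v >= 0).astype(np.float64)

def binary_activation_backward(v, grad_out):
    """Straight-through estimator: pass the upstream gradient through
    where |v| <= 1, and block it elsewhere."""
    return grad_out * (np.abs(v) <= 1.0)

v = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
spikes = binary_activation_forward(v)
grads = binary_activation_backward(v, np.ones_like(v))
```

Because the hard threshold has zero gradient almost everywhere, this surrogate backward rule is what makes back propagation through spiking activations possible at all.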