Predicting the Performance of a Computing System with Deep Networks
Predicting the performance and energy consumption of computing hardware is critical for many modern applications: such predictions inform procurement, deployment, and autonomic-scaling decisions. Existing approaches to understanding the performance of hardware largely focus on benchmarking – leveraging standardised workloads which seek to be representative of an end-user’s needs. Two key challenges arise: benchmark workloads may not be representative of an end-user’s workload, and benchmark scores are not easily obtained for all hardware. Within this paper, we demonstrate the potential to build Deep Learning models to predict benchmark scores for unseen hardware. We undertake our evaluation with the openly available SPEC 2017 benchmark results. We evaluate three different networks – one fully-connected network along with two Convolutional Neural Networks (one bespoke and one ResNet-inspired) – and demonstrate impressive R² scores of 0.96, 0.98 and 0.94 respectively.
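The core framing above – treating benchmark-score prediction as supervised regression over hardware features, evaluated by R² – can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's networks or the actual SPEC 2017 dataset; the feature names and score-generating function are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hardware features: [cores, base_clock_GHz, cache_MB, memory_GB]
X = rng.uniform([2, 1.5, 4, 8], [64, 4.0, 64, 512], size=(200, 4))
# Synthetic benchmark scores standing in for SPEC 2017 results
true_w = np.array([0.8, 5.0, 0.2, 0.05])
y = X @ true_w + rng.normal(0.0, 1.0, size=200)

# Normalise features, then fit a regressor by gradient descent
mu, sigma = X.mean(axis=0), X.std(axis=0)
Xn = (X - mu) / sigma
w, b = np.zeros(4), 0.0
for _ in range(2000):
    err = Xn @ w + b - y
    w -= 0.1 * (Xn.T @ err) / len(y)
    b -= 0.1 * err.mean()

# R^2: fraction of score variance explained by the model
pred = Xn @ w + b
r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

The paper's deep networks replace this linear map with fully-connected and convolutional architectures, but the evaluation metric (R² against held-out benchmark scores) is the same quantity computed here.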
All-optical image denoising using a diffractive visual processor
Image denoising, one of the essential inverse problems, aims to remove
noise/artifacts from input images. In general, digital image denoising
algorithms, executed on computers, present latency due to several iterations
implemented in, e.g., graphics processing units (GPUs). While deep
learning-enabled methods can operate non-iteratively, they also introduce
latency and impose a significant computational burden, leading to increased
power consumption. Here, we introduce an analog diffractive image denoiser to
all-optically and non-iteratively clean various forms of noise and artifacts
from input images - implemented at the speed of light propagation within a thin
diffractive visual processor. This all-optical image denoiser comprises passive
transmissive layers optimized using deep learning to physically scatter the
optical modes that represent various noise features, causing them to miss the
output image Field-of-View (FoV) while retaining the object features of
interest. Our results show that these diffractive denoisers can efficiently
remove salt and pepper noise and image rendering-related spatial artifacts from
input phase or intensity images while achieving an output power efficiency of
~30-40%. We experimentally demonstrated the effectiveness of this analog
denoiser architecture using a 3D-printed diffractive visual processor operating
at the terahertz spectrum. Owing to their speed, power-efficiency, and minimal
computational overhead, all-optical diffractive denoisers can be transformative
for various image display and projection systems, including, e.g., holographic
displays.
Multirate Training of Neural Networks
We propose multirate training of neural networks: partitioning neural network
parameters into "fast" and "slow" parts which are trained simultaneously using
different learning rates. By choosing appropriate partitionings we can obtain
large computational speed-ups for transfer learning tasks. We show that for
various transfer learning applications in vision and NLP we can fine-tune deep
neural networks in almost half the time, without reducing the generalization
performance of the resulting model. We also discuss other splitting choices for
the neural network parameters which are beneficial in enhancing generalization
performance in settings where neural networks are trained from scratch.
Finally, we propose an additional multirate technique which can learn different
features present in the data by training the full network on different time
scales simultaneously. The benefits of using this approach are illustrated for
ResNet architectures on image data. Our paper unlocks the potential of using
multirate techniques for neural network training and provides many starting
points for future work in this area.
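The central idea – partitioning parameters into "fast" and "slow" groups trained simultaneously with different learning rates – can be sketched on a toy regression problem. This is a minimal illustration, not the paper's method as applied to transfer learning; the particular split (first half "slow", second half "fast") and the learning rates are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data
X = rng.normal(size=(128, 8))
true_w = rng.normal(size=8)
y = X @ true_w

# Partition parameters into two groups (hypothetical split):
# "slow" might be pre-trained backbone weights in a transfer-learning
# setting, "fast" the newly added task-specific head.
slow_idx = np.arange(0, 4)
fast_idx = np.arange(4, 8)
w = np.zeros(8)

lr = {"slow": 0.01, "fast": 0.1}  # different rates, one simultaneous loop
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # full gradient, shared by both groups
    w[slow_idx] -= lr["slow"] * grad[slow_idx]
    w[fast_idx] -= lr["fast"] * grad[fast_idx]

loss = ((X @ w - y) ** 2).mean()
```

Both groups see every gradient step; only the step size differs. In the transfer-learning setting described above, giving the pre-trained "slow" part a much smaller rate is what allows fine-tuning to proceed with less effective work per parameter, which is the source of the reported speed-ups.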