Energy-Reliability Limits in Nanoscale Feedforward Neural Networks and Formulas
Due to energy-efficiency requirements, computational systems are now being
implemented using noisy nanoscale semiconductor devices whose reliability
depends on energy consumed. We study circuit-level energy-reliability limits
for deep feedforward neural networks (multilayer perceptrons) built from such
devices, and en route also establish the same limits for formulas (Boolean
tree-structured circuits). To obtain energy lower bounds, we extend Pippenger's
mutual information propagation technique for characterizing the complexity of
noisy circuits, since small circuit complexity need not imply low energy. Many
device technologies require all gates to have the same electrical operating
point; in circuits of such uniform gates, we show that the minimum energy
required to achieve any non-trivial reliability scales superlinearly with the
number of inputs. Circuits implemented in emerging device technologies like
spin electronics can, however, have gates operate at different electrical
points; in circuits of such heterogeneous gates, we show energy scaling can be
linear in the number of inputs. Building on our extended mutual information
propagation technique and using crucial insights from convex optimization
theory, we develop an algorithm to compute energy lower bounds for any given
Boolean tree under heterogeneous gates. The algorithm runs in time linear in
the number of gates and is therefore practical for modern circuit design. As part
of our development, we find a simple procedure for energy allocation across
circuit gates with different operating points and across neural-network layers
that operate at different points.

Comment: To appear in IEEE Journal on Selected Areas in Information Theory
(special issue on deep learning).
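The abstract states only that the lower-bound algorithm runs in time linear in the number of gates. As a rough illustration of why a single traversal suffices, the sketch below sums per-gate minimum energies in one post-order-style pass over a Boolean tree. The `Gate` structure and the convex cost model `energy_for_reliability` are hypothetical stand-ins; the paper's actual bound comes from mutual-information propagation and convex optimization, not from this cost function.

```python
# Hypothetical sketch: one linear-time pass over a Boolean tree, summing a
# per-gate energy lower bound. The cost model below is illustrative only.

from dataclasses import dataclass, field
import math

@dataclass
class Gate:
    name: str
    children: list = field(default_factory=list)

def energy_for_reliability(target_error: float) -> float:
    # Illustrative convex cost: energy grows as the required gate error shrinks.
    return math.log(1.0 / target_error)

def energy_lower_bound(root: Gate, target_error: float) -> float:
    """Visit every gate exactly once (linear in gate count) and sum costs."""
    total = 0.0
    stack = [root]
    while stack:
        gate = stack.pop()
        total += energy_for_reliability(target_error)
        stack.extend(gate.children)
    return total

# A 5-node tree: and(or(x1, x2), x3).
tree = Gate("and", [Gate("or", [Gate("x1"), Gate("x2")]), Gate("x3")])
print(energy_lower_bound(tree, 0.01))
```

Because each gate is pushed and popped exactly once, the running time is linear in the number of gates, matching the complexity claimed in the abstract; heterogeneous gates would simply use per-gate cost functions in place of the single one here.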
Exploiting Errors for Efficiency: A Survey from Circuits to Algorithms
When a computational task tolerates a relaxation of its specification or when
an algorithm tolerates the effects of noise in its execution, hardware,
programming languages, and system software can trade deviations from correct
behavior for lower resource usage. We present, for the first time, a synthesis
of research results on computing systems that only make as many errors as their
users can tolerate, from across the disciplines of computer aided design of
circuits, digital system design, computer architecture, programming languages,
operating systems, and information theory.
Rather than over-provisioning resources at each layer to avoid errors, it can
be more efficient to exploit error masking at one layer, which prevents errors
from propagating to higher layers. We survey tradeoffs for
individual layers of computing systems from the circuit level to the operating
system level and illustrate the potential benefits of end-to-end approaches
through two examples. To tie the survey together, we present a consistent
formalization of terminology across the layers that does not significantly
deviate from the terminology traditionally used by research communities in
their layers of focus.

Comment: 35 pages.