On The Capacity Of Noisy Computations
This paper presents an analysis of the concept of capacity for noisy computations, i.e. functions implemented by unreliable or random devices. An information-theoretic model is given for the noisy computation of a perfect function f (a measurable function between sequence spaces) by an unreliable device (random channel) F: a noisy computation is a product f × F of channels. A model of reliable computation based on input encoding and output decoding is also proposed. These models extend those of the noisy communication channel and of reliable communication through a noisy channel. The capacity of a noisy
reliable communication through a noisy channel. The capacity of a noisy
computation is defined and justified by a coding theorem and a converse. Under
some constraints on the encoding process, capacity is the upper bound of input
rates allowing reliable computation, i.e. decodability of noisy outputs into
expected outputs. These results hold when the one-sided random processes under
concern are asymptotic mean stationary (AMS) and ergodic. In addition, some
characterizations of AMS and ergodic noisy computations are given based on
stability properties of the perfect function f and of the random channel F.
These results are derived from the more general framework of channel products.
Finally, a way is proposed to apply the noisy and reliable computation models to cases where the perfect function f is defined according to a formal computational model.
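As a minimal illustrative sketch (not taken from the paper), the channel-product view of a noisy computation can be simulated by composing a deterministic function f with a random channel F. The particular choice of f (pairwise parity) and the use of a binary symmetric channel with flip probability eps are assumptions made here purely for illustration:

```python
import random

def f(x):
    """Perfect function f: pairwise parity of adjacent bits (illustrative choice)."""
    return [x[i] ^ x[i + 1] for i in range(len(x) - 1)]

def bsc(bits, eps, rng):
    """Random channel F: binary symmetric channel, flips each bit with probability eps."""
    return [b ^ (rng.random() < eps) for b in bits]

def noisy_computation(x, eps, rng):
    """Noisy computation as the product f x F: apply f, then pass through the channel."""
    return bsc(f(x), eps, rng)

rng = random.Random(0)
x = [1, 0, 1, 1, 0]
perfect = f(x)                            # output of the perfect function
noisy = noisy_computation(x, 0.1, rng)    # output corrupted by the random channel
```

With eps = 0 the channel is noiseless and the noisy computation coincides with the perfect function; reliable computation then amounts to encoding inputs and decoding noisy outputs so that the perfect output can still be recovered at positive eps.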
Computing Linear Transformations with Unreliable Components
We consider the problem of computing a binary linear transformation when all circuit components are unreliable. Two noise
models of unreliable components are considered: probabilistic errors and
permanent errors. We introduce the "ENCODED" technique, which ensures that the error probability of computing the linear transformation stays below a small constant, independent of the size of the linear transformation, even when all logic gates in the computation are noisy. Further, we show that the scheme requires fewer operations (in an order sense) than its "uncoded" counterpart. By deriving a lower bound, we show that in some cases the scheme is order-optimal. Using these results, we examine the gain in energy efficiency from the use of a "voltage-scaling" scheme in which gate energy is reduced by lowering the supply voltage. We use a gate energy-reliability model to show that tuning gate energy appropriately at different stages of the computation ("dynamic" voltage scaling), in conjunction with ENCODED, can lead to order-sense energy savings over the classical "uncoded" approach. Finally,
we also examine the problem of computing a linear transformation when noiseless decoders can be used, providing upper and lower bounds for the problem.

Comment: Accepted by Transactions on Information Theory for future publication.
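The general setting of this paper — computing y = Ax over GF(2) with noisy gates, and adding redundancy to keep the error probability small — can be sketched as follows. This is a toy illustration only: the paper's ENCODED technique embeds error-correcting codes into the computation itself, whereas the sketch below substitutes simple repetition with a bitwise majority vote, and the matrix, input, and noise level eps are assumptions made for the example:

```python
import random

def noisy_xor(a, b, eps, rng):
    """Unreliable XOR gate: the correct output is flipped with probability eps."""
    return (a ^ b) ^ (rng.random() < eps)

def noisy_linear_map(A, x, eps, rng):
    """Compute y = A x over GF(2) using only noisy XOR gates ("uncoded" computation)."""
    y = []
    for row in A:
        acc = 0
        for a_ij, x_j in zip(row, x):
            if a_ij:
                acc = noisy_xor(acc, x_j, eps, rng)
        y.append(acc)
    return y

def majority_corrected(A, x, eps, rng, reps=7):
    """Toy redundancy: repeat the noisy computation and take a bitwise majority vote."""
    runs = [noisy_linear_map(A, x, eps, rng) for _ in range(reps)]
    return [int(sum(bits) > reps // 2) for bits in zip(*runs)]
```

For small eps, the repeated computation with majority voting recovers the exact product far more often than a single noisy pass, at the cost of extra operations; the point of ENCODED is that a coded construction achieves a bounded error probability with fewer operations (in an order sense) than such naive redundancy.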