Back-propagation of accuracy
In this paper we solve the following problem: how do we determine the maximal allowable errors for the signals and parameters of each element of a network, given that the vector of output signals of the network must be computed to a specified accuracy? "Back-propagation of accuracy" is developed to solve this problem. The calculation of the allowable errors for each element of the network by back-propagation of accuracy is strikingly similar to back-propagation of error, because both involve a backward motion of signals through the network; at the same time it is very different, because the rules by which signals are transformed as they pass backwards through the elements are new. The method allows us to formulate requirements on the accuracy of calculations and on the realization of technical devices whenever the required accuracy of the output signals of the network is known.
Comment: 4 pages, 5 figures. Talk given at ICNN'97 (The 1997 IEEE International Conference on Neural Networks, Houston, USA).
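The backward tolerance calculation described in the abstract can be illustrated with a toy sketch for a chain of scalar gain elements. The rule used here (dividing the output tolerance by the magnitude of each element's gain) is our own simplification for linear elements, not the paper's exact transformation rules.

```python
# Hypothetical sketch: back-propagate an output accuracy requirement
# through a chain of scalar gain elements y = w * x.  An input error dx
# at a gain w produces an output error |w| * dx, so the allowable input
# error of that element is the allowable output error divided by |w|.
def backprop_accuracy(weights, eps_out):
    """Return the allowable error at each point of the chain, from the
    network input to the network output, given the required output
    accuracy eps_out."""
    tolerances = [eps_out]
    eps = eps_out
    for w in reversed(weights):
        eps = eps / abs(w)   # backward pass: tighten by the gain
        tolerances.append(eps)
    return list(reversed(tolerances))  # input-to-output order

# A chain of three gains; the output must be accurate to within 0.1.
tols = backprop_accuracy([2.0, 0.5, 4.0], eps_out=0.1)
```

Note that the backward pass visits the elements in reverse order, just as back-propagation of error does; only the transformation applied at each element differs.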
Computable randomness is about more than probabilities
We introduce a notion of computable randomness for infinite sequences that
generalises the classical version in two important ways. First, our definition
of computable randomness is associated with imprecise probability models, in
the sense that we consider lower expectations (or sets of probabilities)
instead of classical 'precise' probabilities. Secondly, instead of binary
sequences, we consider sequences whose elements take values in some finite
sample space. Interestingly, we find that every sequence is computably random
with respect to at least one lower expectation, and that lower expectations
that are more informative have fewer computably random sequences. This leads to
the intriguing question of whether every sequence is computably random with
respect to a unique most informative lower expectation. We study this question
in some detail and provide a partial answer.
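The two claims in the abstract, that every sequence is random with respect to at least one lower expectation, and that more informative models have fewer random sequences, can be illustrated with a toy betting sketch. The interval model, the proportional strategy, and the supermartingale condition below are our own simplified illustration, not the paper's formal definitions.

```python
# Toy sketch: betting against a binary sequence under an imprecise model
# given by a probability interval [pl, pu] for the outcome 1.  A betting
# step that moves capital m to m*(1+a) on outcome 1 and m*(1-b) on
# outcome 0 is allowed (a supermartingale step) only if, for every p in
# [pl, pu],  p*(1+a) + (1-p)*(1-b) <= 1.
def step_allowed(a, b, pl, pu):
    # The left-hand side is linear in p, so checking the two endpoints
    # of the interval suffices.
    return all(p*(1+a) + (1-p)*(1-b) <= 1 + 1e-12 for p in (pl, pu))

def run(seq, a, b):
    # Capital of the proportional strategy along the sequence.
    m = 1.0
    for x in seq:
        m *= (1 + a) if x == 1 else (1 - b)
    return m

seq = [1] * 20  # the all-ones sequence

# A precise fair coin (pl = pu = 0.5) is informative: it permits the
# bet a = b = 0.5, whose capital grows like 1.5**20 on all-ones, so the
# all-ones sequence is not random for this model.
assert step_allowed(0.5, 0.5, 0.5, 0.5)

# The vacuous model [0, 1] is the least informative: the endpoint p = 1
# forces a <= 0 and p = 0 forces b <= 0, so no nontrivial bet is
# allowed, and every sequence is random with respect to it.
assert not step_allowed(0.5, 0.5, 0.0, 1.0)
```

Narrowing the probability interval enlarges the set of allowed betting strategies, which is one way to see why more informative lower expectations leave fewer computably random sequences.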