Neuroscientists formulate very different hypotheses about the nature of the
neural code. At one extreme, it has been argued that neurons encode information
in relatively slow changes of the rates at which individual spikes arrive ("rate
code"), and that the irregularity in the spike trains reflects noise in the
system, while at the other extreme this irregularity is itself the temporal
code, i.e. the precise timing of every spike carries additional information
about the input. It is known that in the estimation of Shannon information the
spike patterns and their temporal structure are taken into account, while the
rate code is determined by the firing rate alone. We
compare these two types of codes for binary Information Sources, which model
encoded spike trains. Assuming that the information transmitted by a neuron is
governed either by an uncorrelated stochastic process or by a process with
memory, we compare the information transmission rates carried by such spike
trains with their firing rates. We show that the crucial role in the relation
between the information and firing rates is played by a quantity which we call
the "jumping" parameter. It corresponds to the probabilities of transitions
from the no-spike state to the spike state and vice versa. For low values of
the jumping parameter, the quotient of the
information and firing rates is a monotonically decreasing function of the
firing rate, so there is a straightforward, one-to-one relation between the
temporal and rate codes. By contrast, it turns out that for a large enough
jumping parameter this quotient is a non-monotonic function of the firing rate
and exhibits a global maximum; in this case an optimal firing rate exists.
Moreover, there is then no one-to-one relation between the information and
firing rates, so the temporal and rate codes differ qualitatively. This leads
to the observation that the behavior of the quotient of the information and
firing rates for a large jumping parameter is important in the context of
bursting phenomena.
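
To make the setup concrete, the following is a minimal numerical sketch (Python)
of a binary two-state Markov source of the kind described above. The
parametrization used here, with transition probabilities p01 = s*p (no-spike to
spike) and p10 = s*(1-p) (spike to no-spike), where p is the stationary firing
rate and s stands in for the jumping parameter, is an illustrative assumption
rather than the paper's exact construction; with this choice s = 1 reduces to an
uncorrelated (memoryless) source.

    import math

    def binary_entropy(x):
        # Shannon entropy (bits) of a Bernoulli(x) variable; 0 at the boundaries.
        if x <= 0.0 or x >= 1.0:
            return 0.0
        return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

    def entropy_rate(p, s):
        # Entropy rate (bits per symbol) of a two-state Markov source with
        # stationary firing rate p, under the ASSUMED parametrization
        # p01 = s*p (no-spike -> spike) and p10 = s*(1-p) (spike -> no-spike).
        p01 = s * p
        p10 = s * (1.0 - p)
        if not (0.0 <= p01 <= 1.0 and 0.0 <= p10 <= 1.0):
            raise ValueError("transition probabilities must lie in [0, 1]")
        # The stationary distribution is (1 - p, p) by construction.
        return (1.0 - p) * binary_entropy(p01) + p * binary_entropy(p10)

    if __name__ == "__main__":
        for s in (0.5, 1.0, 1.8):                    # small vs. large jumping parameter
            lo = max(0.001, 1.0 - 1.0 / s + 0.001)   # keep p10 = s*(1-p) within [0, 1]
            hi = min(0.999, 1.0 / s - 0.001)         # keep p01 = s*p within [0, 1]
            rates = [lo + (hi - lo) * k / 200 for k in range(201)]
            quotient = [entropy_rate(p, s) / p for p in rates]
            k_max = max(range(len(rates)), key=lambda k: quotient[k])
            at_edge = k_max in (0, len(rates) - 1)
            print(f"s = {s:.1f}: quotient of entropy rate and firing rate is "
                  f"maximal at firing rate {rates[k_max]:.3f} "
                  f"({'sweep boundary' if at_edge else 'interior maximum'})")

Under this assumed parametrization the sweep reports a boundary maximum for the
small values of s (the quotient keeps decreasing as the firing rate grows) and
an interior maximum, i.e. an optimal firing rate, for the larger value of s,
which mirrors qualitatively the monotonic versus non-monotonic behavior
discussed above.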