Combinatorial Information Theory: I. Philosophical Basis of Cross-Entropy and Entropy
This study critically analyses the information-theoretic, axiomatic and
combinatorial philosophical bases of the entropy and cross-entropy concepts.
The combinatorial basis is shown to be the most fundamental (most primitive) of
these three bases, since it gives (i) a derivation for the Kullback-Leibler
cross-entropy and Shannon entropy functions, as simplified forms of the
multinomial distribution subject to the Stirling approximation; (ii) an
explanation for the need to maximize entropy (or minimize cross-entropy) to
find the most probable realization; and (iii) new, generalized definitions of
entropy and cross-entropy - supersets of the Boltzmann principle - applicable
to non-multinomial systems. The combinatorial basis is therefore of much
broader scope, with far greater power of application, than the
information-theoretic and axiomatic bases. The generalized definitions underpin
a new discipline of "combinatorial information theory", for the
analysis of probabilistic systems of any type.
Jaynes' generic formulation of statistical mechanics for multinomial systems
is re-examined in light of the combinatorial approach. (abbreviated abstract)
Comment: 45 pp; 1 figure; REVTeX; updated version 5 (incremental changes)
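The abstract's central claim, that the Kullback-Leibler cross-entropy emerges from the multinomial distribution under the Stirling approximation, can be checked numerically. The following sketch is our own illustration (function names are ours, not from the paper): it shows that the scaled negative log-probability of a multinomial realization, $-\frac{1}{N}\ln \mathbb{P}$, approaches $D_{KL}(p\|q)$ as the number of entities $N$ grows.

```python
import math

def multinomial_log_prob(counts, q):
    """ln P of observing `counts` under a multinomial with source probabilities q."""
    n = sum(counts)
    lp = math.lgamma(n + 1)                    # ln N!
    for c, qi in zip(counts, q):
        lp += c * math.log(qi) - math.lgamma(c + 1)   # + n_i ln q_i - ln n_i!
    return lp

def kl(p, q):
    """Kullback-Leibler cross-entropy D_KL(p || q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]   # observed frequencies
q = [0.4, 0.4, 0.2]   # source ("prior") probabilities
for n in (10, 100, 10000):
    counts = [round(pi * n) for pi in p]
    print(n, -multinomial_log_prob(counts, q) / n, kl(p, q))
```

The gap between the two columns shrinks like $\ln N / N$, which is exactly the Stirling-approximation error the abstract alludes to.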
Jaynes' MaxEnt, Steady State Flow Systems and the Maximum Entropy Production Principle
Jaynes' maximum entropy (MaxEnt) principle was recently used to give a
conditional, local derivation of the ``maximum entropy production'' (MEP)
principle, which states that a flow system with fixed flow(s) or gradient(s)
will converge to a steady state of maximum production of thermodynamic entropy
(R.K. Niven, Phys. Rev. E, in press). The analysis provides a steady state
analog of the MaxEnt formulation of equilibrium thermodynamics, applicable to
many complex flow systems at steady state. The present study examines the
classification of physical systems, with emphasis on the choice of constraints
in MaxEnt. The discussion clarifies the distinction between equilibrium, fluid
flow, source/sink, flow/reactive and other systems, leading into an appraisal
of the application of MaxEnt to steady state flow and reactive systems.
Comment: 6 pages; paper for MaxEnt0
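The "choice of constraints" discussed here is the operational core of Jaynes' MaxEnt: maximizing entropy subject to a fixed moment constraint reduces to solving for a Lagrange multiplier. A minimal sketch (our own illustration, assuming a discrete state space and a single mean-value constraint) finds the resulting Gibbs distribution by bisection:

```python
import math

def maxent_dist(energies, mean_target, lam_lo=-50.0, lam_hi=50.0):
    """MaxEnt distribution p_i ∝ exp(-λ ε_i) whose mean energy equals
    mean_target, found by bisection on the multiplier λ."""
    def mean_energy(lam):
        ws = [math.exp(-lam * e) for e in energies]
        z = sum(ws)                                  # partition function
        return sum(w * e for w, e in zip(ws, energies)) / z
    for _ in range(200):
        lam = 0.5 * (lam_lo + lam_hi)
        if mean_energy(lam) > mean_target:
            lam_lo = lam      # mean energy decreases as λ grows
        else:
            lam_hi = lam
    ws = [math.exp(-lam * e) for e in energies]
    z = sum(ws)
    return [w / z for w in ws]

p = maxent_dist([0.0, 1.0, 2.0], 0.5)
print(p)
```

Swapping the constraint (e.g. fixing a flux instead of a mean energy) changes only the quantity fed into the multiplier equation, which is precisely why the classification of constraints matters for steady state flow systems.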
Combinatorial Entropies and Statistics
We examine the combinatorial or probabilistic definition ("Boltzmann's
principle") of the entropy or cross-entropy function $H \propto \ln \mathbb{W}$
or $D \propto -\ln \mathbb{P}$, where $\mathbb{W}$ is the statistical weight
and $\mathbb{P}$ the probability of a given realization of a system.
Extremisation of $H$ or $D$, subject to any constraints, thus selects the "most
probable" (MaxProb) realization. If the system is multinomial, $D$ converges
asymptotically (for number of entities $N \to \infty$) to the
Kullback-Leibler cross-entropy $D_{KL}$; for equiprobable categories in a
system, $D_{KL}$ converges to the Shannon entropy $H_{Sh}$. However, in many
cases $\mathbb{W}$ or $\mathbb{P}$ is not multinomial and/or does not satisfy
an asymptotic limit. Such systems cannot meaningfully be analysed with
$D_{KL}$ or $H_{Sh}$, but can be analysed directly by MaxProb. This study
reviews several
examples, including (a) non-asymptotic systems; (b) systems with
indistinguishable entities (quantum statistics); (c) systems with
indistinguishable categories; (d) systems represented by urn models, such as
"neither independent nor identically distributed" (ninid) sampling; and (e)
systems representable in graphical form, such as decision trees and networks.
Boltzmann's combinatorial definition of entropy is shown to be of greater
importance for "probabilistic inference" than the axiomatic definition used
in information theory.
Comment: Invited contribution to the SigmaPhi 2008 Conference; accepted by
EPJB volume 69 issue 3 June 200
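The "non-asymptotic systems" of example (a) are easy to exhibit numerically. In the sketch below (our own illustration, not code from the paper), Boltzmann's principle $H \propto \ln \mathbb{W}$ is evaluated exactly for a multinomial weight $\mathbb{W} = N!/\prod_i n_i!$: at large $N$ the scaled weight $\frac{1}{N}\ln \mathbb{W}$ matches the Shannon entropy, but at small $N$ it visibly does not, which is why such systems must be analysed directly by MaxProb.

```python
import math

def log_weight(counts):
    """ln W for a multinomial realization: W = N! / prod(n_i!), evaluated exactly."""
    n = sum(counts)
    return math.lgamma(n + 1) - sum(math.lgamma(c + 1) for c in counts)

def shannon(p):
    """Shannon entropy H_Sh = -sum p_i ln p_i."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]
for n in (8, 80, 8000):
    counts = [int(pi * n) for pi in p]
    print(n, log_weight(counts) / n, shannon(p))
```

At N = 8 the exact (1/N) ln W falls well short of H_Sh; only in the asymptotic limit do the two definitions coincide.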
Cost of s-fold Decisions in Exact Maxwell-Boltzmann, Bose-Einstein and Fermi-Dirac Statistics
The exact forms of the degenerate Maxwell-Boltzmann (MB), Bose-Einstein (BE)
and Fermi-Dirac (FD) entropy functions, derived by Boltzmann's principle
without the Stirling approximation (Niven, Physics Letters A, 342(4) (2005)
286), are further examined. Firstly, an apparent paradox in quantisation
effects is resolved using the Laplace-Jaynes interpretation of probability. The
energy cost of learning that a system, distributed over s equiprobable states,
is in one such state (an s-fold decision) is then calculated for each
statistic. The analysis confirms that the cost depends on one's knowledge of
the number of entities N and (for BE and FD statistics) the degeneracy,
extending the findings of Niven (2005).
Comment: 7 figures; 5 pages; REVTeX / TeXShop; paper from 2005 NEXT-Sigma-Ph
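The exact (non-Stirling) statistical weights behind these three entropy functions are standard combinatorial counts, which a few lines of Python can evaluate without approximation (our own illustration, not code from the paper; the degeneracies g_i and occupations n_i are a toy example):

```python
from math import comb, factorial, log

def lnW_MB(ns, gs):
    """Maxwell-Boltzmann: W = N! * prod(g_i**n_i / n_i!), kept exact in integers."""
    w = factorial(sum(ns))
    for n in ns:
        w //= factorial(n)          # multinomial coefficient (always exact)
    for n, g in zip(ns, gs):
        w *= g ** n                 # degeneracy factor
    return log(w)

def lnW_BE(ns, gs):
    """Bose-Einstein: W = prod C(n_i + g_i - 1, n_i)."""
    w = 1
    for n, g in zip(ns, gs):
        w *= comb(n + g - 1, n)
    return log(w)

def lnW_FD(ns, gs):
    """Fermi-Dirac: W = prod C(g_i, n_i); requires n_i <= g_i."""
    w = 1
    for n, g in zip(ns, gs):
        w *= comb(g, n)
    return log(w)

ns, gs = [3, 2, 1], [4, 4, 4]     # occupations and degeneracies (toy values)
print(lnW_MB(ns, gs), lnW_BE(ns, gs), lnW_FD(ns, gs))
```

At such small N the three ln W values differ markedly, the quantisation effect whose knowledge-dependence (on N and on the degeneracy) the paper analyses.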