Yet another resolution of the Gibbs paradox: an information theory approach
The ``Gibbs Paradox'' refers to several related questions concerning entropy
in thermodynamics and statistical mechanics: whether it is an extensive
quantity or not, how it changes when identical particles are mixed, and the
proper way to count states in systems of identical particles. Several authors
have recognized that the paradox is resolved once it is realized that there is
no such thing as the entropy of a system, that there are many entropies, and
that the choice between treating particles as being distinguishable or not
depends on the resolution of the experiment. The purpose of this note is
essentially pedagogical; we add to their analysis by examining the paradox from
the point of view of information theory. Our argument is based on that
`grouping' property of entropy that Shannon recognized, by including it among
his axioms, as an essential requirement on any measure of information. Not only
does it provide the right connection between different entropies but, in
addition, it draws our attention to the obvious fact that addressing issues of
distinguishability and of counting states requires a clear idea about what
precisely we mean by a state.
Comment: Presented at MaxEnt 2001, the 21st International Workshop on Bayesian Inference and Maximum Entropy Methods (August 4-9, 2001, Baltimore, MD, USA).
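Shannon's grouping property, on which the argument rests, says that the entropy of a full distribution equals the entropy of the coarse-grained (grouped) distribution plus the group-weighted average of the within-group entropies. A minimal numerical check, using a toy three-state distribution of our own choosing (not from the paper):

```python
import math

def shannon_entropy(p):
    """Shannon entropy in bits of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

# Full distribution over three microstates (toy numbers).
p = [0.5, 0.25, 0.25]

# Coarse-grain: merge the last two microstates into one "macrostate".
groups = [[0.5], [0.25, 0.25]]
q = [sum(g) for g in groups]  # coarse-grained distribution [0.5, 0.5]

# Grouping property:
#   H(p) = H(q) + sum_g q_g * H(conditional distribution within group g)
lhs = shannon_entropy(p)
rhs = shannon_entropy(q) + sum(
    qg * shannon_entropy([x / qg for x in g]) for g, qg in zip(groups, q)
)
assert abs(lhs - rhs) < 1e-12
print(lhs, rhs)  # both equal 1.5 bits
```

Merging the two microstates discards exactly the average within-group entropy, which is the bookkeeping that connects the "many entropies" obtained at different experimental resolutions.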
Maximum entropy approach to the theory of simple fluids
We explore the use of the method of Maximum Entropy (ME) as a technique to
generate approximations. In a first use of the ME method the "exact" canonical
probability distribution of a fluid is approximated by that of a fluid of hard
spheres; ME is used to select an optimal value of the hard-sphere diameter.
These results coincide with those obtained using the Bogoliubov
variational method. A second, more complete use of the ME method leads to a
better description of the soft-core nature of the interatomic potential in
terms of a statistical mixture of distributions corresponding to hard spheres
of different diameters. As an example, the radial distribution function for a
Lennard-Jones fluid (Argon) is compared with results from molecular dynamics
simulations. There is a considerable improvement over the results obtained from
the Bogoliubov principle.
Comment: 14 pages and 4 figures. Presented at MaxEnt 2003, the 23rd International Workshop on Bayesian Inference and Maximum Entropy Methods (August 3-8, 2003, Jackson Hole, WY, USA).
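The variational selection of a hard-sphere diameter can be caricatured in one dimension: approximate the "exact" Boltzmann density of a steep soft-core potential by a one-parameter hard-sphere trial family, and let an entropic criterion (relative entropy of the trial from the exact density) pick the diameter. The potential, box size, and grid below are hypothetical stand-ins, not the paper's actual fluid calculation:

```python
import math

beta = 1.0
L = 5.0                      # box size (toy value)
n = 500
dx = L / n
xs = [(i + 0.5) * dx for i in range(n)]

# "Exact" canonical density for a steep soft-core repulsion V(x) = x**-12.
w = [math.exp(-beta * x ** -12) for x in xs]
Z = sum(w) * dx
p = [v / Z for v in w]

def kl_hard_sphere(d):
    """Relative entropy D(q_d || p) of the hard-sphere trial density
    q_d (uniform on (d, L), zero inside the sphere) from the exact p."""
    q = 1.0 / (L - d)
    return sum(q * math.log(q / p[i]) * dx
               for i, x in enumerate(xs) if x > d)

# Grid search over diameters, restricted to where the exact density has
# not underflowed, so the trial support stays inside the support of p.
candidates = [0.6 + 0.01 * k for k in range(190)]
best_d = min(candidates, key=kl_hard_sphere)
print(f"optimal hard-sphere diameter ~ {best_d:.2f}")
```

The optimum sits near the distance where the soft-core Boltzmann factor turns on, which is the intuition behind trading the soft repulsion for an effective hard sphere.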
Identifying Biomagnetic Sources in the Brain by the Maximum Entropy Approach
Magnetoencephalographic (MEG) measurements record magnetic fields generated
from neurons while information is being processed in the brain. The inverse
problem of identifying sources of biomagnetic fields and deducing their
intensities from MEG measurements is ill-posed when the number of field
detectors is far smaller than the number of sources. The problem is less
severe if reasonable prior knowledge is already available in the form of a
distribution over source-activation intensities. In this case the problem of identifying
and deducing source intensities may be transformed to one of using the MEG data
to update a prior distribution to a posterior distribution. Here we report on
some work done using the maximum entropy method (ME) as an updating tool.
Specifically, we propose an implementation of the ME method in cases when the
prior contains almost no knowledge of source activation. Two examples are
studied, in which part of motor cortex is activated with uniform and varying
intensities, respectively.
Comment: 8 pages, 8 figures. Presented at the 25th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, San Jose, CA, USA, Aug 7-12, 200
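The prior-to-posterior updating step can be sketched for a discrete toy problem: maximize the relative entropy of the posterior with respect to the prior subject to one linear expectation constraint supplied by the data. The solution is an exponential reweighting of the prior, with a Lagrange multiplier fixed by the constraint. The source intensities and measured value below are hypothetical illustrations, not MEG data:

```python
import math

def me_update(prior, f, F, lo=-50.0, hi=50.0, tol=1e-12):
    """Update `prior` to the distribution maximizing relative entropy
    subject to the constraint sum_i p_i f_i = F.  The maximizer is
    p_i ∝ prior_i * exp(lam * f_i); solve for lam by bisection (the
    constrained mean is monotone increasing in lam)."""
    def mean(lam):
        w = [q * math.exp(lam * fi) for q, fi in zip(prior, f)]
        Z = sum(w)
        return sum(wi * fi for wi, fi in zip(w, f)) / Z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) < F:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [q * math.exp(lam * fi) for q, fi in zip(prior, f)]
    Z = sum(w)
    return [wi / Z for wi in w]

# Toy "sources": uniform prior over 4 activation levels; the single
# "measurement" fixes the expected level at 2.5 (hypothetical numbers).
prior = [0.25] * 4
f = [0.0, 1.0, 2.0, 3.0]
post = me_update(prior, f, F=2.5)
print([round(p, 3) for p in post])
```

With more detectors, each measurement adds one constraint (and one multiplier), but the structure of the update is the same exponential tilting of the prior.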