Structure or Noise?
We show how rate-distortion theory provides a mechanism for automated theory
building by naturally distinguishing between regularity and randomness. We
start from the simple principle that model variables should, as much as
possible, render the future and past conditionally independent. From this, we
construct an objective function for model making whose extrema embody the
trade-off between a model's structural complexity and its predictive power. The
solutions correspond to a hierarchy of models that, at each level of
complexity, achieve optimal predictive power at minimal cost. In the limit of
maximal prediction, the resulting optimal model identifies a process's intrinsic
organization by extracting the underlying causal states. In this limit, the
model's complexity is given by the statistical complexity, which is known to be
minimal for achieving maximum prediction. Examples show how theory building can
profit from analyzing a process's causal compressibility, which is reflected in
the optimal models' rate-distortion curve: the process's characteristic for
optimally balancing structure and noise at different levels of representation.
Comment: 6 pages, 2 figures; http://cse.ucdavis.edu/~cmg/compmech/pubs/son.htm
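
For concreteness, here is a minimal numerical sketch of the kind of objective the
abstract describes, assuming it takes the information-bottleneck form
I[past; R] - beta * I[R; future]; the function name, the iteration scheme, and all
parameters are illustrative assumptions, not the paper's implementation:

import numpy as np

def optimal_model(p_past_future, beta, n_states=4, n_iter=300, seed=0):
    """Iterate Blahut-Arimoto-style self-consistent equations that
    extremize I[past; R] - beta * I[R; future] (a standard
    information-bottleneck recursion; hypothetical, not the paper's code).
    p_past_future[i, j] is the joint probability of past i and future j."""
    rng = np.random.default_rng(seed)
    eps = 1e-12
    p_past = p_past_future.sum(axis=1)
    p_future_given_past = p_past_future / (p_past[:, None] + eps)
    q = rng.random((p_past.size, n_states))    # q[i, r] = p(r | past i)
    q /= q.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        p_r = p_past @ q + eps                 # marginal over model states
        p_past_given_r = (q * p_past[:, None]) / p_r[None, :]
        p_future_given_r = p_past_given_r.T @ p_future_given_past + eps
        # D_KL(p(future | past) || p(future | r)) for every (past, r) pair
        kl = (p_future_given_past[:, None, :]
              * np.log((p_future_given_past[:, None, :] + eps)
                       / p_future_given_r[None, :, :])).sum(axis=2)
        logits = np.log(p_r)[None, :] - beta * kl
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)
    return q

# toy joint distribution: two pasts predicting distinct futures, one noisy past
p_joint = np.array([[0.4, 0.0], [0.0, 0.4], [0.1, 0.1]])
assignments = optimal_model(p_joint, beta=10.0, n_states=2)

Sweeping beta then traces out a curve analogous to the rate-distortion curve
mentioned above; for large beta the soft assignments sharpen, consistent with the
abstract's claim that maximal prediction recovers the causal states.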
The physical observer in a Szilard engine with uncertainty
Information engines model "Maxwell's demon" mechanistically. However, the
demon's strategy is prescribed by an external experimenter, and information
engines are conveniently designed such that observables contain complete
information about variables pertinent to work extraction. In real-world
scenarios, an observer typically encounters partial observability, which
forces the physical observer, a necessary part of the information engine, to
make inferences from incomplete knowledge. Here, we use the fact that an
algorithm for computing optimal strategies can be directly derived from
maximizing overall engine work output. For a stylized, simple decision
problem, we discover interesting optimal strategies that differ notably from
naive coarse graining. They inspire simple, yet compelling, parameterized soft
coarse grainings as a model class of near-perfect approximations.
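
For illustration, one simple form such a parameterized soft coarse graining could
take (the logistic shape and the parameter names are assumptions for the sketch,
not the paper's model class):

import numpy as np

def soft_coarse_graining(x, threshold=0.0, sharpness=5.0):
    # Two-outcome soft partition of an observable x: p(m = 1 | x)
    # rises smoothly from 0 to 1 around 'threshold', rather than
    # switching abruptly as a hard threshold would.
    return 1.0 / (1.0 + np.exp(-sharpness * (x - threshold)))

In the limit sharpness -> infinity this recovers naive (hard) coarse graining;
finite sharpness yields the kind of soft model class the abstract refers to.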
Thermodynamically rational decision making under uncertainty
Inference principles are postulated within statistics; they are not usually
derived from any underlying physical constraints on real-world observers. An
exception to this rule arises in the context of partially observable
information engines, where decision making can be based solely on physical
arguments.
An inference principle can be derived from minimization of the lower bound on
average dissipation [Phys. Rev. Lett., 124(5), 050601], which is achievable
with a quasi-static process. Thermodynamically rational decision strategies can
be computed algorithmically with the resulting approach. Here, we use this to
study an example of binary decision making under uncertainty that is very
simple, yet just interesting enough to be non-trivial: observations are either
entirely uninformative, or they carry complete certainty about the variable
that needs to be known for successful energy harvesting. Solutions found
algorithmically can be expressed in terms of parameterized soft partitions of
the observable space. This allows for their interpretation, as well as for the
analytical calculation of all quantities that characterize the decision problem
and the thermodynamically rational strategies.
Comment: 7 pages, 3 figures
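
To make the variational principle concrete, a sketch in our own notation,
assuming the dissipation bound has the information-theoretic form suggested by
the cited Letter (memory M, observable X, and work-relevant variable Y are our
labels, not necessarily the paper's):

\[
\langle W_{\mathrm{diss}} \rangle \;\ge\; k_B T \left( I[M;X] - I[M;Y] \right),
\qquad
p^{*}(m \mid x) \;=\; \operatorname*{arg\,min}_{p(m \mid x)}
\left( I[M;X] - I[M;Y] \right).
\]

Minimizers of this kind can be written as parameterized soft partitions
p*(m | x) of the observable space, matching the solutions described above.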