Maxallent: Maximizers of all Entropies and Uncertainty of Uncertainty
The entropy maximum approach (Maxent) was developed as a minimization of the
subjective uncertainty measured by the Boltzmann--Gibbs--Shannon entropy. Many
new entropies have been invented in the second half of the 20th century. Now
there exists a rich choice of entropies to fit various needs. This diversity of
entropies gave rise to a Maxent "anarchism". The Maxent approach is now the
conditional maximization of an appropriate entropy for the evaluation of the
probability distribution when our information is partial and incomplete. The
rich choice of non-classical entropies causes a new problem: which entropy is
better for a given class of applications? We understand entropy as a measure of
uncertainty which increases in Markov processes. In this work, we describe the
most general ordering of the distribution space, with respect to which all
continuous-time Markov processes are monotonic (the Markov order). For
inference, this approach results in a set of conditionally "most random"
distributions. Each distribution from this set is a maximizer of its own
entropy. This "uncertainty of uncertainty" is unavoidable in analysis of
non-equilibrium systems. Surprisingly, the constructive description of this set
of maximizers is possible. Two decomposition theorems for Markov processes
provide a tool for this description.
Comment: 23 pages, 4 figures, Correction in Conclusion (postprint)
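As a concrete reference point for the "conditional maximization of an appropriate entropy" that the abstract generalizes, classical Maxent for the Boltzmann-Gibbs-Shannon entropy can be sketched as follows. This is a minimal illustration with names and test values of my own choosing, not code from the paper: maximizing the BGS entropy subject to a normalization and a prescribed mean yields the Gibbs form, with the multiplier found by bisection.

```python
import numpy as np

def maxent_gibbs(x, mean_target, lam_lo=-50.0, lam_hi=50.0, tol=1e-10):
    # Classical conditional Maxent: maximize -sum_i p_i log p_i subject
    # to sum_i p_i = 1 and sum_i p_i x_i = mean_target.  The maximizer
    # has the Gibbs form p_i ~ exp(-lam * x_i); we find lam by bisection
    # on the mean constraint.
    x = np.asarray(x, dtype=float)

    def mean_at(lam):
        w = np.exp(-lam * (x - x.min()))   # shifted for numerical stability
        return (w / w.sum()) @ x

    lo, hi = lam_lo, lam_hi                # mean_at is decreasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_at(mid) > mean_target:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(-lam * (x - x.min()))
    return w / w.sum()

p = maxent_gibbs([0.0, 1.0, 2.0], mean_target=0.5)
```

The paper's point is precisely that this recipe is no longer unique once the BGS entropy is replaced by other entropies: each entropy in the "rich choice" produces its own conditional maximizer.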
Selection theorem for systems with inheritance
The problem of finite-dimensional asymptotics of infinite-dimensional dynamical
systems is studied. A non-linear kinetic system with conservation of supports
for distributions generically has finite-dimensional asymptotics. Such systems
appear in many areas of biology, physics (the theory of parametric wave
interaction), chemistry and economics. This conservation of support has a
biological interpretation: inheritance. The finite-dimensional asymptotics
demonstrates effects of "natural" selection. Estimates of the asymptotic
dimension are presented. After some initial time, the solution of a kinetic
equation with conservation of support becomes a finite set of narrow peaks that
become increasingly narrow over time and move increasingly slowly. It is
possible that these peaks do not tend to fixed positions, and the path covered
tends to infinity as t goes to infinity. The drift equations for peak motion
are obtained. Various types of distribution stability are studied: internal
stability (stability with respect to perturbations that do not extend the
support), external stability or uninvadability (stability with respect to
strongly small perturbations that extend the support), and stable realizability
(stability with respect to small shifts and extensions of the density peaks).
Models of self-synchronization of cell division are studied, as an example of
selection in systems with additional symmetry. Appropriate construction of the
notion of typicalness in infinite-dimensional space is discussed, and the
notion of "completely thin" sets is introduced.
Key words: Dynamics; Attractor; Evolution; Entropy; Natural selection
Comment: 46 pages, the final journal version
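The concentration of a support-conserving ("inheritance") kinetic equation into narrow peaks can be illustrated with a simple replicator-type discretization. This is an illustrative toy of my own, not the paper's kinetic model: the dynamics never creates mass outside the initial support, and the distribution concentrates near the maximizer of the fitness on that support.

```python
import numpy as np

# Toy support-conserving kinetics: dp/dt = (f(x) - <f>) p.  Since the
# right-hand side is proportional to p, the support of p is conserved
# (inheritance), and mass concentrates into an increasingly narrow peak
# at the maximizer of the fitness f on the initial support.
x = np.linspace(0.0, 1.0, 201)
f = 1.0 - (x - 0.3) ** 2             # fitness with its maximum at x = 0.3
p = np.ones_like(x) / x.size         # uniform initial distribution
dt = 0.1
for _ in range(5000):
    mean_f = p @ f                   # current mean fitness <f>
    p = p + dt * (f - mean_f) * p    # explicit Euler step
    p = np.clip(p, 0.0, None)
    p /= p.sum()                     # renormalize
peak = x[np.argmax(p)]               # location of the emerging narrow peak
```

Running longer makes the peak narrower, in line with the abstract's description of peaks that "become increasingly narrow over time."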
Basic Types of Coarse-Graining
We consider two basic types of coarse-graining: the Ehrenfests'
coarse-graining and its extension to a general principle of non-equilibrium
thermodynamics, and the coarse-graining based on uncertainty of dynamical
models and Epsilon-motions (orbits). A non-technical discussion of the basic
notions and the main coarse-graining theorems is presented: the theorem about entropy
overproduction for the Ehrenfests' coarse-graining and its generalizations,
both for conservative and for dissipative systems, and the theorems about
stable properties and the Smale order for Epsilon-motions of general dynamical
systems including structurally unstable systems. Computational kinetic models
of macroscopic dynamics are considered. We construct a theoretical basis for
these kinetic models using generalizations of the Ehrenfests' coarse-graining.
A general theory of reversible regularization and filtering semigroups in
kinetics is presented, both for linear and non-linear filters. We obtain
explicit expressions and entropic stability conditions for filtered equations.
A brief discussion of coarse-graining by rounding and by small noise is also
presented.
Comment: 60 pages, 11 figures, includes new analysis of coarse-graining by
filtering. A talk given at the research workshop: "Model Reduction and
Coarse-Graining Approaches for Multiscale Phenomena," University of
Leicester, UK, August 24-26, 200
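The Ehrenfests' coarse-graining alternates free motion with an equilibration inside macroscopic cells. The following toy (my own construction, not the paper's) shows the mechanism on a four-state chain: each equilibration replaces the distribution by the maximum-entropy distribution with the same cell probabilities, so entropy grows along the coarse-grained evolution.

```python
import numpy as np

def entropy(p):
    # Boltzmann-Gibbs-Shannon entropy of a discrete distribution.
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum())

def equilibrate(p, cells):
    # Maxent step of the Ehrenfests' scheme: keep each cell's total
    # probability, make the distribution uniform inside every cell.
    q = np.empty_like(p)
    for cell in cells:
        q[cell] = p[cell].sum() / len(cell)
    return q

# Doubly stochastic "free motion" on four microstates.
P = np.array([[0.9, 0.1, 0.0, 0.0],
              [0.1, 0.8, 0.1, 0.0],
              [0.0, 0.1, 0.8, 0.1],
              [0.0, 0.0, 0.1, 0.9]])
cells = [[0, 1], [2, 3]]             # two macroscopic cells

p = np.array([1.0, 0.0, 0.0, 0.0])
entropies = [entropy(p)]
for _ in range(10):
    p = p @ P                        # free flight for one step
    p = equilibrate(p, cells)        # entropy-maximizing projection
    entropies.append(entropy(p))
```

Both steps are entropy non-decreasing here (the matrix is doubly stochastic, and the projection is a conditional entropy maximization), so the recorded entropies form a monotone sequence.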
Detailed balance in micro- and macrokinetics and micro-distinguishability of macro-processes
We develop a general framework for the discussion of detailed balance and
analyse its microscopic background. We find that there should be two additions
to the well-known T- or PT-invariance of the microscopic laws of motion:
1. Equilibrium should not spontaneously break the relevant T- or
PT-symmetry.
2. The macroscopic processes should be microscopically distinguishable to
guarantee persistence of detailed balance in the model reduction from micro- to
macrokinetics.
We briefly discuss examples of the violation of these rules and the
corresponding violation of detailed balance.
Comment: 7 pages, extended version with new sections: "Reciprocal relation and
detailed balance" and "Relations between elementary processes beyond
microreversibility and detailed balance."
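For a finite Markov chain, detailed balance is the requirement that every stationary flux is balanced by its reverse. The standard textbook check (a generic illustration, not the paper's micro/macro construction) is:

```python
import numpy as np

def satisfies_detailed_balance(P, pi, tol=1e-12):
    # Detailed balance for a Markov chain with transition matrix P and
    # stationary distribution pi: pi_i * P[i, j] == pi_j * P[j, i] for
    # all i, j, i.e. the stationary flux matrix is symmetric.
    F = pi[:, None] * P              # flux F_ij = pi_i P_ij
    return bool(np.allclose(F, F.T, atol=tol))

# Reversible birth-death chain: detailed balance holds.
P_rev = np.array([[0.5, 0.5, 0.0],
                  [0.25, 0.5, 0.25],
                  [0.0, 0.5, 0.5]])
pi_rev = np.array([0.25, 0.5, 0.25])

# Deterministic 3-cycle: pi is stationary but the flux only circulates
# one way, violating detailed balance.
P_cyc = np.array([[0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0]])
pi_cyc = np.full(3, 1.0 / 3.0)
```

The cycle example is the elementary picture behind the abstract's point: stationarity alone does not imply detailed balance, so extra conditions on the microscopic level are needed for it to survive model reduction.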
Principal manifolds and graphs in practice: from molecular biology to dynamical systems
We present several applications of non-linear data modeling, using principal
manifolds and principal graphs constructed using the metaphor of elasticity
(elastic principal graph approach). These approaches are generalizations of
Kohonen's self-organizing maps, a class of artificial neural networks. Using
several examples, we show the advantages of non-linear objects for data
approximation in comparison with linear ones. We propose four numerical
criteria for comparing linear and non-linear mappings of datasets into the
spaces of lower dimension. The examples are taken from comparative political
science, from analysis of high-throughput data in molecular biology, from
analysis of dynamical systems.
Comment: 12 pages, 9 figures
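A minimal numerical illustration of why a non-linear principal object can beat the best linear one (a toy of my own, not one of the paper's examples or its four criteria): for data on a circle, the first principal component leaves a large residual, while projection onto the circle itself fits essentially exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.column_stack([np.cos(t), np.sin(t)])   # points on the unit circle

# Best linear 1-D approximation: project onto the first principal component.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
proj = (Xc @ Vt[0][:, None]) * Vt[0][None, :]
linear_mse = np.mean(np.sum((Xc - proj) ** 2, axis=1))

# Non-linear "principal object": radial projection onto the circle.
radial = X / np.linalg.norm(X, axis=1, keepdims=True)
circle_mse = np.mean(np.sum((X - radial) ** 2, axis=1))
```

By symmetry the linear residual is about half the total variance, while the circle absorbs all of it, which is the kind of gap the paper's comparison criteria are designed to quantify.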
Multiscale principal component analysis
Principal component analysis (PCA) is an important tool in exploring data.
The conventional approach to PCA leads to a solution which favours the
structures with large variances. It is sensitive to outliers and can
obscure interesting underlying structures. One of the equivalent definitions
of PCA is that it seeks the subspaces that maximize the sum of squared pairwise
distances between data projections. This definition opens up more flexibility
in the analysis of principal components which is useful in enhancing PCA. In
this paper we introduce scales into PCA by maximizing only the sum of pairwise
distances between projections for pairs of datapoints with distances within a
chosen interval of values [l,u]. The resulting principal component
decompositions in Multiscale PCA depend on the point (l,u) in the plane, and for
each point we define projectors onto the principal components. Cluster analysis of
these projectors reveals the structures in the data at various scales. Each
structure is described by the eigenvectors at the medoid point of its cluster.
We also use the distortion of projections as a
criterion for choosing an appropriate scale especially for data with outliers.
The method was tested on both artificially generated and real data.
For data with multiscale structures, the method was able to reveal the
different structures of the data and also to reduce the effect of outliers in
the principal component analysis.
Comment: 24 pages, 22 figures
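The pairwise-distance formulation of the abstract can be sketched directly. In this simplified reading (function and variable names are mine, not the authors' code), we keep only pairs of points whose mutual distance lies in [l, u] and take the leading eigenvectors of the scatter matrix of those pair differences; with l = 0 and u = infinity this reduces to ordinary PCA.

```python
import numpy as np

def multiscale_pca(X, l, u, n_components=1):
    # Keep only pairs of points whose Euclidean distance lies in [l, u],
    # then maximize the sum of squared pairwise distances between their
    # projections: the maximizing directions are the top eigenvectors of
    # the scatter matrix of the selected pair differences.
    diffs = X[:, None, :] - X[None, :, :]      # all pairwise differences
    d = np.linalg.norm(diffs, axis=-1)
    upper = np.triu(np.ones_like(d), 1) > 0    # each pair counted once
    i, j = np.nonzero((d >= l) & (d <= u) & upper)
    D = X[i] - X[j]
    S = D.T @ D                                # scatter of selected pairs
    _, V = np.linalg.eigh(S)                   # eigenvalues ascending
    return V[:, ::-1][:, :n_components]        # top eigenvectors first

X = np.column_stack([np.arange(6.0), np.zeros(6)])
v = multiscale_pca(X, 0.0, np.inf)             # reduces to ordinary PCA
```

Sweeping (l, u) and clustering the resulting projectors is then the paper's route to structures at different scales.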