Magnification Control in Winner Relaxing Neural Gas
An important goal in neural map learning, which can conveniently be
accomplished by magnification control, is to achieve information-theoretically
optimal coding. In the present contribution we consider the
winner relaxing approach for the neural gas network. Originally, winner
relaxing learning is a slight modification of the self-organizing map learning
rule that allows for adjustment of the magnification behavior by an a priori
chosen control parameter. We transfer this approach to the neural gas
algorithm. The magnification exponent can be calculated analytically for
arbitrary dimension from a continuum theory, and the entropy of the resulting
map is studied numerically, confirming the theoretical prediction. The
influence of a diagonal term, which can be added without impacting the
magnification, is studied numerically. This approach to maps of maximal mutual
information is interesting for applications, as the winner relaxing term only
adds computational cost of the same order and is easy to implement. In particular,
it is not necessary to estimate the generally unknown data probability density
as in other magnification control approaches. Comment: 14 pages, 2 figures
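As a concrete illustration, the following minimal Python sketch applies a
winner-relaxing term on top of the standard rank-based neural gas update. The
function name, parameter names, and the exact coupling of the control parameter
mu are illustrative assumptions on my part; only the rank-based neural gas step
itself is the standard rule.

```python
import numpy as np

def wrng_step(W, x, eps=0.05, lam=1.0, mu=0.5):
    # One winner-relaxing neural gas (WRNG) update step; a minimal sketch.
    # W: (N, d) prototype vectors, x: (d,) data sample.
    # eps: learning rate; lam: neighborhood range of the rank-based update;
    # mu: a priori chosen winner-relaxing control parameter (the coupling
    # below is a simplified illustration, not the paper's verbatim rule).
    d = np.linalg.norm(W - x, axis=1)
    ranks = np.argsort(np.argsort(d))      # rank 0 = winner
    h = np.exp(-ranks / lam)               # neural gas neighborhood function
    delta = eps * h[:, None] * (x - W)     # standard neural gas update
    winner = np.argmin(d)
    # winner-relaxing term: the winner additionally feels the summed
    # (neighborhood-weighted) pull of all other units, scaled by mu
    others = np.arange(len(W)) != winner
    delta[winner] -= mu * eps * (h[others, None] * (x - W)[others]).sum(axis=0)
    return W + delta
```

Calling wrng_step repeatedly over samples drawn from the data distribution,
with eps and lam annealed over time, would give the usual online training loop;
mu = 0 recovers plain neural gas.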
Magnification Control in Self-Organizing Maps and Neural Gas
We consider different ways to control the magnification in self-organizing
maps (SOM) and neural gas (NG). Starting from early approaches of magnification
control in vector quantization, we then concentrate on different approaches for
SOM and NG. We show that three structurally similar approaches can be applied
to both algorithms: localized learning, concave-convex learning, and winner
relaxing learning. In doing so, the approach of concave-convex learning in SOM
is extended to a more general description, whereas concave-convex learning for
NG is new. In general, the control mechanisms generate only slightly different
behavior in the two neural algorithms. However, we emphasize that the NG
results are valid for any data dimension, whereas in the SOM case the results
hold only for the one-dimensional case. Comment: 24 pages, 4 figures
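To indicate how a concave-convex rule differs from the winner-relaxing one
above, here is a hedged sketch in the same style: the update step length along
(x - w) is taken as |x - w|**kappa rather than |x - w|, so kappa < 1 yields a
concave and kappa > 1 a convex learning rule. The exact functional form is an
assumption for illustration and is not taken from the paper.

```python
import numpy as np

def concave_convex_step(W, x, eps=0.05, lam=1.0, kappa=0.5):
    # Concave-convex neural gas update (illustrative form): the step
    # length along (x - w) scales as |x - w|**kappa instead of |x - w|.
    # kappa < 1 gives a concave, kappa > 1 a convex rule; kappa = 1
    # recovers the standard neural gas update.
    diff = x - W
    d = np.linalg.norm(diff, axis=1)
    ranks = np.argsort(np.argsort(d))      # rank 0 = closest prototype
    h = np.exp(-ranks / lam)               # rank-based neighborhood
    safe_d = np.where(d > 0, d, 1.0)       # avoid 0 ** negative below
    scale = np.where(d > 0, safe_d ** (kappa - 1.0), 0.0)
    return W + eps * (h * scale)[:, None] * diff
```

Distorting the effective step length this way is what shifts the magnification
exponent of the resulting map.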
Trajectories entropy in dynamical graphs with memory
In this paper we investigate the application of non-local graph entropy to
evolving and dynamical graphs. The measure is based upon the notion of Markov
diffusion on a graph, and relies on the entropy applied to trajectories
originating at a specific node. In particular, we study the model of
reinforcement-decay graph dynamics, which leads to scale-free graphs. We find
that the node entropy characterizes the structure of the network in the two
parameter phase-space describing the dynamical evolution of the weighted graph.
We then apply an adapted version of the entropy measure to purely memristive
circuits. We provide evidence that, while in the case of DC voltage the
entropy based on the forward probability is enough to characterize the graph
properties, in the case of AC voltage generators one needs to consider both
forward- and backward-based transition probabilities. We also provide evidence
that the entropy highlights the self-organizing properties of memristive
circuits, which reorganize themselves to satisfy the symmetries of the
underlying graph. Comment: 15 pages, one column, 10 figures; new analysis and
memristor models added; text improved
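The trajectory entropy of a Markov diffusion admits a simple recursion: writing
P for the row-stochastic transition matrix and h_i = -sum_j P_ij log P_ij for
the one-step entropy at node i, the entropy of length-t trajectories from i
satisfies H_i(t) = h_i + sum_j P_ij H_j(t-1) with H_i(0) = 0. The sketch below
implements this recursion for a weighted graph; the function name is mine, and
the paper's exact normalization and its memristive adaptation are not
reproduced here.

```python
import numpy as np

def trajectory_entropy(A, node, steps):
    # Entropy of length-`steps` random-walk trajectories starting at
    # `node`, for a weighted adjacency matrix A (each row must have at
    # least one positive entry). Minimal sketch under the assumptions
    # stated in the lead-in.
    P = A / A.sum(axis=1, keepdims=True)        # row-stochastic diffusion
    safe_P = np.where(P > 0, P, 1.0)            # log(0) terms contribute 0
    step_H = -(P * np.log(safe_P)).sum(axis=1)  # one-step entropy per node
    # recursion: H_i(t) = h_i + sum_j P_ij * H_j(t-1), with H(0) = 0
    H = np.zeros(len(A))
    for _ in range(steps):
        H = step_H + P @ H
    return H[node]
```

For the forward/backward distinction in the abstract, the same recursion could
be run on the time-reversed chain as well; only the forward version is shown.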
Probability as a physical motive
Recent theoretical progress in nonequilibrium thermodynamics, linking the
physical principle of Maximum Entropy Production ("MEP") to the
information-theoretical "MaxEnt" principle of scientific inference, together
with conjectures from theoretical physics that there may be no fundamental
causal laws but only probabilities for physical processes, and from
evolutionary theory that biological systems expand "the adjacent possible" as
rapidly as possible, all lend credence to the proposition that probability
should be recognized as a fundamental physical motive. It is further proposed
that spatial order and temporal order are two aspects of the same thing, and
that this is the essence of the second law of thermodynamics. Comment: Replaced
at the request of the publisher; minor corrections to references and to
Equation 1 added
Nature as a Network of Morphological Infocomputational Processes for Cognitive Agents
This paper presents a view of nature as a network of infocomputational agents organized in a dynamical hierarchy of levels. It provides a framework for unifying currently disparate understandings of natural, formal, technical, behavioral and social phenomena, based on information as structure (differences in one system that cause differences in another system) and computation as its dynamics, i.e. the physical process of morphological change in the informational structure. We address some of the frequent misunderstandings regarding natural/morphological computational models and their relationship to physical systems, especially cognitive systems such as living beings. Natural morphological infocomputation as a conceptual framework necessitates generalizing models of computation beyond the traditional Turing machine model of symbol manipulation, and requires agent-based, concurrent, resource-sensitive models of computation in order to cover the whole range of phenomena from physics to cognition. The central role of agency, particularly material vs. cognitive agency, is highlighted.