Topological Schemas of Memory Spaces
The hippocampal cognitive map---a neuronal representation of the spatial
environment---has been broadly discussed in the computational neuroscience
literature for decades. More recent studies point out that the hippocampus plays a major role
in producing yet another cognitive framework that incorporates not only
spatial, but also nonspatial memories---the memory space. However, unlike
cognitive maps, memory spaces have been barely studied from a theoretical
perspective. Here we propose an approach for modeling hippocampal memory spaces
as an epiphenomenon of neuronal spiking activity. First, we suggest that the
memory space may be viewed as a finite topological space---a hypothesis that
allows treating both spatial and nonspatial aspects of hippocampal function on
equal footing. We then model the topological properties of the memory space to
demonstrate that this concept naturally incorporates the notion of a cognitive
map. Lastly, we suggest a formal description of the memory consolidation
process and point out a connection between the proposed model of memory
spaces and the so-called Morris' schemas, which emerge as the most compact
representation of the memory structure.
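The central hypothesis above, that a memory space can be viewed as a finite topological space, can be made concrete with a small sketch. The code below is purely illustrative and not taken from the paper: it checks the finite-topology axioms on an invented set of "memory locations" and computes the minimal open neighborhood of a point, which is how finite spaces encode an ordering among points.

```python
from itertools import combinations

# Hypothetical example: a candidate topology on a small set of memory
# locations. The point names and open sets are invented for this sketch.
points = {"a", "b", "c"}
opens = [frozenset(), frozenset({"a"}), frozenset({"a", "b"}),
         frozenset({"a", "c"}), frozenset({"a", "b", "c"})]

def is_topology(points, opens):
    """Check the finite-topology axioms: contains the empty set and the
    whole set, and is closed under pairwise union and intersection."""
    o = set(opens)
    if frozenset() not in o or frozenset(points) not in o:
        return False
    for u, v in combinations(o, 2):
        if u | v not in o or u & v not in o:
            return False
    return True

def minimal_open(x, opens):
    """Smallest open set containing x; finite spaces are fully determined
    by these minimal neighborhoods (the specialization preorder)."""
    return min((u for u in opens if x in u), key=len)

print(is_topology(points, opens))        # True
print(sorted(minimal_open("b", opens)))  # ['a', 'b']
```

For finite spaces the closure axioms reduce to finitely many checks, which is why such structures are amenable to the kind of combinatorial modeling the abstract describes.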
Creativity as Cognitive Design: The case of mesoscopic variables in Meta-Structures
Creativity is an open problem that has long been approached differently by several disciplines. In this contribution we consider as creative the constructivist design an observer performs on the description levels of complex phenomena, such as self-organized and emergent ones (e.g., Bénard rolls, Belousov-Zhabotinsky reactions, flocks, swarms, and more radical cognitive and social emergences). We consider this design as related to the Gestaltian creation of a language fit for representing natural processes and the observer in an integrated way. Organised systems, both artificial and most natural ones, are designed/modelled according to a logically closed model which masters all the inter-relations between their constitutive elements and which can be described by an algorithm or a single formal model. We show that logical openness and DYSAM (Dynamical Usage of Models) are the proper tools for those phenomena which cannot be described by algorithms or by a single formal model. The strong correlation between emergence and creativity suggests that an open model is the best way to provide a formal definition of creativity. A specific application concerns the possibility of shaping the emergence of collective behaviours. Different modelling approaches have been introduced, based on symbolic as well as sub-symbolic rules of interaction, to simulate collective phenomena by means of computational emergence. Another approach is based on modelling collective phenomena as sequences of Multiple Systems established by percentages of conceptually interchangeable agents taking on the same roles at different times and different roles at the same time. In the Meta-Structures project we propose to use mesoscopic variables as creative design, invention, good continuity and imitation of the description level.
In the project we propose to define the coherence of sequences of Multiple Systems by using the values taken on by the dynamic mesoscopic clusters of their constitutive elements, such as the instantaneous number of elements in a flock having the same speed, distance from their nearest neighbours, direction and altitude. In Meta-Structures the collective behaviour's coherence corresponds, for instance, to the scalar values taken by speed, distance, direction and altitude over time, through statistical strategies of interpolation, quasi-periodicity, levels of ergodicity and their reciprocal relationships. In this case the constructivist role of the observer is considered creative because it relates neither to non-linear replication nor to the transposition of levels of description and models used for artificial systems, as in reductionism. Creativity rather lies in inventing new mesoscopic variables able to identify coherent patterns in complex systems. As is known, mesoscopic variables represent partial macroscopic properties of a system by using some of the microscopic degrees of freedom possessed by its composing elements. Such partial usage of microscopic as well as macroscopic properties allows a kind of Gestaltian continuity and imitation between levels of description in mesoscopic modelling.
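One mesoscopic variable of the kind described above, the instantaneous number of flock members sharing the same speed and heading, can be sketched as follows. This is a minimal illustration, not the project's actual formulation; the bin widths and data are invented assumptions.

```python
import numpy as np
from collections import Counter

# Illustrative synthetic flock data: T time steps, N agents.
rng = np.random.default_rng(0)
T, N = 5, 100
speed = rng.normal(10.0, 1.0, (T, N))    # m/s
heading = rng.normal(0.0, 0.2, (T, N))   # radians

def largest_coherent_cluster(speed_t, heading_t, dv=0.5, dtheta=0.1):
    """Size of the biggest group of agents falling in the same
    (speed, heading) bin at one time step; dv and dtheta are
    hypothetical tolerances for 'same speed' and 'same direction'."""
    bins = zip((speed_t // dv).astype(int), (heading_t // dtheta).astype(int))
    return max(Counter(bins).values())

# One scalar per time step: a mesoscopic time series of flock coherence.
meso = [largest_coherent_cluster(speed[t], heading[t]) for t in range(T)]
print(meso)
```

The resulting scalar sequence is the kind of object to which the statistical strategies mentioned above (interpolation, quasi-periodicity, ergodicity levels) would then be applied.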
Self-Organization of Spiking Neural Networks for Visual Object Recognition
On the one hand, the visual system has the ability to differentiate between very similar
objects. On the other hand, we can also recognize the same object in images that vary
drastically, due to different viewing angle, distance, or illumination. The ability to
recognize the same object under different viewing conditions is called invariant object
recognition. Such object recognition capabilities are not immediately available after
birth, but are acquired through learning by experience in the visual world.
In many viewing situations different views of the same object are seen in a
temporal sequence, e.g. when we are moving an object in our hands while watching
it. This creates temporal correlations between successive retinal projections
that can be used to associate different views of the same object. Theorists have
therefore proposed a synaptic plasticity rule with a built-in memory trace (trace rule).
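A trace rule of the kind referenced above (cf. Földiák's trace rule) can be sketched in a few lines: the postsynaptic activity is replaced in the Hebbian update by an exponentially decaying trace, so that temporally adjacent inputs, e.g. successive views of one object, strengthen the same weights. All parameter values below are illustrative assumptions.

```python
import numpy as np

def trace_rule_step(w, x, y, y_bar, eta=0.05, lam=0.8):
    """One trace-rule update: y_bar is a leaky average of the output y,
    and the Hebbian step correlates the presynaptic input x with that
    trace rather than with the instantaneous activity."""
    y_bar = lam * y_bar + (1.0 - lam) * y
    w = w + eta * y_bar * x
    return w / np.linalg.norm(w), y_bar  # normalize weights for stability

rng = np.random.default_rng(1)
w = rng.random(16)
w /= np.linalg.norm(w)
y_bar = 0.0
for _ in range(10):          # successive "views" presented in sequence
    x = rng.random(16)       # hypothetical input pattern
    y = float(w @ x)         # linear output of one model neuron
    w, y_bar = trace_rule_step(w, x, y, y_bar)
```

The hypotheses below propose that persistent recurrent firing can play the role of `y_bar`, removing the need for a trace stored in the synapse itself.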
In this dissertation I present spiking neural network models that offer possible
explanations for learning of invariant object representations. These models are based
on the following hypotheses:
1. Instead of a synaptic trace rule, persistent firing of recurrently connected groups
of neurons can serve as a memory trace for invariance learning.
2. Short-range excitatory lateral connections enable learning of self-organizing
topographic maps that represent temporal as well as spatial correlations.
3. When trained with sequences of object views, such a network can learn
representations that enable invariant object recognition by clustering
different views of the same object within a local neighborhood.
4. Learning of representations for very similar stimuli can be enabled by adaptive
inhibitory feedback connections.
The study presented in chapter 3.1 details an implementation of a spiking neural
network to test the first three hypotheses. This network was tested with
stimulus sets that were designed in two feature dimensions to separate the
impact of temporal and spatial correlations on learned topographic maps. The
emerging topographic maps showed patterns that were dependent on the temporal
order of object views during training. Our results show that pooling over local
neighborhoods of the topographic map enables invariant recognition.
Chapter 3.2 focuses on the fourth hypothesis. There we examine how the adaptive
feedback inhibition (AFI) can improve the ability of a network to discriminate between
very similar patterns. The results show that with AFI, learning is faster, and the
network learns selective representations for stimuli with higher levels of overlap
than without AFI.
Results of chapter 3.1 suggest a functional role for topographic object
representations that are known to exist in the inferotemporal cortex, and
suggest a mechanism for the development of such representations. The AFI model
implements one aspect of predictive coding: subtraction of a prediction from
the actual input of a system. The successful implementation in a biologically
plausible network of spiking neurons shows that predictive coding can play a
role in cortical circuits.
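The predictive-coding aspect mentioned above, subtracting a learned prediction from the input, can be sketched abstractly. This is not the dissertation's spiking network; it is a rate-based toy in which feedback weights learn to cancel a repeated input, so only the unpredicted residual remains. Sizes and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 32, 4
W_fb = np.zeros((n_in, n_out))   # feedback (prediction) weights, start at zero

def afi_step(x, y, W_fb, eta=0.1):
    """Subtract the top-down prediction W_fb @ y from the input x, then
    adapt the feedback weights to reduce the residual over time."""
    prediction = W_fb @ y
    residual = x - prediction
    W_fb = W_fb + eta * np.outer(residual, y)
    return residual, W_fb

x = rng.random(n_in)             # a fixed, repeatedly presented input
y = rng.random(n_out)            # a fixed higher-level activity pattern
for _ in range(200):             # repetition -> the prediction improves
    residual, W_fb = afi_step(x, y, W_fb)
print(np.linalg.norm(residual))  # shrinks toward zero as x becomes predicted
```

Once the input is predicted away, a novel (overlapping but different) stimulus produces a large residual, which is the intuition behind AFI improving discrimination of very similar patterns.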
Shift-Invariant Kernel Additive Modelling for Audio Source Separation
A major goal in blind source separation, where the aim is to identify and
separate sources, is to model their inherent characteristics. While most
state-of-the-art approaches
are supervised methods trained on large datasets, interest in non-data-driven
approaches such as Kernel Additive Modelling (KAM) remains high due to their
interpretability and adaptability. KAM performs the separation of a given
source by applying robust statistics to the time-frequency bins selected by a
source-specific kernel function, commonly the K-NN function. This choice
assumes that the source of interest repeats in both time and frequency. In
practice, this assumption does not always hold. Therefore, we introduce a
shift-invariant kernel function capable of identifying similar spectral content
even under frequency shifts. This way, we can considerably increase the amount
of suitable sound material available to the robust statistics. While this leads
to an increase in separation performance, a basic formulation is
computationally expensive. Therefore, we additionally present acceleration
techniques that lower the overall computational complexity.
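The core KAM idea described above, a K-NN kernel followed by robust statistics, combined with a shift-invariant frame distance, can be sketched as follows. This is a simplified illustration, not the paper's implementation: the distance between two spectral frames is minimized over a small range of frequency shifts (via rolling), and the source estimate is the elementwise median over the selected frames. `K` and `max_shift` are illustrative choices.

```python
import numpy as np

def shift_invariant_dist(u, v, max_shift=2):
    """Minimum Euclidean distance between frame u and all copies of frame v
    rolled by up to max_shift frequency bins in either direction."""
    return min(np.linalg.norm(u - np.roll(v, s))
               for s in range(-max_shift, max_shift + 1))

def kam_knn_median(V, K=3, max_shift=2):
    """For each time frame of magnitude spectrogram V (freq x time), take
    the elementwise median over its K most similar frames -- the robust
    statistic that suppresses interference present in only a few frames."""
    F, T = V.shape
    out = np.empty_like(V)
    for t in range(T):
        d = [shift_invariant_dist(V[:, t], V[:, s], max_shift)
             for s in range(T)]
        nn = np.argsort(d)[:K]   # K nearest frames (includes t itself, d=0)
        out[:, t] = np.median(V[:, nn], axis=1)
    return out

rng = np.random.default_rng(3)
V = np.abs(rng.normal(size=(64, 20)))  # toy magnitude spectrogram
S = kam_knn_median(V)                  # estimated source magnitudes
```

The brute-force distance computation here is quadratic in the number of frames, which is exactly the cost the acceleration techniques mentioned above would target.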
Hierarchical Feature Learning
The success of many tasks depends on good feature representation, which is often domain-specific and hand-crafted, requiring substantial human effort. Such feature representation is not general, i.e. it is unsuitable for even the same task across multiple domains, let alone different tasks. To address these issues, a multilayered convergent neural architecture is presented for learning from repeating spatially and temporally coincident patterns in data at multiple levels of abstraction. The bottom-up weights in each layer are learned to encode a hierarchy of overcomplete and sparse feature dictionaries from space- and time-varying sensory data. Two algorithms are investigated for learning feature hierarchies: recursive layer-by-layer spherical clustering and sparse coding. The model scales to full-sized high-dimensional input data and to an arbitrary number of layers, thereby having the capability to capture features at any level of abstraction. The model learns features that correspond to objects in higher layers and object parts in lower layers.

Learning features invariant to arbitrary transformations in the data is a requirement for any effective and efficient representation system, biological or artificial. Each layer in the proposed network is composed of simple and complex sublayers, motivated by the layered organization of the primary visual cortex. When exposed to natural videos, the model develops simple and complex cell-like receptive field properties. The model can predict by learning lateral connections among the simple sublayer neurons. A topographic map of their spatial features emerges by minimizing the wiring length simultaneously with feature learning.

The model is general-purpose, unsupervised and online. Operations in each layer of the model can be implemented in parallelized hardware, making it very efficient for real-world applications.
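The spherical clustering mentioned above can be sketched as spherical k-means, a standard variant in which both data points and centroids live on the unit sphere and assignment is by cosine similarity. This is a generic illustration, not the dissertation's layer-by-layer algorithm; sizes, k, and iteration count are assumptions.

```python
import numpy as np

def spherical_kmeans(X, k, iters=20, seed=0):
    """Spherical k-means: rows of X are normalized to unit length,
    samples are assigned to the centroid with the largest dot product
    (cosine similarity), and centroids are re-normalized mean directions.
    Returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        labels = np.argmax(X @ C.T, axis=1)      # cosine assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                C[j] = m / np.linalg.norm(m)     # keep centroid unit-norm
    return C, labels

X = np.random.default_rng(4).normal(size=(200, 16))  # toy patch features
C, labels = spherical_kmeans(X, k=8)
```

In a hierarchy of the kind described, each layer's cluster responses would become the input features clustered by the next layer, yielding increasingly abstract dictionaries.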