Revisiting chaos in stimulus-driven spiking networks: signal encoding and discrimination
Highly connected recurrent neural networks often produce chaotic dynamics,
meaning their precise activity is sensitive to small perturbations. What are
the consequences for how such networks encode streams of temporal stimuli? On
the one hand, chaos is a strong source of randomness, suggesting that small
changes in stimuli will be obscured by intrinsically generated variability. On
the other hand, recent work shows that the type of chaos that occurs in spiking
networks can have a surprisingly low-dimensional structure, suggesting that
there may be "room" for fine stimulus features to be precisely resolved. Here
we show that strongly chaotic networks produce patterned spikes that reliably
encode time-dependent stimuli: using a decoder sensitive to spike times on
timescales of tens of milliseconds, one can easily distinguish responses to very similar
inputs. Moreover, recurrence serves to distribute signals throughout chaotic
networks so that small groups of cells can encode substantial information about
signals arriving elsewhere. We conclude that the presence of strong chaos
in recurrent networks does not preclude precise stimulus encoding.
Comment: 8 figures
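The timing-sensitive decoding the abstract describes can be illustrated with a minimal sketch: spike trains are smoothed with an exponential kernel on a ~10 ms timescale (a van Rossum-style filtering) and a response is assigned to the nearest stimulus template. The synthetic spike trains, kernel width, and nearest-template rule here are illustrative assumptions, not the paper's actual decoder.

```python
# Toy sketch of a spike-timing decoder (assumed setup, not the paper's method):
# spike trains are filtered with a ~10 ms exponential kernel and classified by
# nearest-template Euclidean distance between the filtered traces.
import numpy as np

def filter_spikes(spike_times, t_max=1.0, dt=0.001, tau=0.010):
    """Convolve a spike train with an exponential kernel of time constant tau (s)."""
    t = np.arange(0.0, t_max, dt)
    trace = np.zeros_like(t)
    for s in spike_times:
        mask = t >= s
        trace[mask] += np.exp(-(t[mask] - s) / tau)
    return trace

def decode(response, templates):
    """Index of the template whose filtered trace is closest to the response."""
    dists = [np.linalg.norm(response - tpl) for tpl in templates]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
# Two very similar "stimuli": a base spike pattern and the same pattern
# jittered by roughly 5 ms per spike.
base = np.sort(rng.uniform(0.0, 1.0, size=30))
jittered = np.clip(base + 0.005 * rng.standard_normal(30), 0.0, 1.0)
templates = [filter_spikes(base), filter_spikes(jittered)]

# A trial of the jittered stimulus with an extra ~1 ms of trial-to-trial jitter.
trial = np.clip(jittered + 0.001 * rng.standard_normal(30), 0.0, 1.0)
print("decoded stimulus:", decode(filter_spikes(trial), templates))
```

Because the 10 ms kernel preserves fine temporal structure, even the small 5 ms difference between the two stimuli separates the filtered traces.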
Dissimilarity-based Ensembles for Multiple Instance Learning
In multiple instance learning, objects are sets (bags) of feature vectors
(instances) rather than individual feature vectors. In this paper we address
the problem of how these bags can best be represented. Two standard approaches
are to use (dis)similarities between bags and prototype bags, or between bags
and prototype instances. The first approach results in a relatively
low-dimensional representation determined by the number of training bags, while
the second approach results in a relatively high-dimensional representation,
determined by the total number of instances in the training set. In this paper
a third, intermediate approach is proposed, which links the two approaches and
combines their strengths. Our classifier is inspired by a random subspace
ensemble, and considers subspaces of the dissimilarity space, defined by
subsets of instances, as prototypes. We provide guidelines for using such an
ensemble, and show state-of-the-art performances on a range of multiple
instance learning problems.
Comment: Submitted to IEEE Transactions on Neural Networks and Learning Systems, Special Issue on Learning in Non-(geo)metric Space
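The intermediate representation the abstract proposes can be sketched as follows: each bag is embedded by its minimum distance to every training instance (the dissimilarity space), each ensemble member then sees only a random subset of those instance-prototypes (a random subspace), and members vote by majority. The toy bag generator and the nearest-class-mean base classifier are assumptions for illustration, not the paper's classifier.

```python
# Toy sketch of a random-subspace ensemble on a dissimilarity representation
# for multiple instance learning (illustrative assumptions throughout).
import numpy as np

def bag_to_dissim(bag, prototypes):
    """Represent a bag by its minimum Euclidean distance to each prototype instance."""
    d = np.linalg.norm(bag[:, None, :] - prototypes[None, :, :], axis=2)
    return d.min(axis=0)

def nearest_mean_fit(X, y):
    """Per-class mean vectors, a deliberately simple base classifier."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(means, x):
    return min(means, key=lambda c: np.linalg.norm(x - means[c]))

rng = np.random.default_rng(1)

def make_bag(positive):
    # Positive bags contain at least one instance near (3, 3); a made-up MIL task.
    inst = rng.normal(0.0, 1.0, size=(5, 2))
    if positive:
        inst[0] = rng.normal(3.0, 0.3, size=2)
    return inst

bags = [make_bag(i % 2 == 1) for i in range(20)]
labels = np.array([i % 2 for i in range(20)])

prototypes = np.vstack(bags)  # all training instances as prototypes
X = np.array([bag_to_dissim(b, prototypes) for b in bags])

# Each member trains on a random subset of prototypes: a subspace of the
# dissimilarity space, as in the paper's intermediate approach.
members = []
for _ in range(11):
    idx = rng.choice(X.shape[1], size=20, replace=False)
    members.append((idx, nearest_mean_fit(X[:, idx], labels)))

def predict(bag):
    x = bag_to_dissim(bag, prototypes)
    votes = [nearest_mean_predict(m, x[idx]) for idx, m in members]
    return int(np.bincount(votes).argmax())

print("prediction for a new positive bag:", predict(make_bag(True)))
```

Subsampling prototypes keeps each member's representation low-dimensional (here 20 features rather than 100) while the ensemble as a whole still draws on every training instance, which is the trade-off between the two standard approaches the abstract describes.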