
    Shared inputs, entrainment, and desynchrony in elliptic bursters: from slow passage to discontinuous circle maps

    What input signals will lead to synchrony vs. desynchrony in a group of biological oscillators? This question connects both with classical dynamical systems analyses of entrainment and phase locking and with emerging studies of stimulation patterns for controlling neural network activity. Here, we focus on the response of a population of uncoupled, elliptically bursting neurons to a common pulsatile input. We extend a phase reduction from the literature to capture inputs of varied strength, leading to a circle map with discontinuities of various orders. In a combined analytical and numerical approach, we apply our results both to a normal form model for elliptic bursting and to a biophysically based neuron model from the basal ganglia. We find that, depending on the period and amplitude of inputs, the response can either appear chaotic (with provably positive Lyapunov exponent for the associated circle maps) or periodic with a broad range of phase-locked periods. Throughout, we discuss the critical underlying mechanisms, including slow-passage effects through Hopf bifurcation, the role and origin of discontinuities, and the impact of noise. Comment: 17 figures, 40 pages
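
    The abstract does not give the reduced map explicitly, so the following is only a hedged Python sketch of the kind of analysis described: iterating a circle map with a jump discontinuity and estimating its Lyapunov exponent numerically. The map, its parameters, and the function names are illustrative stand-ins, not the paper's derived map.

```python
import numpy as np

def circle_map(theta, a=0.35, k=0.6, c=0.5):
    """Hypothetical circle map with a jump discontinuity at theta = c.

    This is NOT the map derived in the paper; it only illustrates how one
    might iterate a discontinuous circle map on [0, 1).
    """
    jump = 0.25 if theta < c else 0.0                # discontinuity
    return (theta + a + k * np.sin(2 * np.pi * theta) + jump) % 1.0

def lyapunov_exponent(theta0=0.123, n=100_000, eps=1e-8):
    """Estimate the Lyapunov exponent via finite-difference derivatives."""
    theta, total = theta0, 0.0
    for _ in range(n):
        d = (circle_map((theta + eps) % 1.0) - circle_map(theta)) % 1.0
        d = min(d, 1.0 - d)                          # distance on the circle
        total += np.log(max(d / eps, 1e-300))        # avoid log(0)
        theta = circle_map(theta)
    return total / n

# A positive estimate indicates chaotic (exponentially separating) orbits.
print(f"estimated Lyapunov exponent: {lyapunov_exponent():.3f}")
```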

    Complex-Valued Autoencoders for Object Discovery

    Object-centric representations form the basis of human perception and enable us to reason about the world and to systematically generalize to new settings. Currently, most machine learning work on unsupervised object discovery focuses on slot-based approaches, which explicitly separate the latent representations of individual objects. While the result is easily interpretable, it usually requires the design of involved architectures. In contrast, we propose a distributed approach to object-centric representations: the Complex AutoEncoder. Following a coding scheme theorized to underlie object representations in biological neurons, its complex-valued activations represent two messages: their magnitudes express the presence of a feature, while the relative phase differences between neurons express which features should be bound together to create joint object representations. We show that this simple and efficient approach achieves better reconstruction performance than an equivalent real-valued autoencoder on simple multi-object datasets. Additionally, we show that it achieves unsupervised object discovery performance competitive with a SlotAttention model on two datasets, and manages to disentangle objects in a third dataset where SlotAttention fails - all while being 7-70 times faster to train.
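
    A minimal numpy sketch of the coding scheme described above, in which magnitudes encode feature presence and relative phases encode grouping. The array shapes, the two-phase setup, and the thresholding step are illustrative assumptions, not the Complex AutoEncoder architecture itself.

```python
import numpy as np

# Hypothetical complex feature map for a two-object scene: the magnitude
# says "feature present", the phase says "which object the feature
# belongs to".
rng = np.random.default_rng(0)
magnitude = rng.random((8, 8))                           # presence in [0, 1)
phase = np.where(rng.random((8, 8)) < 0.5, 0.0, np.pi)   # two phase groups
z = magnitude * np.exp(1j * phase)                       # complex activation

# Recover an object assignment by thresholding the phase (illustrative only;
# the paper evaluates grouping with proper clustering and metrics).
objects = (np.angle(z) > np.pi / 2).astype(int)
print(objects)
```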

    Phase synchrony facilitates binding and segmentation of natural images in a coupled neural oscillator network

    Synchronization has been suggested as a mechanism for binding distributed feature representations, facilitating segmentation of visual stimuli. Here we investigate this concept based on unsupervised learning using natural visual stimuli. We simulate dual-variable neural oscillators with separate activation and phase variables. The binding of a set of neurons is coded by synchronized phase variables. The network of tangential synchronizing connections learned from the induced activations exhibits small-world properties and allows binding even over larger distances. We evaluate the resulting dynamic phase maps using segmentation masks labeled by human experts. Our simulation results show continuously increasing phase synchrony between neurons within the labeled segmentation masks. The evaluation of the network dynamics shows that the synchrony between network nodes establishes a relational coding of the natural image inputs. This demonstrates that the concept of binding by synchrony is applicable in the context of unsupervised learning using natural visual stimuli.
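
    As a rough illustration of dual-variable oscillators whose phases synchronize through coupling, here is a hedged sketch of an activation-gated, Kuramoto-style phase update; the coupling rule, parameters, and random graph are assumptions and may differ from the model used in the paper.

```python
import numpy as np

def step(phase, activation, W, omega=0.1, k=0.5, dt=0.1):
    """One Euler step of activation-gated, Kuramoto-style phase coupling.

    phase, activation: (N,) arrays; W: (N, N) coupling weights.
    Illustrative stand-in, not the paper's oscillator model.
    """
    diff = phase[None, :] - phase[:, None]           # pairwise phase differences
    drive = (W * activation[None, :] * np.sin(diff)).sum(axis=1)
    return (phase + dt * (omega + k * drive)) % (2 * np.pi)

rng = np.random.default_rng(1)
N = 20
phase = rng.uniform(0, 2 * np.pi, N)
activation = rng.random(N)
W = (rng.random((N, N)) < 0.2).astype(float)         # sparse coupling graph
for _ in range(500):
    phase = step(phase, activation, W)

# Order parameter near 1 indicates strong phase synchrony across the network.
print(abs(np.exp(1j * phase).mean()))
```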

    Deep Complex Networks

    At present, the vast majority of building blocks, techniques, and architectures for deep learning are based on real-valued operations and representations. However, recent work on recurrent neural networks and older fundamental theoretical analysis suggest that complex numbers could have a richer representational capacity and could also facilitate noise-robust memory retrieval mechanisms. Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models. In this work, we provide the key atomic components for complex-valued deep neural networks and apply them to convolutional feed-forward networks and convolutional LSTMs. More precisely, we rely on complex convolutions and present algorithms for complex batch normalization and complex weight initialization for complex-valued neural nets, and we use them in experiments with end-to-end training schemes. We demonstrate that such complex-valued models are competitive with their real-valued counterparts. We test deep complex models on several computer vision tasks, on music transcription using the MusicNet dataset, and on speech spectrum prediction using the TIMIT dataset. We achieve state-of-the-art performance on these audio-related tasks.
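
    A complex convolution can be expressed with real-valued convolutions via (a + ib)(c + id) = (ac - bd) + i(ad + bc). Below is a minimal PyTorch-style sketch of that idea; the class and parameter names are illustrative, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution built from two real-valued convolutions.

    For input x = x_re + i*x_im and kernel W = W_re + i*W_im:
        W * x = (W_re*x_re - W_im*x_im) + i*(W_re*x_im + W_im*x_re)
    Naming and layout are illustrative assumptions.
    """
    def __init__(self, in_ch, out_ch, kernel_size, **kw):
        super().__init__()
        self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)
        self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, **kw)

    def forward(self, x_re, x_im):
        real = self.conv_re(x_re) - self.conv_im(x_im)
        imag = self.conv_re(x_im) + self.conv_im(x_re)
        return real, imag

# Usage: feed real and imaginary parts as separate tensors.
conv = ComplexConv2d(3, 8, kernel_size=3, padding=1)
x_re = torch.randn(1, 3, 32, 32)
x_im = torch.randn(1, 3, 32, 32)
y_re, y_im = conv(x_re, x_im)
```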

    Contrastive Training of Complex-Valued Autoencoders for Object Discovery

    Current state-of-the-art object-centric models use slots and attention-based routing for binding. However, this class of models has several conceptual limitations: the number of slots is hardwired; all slots have equal capacity; training has a high computational cost; and there are no object-level relational factors within slots. Synchrony-based models can in principle address these limitations by using complex-valued activations which store binding information in their phase components. However, working examples of such synchrony-based models have been developed only very recently, and in practice they are still limited to toy grayscale datasets and the simultaneous storage of fewer than three objects. Here we introduce architectural modifications and a novel contrastive learning method that greatly improve the state-of-the-art synchrony-based model. For the first time, we obtain a class of synchrony-based models capable of discovering objects in an unsupervised manner in multi-object color datasets and simultaneously representing more than three objects. Comment: 26 pages, 14 figures

    Functional Roles of Alpha-Band Phase Synchronization in Local and Large-Scale Cortical Networks

    Alpha-frequency band (8–14 Hz) oscillations are among the most salient phenomena in human electroencephalography (EEG) recordings, and yet their functional roles have remained unclear. Much of the research on alpha oscillations in human EEG has focused on peri-stimulus amplitude dynamics, which phenomenologically support the idea that alpha oscillations are negatively correlated with local cortical excitability and play a role in the suppression of task-irrelevant neuronal processing. This kind of inhibitory role for alpha oscillations is also supported by several functional magnetic resonance imaging and transcranial magnetic stimulation studies. Nevertheless, investigations of local and inter-areal alpha phase dynamics suggest that alpha-frequency band rhythmicity may also play a role in active task-relevant neuronal processing. These data imply that inter-areal alpha phase synchronization could support attentional, executive, and contextual functions. In this review, we outline evidence supporting different views on the roles of alpha oscillations in cortical networks, as well as unresolved issues that should be addressed to resolve or reconcile these apparently contrasting hypotheses.