
    Neural network architecture for crossmodal activation and perceptual sequences

    A self-organizing neural network is described that can form associations between different modalities and can also learn perceptual sequences. This architecture is a step towards the development of a complete agent containing simplified versions of all major neural subsystems in a mammal. It aims to explore, and takes inspiration from, the idea that cognitive function involves an internal simulation of perception and movement. We have tested the architecture in simulations as well as with real sensors, with very encouraging results.
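    The abstract does not spell out the network equations, but the core idea of crossmodal association between self-organizing maps can be illustrated with a minimal sketch: one map per modality, trained on paired inputs and linked by a Hebbian co-activation matrix, so that an input in one modality recalls a prototype in the other. Everything below (map sizes, learning rates, the toy data) is an illustrative assumption rather than the paper's implementation, and the sequence-learning part of the architecture is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=16, epochs=20, lr=0.3, sigma=2.0):
    """Train a 1-D self-organizing map and return its codebook vectors."""
    dim = data.shape[1]
    codebook = rng.normal(size=(n_units, dim))
    grid = np.arange(n_units)
    for _ in range(epochs):
        for x in data:
            winner = np.argmin(np.linalg.norm(codebook - x, axis=1))
            # Neighbourhood-weighted update pulls units near the winner toward x.
            h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))
            codebook += lr * h[:, None] * (x - codebook)
    return codebook

def bmu(codebook, x):
    """Index of the best-matching unit for input x."""
    return int(np.argmin(np.linalg.norm(codebook - x, axis=1)))

# Toy paired data: "visual" and "auditory" views of the same 50 events.
visual = rng.normal(size=(50, 8))
auditory = rng.normal(size=(50, 4))

som_v = train_som(visual)
som_a = train_som(auditory)

# Hebbian association matrix between the two maps, learned from co-activation
# of the winning units for each paired visual/auditory sample.
assoc = np.zeros((len(som_v), len(som_a)))
for v, a in zip(visual, auditory):
    assoc[bmu(som_v, v), bmu(som_a, a)] += 1.0

# Crossmodal recall: a visual input activates its unit, which points to the
# most strongly associated auditory unit and hence an auditory prototype.
v_unit = bmu(som_v, visual[0])
a_unit = int(np.argmax(assoc[v_unit]))
print("visual unit", v_unit, "-> auditory prototype", som_a[a_unit][:3])
```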

    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
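    A highly simplified sketch of the dual-memory idea described above: an episodic store that grows readily to capture instance-level detail, a more conservative semantic store that stays compact, and replay of stored samples for consolidation. It uses plain prototype memories rather than the paper's growing recurrent networks; the thresholds, toy data, and class labels are invented for illustration, and the paper's task-driven regulation of plasticity is only approximated by the larger semantic insertion threshold.

```python
import numpy as np

class GrowingMemory:
    """Minimal growing prototype memory: insert a new unit when the input is
    too far from every existing prototype, otherwise adapt the winning unit."""

    def __init__(self, dim, insertion_threshold, lr=0.1):
        self.protos = np.empty((0, dim))
        self.labels = []
        self.thr = insertion_threshold
        self.lr = lr

    def update(self, x, label=None):
        if len(self.protos) == 0:
            self.protos = x[None, :].copy()
            self.labels.append(label)
            return
        d = np.linalg.norm(self.protos - x, axis=1)
        w = int(np.argmin(d))
        if d[w] > self.thr:
            self.protos = np.vstack([self.protos, x])  # structural growth
            self.labels.append(label)
        else:
            self.protos[w] += self.lr * (x - self.protos[w])
            if label is not None:
                self.labels[w] = label

rng = np.random.default_rng(1)
dim = 16

# Episodic memory grows readily (fine-grained, instance level); the semantic
# memory uses a larger insertion threshold so it stays compact (category
# level); the class label is stored alongside each semantic prototype.
episodic = GrowingMemory(dim, insertion_threshold=4.0)
semantic = GrowingMemory(dim, insertion_threshold=8.0)

# Class-incremental toy stream: five classes presented one after another.
stream = [(rng.normal(loc=3.0 * c, size=dim), c) for c in range(5) for _ in range(40)]
episode_buffer = []

for x, label in stream:
    episodic.update(x)               # unsupervised, instance-level
    semantic.update(x, label=label)  # label-informed, category-level
    episode_buffer.append((x, label))

# Replay: in the absence of new input, re-present stored episodic samples
# to the semantic memory to consolidate category knowledge.
for idx in rng.permutation(len(episode_buffer))[:100]:
    x, label = episode_buffer[idx]
    semantic.update(x, label=label)

print("episodic units:", len(episodic.protos), "semantic units:", len(semantic.protos))
```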

    Functional imaging studies of visual-auditory integration in man.

    This thesis investigates the central nervous system's ability to integrate visual and auditory information from the sensory environment into unified conscious perception. It develops the possibility that the principle of functional specialisation may be applicable in the multisensory domain. The first aim was to establish the neuroanatomical location at which visual and auditory stimuli are integrated in sensory perception. The second was to investigate the neural correlates of visual-auditory synchronicity, which would be expected to play a vital role in establishing which visual and auditory stimuli should be perceptually integrated. Four functional Magnetic Resonance Imaging studies identified brain areas specialised for: the integration of dynamic visual and auditory cues derived from the same everyday environmental events (Experiment 1), the discrimination of relative synchronicity between dynamic, cyclic, abstract visual and auditory stimuli (Experiments 2 and 3), and the aesthetic evaluation of visually and acoustically perceived art (Experiment 4). Experiment 1 provided evidence to suggest that the posterior temporo-parietal junction may be an important site of crossmodal integration. Experiment 2 revealed for the first time significant activation of the right anterior frontal operculum (aFO) when visual and auditory stimuli cycled asynchronously. Experiment 3 confirmed and developed this observation, as the right aFO was activated only during crossmodal (visual-auditory), but not intramodal (visual-visual, auditory-auditory), asynchrony. Experiment 3 also demonstrated activation of the amygdala bilaterally during crossmodal synchrony. Experiment 4 revealed the neural correlates of supramodal, contemplative, aesthetic evaluation within the medial fronto-polar cortex. Activity at this locus varied parametrically according to the degree of subjective aesthetic beauty, for both visual art and musical extracts. The most robust finding of this thesis is that activity in the right aFO increases when concurrently perceived visual and auditory sensory stimuli deviate from crossmodal synchrony, which may veto the crossmodal integration of unrelated stimuli into unified conscious perception.

    Neural oscillatory signatures of auditory and audiovisual illusions

    Questions of the relationship between human perception and brain activity can be approached from different perspectives: in the first, the brain is mainly regarded as a recipient and processor of sensory data. The corresponding research objective is to establish mappings of neural activity patterns and external stimuli. Alternatively, the brain can be regarded as a self-organized dynamical system, whose constantly changing state affects how incoming sensory signals are processed and perceived. The research reported in this thesis can chiefly be located in the second framework, and investigates the relationship between oscillatory brain activity and the perception of ambiguous stimuli. Oscillations are here considered as a mechanism for the formation of transient neural assemblies, which allows efficient information transfer. While the relevance of activity in distinct frequency bands for auditory and audiovisual perception is well established, different functional architectures of sensory integration can be derived from the literature. This dissertation therefore aims to further clarify the role of oscillatory activity in the integration of sensory signals towards unified perceptual objects, using illusion paradigms as tools of study. In study 1, we investigate the role of low-frequency power modulations and phase alignment in auditory object formation. We provide evidence that auditory restoration is associated with a power reduction, while the registration of an additional object is reflected by an increase in phase locking. In study 2, we analyze oscillatory power as a predictor of auditory influence on visual perception in the sound-induced flash illusion. We find that increased beta-/gamma-band power over occipitotemporal electrodes shortly before stimulus onset predicts the illusion, suggesting a facilitation of processing in polymodal circuits. In study 3, we address the question of whether visual influence on auditory perception in the ventriloquist illusion is reflected in primary sensory or higher-order areas. We establish an association between reduced theta-band power in mediofrontal areas and the occurrence of the illusion, which indicates a top-down influence on sensory decision-making. These findings broaden our understanding of the functional relevance of neural oscillations by showing that different processing modes, which are reflected in specific spatiotemporal activity patterns, operate in different instances of sensory integration.
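    The quantities analysed in these studies, band-limited power and phase consistency across trials, can be computed from the analytic signal of band-pass-filtered data. The sketch below is a generic illustration of that computation on synthetic single-channel trials; the filter settings, trial structure, and frequency band are arbitrary choices and not the preprocessing pipeline used in the dissertation.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                          # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(2)

# Toy data: 60 trials of a single channel containing a weakly phase-jittered
# 6 Hz component embedded in noise (a stand-in for theta-band activity).
trials = np.array([
    np.sin(2 * np.pi * 6 * t + rng.normal(scale=0.5))
    + rng.normal(scale=1.0, size=t.size)
    for _ in range(60)
])

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase band-pass filter along the last (time) axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

# The analytic signal gives instantaneous amplitude (power) and phase per trial.
analytic = hilbert(bandpass(trials, 4, 8, fs), axis=-1)
power = np.abs(analytic) ** 2
phase = np.angle(analytic)

# Trial-averaged band power (e.g., for pre-stimulus power contrasts) and the
# inter-trial phase-locking value (consistency of phase across trials).
mean_power = power.mean(axis=0)
plv = np.abs(np.exp(1j * phase).mean(axis=0))

print("theta power at 500 ms:", mean_power[fs // 2].round(3))
print("phase locking at 500 ms:", plv[fs // 2].round(3))
```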

    Continual Lifelong Learning with Neural Networks: A Review

    Humans and animals have the ability to continually acquire, fine-tune, and transfer knowledge and skills throughout their lifespan. This ability, referred to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms that together contribute to the development and specialization of our sensorimotor skills as well as to long-term memory consolidation and retrieval. Consequently, lifelong learning capabilities are crucial for autonomous agents interacting in the real world and processing continuous streams of information. However, lifelong learning remains a long-standing challenge for machine learning and neural network models since the continual acquisition of incrementally available information from non-stationary data distributions generally leads to catastrophic forgetting or interference. This limitation represents a major drawback for state-of-the-art deep neural network models that typically learn representations from stationary batches of training data and thus do not account for situations in which information becomes incrementally available over time. In this review, we critically summarize the main challenges linked to lifelong learning for artificial learning systems and compare existing neural network approaches that alleviate, to different extents, catastrophic forgetting. We discuss well-established and emerging research motivated by lifelong learning factors in biological systems such as structural plasticity, memory replay, curriculum and transfer learning, intrinsic motivation, and multisensory integration.
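    Among the approaches surveyed in this review, memory replay (rehearsal) is one of the most direct remedies for catastrophic forgetting: a small buffer of past examples is interleaved with the current task's data so that old decision boundaries keep contributing to the gradient. The sketch below illustrates this idea in PyTorch on toy tasks; the model, task construction, and buffer size are arbitrary illustrative choices, not a method proposed in the review.

```python
import torch
from torch import nn

torch.manual_seed(0)

def make_task(offset, n=200):
    """Toy 2-class task; each task shifts the input distribution."""
    x = torch.randn(n, 10) + offset
    y = (x.sum(dim=1) > offset * 10).long()
    return x, y

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

replay_x, replay_y = [], []          # small episodic buffer of past tasks

for task_id, offset in enumerate([0.0, 2.0, 4.0]):
    x, y = make_task(offset)
    for _ in range(100):
        xb, yb = x, y
        if replay_x:
            # Rehearsal: interleave stored samples from earlier tasks so the
            # gradient also preserves previously learned decision boundaries.
            xb = torch.cat([xb, torch.cat(replay_x)])
            yb = torch.cat([yb, torch.cat(replay_y)])
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    # Keep a small random subset of the current task for future replay.
    keep = torch.randperm(len(x))[:20]
    replay_x.append(x[keep])
    replay_y.append(y[keep])

print("replay buffer size:", sum(len(b) for b in replay_x))
```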

    Neuroplasticity, neural reuse, and the language module

    What conception of mental architecture can survive the evidence of neuroplasticity and neural reuse in the human brain? In particular, what sorts of modules are compatible with this evidence? I aim to show how developmental and adult neuroplasticity, as well as evidence of pervasive neural reuse, forces us to revise the standard conception of modularity and spells the end of a hardwired and dedicated language module. I argue from principles of both neural reuse and neural redundancy that language is facilitated by a composite of modules (or module-like entities), few if any of which are likely to be linguistically special. Neuroplasticity further provides evidence that, in key respects and to an appreciable extent, few if any of them ought to be considered developmentally robust, though their development does seem to be constrained by features intrinsic to particular regions of cortex (manifesting as domain-specific predispositions or acquisition biases). In the course of doing so, I articulate a schematically and neurobiologically precise framework for understanding modules and their supramodular interactions.

    Central role of somatosensory processes in sexual arousal as identified by neuroimaging techniques

    Research on the neural correlates of sexual arousal is a growing field in affective neuroscience. A new approach studying the correlation between the hemodynamic cerebral response and the autonomic genital response has enabled distinct brain areas to be identified according to their role in inducing penile erection, on the one hand, and in representing penile sensation, on the other.

    Multisensory Congruency as a Mechanism for Attentional Control over Perceptual Selection

    The neural mechanisms underlying attentional selection of competing neural signals for awareness remain an unresolved issue. We studied attentional selection using perceptually ambiguous stimuli in a novel multisensory paradigm that combined competing auditory and competing visual stimuli. We demonstrate that the ability to select, and attentively hold, one of the competing alternatives in either sensory modality is greatly enhanced when there is a matching cross-modal stimulus. Intriguingly, this multimodal enhancement of attentional selection seems to require a conscious act of attention, as passively experiencing the multisensory stimuli did not enhance control over the stimulus. We also demonstrate that congruent auditory or tactile information, and combined auditory–tactile information, aids attentional control over competing visual stimuli, and vice versa. Our data suggest a functional role for recently found neurons that combine voluntarily initiated attentional functions across sensory modalities. We argue that these units provide a mechanism for structuring multisensory inputs that are then used to selectively modulate early (unimodal) cortical processing, boosting the gain of task-relevant features for willful control over perceptual awareness.