
    Dreaming neural networks: rigorous results

    Recently a daily routine for associative neural networks has been proposed: the network Hebbian-learns during the awake state (thus behaving as a standard Hopfield model); then, during its sleep state, it optimizes information storage by consolidating pure patterns and removing spurious ones. This forces the synaptic matrix to collapse to the projector matrix (ultimately approaching the Kanter-Sompolinsky model). The procedure keeps the learning Hebbian-based (a biological must) but, by taking advantage of a (properly stylized) sleep phase, still reaches the maximal critical capacity (for symmetric interactions). So far this emerging picture (as well as the bulk of papers on unlearning techniques) was supported solely by mathematically challenging routes, mainly replica-trick analysis and numerical simulations. Here we rely extensively on Guerra's interpolation techniques developed for neural networks and, in particular, we extend the generalized stochastic-stability approach to the present case. Confining our description within the replica-symmetric approximation (where the previous analyses lie), the picture painted regarding this generalization (and the previously existing variations on the theme) is here entirely confirmed. Further, still relying on Guerra's schemes, we develop a systematic fluctuation analysis to check where ergodicity is broken (an analysis entirely absent in previous investigations). We find that, as long as the network is awake, ergodicity is bounded by the Amit-Gutfreund-Sompolinsky critical line (as it should be), but sleeping destroys spin-glass states by extending both the retrieval and the ergodic regions: after an entire sleeping session the only surviving regions are the retrieval and ergodic ones, and this allows the network to achieve the perfect-retrieval regime (the number of storable patterns equals the number of neurons in the network).
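A minimal numerical sketch of the awake/asleep interpolation the abstract describes, assuming the standard "dreaming" kernel from this literature, J(t) = N^{-1} xi^T (1+t)(1 + t C)^{-1} xi with C the pattern-correlation matrix: at sleep time t = 0 this is the Hebbian (Hopfield) coupling matrix, and as t grows it approaches the projector (Kanter-Sompolinsky) matrix, for which every stored pattern is an exact fixed point. Sizes and variable names here are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 20                              # neurons, stored patterns
xi = rng.choice([-1.0, 1.0], size=(K, N))   # random binary patterns
C = xi @ xi.T / N                           # K x K pattern-correlation matrix

def synaptic_matrix(t):
    """Dreaming interpolation: t = 0 gives the Hebbian matrix,
    t -> infinity approaches the projector matrix."""
    kernel = (1 + t) * np.linalg.inv(np.eye(K) + t * C)
    return xi.T @ kernel @ xi / N

J_hebb = synaptic_matrix(0.0)    # awake: standard Hopfield couplings
J_sleep = synaptic_matrix(1e6)   # after a long sleep: ~ projector couplings

# After sleeping, each stored pattern is (numerically) a fixed point
# of the zero-temperature dynamics sigma -> sign(J sigma).
stable = np.all(np.sign(J_sleep @ xi[0]) == xi[0])
```

With K/N = 0.1 the Hebbian matrix already retrieves well, but only the slept matrix keeps retrieval up to K = N, matching the perfect-retrieval regime claimed above.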

    Interacting Dreaming Neural Networks

    We study the interaction of agents, each consisting of an associative-memory neural network trained with the same memory patterns but possibly different reinforcement-unlearning dreaming periods. Using replica methods, we obtain the rich equilibrium phase diagram of the coupled agents. It shows phases such as a student-professor phase, where only one network benefits from the interaction while the other is unaffected; a mutualism phase, where both benefit; indifferent and insufficient phases, where neither is benefited nor impaired; and an amensalism phase, where one is unchanged and the other is damaged. In addition to the paramagnetic and spin-glass phases, there is also one we call the reinforced-delusion phase, where agents concur without having finite overlaps with memory patterns. For zero coupling constant, the model becomes the reinforcement-and-removal dreaming model, which without dreaming is the Hopfield model. For finite coupling and a single memory pattern, it becomes a Mattis version of the Ashkin-Teller model. In addition to the analytical results, we have explored the model with Monte Carlo simulations. Comment: 23 pages, 4 figures.

    Training neural networks with structured noise improves classification and generalization

    The beneficial role of noise in learning is nowadays a consolidated concept in the field of artificial neural networks, suggesting that even biological systems might take advantage of similar mechanisms to maximize their performance. The training-with-noise algorithm proposed by Gardner and collaborators is an emblematic example of a noise-injection procedure in recurrent networks, which are usually employed to model real neural systems. We show how adding structure to noisy training data can substantially improve the algorithm's performance, allowing it to approach perfect classification and maximal basins of attraction. We also prove that the so-called Hebbian unlearning rule coincides with the training-with-noise algorithm when noise is maximal and the data are fixed points of the network dynamics. A sampling scheme for optimal noisy data is finally proposed and implemented, outperforming both the training-with-noise and the Hebbian unlearning procedures. Comment: 21 pages, 17 figures, main text and appendices.
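The Hebbian unlearning rule mentioned above is, in its classical form, simple to state: relax the network from a random state to a fixed point of the zero-temperature dynamics (often a spurious attractor), then subtract a small Hebbian increment of that state from the couplings. A minimal sketch, with illustrative sizes and a learning rate eps not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, eps, dreams = 100, 10, 0.01, 50
xi = rng.choice([-1.0, 1.0], size=(K, N))
J = xi.T @ xi / N                 # Hebbian couplings
np.fill_diagonal(J, 0.0)

def relax(J, sigma, sweeps=20):
    """Asynchronous zero-temperature dynamics down to a fixed point."""
    for _ in range(sweeps):
        for i in rng.permutation(len(sigma)):
            sigma[i] = 1.0 if J[i] @ sigma >= 0 else -1.0
    return sigma

# Hebbian unlearning: repeatedly reach an attractor from a random
# state and weaken it, which preferentially erodes spurious minima.
for _ in range(dreams):
    sigma = relax(J, rng.choice([-1.0, 1.0], size=N))
    J -= eps * np.outer(sigma, sigma) / N
    np.fill_diagonal(J, 0.0)
```

The abstract's result is that this procedure is a special case of training-with-noise (maximal noise, data at fixed points); the sketch shows only the unlearning side of that correspondence.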

    Ongoing Spontaneous Activity Controls Access to Consciousness: A Neuronal Model for Inattentional Blindness

    Even in the absence of sensory inputs, cortical and thalamic neurons can show structured patterns of ongoing spontaneous activity, whose origins and functional significance are not well understood. We use computer simulations to explore the conditions under which spontaneous activity emerges from a simplified model of multiple interconnected thalamocortical columns linked by long-range, top-down excitatory axons, and to examine its interactions with stimulus-induced activation. The simulations characterize two main states of activity. First, spontaneous gamma-band oscillations emerge at a precise threshold controlled by ascending neuromodulator systems. Second, within a spontaneously active network, we observe the sudden “ignition” of one out of many possible coherent states of high-level activity amidst cortical neurons with long-distance projections. During such an ignited state, spontaneous activity can block external sensory processing. We relate these properties to experimental observations on the neural bases of endogenous states of consciousness, and particularly to the blocking of access to consciousness that occurs in the psychophysical phenomenon of “inattentional blindness,” in which normal subjects intensely engaged in mental activity fail to notice salient but irrelevant sensory stimuli. Although highly simplified, the generic properties of a minimal network may help clarify some of the basic cerebral phenomena underlying the autonomy of consciousness.

    Psofotopias

    Psofotopias is an essay that observes how different media, cultural artifacts, and narratives depict dreams and the act of dreaming. I connect these popular media examples to critiques of contemporary technology and media theory and to brain science and the philosophy of noise, and I conclude with a commentary on one of my recent art installations. This text is inspired by an interview with Geert Lovink and Ned Rossiter published under the title “Dreamful Computing”, which takes a quote by the late Bernard Stiegler as its prompt: “In order to do politics today, we must dream”. Their overall interest is that of “designing theories that don’t disavow the uncertainty, noise, and contingency of the situation of media”, an interest I align with in refusing the often clear-cut separation of dreams and nightmares into either utopias or dystopias. By treating dreams with moral indeterminacy, a space for interpretation, analogy, and (trans)individuation is opened up. This space of ambiguity and uncertainty is what I call a “psofotopia”. Psofotopia is a portmanteau (psofos, noise; topos, place) that allows me to argue that dreams, with their cognitive, interpretive, and affective ambiguity, are “spaces of noise” that offer a model for dealing with complexity. Dreaming, with its hallucinatory qualities, produces a para-conceptual and para-real psofotopia, a space that triggers collisions and clashes between the immaterial and the physical. An analogy can be drawn between the para-reality of dreams and that of virtual space, initially imagined by cyberpunk authors as a consensual hallucination that would free humans from their bodily constraints, and eventually developed into an expensive technological infrastructure that aims to perpetuate interpassivity. I conclude with a discussion of my new work, NNNV XR, an expanded dream that combines VR with a sound installation and custom-made transducing furniture producing haptic signals. The installation connects the virtual and the physical worlds, allowing the experiencer to dream along with my dreams. By navigating the sensorial inputs of NNNV XR, the experiencer inhabits the multimedia psofotopia I propose while remaining aware of the threads that connect the immaterial and the tangible through the para-real.

    Neuronal oscillations, information dynamics, and behaviour: an evolutionary robotics study

    Oscillatory neural activity is closely related to cognition and behaviour, with synchronisation mechanisms playing a key role in the integration and functional organization of different cortical areas. Nevertheless, its informational content and its relationship with behaviour, and hence cognition, are still to be fully understood. This thesis is concerned with better understanding the role of neuronal oscillations and information dynamics in the generation of embodied cognitive behaviours, and with investigating the efficacy of such systems as practical robot controllers. To this end, we develop a novel model based on the Kuramoto model of coupled phase oscillators and perform three minimally cognitive evolutionary robotics experiments. The analyses focus both on a behavioural-level description, investigating the robot’s trajectories, and on a mechanism-level description, exploring the variables’ dynamics and the information-transfer properties within and between the agent’s body and the environment. The first experiment demonstrates that, in an active categorical perception task under normal and inverted vision, networks with a definite, but not too strong, propensity for synchronisation are better able to reconfigure, to organise themselves functionally, and to adapt to different behavioural conditions. The second experiment relates assembly constitution and phase-reorganisation dynamics to performance in supervised and unsupervised learning tasks. We demonstrate that assembly dynamics facilitate the evolutionary process, can account for varying degrees of stimulus modulation of the sensorimotor interactions, and can contribute to solving different tasks without recourse to other plasticity mechanisms. The third experiment explores an associative learning task considering a more realistic connectivity pattern between neurons. We demonstrate that networks with travelling waves as a default solution perform poorly compared to networks that are normally synchronised in the absence of stimuli. Overall, this thesis shows that neural synchronisation dynamics, when suitably flexible and reconfigurable, produce an asymmetric flow of information and can generate minimally cognitive embodied behaviours.
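The Kuramoto model underlying the thesis is compact enough to sketch directly: each oscillator i obeys dtheta_i/dt = omega_i + (k/n) * sum_j sin(theta_j - theta_i), and the degree of synchrony is measured by the order parameter r = |mean(exp(i*theta))|, with r near 1 meaning full phase locking. The parameter values below are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, dt, steps = 50, 2.0, 0.01, 2000
omega = rng.normal(0.0, 0.5, n)           # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)      # initial phases

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r ~ 1 means synchrony."""
    return abs(np.exp(1j * theta).mean())

for _ in range(steps):
    # dtheta_i/dt = omega_i + (k/n) * sum_j sin(theta_j - theta_i)
    coupling = (k / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling)

r = order_parameter(theta)   # well above the incoherent value ~ 1/sqrt(n)
```

With the coupling k above the critical value for this frequency spread, the population locks and r grows towards 1; tuning k is the knob that gives a network "a definite, but not too strong, propensity for synchronisation" in the sense discussed above.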

    Theory of Brain Function, Quantum Mechanics and Superstrings

    Recent developments/efforts to understand aspects of the brain function at the {\em sub-neural} level are discussed. MicroTubules (MTs) participate in a wide variety of dynamical processes in the cell, especially in bioinformation processes such as learning and memory, by possessing a well-known binary error-correcting code with 64 words. In fact, MTs and DNA/RNA are unique cell structures that possess a code system. It seems that the MTs' code system is strongly related to a kind of ``Mental Code'' in the following sense. The MTs' periodic paracrystalline structure makes them able to support a superposition of coherent quantum states, as has recently been conjectured by Hameroff and Penrose, representing an external or mental order, for a time sufficient for efficient quantum computing. Then the quantum superposition collapses spontaneously/dynamically through a new, string-derived mechanism for collapse proposed recently by Ellis, Mavromatos, and myself. At the moment of collapse, organized quantum exocytosis occurs, and this is how a ``{\em mental order}'' may be translated into a ``{\em physiological action}''. Our equation for quantum collapse, tailored to the MT system, predicts that it takes 10,000 neurons ${\cal O}(1\,{\rm sec})$ to dynamically collapse (process and imprint information). Different observations/experiments and various schools of thought are in agreement with the above numbers concerning ``{\em conscious events}''. If indeed MTs may be considered the {\em microsites of consciousness}, then several unexplained properties of consciousness/awareness are easily explained, including ``{\em backward masking}'', ``{\em referral backwards in time}'', the {\em non-locality} in the cerebral cortex of neurons related to particular missions, and the related {\em unitary sense of self} as well. Comment: 72 pages, 1 figure (uuencoded).

    Micro-, Meso- and Macro-Dynamics of the Brain

    Neurosciences, Neurology, Psychiatry.

    The why of the phenomenal aspect of consciousness: Its main functions and the mechanisms underpinning it

    What distinguishes conscious information processing from other kinds of information processing is its phenomenal aspect (PAC), the what-it-is-like for an agent to experience something. The PAC supplies the agent with a sense of self, and informs the agent of how its self is affected by the agent’s own operations. The PAC originates from the activity that attention performs to detect the state of what I define as “the self” (S). S is centered on, and develops from, a hierarchy of innate and acquired values, and is primarily expressed via the central and peripheral nervous systems; it maps the agent’s body and cognitive capacities, and its interactions with the environment. The detection of the state of S by attention modulates the energy level of the organ of attention (OA), i.e., the neural substrate that underpins attention. This modulation generates the PAC. The PAC can be qualified along five dimensions: qualitative, quantitative, hedonic, temporal and spatial. Each dimension can be traced back to a specific feature of the modulation of the energy level of the OA.