
    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. In recent decades the question has therefore been raised whether quantum effects are needed to explain the presumed cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness and cognition. The purpose is to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, such as ion channels, synapses, sensors, and enzymes. This might influence the functionality of some nodes, and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. The conclusion is therefore that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate.
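    To make the notion of "approximately solving NP-hard problems" concrete, here is a standard textbook illustration (not taken from the paper): the greedy matching-based heuristic for minimum vertex cover runs in linear time yet is guaranteed to return a cover at most twice the size of the optimum.

```python
# Classic 2-approximation for minimum vertex cover (an NP-hard
# problem): take both endpoints of any edge not yet covered.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# Small example graph; the optimum cover here is {0, 3} (size 2),
# so the approximation may return up to 4 vertices.
edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
cover = vertex_cover_2approx(edges)
```

The selected edges form a matching, and any optimal cover must contain at least one endpoint of each matched edge, which yields the factor-two guarantee.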

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 127, April 1974

    This special bibliography lists 279 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1974.

    Computational Models of Timing Mechanisms in the Cerebellar Granular Layer

    A long-standing question in neuroscience is how the brain controls movement that requires precisely timed muscle activations. Studies using Pavlovian delay eyeblink conditioning provide good insight into this question. In delay eyeblink conditioning, which is believed to involve the cerebellum, a subject learns an interstimulus interval (ISI) between the onsets of a conditioned stimulus (CS), such as a tone, and an unconditioned stimulus, such as an airpuff to the eye. After the conditioning phase, the subject's eyes automatically close or blink once the ISI has elapsed after CS onset. This timing information is thought to be represented in some way in the cerebellum. Several computational models of the cerebellum have been proposed to explain the mechanisms of time representation, and they commonly point to the granular layer network. This article reviews these computational models and discusses the possible computational power of the cerebellum.
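    The "temporal basis function" idea shared by several granular-layer models can be sketched in a few lines. This is a hypothetical toy, not any specific published model: granule-cell activity is represented as a bank of temporally tuned responses after CS onset, and a Purkinje-like linear readout learns weights via a delta rule so that its output peaks at the trained ISI.

```python
import math

# Granule cells modeled as Gaussian temporal basis functions
# tuned to different delays after CS onset (a common abstraction).
def granule_activity(t, centers, width=0.05):
    return [math.exp(-((t - c) ** 2) / (2 * width ** 2)) for c in centers]

# Delta-rule training of a Purkinje-like readout whose target is a
# brief pulse at the interstimulus interval (ISI).
def train_readout(isi, centers, dt=0.01, t_max=1.0, epochs=200, lr=0.1):
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for i in range(int(t_max / dt)):
            t = i * dt
            x = granule_activity(t, centers)
            y = sum(wi * xi for wi, xi in zip(w, x))
            target = 1.0 if abs(t - isi) < dt else 0.0
            err = target - y
            for j in range(len(w)):
                w[j] += lr * err * x[j]
    return w

# 50 granule cells tuned to delays 0..0.98 s; train an ISI of 0.35 s.
centers = [i * 0.02 for i in range(50)]
w = train_readout(isi=0.35, centers=centers)
```

After training, the readout response is maximal near 0.35 s after CS onset, illustrating how a fixed temporal code plus a learned linear readout can store an interval.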

    Valentino Braitenberg: From neuroanatomy to behavior and back

    This article compiles an exposé of Valentino Braitenberg's singular view of neuroanatomy and neuroscience. The review emphasizes his topologically informed work on neuroanatomy and his dialectics of brain-based explanations of motor behavior. Some of his early ideas on topologically informed neuroanatomy are presented, together with some of his more obscure work on the taxonomy of neural fiber bundles and synaptic arborizations. His functionally informed interpretations of the neuroanatomy of the cerebellum, cortex, and hippocampus are introduced. Finally, we touch on his philosophical views and the inextricable role of function in the explanation of neural behavior.

    Inside the brain of an elite athlete: The neural processes that support high achievement in sports

    Events like the World Championships in athletics and the Olympic Games raise the public profile of competitive sports. They may also leave us wondering what sets the competitors in these events apart from those of us who simply watch. Here we attempt to link neural and cognitive processes that have been found to be important for elite performance with computational and physiological theories inspired by much simpler laboratory tasks. In this way we hope to inspire neuroscientists to consider how their basic research might help to explain sporting skill at the highest levels of performance.

    Anterior Insula Activity Reflects the Effects of Intentionality on the Anticipation of Aversive Stimulation

    If someone causes you harm, your affective reaction to that person might be profoundly influenced by your inferences about the intentionality of their actions. In the present study, we aimed to understand how affective responses to a biologically salient aversive outcome administered by others are modulated by the extent to which a given individual is judged to have deliberately or inadvertently delivered the outcome. Using fMRI, we examined how neural responses to anticipation and receipt of an aversive stimulus are modulated by this fundamental social judgment. We found that affective evaluations about an individual whose actions led to either noxious or neutral consequences for the subject did indeed depend on the perceived intentions of that individual. At the neural level, activity in the anterior insula correlated with the interaction between perceived intentionality and anticipated outcome valence, suggesting that this region reflects the influence of mental state attribution on aversive expectations.

    A Neurorobotic Embodiment for Exploring the Dynamical Interactions of a Spiking Cerebellar Model and a Robot Arm During Vision-based Manipulation Tasks

    While the original goal of developing robots was to replace humans in dangerous and tedious tasks, the ultimate aim is to fully mimic human cognitive and motor behaviour. Building detailed computational models of the human brain is therefore one reasonable way to approach this goal. As lesion studies show, the cerebellum is one of the key players in the nervous system guaranteeing dexterous manipulation and coordinated movement. Studies suggest that it acts as a forward model, providing anticipatory corrections to the sensory signals based on observed discrepancies from reference values. While most studies provide the teaching signal as an error in joint space, few consider the error in task space, and fewer still account for the spiking nature of the cerebellum at the cellular level. In this study, a detailed cellular-level forward cerebellar model is developed, including models of Golgi and basket cells, which are usually neglected in previous work. To preserve the biological features of the cerebellum in the developed model, a hyperparameter optimization method tunes the network accordingly. The efficiency and biological plausibility of the proposed cerebellar-based controller are then demonstrated on different robotic manipulation tasks, reproducing motor behaviour observed in human reaching experiments.

    Toward bio-inspired information processing with networks of nano-scale switching elements

    Unconventional computing explores multi-scale platforms that connect molecular-scale devices into networks for the development of scalable neuromorphic architectures, often based on new materials and components with novel functionalities. We review work investigating the functionalities of locally connected networks of different types of switching elements as computational substrates. In particular, we discuss reservoir computing with networks of nonlinear nanoscale components. In conventional neuromorphic paradigms, the network's synaptic weights are adjusted through a training/learning process. In reservoir computing, by contrast, the nonlinear network acts as a fixed dynamical system that mixes and spreads the input signals over a large state space, and only a readout layer is trained. We illustrate the most important concepts with a few examples, featuring memristor networks with time-dependent and history-dependent resistances.
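    The reservoir computing principle described above can be sketched with an echo-state-style toy (hypothetical sizes and task, not the memristor hardware discussed in the paper): a fixed random recurrent network mixes the input history, and only a linear readout is trained, here with an online delta rule on a one-step memory task.

```python
import math, random

random.seed(0)
N = 30  # reservoir size (arbitrary for this sketch)

# Fixed random input and sparse recurrent weights: these are never
# trained, which is the defining feature of reservoir computing.
W_in = [random.uniform(-0.5, 0.5) for _ in range(N)]
W = [[random.uniform(-0.3, 0.3) if random.random() < 0.2 else 0.0
      for _ in range(N)] for _ in range(N)]
# Rescale so every row sum of |w| is below 1 (contractive regime,
# giving the fading memory the readout relies on).
scale = 0.9 / max(sum(abs(w) for w in row) for row in W)
W = [[w * scale for w in row] for row in W]

def step(x, u):
    """One reservoir update: nonlinear mix of input and state."""
    return [math.tanh(W_in[i] * u + sum(W[i][j] * x[j] for j in range(N)))
            for i in range(N)]

# Train only the linear readout to recall the previous input value.
w_out = [0.0] * N
x = [0.0] * N
prev_u, lr, errs = 0.0, 0.05, []
for t in range(3000):
    u = random.uniform(-1, 1)
    x = step(x, u)
    y = sum(wi * xi for wi, xi in zip(w_out, x))
    err = prev_u - y                   # target: input one step ago
    for i in range(N):
        w_out[i] += lr * err * x[i]    # delta rule on the readout only
    errs.append(err * err)
    prev_u = u
```

The squared recall error falls well below the input variance as training proceeds, even though the reservoir weights are never touched; in the hardware setting, the random fixed network is played by the memristive medium itself.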