
    Sensor fusion in distributed cortical circuits

    The substantial motion of nature is toward balance, survival, and perfection, and evolution in biological systems is a key signature of this drive. Survival cannot be achieved without understanding the surrounding world: how could a fruit fly live without searching for food, and thus without some form of perception to guide its behavior? The fruit fly's nervous system, with roughly a hundred thousand neurons, performs complicated tasks that are beyond the power of an advanced supercomputer. Modern computing machines are built from billions of transistors and are remarkably fast at precise calculation, yet they cannot perform a single task that an insect accomplishes with thousands of neurons. The complexity of information processing and data compression in a single biological neuron, or in a neural circuit, is not comparable with that achieved today in transistors and integrated circuits. Moreover, the style of information processing in neural systems differs from the mostly centralized style employed by microprocessors: almost all cognitive functions arise from the combined effort of multiple brain areas. In mammals, cortical regions are organized hierarchically and are reciprocally interconnected, exchanging information from multiple senses. At the circuit level, this hierarchy preserves the sensory world at different levels of complexity and across multiple modalities. The main behavioral advantage is that the real world is understood through multiple sensory systems, which provides a robust and coherent form of perception. When the quality of one sensory signal drops, the brain can employ other information pathways to handle cognitive tasks, or even to recalibrate the error-prone sensory node.
The mammalian brain also exploits multimodal processing in learning and development, where one sensory system helps another modality to develop. Multisensory integration is considered one of the main factors that generate consciousness in humans, although we still do not know where exactly the information is consolidated into a single percept, or what the underlying neural mechanism of this process is. One straightforward hypothesis suggests that unisensory signals are pooled in a poly-sensory convergence zone that creates a unified percept, but it is hard to believe that a single dedicated region realizes this functionality. Using a set of realistic neuro-computational principles, I have explored theoretically how multisensory integration can be performed within a distributed hierarchical circuit. I argue that the interaction of cortical populations can be interpreted as a specific form of relation satisfaction, in which the information preserved in one neural ensemble must agree with the incoming signals from connected populations according to a relation function. This relation function can be seen as a coherency function that is implicitly learnt through synaptic strengths. Beyond the fact that the real world is composed of multisensory attributes, sensory signals are subject to uncertainty. This requires a cortical mechanism that incorporates the statistical parameters of the sensory world into neural circuits and deals with inaccuracy in perception. I argue in this thesis that the intrinsic stochasticity of neural activity enables a systematic mechanism for encoding probabilistic quantities, e.g. reliability and prior probability, within neural circuits. The benefit of neural stochasticity is well illustrated by the Duns Scotus paradox: imagine a donkey with a deterministic brain that is exposed to two identical food rewards.
Such an animal may suffer and starve to death out of indecision. In this thesis, I introduce an optimal encoding framework that describes the probability function of a Gaussian-like random variable in a pool of Poisson neurons. A distributed neural model is then proposed that optimally combines conditional probabilities over sensory signals in order to compute Bayesian multisensory causal inference, a process known to be a complex multisensory function of the cortex, and one recently found to be performed within a distributed hierarchy in sensory cortex. Our work is among the first successful attempts to put a mechanistic spotlight on the neural mechanism underlying multisensory causal perception in the brain and, more generally, on the theory of decentralized multisensory integration in sensory cortex. Reverse-engineering the brain's information-processing concepts to develop new computing technologies is a growing field, and neuromorphic engineering is the branch that undertakes this mission. In a dedicated part of this thesis, I propose a neuromorphic algorithm for event-based stereoscopic fusion. The algorithm is anchored in the idea of cooperative computing, imposing the epipolar and temporal constraints of the stereoscopic setup on the neural dynamics. Its performance is tested using a pair of silicon retinas.
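The idea of representing a Gaussian-like variable in a pool of Poisson neurons can be illustrated with a toy probabilistic population code, in the spirit of standard linear population-coding results. This is a generic sketch, not the thesis's actual model; the population size, tuning width, and gain below are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population: 41 Poisson neurons with Gaussian tuning curves
prefs = np.linspace(-10.0, 10.0, 41)   # preferred stimuli
sigma_tc = 2.0                         # tuning-curve width
gain = 5.0                             # overall gain; higher gain = higher reliability

def encode(stimulus, gain):
    """Draw Poisson spike counts from Gaussian tuning curves."""
    rates = gain * np.exp(-(stimulus - prefs) ** 2 / (2 * sigma_tc ** 2))
    return rng.poisson(rates)

def decode(spikes):
    """For dense, uniform Gaussian tuning with Poisson noise (and a flat
    prior), the posterior over the stimulus is approximately Gaussian:
    mean = spike-weighted centre of mass of the preferred stimuli,
    variance = sigma_tc**2 divided by the total spike count."""
    mean = spikes @ prefs / spikes.sum()
    var = sigma_tc ** 2 / spikes.sum()
    return mean, var

spikes = encode(3.0, gain)
mu, var = decode(spikes)
```

Note how reliability is carried by the total spike count: doubling the gain doubles the expected count and halves the decoded variance, which is exactly the kind of probabilistic quantity the abstract argues neural stochasticity can encode.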

    Bayesian Cognitive Science, Unification, and Explanation

    It is often claimed that the greatest value of the Bayesian framework in cognitive science consists in its unifying power. Several Bayesian cognitive scientists assume that unification is obviously linked to explanatory power. But this link is not obvious, as unification in science is a heterogeneous notion, which may have little to do with explanation. While a crucial feature of most adequate explanations in cognitive science is that they reveal aspects of the causal mechanism that produces the phenomenon to be explained, the kind of unification afforded by the Bayesian framework to cognitive science does not necessarily reveal aspects of a mechanism. Bayesian unification, nonetheless, can place fruitful constraints on causal-mechanical explanation.

    What is neurorepresentationalism? From neural activity and predictive processing to multi-level representations and consciousness

    This review provides an update on Neurorepresentationalism, a theoretical framework that defines conscious experience as multimodal, situational survey and explains its neural basis in terms of brain systems that construct best-guess representations of sensations originating in our environment and body (Pennartz, 2015).

    Neural Correlates of Multisensory Integration for Feedback Stabilization of the Wrist

    Robust control of action relies on the ability to perceive, integrate, and act on information from multiple sensory modalities, including vision and proprioception. How does the brain combine sensory information to regulate ongoing mechanical interactions between the body and its physical environment? Some behavioral studies suggest that the rules governing multisensory integration for action may differ from the maximum likelihood estimation rules that appear to govern multisensory integration in many perceptual tasks. We used functional magnetic resonance (MR) imaging, an MR-compatible robot, and a multisensory feedback control task to test that hypothesis, investigating how neural mechanisms involved in regulating hand position against mechanical perturbation respond to the presence and fidelity of visual and proprioceptive information. Healthy human subjects rested supine in an MR scanner and stabilized their wrist against constant or pseudo-random torque perturbations imposed by the robot. These two stabilization tasks were performed under three visual feedback conditions: "no-vision," in which subjects had to rely solely on proprioceptive feedback; "true-vision," in which cursor and hand motions were congruent; and "random-vision," in which cursor and hand motions were uncorrelated in time. Behaviorally, performance errors accumulated more quickly when visual feedback was absent or incongruous. We analyzed blood-oxygenation-level-dependent (BOLD) signal fluctuations to compare task-related activations in a cerebello-thalamo-cortical neural circuit previously linked with feedback stabilization of the hand. Activation in this network varied systematically with the presence and fidelity of visual feedback of task performance. Adding task-related visual information caused activations in the cerebello-thalamo-cortical network to expand into neighboring brain regions. The specific loci and intensity of the expanded activity depended on the fidelity of visual feedback. Remarkably, BOLD signal fluctuations within these regions correlated strongly with the time series of proprioceptive errors, but not visual errors, when the fidelity of visual feedback was poor, even though visual and hand motions had similar variability characteristics. These results provide insight into the neural control of the body's physical interactions with its environment, rejecting the standard Gaussian cue-combination model of multisensory integration in favor of models that account for the causal structure of the sensory feedback.
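The "standard Gaussian cue combination model" that these results argue against can be written in a few lines. This is the textbook maximum-likelihood (inverse-variance-weighted) rule, not the authors' analysis code; the function name and variable names are illustrative:

```python
def mle_combine(x_vis, var_vis, x_prop, var_prop):
    """Textbook maximum-likelihood cue combination for two Gaussian cues:
    each cue is weighted by its inverse variance (its reliability), and the
    combined estimate is more precise than either cue alone."""
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_prop)
    x_hat = w_vis * x_vis + (1.0 - w_vis) * x_prop
    var_hat = 1.0 / (1.0 / var_vis + 1.0 / var_prop)
    return x_hat, var_hat

# Two equally reliable cues: the estimate lands halfway between them,
# and the combined variance is half that of either cue.
x_hat, var_hat = mle_combine(0.0, 1.0, 2.0, 1.0)
```

Under this model the weights depend only on cue variances, never on whether the cues plausibly share a cause; the study's finding that poor-fidelity visual feedback is effectively discounted is exactly the behavior this rule cannot produce.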

    Causal inference in multisensory perception and the brain

    To build coherent and veridical multisensory representations of the environment, human observers consider the causal structure of multisensory signals: if they infer a common source, observers integrate the signals weighted by their reliability; otherwise, they segregate the signals. Generally, observers infer a common source if the signals correspond structurally and spatiotemporally. In six projects, this PhD thesis investigated the causal inference model using audiovisual spatial signals presented to human observers in a ventriloquist paradigm. A first psychophysical study showed that sensory reliability determines causal inference via two mechanisms: it modulates how observers infer the causal structure from spatial signal disparity, and it determines the weights of the audiovisual signals when observers integrate them under the assumption of a common source. Using multivariate decoding of fMRI signals, three PhD projects revealed that the auditory and visual cortical hierarchies jointly implement causal inference, with specific regions of the hierarchies representing the constituent spatial estimates of the causal inference model. In line with this model, anterior regions of the intraparietal sulcus (IPS) represent audiovisual signals dependent on visual reliability, task relevance, and the spatial disparity of the signals. However, even for small signal discrepancies suggesting a common source, reliability weighting in IPS was suboptimal compared with a maximum likelihood estimation model. By manipulating visual reliability over time, the fifth PhD project demonstrated that human observers learn sensory reliability from current and past signals in order to weight audiovisual signals, consistent with a Bayesian learner. Finally, the sixth project showed that when visual flashes were suppressed from awareness by continuous flash suppression, the visual bias of the perceived auditory location was strongly reduced but still significant; the reduced ventriloquist effect was presumably mediated by the drop in visual reliability accompanying perceptual unawareness. In conclusion, this PhD thesis suggests that human observers integrate multisensory signals according to their causal structure and temporal regularity: they integrate the signals when a common source is likely, weighting them in proportion to the reliability learnt from the signals' history. Crucially, specific regions of the cortical hierarchies jointly implement these multisensory processes.
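The causal inference model described above has a standard normative form (Körding et al., 2007): weigh a "common source" hypothesis against an "independent sources" hypothesis, compute the reliability-weighted estimate under each, and average the two by the posterior probability of a common cause. The sketch below is a generic implementation of that published model with a zero-mean Gaussian spatial prior, not the thesis's own code; all parameter names are illustrative:

```python
import numpy as np

def causal_inference(x_a, x_v, sig_a, sig_v, sig_p, p_common):
    """Model-averaged causal-inference estimate of the auditory location.
    x_a, x_v: auditory/visual measurements; sig_a, sig_v: their noise s.d.;
    sig_p: s.d. of a zero-mean Gaussian prior over locations;
    p_common: prior probability of a common source."""
    # Likelihood of the measurement pair under a common source (C = 1),
    # marginalizing over the shared location:
    var_c = sig_a**2 * sig_v**2 + sig_a**2 * sig_p**2 + sig_v**2 * sig_p**2
    like_c1 = np.exp(-((x_a - x_v)**2 * sig_p**2
                       + x_a**2 * sig_v**2 + x_v**2 * sig_a**2)
                     / (2 * var_c)) / (2 * np.pi * np.sqrt(var_c))
    # ... and under independent sources (C = 2):
    var_a, var_v = sig_a**2 + sig_p**2, sig_v**2 + sig_p**2
    like_c2 = np.exp(-x_a**2 / (2 * var_a) - x_v**2 / (2 * var_v)) \
              / (2 * np.pi * np.sqrt(var_a * var_v))
    # Posterior probability of a common cause:
    post_c1 = like_c1 * p_common / (like_c1 * p_common
                                    + like_c2 * (1 - p_common))
    # Reliability-weighted location estimates under each causal structure:
    s_c1 = (x_a / sig_a**2 + x_v / sig_v**2) \
           / (1 / sig_a**2 + 1 / sig_v**2 + 1 / sig_p**2)
    s_c2 = (x_a / sig_a**2) / (1 / sig_a**2 + 1 / sig_p**2)
    # Model averaging over the two causal structures:
    return post_c1 * s_c1 + (1 - post_c1) * s_c2
```

With coincident signals the auditory estimate is pulled toward the fused (ventriloquized) location; with a large audiovisual disparity the posterior probability of a common cause collapses and the estimate reverts to the auditory-only value, which is the breakdown of integration that plain reliability weighting cannot capture.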

    Sensory processing in autism across exteroceptive and interoceptive domains

    Objective: The current article discusses recent literature on perceptual processing in autism and aims to provide a critical review of existing theories of autistic perception and suggestions for future work. Method: We review findings detailing exteroceptive and interoceptive processing in autism and discuss their neurobiological basis as well as potential links and analogies between sensory domains. Results: Many atypicalities of autistic perception described in the literature can be explained either by weak neural synchronization or by atypical perceptual inference. Evidence for both mechanisms is found across the different sensory domains considered in this review. Conclusions: We argue that weak neural synchronization and atypical perceptual inference might be complementary neural mechanisms that describe the complex bottom-up and top-down differences of autistic perception, respectively. Future work should be sensitive to individual differences to determine if divergent patterns of sensory processing are observed within individuals, rather than looking for global changes at a group level. Determining whether divergent patterns of exteroceptive and interoceptive sensory processing may contribute to prototypical social and cognitive characteristics of autism may drive new directions in our conceptualization of autism.
