
    A predictive processing theory of sensorimotor contingencies: explaining the puzzle of perceptual presence and its absence in synesthesia

    Normal perception involves experiencing objects within perceptual scenes as real, as existing in the world. This property of “perceptual presence” has motivated “sensorimotor theories” which understand perception to involve the mastery of sensorimotor contingencies. However, the mechanistic basis of sensorimotor contingencies and their mastery has remained unclear. Sensorimotor theory also struggles to explain instances of perception, such as synesthesia, that appear to lack perceptual presence and for which relevant sensorimotor contingencies are difficult to identify. On alternative “predictive processing” theories, perceptual content emerges from probabilistic inference on the external causes of sensory signals; however, this view has addressed neither the problem of perceptual presence nor synesthesia. Here, I describe a theory of predictive perception of sensorimotor contingencies which (1) accounts for perceptual presence in normal perception, as well as its absence in synesthesia, and (2) operationalizes the notion of sensorimotor contingencies and their mastery. The core idea is that generative models underlying perception incorporate explicitly counterfactual elements related to how sensory inputs would change on the basis of a broad repertoire of possible actions, even if those actions are not performed. These “counterfactually rich” generative models encode sensorimotor contingencies related to repertoires of sensorimotor dependencies, with counterfactual richness determining the degree of perceptual presence associated with a stimulus. While the generative models underlying normal perception are typically counterfactually rich (reflecting a large repertoire of possible sensorimotor dependencies), those underlying synesthetic concurrents are hypothesized to be counterfactually poor. In addition to accounting for the phenomenology of synesthesia, the theory naturally accommodates phenomenological differences between a range of experiential states including dreaming, hallucination, and the like. It may also lead to a new view of the (in)determinacy of normal perception.
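    The contrast the abstract draws between counterfactually rich and counterfactually poor generative models can be illustrated with a toy numerical sketch. The linear generative mapping, the action perturbations, and the variance-based richness measure below are assumptions made purely for illustration, not the paper's formalism; the point is only that a stimulus whose model predicts many distinct action-dependent sensory changes scores as "richer" than one whose model predicts almost none.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions, n_sensory, n_hidden = 12, 8, 4
W = rng.normal(size=(n_sensory, n_hidden))   # generative mapping: hidden causes -> predicted sensations
A = rng.normal(size=(n_actions, n_hidden))   # how each candidate action would perturb the hidden causes

def predicted_sensation(hidden):
    """Top-down prediction of sensory input for a given hidden state."""
    return W @ hidden

def counterfactual_richness(hidden, actions):
    """Spread of predicted sensory changes across the action repertoire:
    larger values = more distinct 'what would happen if I did X' predictions."""
    baseline = predicted_sensation(hidden)
    deltas = np.array([predicted_sensation(hidden + a) - baseline for a in actions])
    return float(deltas.var())

h = rng.normal(size=n_hidden)
print("rich repertoire :", counterfactual_richness(h, A))      # many possible actions (toy 'normal percept')
print("poor repertoire :", counterfactual_richness(h, A[:1]))  # almost no actions (toy 'synesthetic concurrent')
```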

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, on a concrete descriptive level, hierarchical prediction offers a way to test and constrain the conceptual elements and mechanisms of predictive coding models (the estimation of predictions, prediction errors, and internal models).
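    A minimal numerical sketch of the machinery the commentary refers to (predictions, prediction errors, internal models) is given below. The single linear level, the random data, and the learning rate are assumptions for illustration only; hierarchical predictive coding stacks such error-correcting loops across cortical levels, with feedback carrying predictions and feedforward signals carrying the residual errors.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(16, 4))   # top-down (feedback) prediction weights
r = np.zeros(4)                           # higher-level representation (internal model state)
x = rng.normal(size=16)                   # lower-level sensory input

for _ in range(200):
    prediction = W @ r            # feedback: what the higher level expects to see
    error = x - prediction        # feedforward: the prediction error
    r += 0.05 * (W.T @ error)     # adjust the representation to reduce the error

print("residual prediction error:", float(np.linalg.norm(x - W @ r)))
```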

    How we see

    The visual world is imaged on the retinas of our eyes. However, "seeing" is not a result of neural functions within the eyes but rather a result of what the brain does with those images. Our visual perceptions are produced by parts of the cerebral cortex dedicated to vision. Although our visual awareness appears unitary, different parts of the cortex analyze color, shape, motion, and depth information. There are also special mechanisms for visual attention, spatial awareness, and the control of actions under visual guidance. Often, lesions from stroke or other neurological diseases will impair one of these subsystems, leading to unusual deficits such as the inability to recognize faces, the loss of awareness of half of visual space, or the inability to see motion or color.

    A Model of the Ventral Visual System Based on Temporal Stability and Local Memory

    The cerebral cortex is a remarkably homogeneous structure, suggesting a rather generic computational machinery. Indeed, under a variety of conditions, functions attributed to specialized areas can be supported by other regions. However, a host of studies have laid out an ever more detailed map of functional cortical areas. This leaves us with the puzzle of whether different cortical areas are intrinsically specialized, or whether they differ mostly by their position in the processing hierarchy and their inputs but apply the same computational principles. Here we show that the computational principle of optimal stability of sensory representations combined with local memory gives rise to a hierarchy of processing stages resembling the ventral visual pathway when it is exposed to continuous natural stimuli. Early processing stages show receptive fields similar to those observed in the primary visual cortex. Subsequent stages are selective for increasingly complex configurations of local features, as observed in higher visual areas. The last stage of the model displays place fields as observed in entorhinal cortex and hippocampus. The results suggest that functionally heterogeneous cortical areas can be generated by only a few computational principles and highlight the importance of the variability of the input signals in forming functional specialization.
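    The "optimal stability of sensory representations" principle the abstract builds on can be illustrated with a slow-feature-analysis-style toy computation: find the linear readout whose output changes least over time relative to its variance. The synthetic data and the single linear stage below are assumptions for illustration; the model described above is hierarchical and additionally combines this objective with local memory.

```python
import numpy as np

rng = np.random.default_rng(2)
T, d = 2000, 5
slow = np.cumsum(rng.normal(scale=0.01, size=T))                  # slowly varying latent signal
X = np.outer(slow, rng.normal(size=d)) + 0.1 * rng.normal(size=(T, d))
X -= X.mean(axis=0)

dX = np.diff(X, axis=0)
A = dX.T @ dX / (T - 1)      # covariance of temporal differences (fast directions score high)
B = X.T @ X / T              # covariance of the signal itself
evals, evecs = np.linalg.eig(np.linalg.solve(B, A))   # generalized eigenproblem A w = lambda B w
w = np.real(evecs[:, np.argmin(np.real(evals))])      # direction with the most temporally stable response

y = X @ w
print("slowness of extracted feature:", float(np.mean(np.diff(y) ** 2) / np.var(y)))
```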

    How lateral inhibition and fast retinogeniculo-cortical oscillations create vision: A new hypothesis

    The role of the physiological processes involved in human vision escapes clarification in the current literature. Unanswered questions about vision include: 1) whether there is more to lateral inhibition than previously proposed, 2) the role of the discs in rods and cones, 3) how inverted images on the retina are converted to erect images for visual perception, 4) what portion of the image formed on the retina is actually processed in the brain, 5) the reason we have an after-image with antagonistic colors, and 6) how we remember space. This theoretical article attempts to clarify some of the physiological processes involved in human vision. The global integration of visual information is conceptual; therefore, we include illustrations to present our theory. Universally, the eyeball is 2.4 cm in diameter and works together with membrane potential, correspondingly representing the retinal layers, photoreceptors, and cortex. Images formed within the photoreceptors must first be converted into chemical signals on the photoreceptors’ individual discs, and the signals at each disc are transduced from light photons into electrical signals. We contend that the discs code the electrical signals into accurate distances, as shown in our figures. The pre-existing oscillations among the various cortices, including the striate and parietal cortex, and the retina work in unison to create an infrastructure of visual space that functionally “places” the objects within this “neural” space. The horizontal layers integrate all discs accurately to create a retina that is pre-coded for distance. Our theory suggests that image inversion never takes place on the retina; rather, images fall onto the retina compressed and coiled, and are then amplified through lateral inhibition via intensification and amplification on the OFF-center cones. The intensified and amplified images are decompressed and expanded in the brain, becoming the images we perceive as external vision.
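    Lateral inhibition, which the abstract invokes as the amplification step, is conventionally modelled as centre-surround antagonism: each unit is excited by its own input and inhibited by its neighbours. The one-dimensional toy signal and hand-picked kernel below are assumptions used only to show the generic effect (edge enhancement); they do not implement the authors' disc-based coding scheme.

```python
import numpy as np

signal = np.array([0.0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0])   # a luminance step
kernel = np.array([-0.5, 1.5, -0.5])                          # excitatory centre, inhibitory surround

response = np.convolve(signal, kernel, mode="same")           # lateral inhibition as local filtering
print(response)   # overshoot/undershoot flanking the step: the edge is sharpened (Mach-band-like)
```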

    Functional and Neural Mechanisms of Out-of-Body Experiences: Importance of Retinogeniculo-Cortical Oscillations

    Current research on the various forms of autoscopic phenomena addresses the clinical and neurological correlates of out-of-body experiences, autoscopic hallucinations, and heautoscopy. Yet most of this research is based on functional magnetic resonance imaging results and focuses predominantly on abnormal cortical activity. Previously, we proposed that visual consciousness results from dynamic retinogeniculo-cortical oscillations, such that the photoreceptors integrate dynamically with the visual and other vision-associated cortices, and theorized that it is mapped out by photoreceptor discs and rich retinal networks that synchronize with the retinotopic mapping and the associated cortex. The feedback sent to the retina from the thalamus and cortex via retinogeniculo-cortical oscillations is many-fold greater than the feed-forward input to the cortex. This can effectively translate into out-of-body experiences projected onto the screen formed by the retina, as perceived via feedback and feed-forward oscillations from the reticular thalamic nucleus, or “internal searchlight”. This article explores the role of the reticular thalamic nucleus and the retinogeniculo-cortical oscillations as pivotal internal components in vision and various autoscopic phenomena.

    Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance

    Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
    National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial Intelligence Agency (NMA201-01-1-2016).
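    The final steering step described above, in which goals behave like attractors and obstacles like repellers of heading, can be illustrated with a small dynamical sketch in the spirit of standard behavioural-dynamics steering models. The gains, exponential falloffs, and one-obstacle scene below are assumptions; they are not the ViSTARS equations, which derive the relevant quantities from the MT/MST motion estimates the abstract describes.

```python
import numpy as np

def steering_rate(heading, goal_dir, goal_dist, obs_dir, obs_dist,
                  k_goal=1.0, k_obs=2.0, decay=0.2):
    """Rate of change of heading: the goal pulls heading toward it (attractor),
    the obstacle pushes heading away (repeller), both weighted by distance."""
    attract = -k_goal * (heading - goal_dir) * np.exp(-decay * goal_dist)
    repel = (k_obs * (heading - obs_dir)
             * np.exp(-decay * obs_dist) * np.exp(-abs(heading - obs_dir)))
    return attract + repel

heading = 0.0
for _ in range(200):   # simple Euler integration of the heading dynamics
    heading += 0.05 * steering_rate(heading, goal_dir=0.5, goal_dist=10.0,
                                    obs_dir=0.1, obs_dist=3.0)
print("final heading (rad):", round(float(heading), 3))
```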

    The logic of forbidden colours

    The purpose of this paper is twofold: (1) to clarify Ludwig Wittgenstein’s thesis that colours possess logical structures, focusing on his ‘puzzle proposition’ that “there can be a bluish green but not a reddish green”; and (2) to compare model-theoretical and game-theoretical approaches to the colour exclusion problem. What is gained, then, is a new game-theoretical framework for the logic of ‘forbidden’ (e.g., reddish green and bluish yellow) colours. My larger aim is to discuss phenomenological principles of the demarcation of the bounds of logic as a formal ontology of abstract objects.
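    One simple way to write down the asymmetry in Wittgenstein's puzzle proposition, offered purely as an illustration and not as the paper's model-theoretical or game-theoretical treatment, is as a pair of modal first-order claims over perceivable colour samples (the predicate names are hypothetical).

```latex
% Illustrative rendering only: B = bluish, G = greenish, R = reddish,
% quantifying over perceivable colour samples x.
\[
  \Diamond\,\exists x\,\bigl(B(x) \wedge G(x)\bigr)
  \qquad\text{but}\qquad
  \neg\Diamond\,\exists x\,\bigl(R(x) \wedge G(x)\bigr)
\]
```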