
    A Bayesian model for visual space perception

    A model for visual space perception is proposed that combines desirable features of the theories of Gibson and Brunswik. The model is a Bayesian processor of proximal stimuli built from three elements: an internal model of the Markov process describing knowledge of the distal world, the a priori distribution of the state of that Markov process, and an internal model relating state to proximal stimuli. The universality of the model is discussed and it is compared with signal detection theory models. Experimental results of Kinchla are treated as a special case.
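The three elements named above map directly onto a recursive Bayes update: propagate the belief through the Markov model, then reweight it by the stimulus likelihood. A minimal sketch, assuming a hypothetical two-state distal world; none of these numbers come from the paper:

```python
import numpy as np

# Illustrative two-state distal world; all probabilities are made up.
prior = np.array([0.5, 0.5])            # a priori distribution over distal states
transition = np.array([[0.9, 0.1],      # internal model of the Markov process:
                       [0.2, 0.8]])     # row i gives P(next state | state i)
likelihood = np.array([[0.8, 0.2],      # internal model relating state to
                       [0.3, 0.7]])     # proximal stimuli: P(stimulus | state)

def bayes_step(belief, stimulus):
    """One cycle of the Bayesian processor: propagate the belief through the
    Markov model, then update it on the observed proximal stimulus."""
    predicted = belief @ transition
    posterior = predicted * likelihood[:, stimulus]
    return posterior / posterior.sum()

belief = prior
for stimulus in (0, 0, 1):              # a short proximal-stimulus sequence
    belief = bayes_step(belief, stimulus)
```

After two stimuli favoring state 0 and one favoring state 1, the belief still leans toward state 0, since the Markov model makes the state sticky.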

    Contextual Feedback to Superficial Layers of V1

    Neuronal cortical circuitry comprises feedforward, lateral, and feedback projections, each of which terminates in distinct cortical layers [1-3]. In sensory systems, feedforward processing transmits signals from the external world into the cortex, whereas feedback pathways signal the brain's inference of the world [4-11]. However, the integration of feedforward, lateral, and feedback inputs within each cortical area impedes the investigation of feedback, and to date, no technique has isolated the feedback of visual scene information in distinct layers of healthy human cortex. We masked feedforward input to a region of V1 cortex and studied the remaining internal processing. Using high-resolution functional brain imaging (0.8 mm³) and multivoxel pattern information techniques, we demonstrate that during normal visual stimulation scene information peaks in mid-layers. Conversely, we found that contextual feedback information peaks in outer, superficial layers. Further, we found that shifting the position of the visual scene surrounding the mask parametrically modulates feedback in superficial layers of V1. Our results reveal the layered cortical organization of external versus internal visual processing streams during perception in healthy human subjects. We provide empirical support for theoretical feedback models such as predictive coding [10, 12] and coherent infomax [13] and reveal the potential of high-resolution fMRI to access internal processing in sub-millimeter human cortex.

    It was (not) me: Causal Inference of Agency in goal-directed actions

    Summary: 
The perception of one’s own actions depends on both sensory information and predictions derived from internal forward models [1]. The integration of these information sources depends critically on whether perceptual consequences are associated with one’s own action (sense of agency) or with changes in the external world that are not related to the action. The perceived effects of actions should thus critically depend on the consistency between the predicted and the actual sensory consequences of actions. To test this idea, we used a virtual-reality setup to manipulate the consistency between pointing movements and their visual consequences and investigated the influence of this manipulation on self-action perception. We then asked whether a Bayesian causal inference model, which assumes a latent agency variable controlling the attributed influence of one’s own action on perceptual consequences [2,3], would account for the empirical data: if the percept was attributed to one’s own action, visual and internal information should fuse in a Bayes-optimal manner, while this should not be the case if the visual stimulus was attributed to external influences. The model fits the data well, showing that small deviations between predicted and actual sensory information were still attributed to one’s own action, while this was not the case for large deviations, for which subjects relied more on internal information. We discuss the performance of this causal inference model in comparison to alternative biologically feasible statistical models, using methods for Bayesian model comparison.

Experiment: 
Participants were seated in front of a horizontal board on which their right hand was placed with the index finger on a haptic marker, representing the starting point for each trial. Participants were instructed to execute straight, fast (quasi-ballistic) pointing movements of fixed amplitude, but without an explicit visual target. The hand was obstructed from the participants' view, and visual feedback about the peripheral part of the movement was provided by a cursor. Feedback was either veridical or rotated against the true direction of the hand movement by predefined angles. After each trial participants were asked to report the subjectively experienced direction of the executed hand movement by moving a mouse cursor in that direction.

Model: 
We compared two probabilistic models. Both include a binary random gating variable (agency) that models the sense of agency, that is, the belief that the visual feedback is influenced by the subject’s motor action. The first model assumes that both the visual feedback xv and the internal motor state estimate xe are directly caused by the (unobserved) real motor state xt (Fig. 1). The second model assumes instead that the expected visual feedback depends on the perceived direction of one’s own motor action xe (Fig. 2). 
Results: Both models are in good agreement with the data. Fig. A shows the fit of Model 1 superimposed on the data from a single subject. Fig. B shows the belief that the visual stimulus was influenced by one’s own action, which decreases for large deviations between predicted and real visual feedback. Bayesian model comparison shows a better fit for Model 1.
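A numerical sketch of a causal inference model of this kind, in the spirit of [2,3]: a binary agency variable gates whether the visual feedback is fused with the internal direction estimate or ignored. All parameter values (sensory noise widths, direction prior, prior probability of agency) are illustrative assumptions, not the fitted values from this study:

```python
import numpy as np

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def agency_inference(x_v, x_e, sd_v=5.0, sd_e=8.0, sd_prior=30.0, p_agency=0.7):
    """Posterior belief that the visual feedback x_v was caused by one's own
    action, plus the resulting direction percept (degrees). All parameters
    are illustrative assumptions."""
    grid = np.linspace(-90.0, 90.0, 3601)   # candidate movement directions (deg)
    dx = grid[1] - grid[0]
    prior = norm_pdf(grid, 0.0, sd_prior)
    # agency = 1: visual feedback and internal estimate share one true direction
    joint1 = prior * norm_pdf(x_v, grid, sd_v) * norm_pdf(x_e, grid, sd_e)
    like1 = joint1.sum() * dx
    # agency = 0: visual feedback is externally caused, independent of the action
    post_e = prior * norm_pdf(x_e, grid, sd_e)
    like0 = post_e.sum() * dx * (prior * norm_pdf(x_v, grid, sd_v)).sum() * dx
    p1, p0 = like1 * p_agency, like0 * (1.0 - p_agency)
    post_agency = p1 / (p1 + p0)
    fused = (grid * joint1).sum() / joint1.sum()     # fused direction estimate
    internal = (grid * post_e).sum() / post_e.sum()  # internal-only estimate
    return post_agency, post_agency * fused + (1.0 - post_agency) * internal
```

With these numbers, a small visual rotation (e.g., 5°) yields a high agency belief and a percept shifted toward the visual feedback, while a large rotation (e.g., 60°) yields a low agency belief and a percept dominated by internal information, qualitatively matching the pattern described above.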
Citations
[1] Wolpert, D.M., Ghahramani, Z., Jordan, M. (1995) Science, 269, 1880–1882.
[2] Körding, K.P., Beierholm, U., Ma, W.J., Quartz, S., Tenenbaum, J.B., et al. (2007) PLoS ONE, 2(9): e943.
[3] Shams, L., Beierholm, U. (2010) TiCS, 14: 425–432.
Acknowledgements
This work was supported by the BCCN Tübingen (FKZ: 01GQ1002), the CIN Tübingen, the European Union (FP7-ICT-215866, project SEARISE), the DFG, and the Hermann and Lilly Schilling Foundation.

    Visual Imagery and Perception Share Neural Representations in the Alpha Frequency Band

    To behave adaptively with sufficient flexibility, biological organisms must cognize beyond immediate reaction to a physically present stimulus. For this, humans use visual mental imagery [1, 2], the ability to conjure up a vivid internal experience from memory that stands in for the percept of the stimulus. Visually imagined contents subjectively mimic perceived contents, suggesting that imagery and perception share common neural mechanisms. Using multivariate pattern analysis on human electroencephalography (EEG) data, we compared the oscillatory time courses of mental imagery and perception of objects. We found that representations shared between imagery and perception emerged specifically in the alpha frequency band. These representations were present in posterior, but not anterior, electrodes, suggesting an origin in parieto-occipital cortex. Comparison of the shared representations to computational models using representational similarity analysis revealed a relationship to later layers of deep neural networks trained on object representations, but not auditory or semantic models, suggesting representations of complex visual features as the basis of commonality. Together, our results identify and characterize alpha oscillations as a cortical signature of representations shared between visual mental imagery and perception.
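The cross-decoding logic behind such shared-representation claims (train a classifier on perception trials, test it on imagery trials, with alpha-band power as features) can be sketched on synthetic data. The channel count, sampling rate, 10 Hz "signal", and nearest-centroid classifier below are simplifying assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_times = 250, 250                       # 1 s epochs at 250 Hz (assumed)

def alpha_band_power(epoch):
    """Log mean power in the alpha band (8-12 Hz) per channel, via an FFT mask."""
    freqs = np.fft.rfftfreq(n_times, 1 / fs)
    spec = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    return np.log(spec[:, (freqs >= 8) & (freqs <= 12)].mean(axis=-1))

def make_trials(n_trials, cls, snr):
    """Synthetic 16-channel epochs; class 1 carries a 10 Hz rhythm on the
    first eight ("posterior") channels."""
    t = np.arange(n_times) / fs
    X = rng.normal(size=(n_trials, 16, n_times))
    X[:, :8] += snr * cls * np.sin(2 * np.pi * 10 * t)
    return np.stack([alpha_band_power(ep) for ep in X])

# cross-decoding: fit on "perception" trials, test on weaker "imagery" trials
train = [make_trials(40, cls, snr=2.0) for cls in (0, 1)]
test = [make_trials(40, cls, snr=1.0) for cls in (0, 1)]
centroids = np.stack([tr.mean(axis=0) for tr in train])

def nearest_centroid(feats):
    return np.linalg.norm(feats[:, None] - centroids[None], axis=-1).argmin(axis=1)

acc = np.mean([(nearest_centroid(test[cls]) == cls).mean() for cls in (0, 1)])
```

Here the "imagery" trials are simulated as a weaker copy of the "perception" signal; above-chance cross-decoding then indicates a shared alpha-band feature. This illustrates the logic of the analysis, not a reproduction of it.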

    Models of Speed Discrimination

    The prime purpose of this project was to investigate theoretical issues concerning the integration of information across visual space. To date, most research on the visual system has been focused in two almost non-overlapping directions. One focus has been low-level perception as studied by psychophysics. The other has been high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared with physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, work has focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D structure from a variety of physical cues, including motion, shading, and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the outputs of the analyzers to form representations of visual objects. The processes underlying the integration of information over space therefore represent a critical aspect of the visual system; understanding them has implications for our expectations about the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: (1) modeling visual search for the detection of speed deviations, (2) the perception of moving objects, and (3) exploring the role of eye movements in various visual tasks.

    On interference effects in concurrent perception and action

    Recent studies have reported repulsion effects between the perception of visual motion and the concurrent production of hand movements. Two models, based on the notions of common coding and internal forward modeling, have been proposed to account for these phenomena. They predict that the size of the effects in perception and action should be monotonically related and vary with the amount of similarity between what is produced and perceived. These predictions were tested in four experiments in which participants were asked to make hand movements in certain directions while simultaneously encoding the direction of an independent stimulus motion. As expected, perceived directions were repelled by produced directions, and produced directions were repelled by perceived directions. However, contrary to the models, the size of the effects in perception and action did not covary, nor did they depend (as predicted) on the amount of perception–action similarity. We propose that such interactions are mediated by the activation of categorical representations.

    Compensatory shifts in visual perception are associated with hallucinations in Lewy body disorders

    Visual hallucinations are a common, distressing, and disabling symptom of Lewy body and other diseases. Current models suggest that interactions in internal cognitive processes generate hallucinations. However, these neglect external factors. Pareidolic illusions are an experimental analogue of hallucinations. They are easily induced in Lewy body disease, have similar content to spontaneous hallucinations, and respond to cholinesterase inhibitors in the same way. We used a primed pareidolia task with hallucinating participants with Lewy body disorders (n = 16), non-hallucinating participants with Lewy body disorders (n = 19), and healthy controls (n = 20). Participants were presented with visual “noise” that sometimes contained degraded visual objects and were required to indicate what they saw. Some perceptions were cued in advance by a visual prime. Results showed that hallucinating participants were impaired in discerning visual signals from noise, with a relaxed criterion threshold for perception compared to both other groups. After the presentation of a visual prime, the criterion was comparable to the other groups. The results suggest that participants with hallucinations compensate for perceptual deficits by relaxing perceptual criteria, at a cost of seeing things that are not there, and that visual cues regularize perception. This latter finding may provide a mechanism for understanding the interaction between environments and hallucinations.
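The "relaxed criterion" result is naturally expressed in standard signal detection theory indices. A short sketch using the usual equal-variance Gaussian formulas for sensitivity d′ and criterion c; the hit and false alarm rates below are invented for illustration, not the study's data:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse standard normal CDF

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' and criterion c.
    Rates are assumed to have been corrected away from 0 and 1."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented illustration: hallucinators show lower sensitivity and a liberal
# (negative) criterion; controls show a stricter (positive) one.
dp_hall, c_hall = sdt_indices(0.80, 0.40)
dp_ctrl, c_ctrl = sdt_indices(0.75, 0.10)
```

A negative c marks a liberal criterion (more hits, but also more false alarms, i.e., reporting objects that are not there), while a positive c marks a stricter one.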

    Distinct lower visual field preference for object shape

    Humans manipulate objects chiefly within their lower visual field, a consequence of upright posture and the anatomical position of the hands and arms. This study tested the hypothesis of enhanced sensitivity to a range of stimuli within the lower visual field. Following current models of hierarchical processing within the ventral stream, discrimination sensitivity was measured for orientation, curvature, shape (radial frequency patterns), and faces at various para-central locations (horizontal, vertical, and main diagonal meridians) and eccentricities (5° and 10°). Peripheral sensitivity was isotropic for orientation and curvature. By contrast, observers were significantly better at discriminating shapes throughout the lower visual field compared to elsewhere. For faces, however, peak sensitivity was found in the left visual field, corresponding to the right-hemispheric localization of human face processing. Presenting head outlines without any internal features (e.g., eyes, mouth) recovered the lower visual field advantage found for simple shapes. A lower visual field preference for the shape of an object, which is absent both for more localized information (orientation and curvature) and for more complex objects (faces), is inconsistent with a strictly feed-forward model and poses a challenge for multistage models of object perception. The distinct lower visual field preference for contour shapes is, however, consistent with an asymmetry at intermediate stages of visual processing, which may play a key role in representing object characteristics that are particularly relevant to visually guided actions.
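Radial frequency patterns, the shape stimuli used here, are closed contours whose radius is modulated sinusoidally with polar angle. A minimal sketch (all parameter values arbitrary):

```python
import numpy as np

def radial_frequency_contour(r0=1.0, amp=0.1, freq=5, phase=0.0, n=360):
    """Contour of a radial frequency (RF) pattern: a circle whose radius is
    modulated sinusoidally as a function of polar angle."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    r = r0 * (1.0 + amp * np.sin(freq * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

x, y = radial_frequency_contour()   # a 5-lobed contour with 10% modulation
```

Discrimination thresholds for such stimuli are typically measured as the smallest modulation amplitude that can be told apart from a perfect circle.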

    The effects of emotional states and traits on time perception

    Background: Models of time perception share an element of scalar expectancy theory known as the internal clock, containing specific mechanisms by which the brain is able to experience time passing and function effectively. A debate exists about whether to treat factors that influence these internal clock mechanisms (e.g., emotion, personality, executive functions, and related neurophysiological components) as arousal- or attentional-based factors. Purpose: This study investigated behavioral and neurophysiological responses to an affective time perception Go/NoGo task, taking into account the behavioral inhibition system (BIS) and behavioral activation system (BAS), which are components of reinforcement sensitivity theory. Methods: After completion of self-report inventories assessing personality traits, electroencephalogram (EEG/ERP) and behavioral recordings of 32 women and 13 men recruited from introductory psychology classes were completed during an affective time perception Go/NoGo task. This task required participants to respond (Go) and inhibit (NoGo) to positive and negative affective visual stimuli of various durations in comparison to a standard duration. Results: Higher BAS scores (especially BAS Drive) were associated with overestimation bias scores for positive stimuli, while BIS scores were not correlated with overestimation bias scores. Furthermore, higher BIS Total scores were associated with higher N2d amplitudes during positive stimulus presentation for 280 ms, while higher BAS Total scores were associated with higher N2d amplitudes during negative stimulus presentation for 910 ms. Discussion: Findings are discussed in terms of arousal-based models of time perception, and suggestions for future research are considered.

    Modellierung der kognitiven Säuglingsentwicklung mittels neuronaler Netze (Modeling Cognitive Infant Development Using Neural Networks)

    This thesis investigates the development of early cognition in infancy using neural network models. Fundamental events in visual perception such as caused motion, occlusion, object permanence, tracking of moving objects behind occluders, object unity perception and sequence learning are modeled in a unifying computational framework while staying close to experimental data in developmental psychology of infancy. In the first project, the development of causality and occlusion perception in infancy is modeled using a simple, three-layered, recurrent network trained with error backpropagation to predict future inputs (Elman network). The model unifies two infant studies on causality and occlusion perception. Subsequently, in the second project, the established framework is extended to a larger prediction network that models the development of object unity, object permanence and occlusion perception in infancy. It is shown that these different phenomena can be unified into a single theoretical framework thereby explaining experimental data from 14 infant studies. The framework shows that these developmental phenomena can be explained by accurately representing and predicting statistical regularities in the visual environment. The models assume (1) different neuronal populations processing different motion directions of visual stimuli in the visual cortex of the newborn infant which are supported by neuroscientific evidence and (2) available learning algorithms that are guided by the goal of predicting future events. Specifically, the models demonstrate that no innate force notions, motion analysis modules, common motion detectors, specific perceptual rules or abilities to "reason" about entities which have been widely postulated in the developmental literature are necessary for the explanation of the discussed phenomena. 
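A minimal sketch of an Elman-style prediction network like the one used in the first project: hidden units receive the current input plus a copy of the previous hidden state, and plain backpropagation (without unrolling through time) trains the network to predict the next input. The sequence, layer sizes, and learning rate are illustrative choices, not the thesis's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Repeating one-hot sequence A,B,C,D whose successor structure is to be learned.
seq = np.eye(4)[[0, 1, 2, 3]]
n_in, n_hid = 4, 12
Wx = rng.normal(0.0, 0.5, (n_hid, n_in))    # input -> hidden
Wh = rng.normal(0.0, 0.5, (n_hid, n_hid))   # context (previous hidden) -> hidden
Wo = rng.normal(0.0, 0.5, (n_in, n_hid))    # hidden -> next-item prediction
lr = 0.1

def forward(x, h_prev):
    h = np.tanh(Wx @ x + Wh @ h_prev)
    logits = Wo @ h
    p = np.exp(logits - logits.max())
    return h, p / p.sum()                    # softmax over the next item

for epoch in range(500):
    h = np.zeros(n_hid)
    for t in range(len(seq)):
        x, target = seq[t], seq[(t + 1) % len(seq)]
        h_prev = h
        h, p = forward(x, h_prev)
        dz = p - target                      # softmax cross-entropy gradient
        dh = (Wo.T @ dz) * (1.0 - h ** 2)    # backprop through tanh only
        Wo -= lr * np.outer(dz, h)
        Wx -= lr * np.outer(dh, x)
        Wh -= lr * np.outer(dh, h_prev)      # context path is not unrolled

# after training, the network anticipates each successor item
h, correct = np.zeros(n_hid), 0
for t in range(len(seq)):
    h, p = forward(seq[t], h)
    correct += int(p.argmax() == (t + 1) % len(seq))
```

The same prediction-of-future-events principle drives the models of occlusion, object permanence, and object unity, with richer input encodings in place of this toy sequence.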
Since the prediction of future events proved fruitful for the theoretical explanation of various developmental phenomena and as a guideline for learning in infancy, the third model addresses the development of visual expectations themselves. A self-organising, fully recurrent neural network model is proposed that forms internal representations of input sequences and maps them onto eye movements. The reinforcement learning architecture (RLA) of the model learns to perform anticipatory eye movements as observed in a range of infant studies. The model suggests that the goal of maximizing the looking time at interesting stimuli guides infants' looking behavior, thereby explaining the occurrence and development of anticipatory eye movements and reaction times. In contrast to classical neural network modelling approaches in the developmental literature, the model uses local learning rules and contains several biologically plausible elements such as excitatory and inhibitory spiking neurons, spike-timing dependent plasticity (STDP), intrinsic plasticity (IP), and synaptic scaling. It is also novel from a technical point of view, as it uses a dynamic recurrent reservoir shaped by various plasticity mechanisms and combines it with reinforcement learning. The model accounts for twelve experimental studies and predicts, among other things, anticipatory behavior for arbitrary sequences and facilitated reacquisition of already learned sequences. All models emphasize the development of the perception of the discussed phenomena, thereby addressing the questions of how and why this developmental change takes place, questions that are difficult to assess experimentally. Despite the diversity of the discussed phenomena, all three projects rely on the same principle: the prediction of future events. 
This principle suggests that cognitive development in infancy may largely be guided by building internal models and representations of the visual environment and using those models to predict its future development.