
    Seeing the invisible: The scope and limits of unconscious processing in binocular rivalry

    When an image is presented to one eye and a very different image is presented to the corresponding location of the other eye, they compete for conscious representation, such that only one image is visible at a time while the other is suppressed. Called binocular rivalry, this phenomenon and its variants have been extensively exploited to study the mechanisms and neural correlates of consciousness. In this paper, we propose a framework, the unconscious binding hypothesis, to distinguish unconscious processing from conscious processing. According to this framework, the unconscious mind not only encodes individual features but also temporally binds distributed features to give rise to cortical representation; unlike conscious binding, however, such unconscious binding is fragile. Under this framework, we review evidence from psychophysical and neuroimaging studies, which suggests that: (1) for invisible low-level features, prolonged exposure to visual patterns and simple translational motion can alter the appearance of subsequent visible features (i.e. adaptation); for invisible high-level features, although complex spiral motion cannot produce adaptation, nor can objects/words enhance subsequent processing of related stimuli (i.e. priming), images of tools can nevertheless activate the dorsal pathway; and (2) although invisible central cues cannot orient attention, invisible erotic pictures in the periphery can nevertheless guide attention, likely through emotional arousal; reciprocally, the processing of invisible information can be modulated by attention at perceptual and neural levels.

    Motion Aftereffects Due to Interocular Summation of Adaptation to Linear Motion

    The motion aftereffect (MAE) can be elicited by adapting observers to global motion before they view a display containing no global motion. Experiments by others have shown that if the left eye of an observer is adapted to motion going in one direction and the right eye to motion in the opposite direction, no MAE is reported during binocular testing. The present study investigated whether this absence of a binocular MAE occurred because the monocular motion signals cancelled each other during testing. Observers were adapted to different, but not quite opposite, directions of motion in the two eyes. Either both eyes, the left eye, or the right eye were tested. Observers reported the direction of perceived motion during the test. When they saw the test stimulus with both eyes, observers reported seeing motion in the direction opposite the vectorial sum of the adaptation directions. In the monocular test conditions, observers reported MAE directions about halfway between their binocular report and the direction opposite the corresponding monocular adaptation directions, indicating that both monocular and binocular sites had adapted. A decomposition of the observed MAEs based on two strictly monocular and one binocular representation of motion adaptation can account for the data. Air Force Office of Scientific Research (F49620-92-J-0225, F49620-92-J-0334, F49620-92-J-0334); Northeast Consortium for Engineering Education (NCEE A303-21-93); Office of Naval Research (N00014-91-J-4100, N00014-94-1-0597).
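
    The vector-sum account described in this abstract can be illustrated with a minimal Python sketch. The adaptation directions and site weights below are hypothetical choices made only for illustration; they are not values taken from the study.

        import numpy as np

        def unit(deg):
            # Unit vector for a direction given in degrees.
            r = np.deg2rad(deg)
            return np.array([np.cos(r), np.sin(r)])

        def angle(v):
            # Direction of a vector in degrees, wrapped to [0, 360).
            return np.rad2deg(np.arctan2(v[1], v[0])) % 360

        # Hypothetical adaptation directions for the two eyes (degrees).
        adapt_left, adapt_right = unit(70), unit(110)

        # Binocular test: the MAE is predicted to lie opposite the vector sum
        # of the two adaptation directions.
        mae_binocular = angle(-(adapt_left + adapt_right))

        # Monocular tests: a weighted mix of a strictly monocular site and the
        # shared binocular site; equal weights are an assumption of this sketch.
        w_mono, w_bino = 0.5, 0.5
        mae_left_eye = angle(-(w_mono * adapt_left + w_bino * (adapt_left + adapt_right)))
        mae_right_eye = angle(-(w_mono * adapt_right + w_bino * (adapt_left + adapt_right)))

        print(mae_binocular, mae_left_eye, mae_right_eye)

    With these numbers, each monocular prediction falls between the binocular MAE direction and the direction opposite that eye's adaptation, mirroring the pattern of reports described above.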

    Motion transparency : depth ordering and smooth pursuit eye movements

    When two overlapping, transparent surfaces move in different directions, there is ambiguity with respect to the depth ordering of the surfaces. Little is known about the surface features that are used to resolve this ambiguity. Here, we investigated the influence of different surface features on the perceived depth order and the direction of smooth pursuit eye movements. Surfaces containing more dots, moving opposite to an adapted direction, moving at a slower speed, or moving in the same direction as the eyes were more likely to be seen in the back. Smooth pursuit eye movements showed an initial preference for surfaces containing more dots, moving in a non-adapted direction, moving at a faster speed, and being composed of larger dots. After 300 to 500 ms, smooth pursuit eye movements adjusted to perception and followed the surface whose direction had to be indicated. The differences between perceived depth order and initial pursuit preferences and the slow adjustment of pursuit indicate that perceived depth order is not determined solely by the eye movements. The common effect of dot number and motion adaptation suggests that global motion strength can induce a bias to perceive the stronger motion in the back.

    Motion adaptation and attention: A critical review and meta-analysis

    The motion aftereffect (MAE) provides a behavioural probe into the mechanisms underlying motion perception, and has been used to study the effects of attention on motion processing. Visual attention can enhance detection and discrimination of selected visual signals. However, the relationship between attention and motion processing remains contentious: not all studies find that attention increases MAEs. Our meta-analysis reveals several factors that explain superficially discrepant findings. Across studies (37 independent samples, 76 effects) motion adaptation was significantly and substantially enhanced by attention (Cohen's d = 1.12, p < .0001). The effect more than doubled when adapting to translating (vs. expanding or rotating) motion. Other factors affecting the attention-MAE relationship included stimulus size, eccentricity and speed. By considering these behavioural analyses alongside neurophysiological work, we conclude that feature-based (rather than spatial, or object-based) attention is the biggest driver of sensory adaptation. Comparisons between naïve and non-naïve observers, different response paradigms, and assessment of 'file-drawer effects' indicate that neither response bias nor publication bias is likely to have significantly inflated the estimated effect of attention.
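
    As a rough illustration of how a pooled effect size of this kind is obtained, the sketch below computes an inverse-variance-weighted mean of standardized effects. The per-study values are invented for illustration only; they are not the 76 effects analysed in the meta-analysis.

        import numpy as np

        # Invented per-study effects (Cohen's d) and sampling variances.
        d = np.array([0.9, 1.3, 0.7, 1.5, 1.1])
        var = np.array([0.10, 0.15, 0.08, 0.20, 0.12])

        # Fixed-effect (inverse-variance) pooling: precision-weighted mean
        # and its standard error.
        w = 1.0 / var
        d_pooled = np.sum(w * d) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        ci = (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

        print(f"pooled d = {d_pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")

    A full meta-analysis of this kind would typically use a random-effects model and moderator analyses (e.g. motion type, stimulus size, eccentricity, speed) rather than simple fixed-effect pooling.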

    Adaptation and aftereffects in the visual system

    This thesis is concerned with the investigation of the nature of adaptation and aftereffects in the human visual system. We extend previous research first by specifically investigating the temporal aspect of these processes. The technique we develop and present here offers a method of measuring the temporal dynamics of visual aftereffects which captures how the aftereffect varies in both strength and duration. In the first experimental chapter we present data following the application of this technique to the Depth After Effect. We then go on to apply this technique to the investigation of the Motion After Effect and in particular look at the temporal dynamics of this effect using different stimuli during adaptation. The results of this form the second and third experimental chapters of this thesis. Having addressed aspects of the nature of visual aftereffects to both motion and disparity, we then present an experiment looking at adaptation to both motion and disparity, and the effect this has on an ambiguous stimulus, that of a transparent surface. We found that observers' biases for which direction of motion moved in front were influenced in a manner mostly consistent with a depth-contingent motion aftereffect following adaptation. These results emphasize the critical role of neural structures sensitive to both motion and binocular disparity in the perception of motion transparency. In summary, this thesis addresses the nature of visual aftereffects and also presents a method of measuring how they vary with time.

    Fractionally Predictive Spiking Neurons

    Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for similar SNR as compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel. Comment: 13 pages, 5 figures, in Advances in Neural Information Processing 201
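
    A minimal sketch of one ingredient mentioned above: approximating a power-law kernel with a weighted sum of exponentials. The exponent, time constants, and fitting method here are ad hoc choices for illustration, not those used in the paper.

        import numpy as np

        beta = 0.5
        t = np.linspace(0.1, 100.0, 2000)       # time axis (arbitrary units), avoiding t = 0
        target = t ** (-beta)                    # power-law kernel to approximate

        # Logarithmically spaced time constants spanning the range of t.
        taus = np.logspace(-1, 2, 8)
        basis = np.exp(-t[:, None] / taus[None, :])   # one exponential per column

        # Least-squares fit of the exponential weights to the power-law target.
        weights, *_ = np.linalg.lstsq(basis, target, rcond=None)
        approx = basis @ weights

        max_rel_err = np.max(np.abs(approx - target) / target)
        print(f"max relative error of the exponential-sum approximation: {max_rel_err:.3f}")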

    Shading and texture: Separate information channels with a common adaptation mechanism?

    We outline a scheme for the way in which early vision may handle information about shading (luminance modulation, LM) and texture (contrast modulation, CM). Previous work on the detection of gratings has found no sub-threshold summation, and no cross-adaptation, between LM and CM patterns. This strongly implied separate channels for the detection of LM and CM structure. However, we now report experiments in which adapting to LM (or CM) gratings creates tilt aftereffects of similar magnitude on both LM and CM test gratings, and reduces the perceived strength (modulation depth) of LM and CM gratings to a similar extent. This transfer of aftereffects between LM and CM might suggest a second stage of processing at which LM and CM information is integrated. The nature of this integration, however, is unclear and several simple predictions are not fulfilled. Firstly, one might expect the integration stage to lose identity information about whether the pattern was LM or CM. We show instead that the identity of barely detectable LM and CM patterns is not lost. Secondly, when LM and CM gratings are combined in-phase or out-of-phase we find no evidence for cancellation, nor for 'phase-blindness'. These results suggest that information about LM and CM is not pooled or merged - shading is not confused with texture variation. We suggest that LM and CM signals are carried by separate channels, but they share a common adaptation mechanism that accounts for the almost complete transfer of perceptual aftereffects.
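
    For readers unfamiliar with the two stimulus classes, the sketch below constructs a luminance-modulated (LM) and a contrast-modulated (CM) grating on a binary noise carrier. The image size, carrier contrast, and modulation depth are arbitrary values chosen only to show the distinction; they are not the stimulus parameters used in the experiments.

        import numpy as np

        size, cycles, depth = 256, 4, 0.5
        x = np.linspace(0, 2 * np.pi * cycles, size)
        modulator = np.tile(np.sin(x), (size, 1))     # sinusoidal modulator varying horizontally

        rng = np.random.default_rng(0)
        carrier = 0.25 * rng.choice([-1.0, 1.0], size=(size, size))   # binary noise carrier

        # LM (shading-like): the modulator is added to the mean luminance;
        # local carrier contrast is unchanged.
        lm = 0.5 + carrier + 0.5 * depth * modulator

        # CM (texture-like): the modulator scales the carrier's contrast;
        # mean luminance is unchanged.
        cm = 0.5 + carrier * (1.0 + depth * modulator)

        print(lm.min(), lm.max(), cm.min(), cm.max())

    A purely luminance-driven (first-order) filter responds to the grating in the LM image but averages out in the CM image, which is why transfer of aftereffects between the two is informative about a shared adaptation stage.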

    Optical versus video see-through head-mounted displays in medical visualization

    We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality research efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factors point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.