
    Visual stability across combined eye and body motion

    In order to maintain visual stability during self-motion, the brain needs to update any egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how and to what extent the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated sideways at a frequency of 0.63 Hz while keeping gaze fixed on a stationary light. When the motion direction changed, a reference target was shown either in front of or behind the fixation point. At the next reversal, half a cycle later, we tested updating of this reference location by asking participants to judge whether a briefly flashed probe was shown to the left or right of the memorized target. We show that updating is not only biased, but that the direction and magnitude of this bias depend on both gaze and object location, implying that a gaze-centered reference frame is involved. Using geometric modeling, we further show that the gaze-dependent errors can be caused by an underestimation of translation amplitude, by a bias of visually perceived objects towards the fovea (i.e., a foveal bias), or by a combination of both.
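
    A minimal sketch of the kind of geometric model described above, assuming a simplified layout (fixation straight ahead at a fixed distance, target on the same line in front of or behind it) and illustrative parameter names (translation_gain, foveal_bias) that are not taken from the paper:

```python
import numpy as np

def updating_error_deg(translation_m, fix_dist_m, target_depth_m,
                       translation_gain=0.8, foveal_bias=0.2):
    """Predicted updating error (deg) for a target in front of (depth < 0) or
    behind (depth > 0) fixation after a sideways body translation, with gaze
    held on the fixation point.

    translation_gain < 1 : translation amplitude is underestimated
    foveal_bias in [0,1] : perceived eccentricity is compressed toward the fovea
    """
    def parallax(t):
        # angle between target and fixation directions after moving sideways by t
        return np.degrees(np.arctan(t / fix_dist_m)
                          - np.arctan(t / (fix_dist_m + target_depth_m)))

    true_shift = parallax(translation_m)                      # required gaze-centered update
    predicted = parallax(translation_gain * translation_m)    # update actually applied
    perceived = (1.0 - foveal_bias) * predicted               # compression toward the fovea
    return perceived - true_shift                             # signed gaze-dependent error

# Example: 10 cm rightward translation, fixation at 1 m, target 20 cm behind it
print(updating_error_deg(0.10, 1.0, 0.20))
```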

    Backtracking: retrospective multi-target tracking

    We introduce a multi-target tracking algorithm that operates on prerecorded video as typically found in post-incident surveillance camera investigation. Apart from being robust to visual challenges such as occlusion and variation in camera view, our algorithm is also robust to temporal challenges, in particular unknown variation in frame rate. The complication with variation in frame rate is that it invalidates motion estimation. As such, tracking algorithms based on motion models will show decreased performance. On the other hand, appearance-based detection in individual frames suffers from a plethora of false detections. Our tracking algorithm, albeit relying on appearance-based detection, deals robustly with the caveats of both approaches. The solution rests on the fact that for prerecorded video we can make fully informed choices, based not only on preceding but also on following frames. We start from an appearance-based object detection algorithm able to detect all target objects in each frame. From this we build a graph structure: the detections form the graph's nodes, and edges are formed by connecting each detection in a frame to all detections in the following frame. Thus, each path through the graph represents a particular selection of successive detections. Tracking is then reformulated as a heuristic search for optimal paths, where optimal means finding all detections belonging to a single object while excluding any other detection. We show that this approach, without an explicit motion model, is robust to both the visual and temporal challenges.
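
    The graph construction and path search can be sketched roughly as follows; the toy detections, the appearance-similarity function, and the dynamic-programming search used here are illustrative stand-ins for the authors' heuristic search, not their actual implementation:

```python
import numpy as np

def best_track(detections, similarity):
    """detections: list over frames, each frame a list of appearance vectors.
    Returns one track (a detection index per frame) maximizing the summed
    appearance similarity between successive detections, via dynamic programming."""
    n_frames = len(detections)
    # score[t][i]: best cumulative similarity of a path ending at detection i of frame t
    score = [np.zeros(len(f)) for f in detections]
    back = [np.zeros(len(f), dtype=int) for f in detections]
    for t in range(1, n_frames):
        for i, det in enumerate(detections[t]):
            links = [score[t - 1][j] + similarity(prev, det)
                     for j, prev in enumerate(detections[t - 1])]
            back[t][i] = int(np.argmax(links))
            score[t][i] = max(links)
    # backtrack from the best final detection
    path = [int(np.argmax(score[-1]))]
    for t in range(n_frames - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# toy usage: 3 frames, 2 detections per frame, cosine similarity on appearance vectors
cos = lambda a, b: float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
frames = [[np.array([1.0, 0.0]), np.array([0.0, 1.0])]] * 3
print(best_track(frames, cos))
```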

    Reliability-based weighting of visual and vestibular cues in displacement estimation

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while heading estimation could improve with the ongoing motion, due to the constant flow of information, the estimate of how far we move requires the integration of sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even if their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They were subjected to a passive linear motion involving visual and vestibular cues with different levels of visual coherence to change relative cue reliability and with cue discrepancies to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
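
    The reliability weighting referred to here is commonly formalized as maximum-likelihood cue integration; a minimal sketch, where the specific noise values in the example are made up for illustration:

```python
def integrate(est_vis, sigma_vis, est_vest, sigma_vest):
    """Combine two displacement estimates by weighting each in proportion to its
    reliability (inverse variance), as in statistically optimal integration."""
    r_vis, r_vest = 1.0 / sigma_vis**2, 1.0 / sigma_vest**2
    w_vis = r_vis / (r_vis + r_vest)
    combined = w_vis * est_vis + (1.0 - w_vis) * est_vest
    combined_sigma = (1.0 / (r_vis + r_vest)) ** 0.5   # never larger than either cue alone
    return combined, combined_sigma, w_vis

# e.g. low-coherence optic flow (noisy visual cue) shifts weight toward the vestibular estimate
print(integrate(est_vis=0.32, sigma_vis=0.10, est_vest=0.28, sigma_vest=0.05))
```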

    Causal inference for spatial constancy across saccades

    During saccadic eye movements, the image on our retinas is, contrary to subjective experience, highly unstable. This study examines how the brain distinguishes the image perturbations caused by saccades from those due to changes in the visual scene. We first show that participants made severe errors in judging the presaccadic location of an object that shifts during a saccade. We then show that these observations can be modeled based on causal inference principles, evaluating whether presaccadic and postsaccadic object percepts derive from a single stable object or not. At the single-trial level, this evaluation is not 'either/or' but a probability that also determines the weight by which pre- and postsaccadic signals are separated and integrated in judging object locations across saccades.
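
    The causal-inference evaluation can be illustrated with a standard two-model comparison over Gaussian signals; the priors, noise values, and model-averaging readout below are illustrative assumptions, not the paper's fitted parameters:

```python
import numpy as np
from scipy.stats import norm

def presaccadic_estimate(x_pre, x_post, sigma_pre=1.0, sigma_post=0.5,
                         p_common_prior=0.8, sigma_shift=3.0):
    """Estimate the presaccadic object location from noisy pre- and postsaccadic
    measurements, weighting integration vs. segregation by the posterior
    probability that both derive from a single stable object."""
    # likelihood of the measured displacement under a common, stable object
    like_common = norm.pdf(x_post - x_pre, 0.0, np.hypot(sigma_pre, sigma_post))
    # ... and under a shifted object, with a broad prior on the intrasaccadic shift
    like_separate = norm.pdf(x_post - x_pre, 0.0,
                             np.hypot(np.hypot(sigma_pre, sigma_post), sigma_shift))
    post_common = (like_common * p_common_prior /
                   (like_common * p_common_prior + like_separate * (1 - p_common_prior)))
    # integrated estimate: reliability-weighted average of pre- and postsaccadic signals
    w_pre = sigma_post**2 / (sigma_pre**2 + sigma_post**2)
    integrated = w_pre * x_pre + (1 - w_pre) * x_post
    # model averaging: mix the integrated and the purely presaccadic estimates
    return post_common * integrated + (1 - post_common) * x_pre, post_common

print(presaccadic_estimate(x_pre=0.0, x_post=1.5))
```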

    Psychophysical evaluation of sensory reweighting in bilateral vestibulopathy

    Perception of spatial orientation is thought to rely on the brain's integration of visual, vestibular, proprioceptive and somatosensory signals, as well as internal beliefs. When one of these signals breaks down, such as the vestibular signal in bilateral vestibulopathy, patients start compensating by relying more on the remaining cues. How these signals are reweighted in this integration process is difficult to establish since they cannot be measured in isolation during natural tasks, are inherently noisy, and can be ambiguous or in conflict. Here, we review our recent work, combining experimental psychophysics with a reverse engineering approach, based on Bayesian inference principles, to quantify sensory noise levels and optimal (re)weighting at the individual subject level, in both patients with bilateral vestibular deficits and healthy controls. We show that these patients reweight the remaining sensory information, relying more on visual and other non-vestibular information than healthy controls in the perception of spatial orientation. This quantification approach could improve diagnostics and prognostics of multisensory integration deficits in vestibular patients, and contribute to an evaluation of rehabilitation therapies directed towards specific training programs.
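
    In this kind of reverse-engineering approach, per-cue noise levels are typically recovered by fitting psychometric functions to forced-choice data, from which predicted optimal weights follow; a hedged sketch with made-up data and illustrative condition names:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import curve_fit

def psychometric(x, mu, sigma):
    # probability of one response category as a cumulative Gaussian of the stimulus level
    return norm.cdf(x, mu, sigma)

def fit_sigma(stim_levels, p_resp):
    (mu, sigma), _ = curve_fit(psychometric, stim_levels, p_resp, p0=[0.0, 5.0])
    return mu, sigma

# made-up single-cue conditions: visual-only and body(non-visual)-only judgments
levels = np.array([-20, -10, -5, 0, 5, 10, 20], dtype=float)
p_vis = np.array([0.02, 0.10, 0.30, 0.50, 0.70, 0.90, 0.98])
p_body = np.array([0.10, 0.25, 0.40, 0.50, 0.60, 0.75, 0.90])

_, sigma_vis = fit_sigma(levels, p_vis)
_, sigma_body = fit_sigma(levels, p_body)

# predicted optimal visual weight from the recovered noise levels; comparing this
# prediction with the empirically measured weight tests for sensory (re)weighting
w_vis_pred = (1 / sigma_vis**2) / (1 / sigma_vis**2 + 1 / sigma_body**2)
print(sigma_vis, sigma_body, w_vis_pred)
```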

    Causal inference in the updating and weighting of allocentric and egocentric information for spatial constancy during whole-body motion

    It has been reported that the brain combines egocentric and allocentric information to update object positions after an intervening movement. Studies typically use discrete updating tasks (i.e., comparing pre- to post-movement target representations). Such approaches, however, cannot reveal how the brain weighs the information in these reference frames during the intervening motion. A reasonable assumption is that objects with a stable position over time are more likely to be treated as reliable allocentric landmarks. But inferring whether an object is stable in space while the observer is moving involves attributing perceived changes in location to either the object's or the observer's displacement. Here, we tested this causal inference hypothesis by designing a continuous whole-body motion updating task. At the beginning of a trial, a target was presented for 500 ms within a large visual frame. As soon as the target disappeared, participants were asked to move a cursor to this location by controlling a linear guide mounted on the vestibular sled on which they were seated. Participants were translated sideways as soon as their reaching movement started, and they had to maintain the cursor on the remembered target location in space while being moved. During the sled motion, the frame moved with a velocity proportional to that of the sled (gain ranging from -0.7 to 0.7). Participants' responses showed a systematic bias in the direction of the frame displacement, one that increased with the difference between the frame and sled velocities for small differences but decreased for large differences. This bias pattern provides evidence that humans exploit a dynamic Bayesian inference process with two causal structures to mediate the dynamic integration of allocentric and egocentric information in spatial updating. Meeting abstract presented at VSS 2017.
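
    The reported bias pattern (first increasing, then decreasing with the frame-velocity discrepancy) is the signature of inferring whether the frame is stationary in the world; the sketch below is a simplified, static illustration of that idea, with all parameter values and names (w_allo, sigma_motion, p_stable_prior) invented for the example rather than taken from the abstract:

```python
import numpy as np
from scipy.stats import norm

def frame_bias(gain, sled_disp=0.2, w_allo=0.5,
               sigma_motion=0.05, sigma_moving_frame=0.5, p_stable_prior=0.7):
    """Illustrative prediction of the cursor bias toward a visual frame that moves
    with a velocity proportional (gain) to the sled. The more likely the frame is
    inferred to be stationary in the world, the more the allocentric cue is trusted."""
    frame_world_disp = gain * sled_disp
    # posterior probability that the observed frame motion comes from a stationary frame
    like_stable = norm.pdf(frame_world_disp, 0.0, sigma_motion)
    like_moving = norm.pdf(frame_world_disp, 0.0, sigma_moving_frame)
    p_stable = (like_stable * p_stable_prior /
                (like_stable * p_stable_prior + like_moving * (1 - p_stable_prior)))
    # bias toward the frame grows with its displacement but is discounted once the
    # frame is judged to be moving -> increases, then decreases, with |gain|
    return p_stable * w_allo * frame_world_disp

for g in (-0.7, -0.3, 0.0, 0.3, 0.7):
    print(g, round(frame_bias(g), 4))
```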

    Weighted visual and vestibular cues for spatial updating during passive self-motion

    When walking or driving, it is of the utmost importance to continuously track the spatial relationship between objects in the environment and the moving body in order to prevent collisions. Although this process of spatial updating occurs naturally, it involves the processing of a myriad of noisy and ambiguous sensory signals. Here, using a psychometric approach, we investigated the integration of visual optic flow and vestibular cues in spatially updating a remembered target position during a linear displacement of the body. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment. They had to remember the position of a target, briefly presented before a sideward translation of the body involving supra-threshold vestibular cues and whole-field optic flow that provided slightly discrepant motion information. After the motion, participants indicated in a forced-choice response whether the location of a brief visual probe was left or right of the remembered target position. Our results show that, in a spatial updating task involving passive linear self-motion, humans integrate optic flow and vestibular self-displacement information according to a weighted-averaging process, with, across subjects, on average about four times as much weight assigned to the visual as to the vestibular contribution (i.e., 79% visual weight). We discuss our findings with respect to previous literature on the effect of optic flow on spatial updating performance.
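
    A simplified readout of such a cue weight: under a small cue conflict, the weight is where the updated position falls between the displacement specified by the vestibular cue and that specified by optic flow. This sketch ignores the psychometric fitting actually used, and the numbers are illustrative only:

```python
def visual_weight(update_response, disp_vest, disp_vis):
    """Empirical visual weight: where the spatially updated response falls between the
    vestibular-specified and the optic-flow-specified displacement. 1.0 means the
    update follows vision entirely, 0.0 the vestibular cue entirely."""
    return (update_response - disp_vest) / (disp_vis - disp_vest)

# illustrative numbers: cues discrepant by 4 cm, response following mostly the visual cue
print(visual_weight(update_response=0.233, disp_vest=0.20, disp_vis=0.24))  # ~0.83
```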

    Time course of the subjective visual vertical during sustained optokinetic and galvanic vestibular stimulation

    The brain is thought to use rotation cues from both the vestibular and optokinetic systems to disambiguate the gravito-inertial force, as measured by the otoliths, into components of linear acceleration and gravity direction relative to the head. Hence, when the head is stationary and upright, an erroneous percept of tilt arises during optokinetic roll stimulation (OKS) or when an artificial canal-like signal is delivered by means of galvanic vestibular stimulation (GVS). It is still unknown how this percept is affected by the combined presence of both cues or how it develops over time. Here, we measured the time course of the subjective visual vertical (SVV), as a proxy of perceived head tilt, in human participants (n = 16) exposed to constant-current GVS (1 and 2 mA, cathodal and anodal), constant-velocity OKS (30°/s CW and CCW), or their combination. In each trial, participants continuously adjusted the orientation of a visual line, which drifted randomly, to Earth-vertical. We found that both GVS and OKS evoke an exponential time course of the SVV. These time courses have different amplitudes and different time constants, 4 s and 7 s respectively, and combine linearly when the two stimulations are presented together. We discuss these results in the framework of observer theory and Bayesian state estimation.
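
    The exponential time courses and their linear combination can be written out directly; only the two time constants come from the text (taken here to map onto GVS and OKS in the order listed), while the amplitudes are illustrative placeholders:

```python
import numpy as np

TAU_GVS, TAU_OKS = 4.0, 7.0    # time constants (s) reported in the text
A_GVS, A_OKS = 6.0, 10.0       # illustrative tilt amplitudes (deg), not from the paper

def svv(t, a_gvs=A_GVS, a_oks=A_OKS):
    """Subjective visual vertical (deg) over time: each stimulation evokes a
    saturating exponential, and the combined condition is their linear sum."""
    gvs = a_gvs * (1 - np.exp(-t / TAU_GVS))
    oks = a_oks * (1 - np.exp(-t / TAU_OKS))
    return gvs, oks, gvs + oks

t = np.linspace(0, 30, 7)
print(np.round(svv(t)[2], 1))  # combined GVS + OKS time course
```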