    Visuomotor Adaptation Without Vision?

    In 1995, an aftereffect following treadmill running was described, in which people would inadvertently advance when attempting to run in place on solid ground with their eyes closed. Although originally induced by treadmill running, the running-in-place aftereffect is argued here to result from the absence of sensory information specifying advancement during running. In a series of experiments in which visual information was systematically manipulated, aftereffect strength (AE), measured as the proportional increase (post-test/pre-test) in forward drift while attempting to run in place with eyes closed, was found to be inversely related to the amount of geometrically correct optical flow provided during induction. In particular, experiment 1 (n=20) demonstrated that the aftereffect was not limited to treadmill running: it could also be strongly generated by running behind a golf cart when the eyes were closed (AE=1.93), but not when the eyes were open (AE=1.16). Conversely, experiment 2 (n=39) showed that simulating an expanding flow field, albeit crudely, during treadmill running was insufficient to eliminate the aftereffect. Reducing ambient auditory information by means of earplugs doubled the total distance inadvertently advanced while attempting to run in place, both before and after adaptation, but did not influence the ratio of change produced by adaptation. It is concluded that the running-in-place aftereffect may result from a recalibration of visuomotor control systems that takes place even in the absence of visual input.
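The abstract's AE measure is a simple ratio of post-test to pre-test forward drift. A minimal sketch of that computation (the raw drift distances below are hypothetical; the abstract reports only the resulting AE values):

```python
def aftereffect_strength(pre_drift_m, post_drift_m):
    """Aftereffect strength (AE): proportional increase in forward drift,
    i.e. post-test drift divided by pre-test drift."""
    if pre_drift_m <= 0:
        raise ValueError("pre-test drift must be positive")
    return post_drift_m / pre_drift_m

# Illustrative drift distances in metres (hypothetical, for a unit pre-test):
print(aftereffect_strength(1.0, 1.93))  # eyes closed during induction: AE = 1.93
print(aftereffect_strength(1.0, 1.16))  # eyes open during induction:   AE = 1.16
```

An AE of 1.0 would indicate no adaptation; values above 1.0 indicate increased inadvertent advancement after induction.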

    Perception Of Visual Speed While Moving

    During self-motion, the world normally appears stationary. In part, this may be due to reductions in visual motion signals during self-motion. In 8 experiments, the authors used magnitude estimation to characterize changes in visual speed perception as a result of biomechanical self-motion alone (treadmill walking), physical translation alone (passive transport), and both biomechanical self-motion and physical translation together (walking). Their results show that each factor alone produces a subtractive reduction in visual speed, but that the subtraction is greatest with both factors together, approximating the sum of the two separately. The similarity of results for biomechanical and passive self-motion supports H. B. Barlow's (1990) inhibition theory of sensory correlation as a mechanism for implementing H. Wallach's (1987) compensation for self-motion.
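The subtractive model described above can be sketched in a few lines: each self-motion factor subtracts a fixed amount from perceived visual speed, and the combined effect approximates the sum of the separate subtractions. The subtraction magnitudes below are illustrative assumptions, not values from the study:

```python
def perceived_speed(visual_speed, *subtractions):
    """Subtractive model of visual speed perception during self-motion:
    each factor subtracts its amount; perceived speed is floored at zero."""
    return max(0.0, visual_speed - sum(subtractions))

biomech = 0.8  # hypothetical subtraction from treadmill walking (m/s)
passive = 0.6  # hypothetical subtraction from passive transport (m/s)
print(perceived_speed(3.0, biomech))           # biomechanical factor alone
print(perceived_speed(3.0, passive))           # passive transport alone
print(perceived_speed(3.0, biomech, passive))  # walking: roughly the sum of both
```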

    An affordable surround-screen virtual reality display

    Building a projection-based virtual reality display is a time, cost, and resource intensive enterprise, and many details contribute to the final display quality. This is especially true for surround-screen displays, most of which are one-of-a-kind systems or custom-made installations with specialized projectors, framing, and projection screens. In general, the costs of acquiring these types of systems have been in the hundreds of thousands and even millions of dollars, specifically for those supporting synchronized stereoscopic projection across multiple screens. Furthermore, the maintenance of such systems adds a recurrent cost, which makes them hard to afford for general introduction into a wider range of industry, academic, and research communities. We present a low-cost, easy-to-maintain surround-screen design based on affordable off-the-shelf components for the projection screens, framing, and display system. The resulting system quality is comparable to significantly more expensive commercially available solutions. Additionally, users with average knowledge can implement our design, and it has the added advantage that single components can be individually upgraded based on necessity as well as available funds.

    Adaptation To Conflicting Visual And Physical Heading Directions During Walking

    We investigated the role of global optic flow in visual–motor adaptation of walking direction. In an immersive virtual environment, observers walked to a circular target lying on either a homogeneous ground plane (target-motion condition) or a textured ground plane (ground-flow condition). During adaptation trials, we changed the mapping from physical to visual space to create a conflict between physical and visual heading directions. On these trials, the visual heading specified by optic flow deviated from an observer's physical heading by ±10°. This conflict was not noticed by observers but caused them to walk along curved paths to the target. Over the course of 20 adaptation trials, observers adapted to partially compensate for the conflicts, resulting in straighter paths. When the conflicts were removed post-adaptation, observers showed aftereffects in the opposite direction. The amount of adaptation was similar for target-motion and ground-flow conditions (20–25%), with the ground-flow environment producing slightly faster adaptation and larger aftereffects. We conclude that the visual–motor system can rapidly recalibrate the mapping from physical to visual heading and that this adaptation does not strongly depend on full-field optic flow.
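The 20–25% adaptation figure above can be read as percent compensation for the imposed heading conflict. A minimal sketch of that measure (a generic formulation; the study's exact computation, and the error values below, are assumptions for illustration):

```python
def percent_adaptation(initial_error_deg, final_error_deg):
    """Percent compensation for an imposed visual-physical heading conflict:
    the fraction of the initial heading error corrected by adaptation."""
    return 100.0 * (initial_error_deg - final_error_deg) / initial_error_deg

# Hypothetical heading errors for a 10 degree conflict: full error on the
# first adaptation trial, reduced error after 20 trials.
print(round(percent_adaptation(10.0, 7.8), 1))  # in the reported 20-25% range
```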

    Matching optical flow to motor speed in virtual reality while running on a treadmill

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase procedure until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed: the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, with the percentage of underestimation relative to running speed ranging from 15% at 8 km/h to 31% at 12 km/h. We suggest that this should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments that enhance engagement in physical activity for healthier lifestyles and for disease prevention and care.
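The staircase procedure described above can be sketched as follows. This is a minimal 1-up/1-down staircase with a deterministic simulated observer; the study's actual staircase rules, step size, and stopping criterion are not given in the abstract, so all parameters here are illustrative assumptions:

```python
def staircase_pse(true_pse, start=8.0, step=0.5, reversals_needed=8):
    """1-up/1-down staircase converging on the Point of Subjective Equality.
    The simulated observer reports 'faster' whenever the presented visual
    speed exceeds their subjective equality point; the PSE estimate is the
    mean of the last few reversal values."""
    speed, direction, reversals = start, None, []
    while len(reversals) < reversals_needed:
        faster = speed > true_pse  # simulated slower/faster response
        new_direction = -1 if faster else +1
        if direction is not None and new_direction != direction:
            reversals.append(speed)  # response flipped: record a reversal
        direction = new_direction
        speed += direction * step
    return sum(reversals[-6:]) / len(reversals[-6:])

# Hypothetical observer running at 10 km/h whose subjective equality point
# lies above the running speed, as the abstract's underestimation implies:
print(round(staircase_pse(true_pse=12.0), 2))
```

With a real participant, `faster` would be the button response on each trial rather than a simulated comparison.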

    Virtual Reality system for freely-moving rodents

    Spatial navigation, active sensing, and most cognitive functions rely on a tight link between motor output and sensory input. Virtual reality (VR) systems simulate the sensorimotor loop, allowing flexible manipulation of enriched sensory input. Conventional rodent VR systems provide 3D visual cues linked to restrained locomotion on a treadmill, leading to a mismatch between visual and most other sensory inputs, to sensorimotor conflicts, and to restricted naturalistic behavior. To rectify these limitations, we developed a VR system (ratCAVE) that provides realistic, low-latency visual feedback directly coupled to the head movements of completely unrestrained rodents. Immersed in this VR system, rats displayed naturalistic behavior by spontaneously interacting with and hugging virtual walls, exploring virtual objects, and avoiding virtual cliffs. We further illustrate the effect of ratCAVE-VR manipulation on hippocampal place fields. The newly developed methodology enables a wide range of experiments involving flexible manipulation of visual feedback in freely-moving, behaving animals.

    Towards Naturalistic Interfaces of Virtual Reality Systems

    Interaction plays a key role in achieving realistic experience in virtual reality (VR). Its realization depends on interpreting the intents of human motions to give inputs to VR systems. Thus, understanding human motion from the computational perspective is essential to the design of naturalistic interfaces for VR. This dissertation studied three types of human motion in the context of VR: locomotion (walking), head motion, and hand motion. For locomotion, the dissertation presented a machine learning approach for developing a mechanical repositioning technique based on a 1-D treadmill for interacting with a unique new large-scale projective display, called the Wide-Field Immersive Stereoscopic Environment (WISE). The usability of the proposed approach was assessed through a novel user study that asked participants to pursue a rolling ball at variable speed in a virtual scene. In addition, the dissertation studied the role of stereopsis in avoiding virtual obstacles while walking by asking participants to step over obstacles and gaps under both stereoscopic and non-stereoscopic viewing conditions in VR experiments. In terms of head motion, the dissertation presented a head gesture interface for interaction in VR that recognizes real-time head gestures on head-mounted displays (HMDs) using Cascaded Hidden Markov Models. Two experiments were conducted to evaluate the proposed approach: the first assessed its offline classification performance, while the second estimated the latency of the algorithm in recognizing head gestures. The dissertation also conducted a user study that investigated the effects of visual and control latency on teleoperation of a quadcopter using head motion tracked by a head-mounted display. As part of the study, a method for objectively estimating the end-to-end latency in HMDs was presented.
    For hand motion, the dissertation presented an approach that recognizes dynamic hand gestures, implementing a hand gesture interface for VR based on a static head gesture recognition algorithm. The proposed algorithm was evaluated offline in terms of its classification performance. A user study was conducted to compare the performance and usability of the head gesture interface, the hand gesture interface, and a conventional gamepad interface for answering Yes/No questions in VR. Overall, the dissertation makes two main contributions towards improving the naturalism of interaction in VR systems. Firstly, the interaction techniques presented in the dissertation can be directly integrated into existing VR systems, offering more choices for interaction to end users of VR technology. Secondly, the results of the user studies of the presented VR interfaces also serve as guidelines for VR researchers and engineers designing future VR systems.
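End-to-end latency estimation of the kind mentioned above is commonly done by finding the time shift that best aligns a tracked motion signal with the displayed response. The dissertation's actual method is not detailed in the abstract, so the cross-correlation approach and the signals below are illustrative assumptions:

```python
def estimate_latency(input_signal, output_signal, dt_ms):
    """Estimate end-to-end latency as the lag (in samples) maximizing the
    cross-correlation between a head-motion signal and the corresponding
    displayed signal, converted to milliseconds via the sample period."""
    n = len(input_signal)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(n // 2):
        corr = sum(a * b for a, b in zip(input_signal[:n - lag],
                                         output_signal[lag:]))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag * dt_ms

# Hypothetical data: the display signal lags the input by 3 samples.
inp = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0, 0, 0]
out = [0, 0, 0, 0, 0, 1, 2, 3, 2, 1, 0, 0]
print(estimate_latency(inp, out, dt_ms=10))  # 3 samples at 10 ms -> 30
```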