
    Short paper: Role of force-cues in path following of 3D trajectories in Virtual Reality.

    This paper examines the effect of adding haptic force cues (simulated inertia, compensation of gravity) during 3D-path following in large immersive virtual reality environments. Thirty-four participants were asked to follow a 3D ring-on-wire trajectory. The experiment consisted of one pre-test/control block of twelve trials with no haptic feedback, followed by three randomized blocks of twelve trials in which the force feedback differed: two levels of simulated inertia were proposed, along with one level compensating the effect of gravity (no-gravity). In all blocks, participants received real-time visual warning feedback (a color change) related to their spatial performance. Contrary to several psychophysics studies, the haptic force cues did not significantly change task performance in terms of completion time or spatial distance error. However, participants spent significantly less time in the visual warning zone in the presence of haptic cues. Taken together, these results are discussed from a psychophysics and multisensory-integration point of view.
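As a rough sketch of how such cues can be rendered on a haptic device (the mode names, mass and gains below are our own illustrative values, not the paper's rendering parameters):

```python
import numpy as np

def haptic_force(mode, accel, mass=0.5, g=9.81):
    """Toy force-rendering sketch for the three haptic conditions.

    mode  : 'inertia_low', 'inertia_high', or 'no_gravity'
    accel : end-effector acceleration, shape (3,), in m/s^2
    All names and parameter values are illustrative assumptions.
    """
    up = np.array([0.0, 0.0, 1.0])
    if mode == 'inertia_low':
        return -0.5 * mass * accel      # light simulated inertia opposes acceleration
    if mode == 'inertia_high':
        return -2.0 * mass * accel      # heavier simulated inertia
    if mode == 'no_gravity':
        return mass * g * up            # constant upward force cancels the weight
    raise ValueError(mode)
```

The two inertia levels scale a force opposing the hand's acceleration, while the no-gravity condition applies a constant upward force equal to the simulated weight.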

    Factors of Micromanipulation Accuracy and Learning

    Micromanipulation refers to manipulation under a microscope in order to perform delicate procedures. It is difficult for humans to manipulate objects accurately under a microscope due to tremor and imperfect perception, which limit performance. This project seeks to understand the factors affecting accuracy in micromanipulation, and to propose learning strategies that improve accuracy. Psychomotor experiments were conducted using computer-controlled setups to determine how various feedback modalities and learning methods can influence micromanipulation performance. In a first experiment, the static and motion accuracy of surgeons, medical students and non-medical students under different magnification levels and grip force settings were compared. A second experiment investigated whether the non-dominant hand placed close to the target can contribute to accurate pointing of the dominant hand. A third experiment tested a training strategy for micromanipulation using unstable dynamics to magnify motion error, a strategy shown to decrease deviation in large arm movements. Two virtual reality (VR) modules were then developed to train needle grasping and needle insertion, two primitive tasks in a microsurgical suturing procedure. The modules provided the trainee with a visual display in stereoscopic view and information on their grip, tool position and angles. Using the VR modules, a study was conducted examining the effects of visual cues for training tool orientation. Results from these studies suggest that it is possible to learn and improve accuracy in micromanipulation using appropriate sensorimotor feedback and training.
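The "unstable dynamics to magnify motion error" idea can be sketched as a divergent force field that pushes the tool away from the ideal path in proportion to the lateral deviation; the function name and gain below are hypothetical, not the thesis's actual controller:

```python
import numpy as np

def divergent_force(pos, line_point, line_dir, k=120.0):
    """Sketch of an error-magnifying (divergent) force field.

    pos        : current 2D tool position
    line_point : a point on the ideal straight path
    line_dir   : direction vector of the ideal path
    k          : destabilizing gain (illustrative value, N/m)

    Returns a force pushing AWAY from the path, proportional to the
    lateral deviation, so small errors grow and must be actively
    corrected - the opposite sign of a stabilizing guidance field.
    """
    d = np.asarray(line_dir, float)
    d = d / np.linalg.norm(d)
    err = np.asarray(pos, float) - np.asarray(line_point, float)
    lateral = err - np.dot(err, d) * d   # component perpendicular to the path
    return k * lateral                   # positive gain => unstable dynamics
```

With a negative gain the same expression would be an assistive channel; flipping its sign is what turns guidance into error augmentation.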

    Perceptual judgments of duration of parabolic motions

    In a 2-alternative forced-choice protocol, observers judged the duration of ball motions shown on an immersive virtual-reality display as approaching in the sagittal plane along parabolic trajectories compatible with Earth gravity effects. In different trials, the ball shifted along the parabolas with one of three different laws of motion: constant tangential velocity, constant vertical velocity, or gravitational acceleration. Only the latter motion was fully consistent with Newton's laws in the Earth gravitational field, whereas the motions with constant velocity profiles obeyed the spatio-temporal constraint of parabolic paths dictated by gravity but violated the kinematic constraints. We found that the discrimination of duration was accurate and precise for all types of motions, but the discrimination for the trajectories at constant tangential velocity was slightly but significantly more precise than that for the trajectories at gravitational acceleration or constant vertical velocity. The results are compatible with a heuristic internal representation of gravity effects that can be engaged when viewing projectiles shifting along parabolic paths compatible with Earth gravity, irrespective of the specific kinematics. Opportunistic use of a moving frame attached to the target may favour visual tracking of targets with constant tangential velocity, accounting for the slightly superior duration discrimination.
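The three timing laws can be reconstructed on a single Earth-gravity parabola: the Newtonian motion fixes the path and total duration, and the other two conditions re-time the same path. The launch parameters below are our own illustrative choices, not the study's stimuli.

```python
import numpy as np

# One Earth-gravity parabola; v0 and launch angle are illustrative.
g, v0, th = 9.81, 6.0, np.deg2rad(60)
T = 2 * v0 * np.sin(th) / g                    # flight time under gravity
t = np.linspace(0.0, T, 1001)

# 1) Gravitational acceleration: the Newtonian reference motion.
x_g = v0 * np.cos(th) * t
z_g = v0 * np.sin(th) * t - 0.5 * g * t**2

# 2) Constant tangential velocity: re-time the same path so arc length
#    grows linearly with time (equal path distance per frame).
ds = np.hypot(np.diff(x_g), np.diff(z_g))
s = np.concatenate(([0.0], np.cumsum(ds)))     # cumulative arc length
s_u = np.linspace(0.0, s[-1], t.size)
x_ct = np.interp(s_u, s, x_g)
z_ct = np.interp(s_u, s, z_g)

# 3) Constant vertical velocity: re-time so height changes at a constant
#    rate on the ascending and then the descending half of the parabola.
i = t.size // 2                                # apex index (t = T/2)
z_up = np.linspace(z_g[0], z_g[i], i + 1)
z_dn = np.linspace(z_g[i], z_g[-1], t.size - i)
x_up = np.interp(z_up, z_g[:i + 1], x_g[:i + 1])
x_dn = np.interp(z_dn, z_g[i:][::-1], x_g[i:][::-1])
x_cv = np.concatenate((x_up, x_dn[1:]))
z_cv = np.concatenate((z_up, z_dn[1:]))
```

All three motions traverse the identical spatial path in the identical total time; only the within-trajectory timing differs, which is exactly the manipulation the discrimination task probes.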

    A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection

    A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and posterior parietal cortex can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection. Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624).
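The attractor/repeller interaction described above is often written, in behavioral-dynamics treatments of steering, as a differential equation on heading. The sketch below is one such first-order formalization of ours, not the paper's neural network; gains and decay constant are illustrative.

```python
import numpy as np

def heading_rate(phi, goal_dir, obstacle_dirs, k_g=2.0, k_o=1.5, c=4.0):
    """d(phi)/dt for goal-as-attractor / obstacle-as-repeller steering.

    phi           : current heading (rad)
    goal_dir      : direction of the goal (rad)
    obstacle_dirs : iterable of obstacle directions (rad)
    Gains k_g, k_o and decay c are assumed illustrative values.
    """
    dphi = -k_g * (phi - goal_dir)          # goal attracts heading
    for psi in obstacle_dirs:
        # obstacle repels, its influence fading away from its direction
        dphi += k_o * (phi - psi) * np.exp(-c * abs(phi - psi))
    return dphi

# Euler-integrate a short episode: goal 0.3 rad to the right, one
# obstacle at 0.1 rad; heading settles near the goal, deflected
# slightly away from the obstacle.
phi, dt = 0.0, 0.01
for _ in range(1000):
    phi += dt * heading_rate(phi, goal_dir=0.3, obstacle_dirs=[0.1])
```

Route selection emerges from the same dynamics: with several obstacles, the heading trajectory relaxes into whichever valley of the combined attractor/repeller landscape is reachable from the initial heading.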

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotic research, offering insights into the recent state of the art and prospects for improvement.

    SOVEREIGN: An Autonomous Neural System for Incrementally Learning Planned Action Sequences to Navigate Towards a Rewarded Goal

    How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded.
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds. Riverside Research Institute; Defense Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0225); National Science Foundation (IRI 90-24877, SBE-0345378); Office of Naval Research (N00014-92-J-1309, N00014-91-J-4100, N00014-01-1-0624); Pacific Sierra Research (PSR 91-6075-2).

    Towards Understanding and Developing Virtual Environments to Increase Accessibilities for People with Visual Impairments

    The primary goal of this research is to investigate the possibilities of utilizing audio feedback to support effective Human-Computer Interaction in Virtual Environments (VEs) without visual feedback for people with visual impairments (VI). Efforts have been made to apply virtual reality (VR) technology to training and educational applications for diverse population groups, such as children and stroke patients. Those applications have already been shown to increase motivation, provide safer training environments, and offer more training opportunities. However, they are all based on visual feedback. With head-related transfer functions (HRTFs), it is possible to design and develop considerably safer and more diversified training environments that might greatly benefit individuals with VI. To explore this, I ran three studies sequentially, examining: 1) if and how users could navigate with different types of 3D auditory feedback in the same VE; 2) whether users could effectively recognize the distance and direction of a virtual sound source in the VE; 3) whether participants with and without VI differed in recognizing the positions and distinguishing the moving directions of 3D sound sources in the VE. The results showed some possibilities for designing effective Human-Computer Interaction methods and provided some understanding of how the participants with VI experienced the scenarios differently than the participants without VI. This research therefore contributes new knowledge on how a visually impaired person interacts with computer interfaces, which can be used to derive guidelines for the design of effective VEs for rehabilitation and exercise.

    Physics-based visual characterization of molecular interaction forces

    Molecular simulations are used in many areas of biotechnology, such as drug design and enzyme engineering. Despite the development of automatic computational protocols, analysis of molecular interactions is still a major aspect where human comprehension and intuition are key to accelerate, analyze, and propose modifications to the molecule of interest. Most visualization algorithms help the users by providing an accurate depiction of the spatial arrangement of the atoms involved in inter-molecular contacts. There are few tools that provide visual information on the forces governing molecular docking. However, these tools, commonly restricted to close interaction between atoms, do not consider whole simulation paths or long-range distances and, importantly, do not provide visual cues for a quick and intuitive comprehension of the energy functions (modeling intermolecular interactions) involved. In this paper, we propose visualizations designed to enable the characterization of interaction forces by taking into account several relevant variables, such as molecule-ligand distance and the energy function, which is essential to understand binding affinities. We put emphasis on mapping molecular docking paths obtained from Molecular Dynamics or Monte Carlo simulations, and provide time-dependent visualizations for different energy components and particle resolutions: atoms, groups or residues. The presented visualizations have the potential to support domain experts in a more efficient drug or enzyme design process.
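As a minimal sketch of the kind of distance-dependent energy function such visualizations map, the Lennard-Jones pair potential is a common non-bonded component; the epsilon/sigma values below are generic assumptions, not taken from any specific force field or from this paper.

```python
import numpy as np

def lj_energy(r, epsilon=0.3, sigma=3.5):
    """Lennard-Jones pair energy in kcal/mol as a function of
    inter-atomic distance r (angstroms); parameters are illustrative."""
    x = (sigma / r) ** 6
    return 4.0 * epsilon * (x * x - x)   # repulsive r^-12 minus attractive r^-6

# Time-dependent view along a synthetic docking path: a ligand atom
# approaches a receptor atom, and the pair energy dips into its well
# near r = 2**(1/6) * sigma before turning steeply repulsive.
dists = np.linspace(8.0, 3.5, 50)        # angstroms, made-up approach path
energies = lj_energy(dists)
```

Plotting `energies` against frame index (or distance) is the one-pair version of the per-atom, per-group and per-residue time series the paper proposes to visualize.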