    Immersive Teleoperation of the Eye Gaze of Social Robots: Assessing Gaze-Contingent Control of Vergence, Yaw and Pitch of Robotic Eyes

    This paper presents a new teleoperation system – called stereo gaze-contingent steering (SGCS) – able to seamlessly control the vergence, yaw and pitch of the eyes of a humanoid robot – here an iCub robot – from the actual gaze direction of a remote pilot. The video stream captured by the cameras embedded in the mobile eyes of the iCub is fed into an HTC Vive head-mounted display equipped with an SMI binocular eye-tracker. The SGCS achieves effective coupling between the eye-tracked gaze of the pilot and the robot's eye movements. SGCS both ensures a faithful reproduction of the pilot's eye movements – a prerequisite for the readability of the robot's gaze patterns by its interlocutor – and maintains the pilot's oculomotor cues, which avoids fatigue and sickness due to sensorimotor conflicts. We assess the precision of this servo-control by asking several pilots to gaze towards known objects positioned in the remote environment. We demonstrate that vergence can be controlled with a precision similar to that of the eyes' azimuth and elevation. This system opens the way for robot-mediated human interactions in personal space, notably when objects in the shared working space are involved.
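
    To make the geometry concrete, the sketch below shows one common way of turning a pilot's 3D fixation point into the yaw, pitch and vergence angles commanded to a pair of robotic eyes. It is a minimal illustration, not the authors' SGCS implementation; the function name, the head-frame convention and the 7 cm inter-ocular baseline are assumptions.

```python
import numpy as np

def gaze_to_eye_angles(fixation, baseline=0.07):
    """Convert a 3D fixation point (head frame, metres; x right, y up,
    z forward) into yaw, pitch and vergence angles in radians.
    Illustrative only; the 0.07 m baseline is an assumed eye separation."""
    x, y, z = fixation
    # Conjugate (version) components: direction of the cyclopean gaze.
    yaw = np.arctan2(x, z)
    pitch = np.arctan2(y, np.hypot(x, z))
    # Vergence: angle between the two eyes' lines of sight to the target.
    left_eye = np.array([-baseline / 2, 0.0, 0.0])
    right_eye = np.array([baseline / 2, 0.0, 0.0])
    v_l, v_r = fixation - left_eye, fixation - right_eye
    cos_v = v_l @ v_r / (np.linalg.norm(v_l) * np.linalg.norm(v_r))
    vergence = np.arccos(np.clip(cos_v, -1.0, 1.0))
    return yaw, pitch, vergence

# Example: a target 40 cm ahead and slightly to the right of the head.
print(gaze_to_eye_angles(np.array([0.05, 0.0, 0.40])))
```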

    Motor Eyes: Mechanical Platform for a Binocular Robotic Vision System

    Stereoscopic vision systems require high computational power to perform image processing for 3D reconstruction of a scene. Synchronizing eye movements through mechanical coupling can reduce this processing power. To investigate this potential, the project team developed a mechanical platform for a binocular robotic vision system that uses stepper motors and slider linkages to achieve coupled pan, coupled tilt and coupled vergence eye movements. A prototype, controlled by an Arduino Uno, was constructed. The prototype achieved eye rotation speeds comparable to human saccadic eye motion and was capable of focusing on specified points, with some position error caused by the prototype's high sensitivity to misalignments of mechanical parts.
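
    To illustrate the kind of control arithmetic such a platform requires, the sketch below maps commanded pan, tilt and vergence angles to stepper-motor step counts. The prototype's firmware runs on an Arduino Uno; this Python version only mirrors the mapping, and the steps-per-revolution, microstepping and gear-ratio values are assumed rather than taken from the paper.

```python
# Hypothetical parameters, chosen for illustration only.
STEPS_PER_REV = 200      # a typical 1.8-degree stepper motor
MICROSTEPPING = 16       # assumed driver microstep setting
GEAR_RATIO = 5.0         # assumed reduction between motor and eye

def angle_to_steps(angle_deg: float) -> int:
    """Map a desired eye rotation (degrees) to signed motor steps."""
    steps_per_degree = STEPS_PER_REV * MICROSTEPPING * GEAR_RATIO / 360.0
    return round(angle_deg * steps_per_degree)

# Coupled commands: one motor each for pan, tilt and symmetric vergence.
print(angle_to_steps(10.0),    # pan: both eyes rotate together
      angle_to_steps(-5.0),    # tilt
      angle_to_steps(3.0))     # vergence: eyes rotate inward symmetrically
```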

    Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking

    Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts and occlusion of the pupil boundary by the eyelid, and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions, with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions such as the suitability of synthetic datasets for improving eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
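
    As a concrete example of the kind of post-processing described in point (II), the sketch below fits an ellipse to a binary pupil mask with OpenCV. It is an illustrative stand-in, not the dissertation's segmentation network; the function name and the synthetic mask are assumptions.

```python
import cv2
import numpy as np

def fit_pupil_ellipse(pupil_mask):
    """Fit an ellipse to the largest region in a binary (0/255, uint8)
    pupil mask. Returns ((cx, cy), (major, minor), angle) or None."""
    contours, _ = cv2.findContours(pupil_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    if len(largest) < 5:          # cv2.fitEllipse needs at least 5 points
        return None
    return cv2.fitEllipse(largest)

# Example on a synthetic mask: an ellipse drawn at a known pose is recovered.
mask = np.zeros((120, 160), dtype=np.uint8)
cv2.ellipse(mask, (80, 60), (25, 15), 30, 0, 360, 255, -1)
print(fit_pupil_ellipse(mask))
```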

    Fusion of Imaging and Inertial Sensors for Navigation

    The motivation of this research is to address the limitations of satellite-based navigation by fusing imaging and inertial systems. The research begins by rigorously describing the imaging and navigation problem and developing practical models of the sensors, then presenting a transformation technique to detect features within an image. Given a set of features, a statistical feature projection technique is developed which utilizes inertial measurements to predict vectors in the feature space between images. This coupling of the imaging and inertial sensors at a deep level is then used to aid the statistical feature matching function. The feature matches and inertial measurements are then used to estimate the navigation trajectory using an extended Kalman filter. After a proper calibration, the image-aided inertial navigation algorithm is tested using a combination of simulation and ground tests with both tactical- and consumer-grade inertial sensors. While limitations of the Kalman filter are identified, the experimental results demonstrate a navigation performance improvement of at least two orders of magnitude over the respective inertial-only solutions.
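
    The fusion structure described above follows the standard extended Kalman filter pattern: inertial measurements drive the prediction step and matched image features drive the update step. The skeleton below is a generic sketch of that pattern, not the paper's exact formulation or state parameterization.

```python
import numpy as np

def ekf_predict(x, P, f, F, Q):
    """Propagate state x and covariance P through dynamics f with
    Jacobian F and process noise Q (e.g. integrating IMU measurements)."""
    return f(x), F @ P @ F.T + Q

def ekf_update(x, P, z, h, H, R):
    """Correct the prediction with a measurement z (e.g. the pixel
    location of a matched image feature), model h, Jacobian H, noise R."""
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Tiny usage example: 1-D position/velocity state corrected by a position fix.
x, P = np.array([0.0, 1.0]), np.eye(2)
F = np.array([[1.0, 0.1], [0.0, 1.0]])                       # dt = 0.1 s
x, P = ekf_predict(x, P, lambda s: F @ s, F, 0.01 * np.eye(2))
H = np.array([[1.0, 0.0]])
x, P = ekf_update(x, P, np.array([0.12]), lambda s: H @ s, H, np.array([[0.05]]))
print(x)
```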

    Development and Validation of a Hybrid Virtual/Physical Nuss Procedure Surgical Trainer

    With the continuous advancement and adoption of minimally invasive surgery, proficiency with the nontrivial surgical skills involved is becoming a greater concern. Consequently, surgical simulation has been increasingly embraced by many for training and skill-transfer purposes. Some systems utilize haptic feedback within a high-fidelity, anatomically correct virtual environment, whereas others use manikins, synthetic components, or box trainers to mimic the primary components of a corresponding procedure. Surgical simulation development for some minimally invasive procedures is, however, still suboptimal or otherwise embryonic. This is true for the Nuss procedure, a minimally invasive surgery for correcting pectus excavatum (PE) – a congenital chest wall deformity. This work aims to address this gap by exploring the challenges of developing both a purely virtual and a purely physical simulation platform of the Nuss procedure and their implications in a training context. It then describes the development of a hybrid mixed-reality system that integrates virtual and physical constituents, together with an augmented haptic interface, to reproduce the primary steps of the Nuss procedure and satisfy clinically relevant prerequisites for its training platform. Furthermore, this work reports a user study investigating the system's face, content, and construct validity to establish its faithfulness as a training platform.

    Mouse visual cortex contains a region of enhanced spatial resolution.

    The representation of space in mouse visual cortex was thought to be relatively uniform. Here we reveal, using population receptive-field (pRF) mapping techniques, that mouse visual cortex contains a region in which pRFs are considerably smaller. This region, the "focea," represents a location in space in front of, and slightly above, the mouse. Using two-photon imaging, we show that the smaller pRFs are due to lower scatter of receptive fields at the focea and an over-representation of binocular regions of space. We show that receptive fields of single neurons in areas LM and AL are smaller at the focea and that mice have improved visual resolution in this region of space. Furthermore, freely moving mice make compensatory eye movements to hold this region in front of them. Our results indicate that mice have spatial biases in their visual processing, a finding that has important implications for the use of the mouse model of vision.
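
    For readers unfamiliar with pRF mapping, the sketch below shows the standard 2D Gaussian pRF model that such analyses are typically built on: the predicted response to a stimulus frame is the overlap of the stimulus aperture with a Gaussian whose width is the pRF size. It is a generic illustration, not the authors' analysis code; all names and values are assumed.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, grid_x, grid_y):
    """2D Gaussian pRF centred at (x0, y0) degrees with size sigma,
    evaluated on a visual-field grid."""
    return np.exp(-((grid_x - x0) ** 2 + (grid_y - y0) ** 2) / (2 * sigma ** 2))

def predicted_response(stimulus_frames, prf):
    """Predicted response per frame: overlap of each binary stimulus
    aperture (n_frames, H, W) with the pRF (H, W)."""
    return np.tensordot(stimulus_frames, prf, axes=([1, 2], [0, 1]))

# Example: a vertical bar aperture covering the pRF centre evokes a response.
xs, ys = np.meshgrid(np.linspace(-30, 30, 61), np.linspace(-30, 30, 61))
prf = gaussian_prf(5.0, 10.0, 4.0, xs, ys)          # small pRF, "focea"-like
bar = np.zeros((1, 61, 61))
bar[0, :, 30:40] = 1.0                               # bar at 0-9 degrees azimuth
print(predicted_response(bar, prf))
```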

    Piloted aircraft simulation concepts and overview

    An overview of piloted aircraft simulation is presented that reflects the viewpoint of an aeronautical technologist. The intent is to acquaint potential users with some of the basic concepts and issues that characterize piloted simulation. Applications to the development of aircraft are highlighted, but some aspects of training simulators are also covered. A historical review is given together with a description of some current simulators. Simulator usages, advantages, and limitations are discussed, and human perception qualities important to simulation are described. An assessment of current simulation is presented that addresses validity, fidelity, and deficiencies. Future prospects are discussed and technology projections are made.
