6,745 research outputs found

    On the Calibration of Active Binocular and RGBD Vision Systems for Dual-Arm Robots

    This paper describes a camera and hand-eye calibration methodology for integrating an active binocular robot head within a dual-arm robot. For this purpose, we derive the forward kinematic model of our active robot head and describe our methodology for calibrating and integrating our robot head. This rigid calibration provides a closed-form hand-to-eye solution. We then present an approach for dynamically updating the cameras' external parameters for optimal 3D reconstruction, which is the foundation for robotic tasks such as grasping and manipulating rigid and deformable objects. We show from experimental results that our robot head achieves an overall sub-millimetre accuracy of less than 0.3 millimetres while recovering the 3D structure of a scene. In addition, we report a comparative study between current RGBD cameras and our active stereo head within two dual-arm robotic testbeds that demonstrates the accuracy and portability of our proposed methodology.
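
    The closed-form hand-eye step admits a compact illustration. Below is a minimal Python sketch, not the paper's code: it fabricates mutually consistent gripper-to-base and target-to-camera poses from a known ground-truth camera-to-gripper transform and checks that OpenCV's closed-form solver recovers it; the pose counts, ranges, and solver choice are all assumptions.

        import cv2
        import numpy as np

        rng = np.random.default_rng(0)

        def rand_pose():
            """Random rigid transform as a 4x4 homogeneous matrix."""
            R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
            T = np.eye(4)
            T[:3, :3], T[:3, 3] = R, rng.uniform(-0.2, 0.2, 3)
            return T

        X = rand_pose()  # ground-truth camera-to-gripper transform to recover
        W = rand_pose()  # fixed target-to-base transform

        R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
        for _ in range(10):
            T_g2b = rand_pose()                   # gripper pose from robot kinematics
            T_t2c = np.linalg.inv(T_g2b @ X) @ W  # target-to-camera pose consistent with X
            R_g2b.append(T_g2b[:3, :3]); t_g2b.append(T_g2b[:3, 3:])
            R_t2c.append(T_t2c[:3, :3]); t_t2c.append(T_t2c[:3, 3:])

        # Closed-form hand-eye solution (Tsai-Lenz): solves AX = XB for cam-to-gripper.
        R_est, t_est = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                            method=cv2.CALIB_HAND_EYE_TSAI)
        print(np.linalg.norm(R_est - X[:3, :3]),          # both should be ~0
              np.linalg.norm(t_est.ravel() - X[:3, 3]))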

    Glasgow's Stereo Image Database of Garments

    To provide insight into cloth perception and manipulation with an active binocular robotic vision system, we have compiled and released a database of 80 stereo-pair colour images with corresponding horizontal and vertical disparity maps and mask annotations for 3D garment point-cloud rendering. The stereo-image garment database is part of research conducted under the EU-FP7 Clothes Perception and Manipulation (CloPeMa) project and belongs to a wider database collection released through CloPeMa (www.clopema.eu). This database is based on 16 different off-the-shelf garments. Each garment has been imaged in five different pose configurations on the project's binocular robot head. A full copy of the database is made available for scientific research only at https://sites.google.com/site/ugstereodatabase/.
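
    Rendering a garment point cloud from such data reduces to pinhole triangulation on a rectified stereo pair. The sketch below assumes hypothetical file names, a NumPy-readable disparity map, and illustrative calibration values (focal length f in pixels, baseline B in metres, principal point cx, cy); the database's actual formats and calibration come with the release.

        import cv2
        import numpy as np

        left = cv2.imread("garment01_pose1_left.png")              # hypothetical name
        disp = np.load("garment01_pose1_horiz_disparity.npy")      # hypothetical name
        mask = cv2.imread("garment01_pose1_mask.png", cv2.IMREAD_GRAYSCALE) > 0

        f, B, cx, cy = 1200.0, 0.1, 640.0, 480.0   # assumed rectified calibration

        # Depth from horizontal disparity of a rectified pair: Z = f * B / d.
        d = disp.astype(np.float32)
        Z = f * B / np.maximum(d, 1e-6)            # depth in metres
        Z[d <= 0] = np.nan                         # invalid disparities
        xs, ys = np.meshgrid(np.arange(d.shape[1]), np.arange(d.shape[0]))
        X = (xs - cx) * Z / f
        Y = (ys - cy) * Z / f

        cloud = np.dstack([X, Y, Z])[mask]   # garment points only, in metres
        colours = left[mask]                 # matching BGR colour values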

    Towards binocular active vision in a robot head system

    This paper presents the first results of an investigation and pilot study into an active, binocular vision system that combines binocular vergence, object recognition and attention control in a unified framework. The prototype developed is capable of identifying, targeting, verging on and recognizing objects in a highly-cluttered scene without the need for calibration or other knowledge of the camera geometry. This is achieved by implementing all image analysis in a symbolic space without creating explicit pixel-space maps. The system structure is based on the ‘searchlight metaphor’ of biological systems. We present results of a first pilot investigation that yield a maximum vergence error of 6.4 pixels, while seven of nine known objects were recognized in a highly-cluttered environment. Finally, a “stepping stone” visual search strategy was demonstrated, taking a total of 40 saccades to find two known objects in the workspace, neither of which appeared simultaneously within the field of view resulting from any individual saccade.
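
    At its core, verging means rotating the cameras until the attended target's residual horizontal disparity is nulled. The loop below is a minimal proportional sketch under assumed interfaces: a hypothetical head object exposing a target_disparity() measurement in pixels and a rotate_vergence() actuator; the paper's own processing is symbolic rather than a pixel-space servo like this.

        # Minimal proportional vergence loop; `head`, its methods, and the
        # gain/tolerance values are assumptions for illustration.
        GAIN = 0.01        # degrees of vergence per pixel of residual disparity
        TOLERANCE = 1.0    # stop once the target's disparity is below 1 px

        def verge_on_target(head, max_steps=50):
            for _ in range(max_steps):
                d = head.target_disparity()       # horizontal disparity of target (px)
                if abs(d) < TOLERANCE:
                    return True                   # cameras verged on the target
                head.rotate_vergence(GAIN * d)    # symmetric corrective rotation
            return False                          # failed to converge in time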

    Robust visual servoing in 3d reaching tasks

    This paper describes a novel approach to the problem of reaching an object in space under visual guidance. The approach is characterized by great robustness to calibration errors, such that virtually no calibration is required. Servoing is based on binocular vision: a continuous measure of the end-effector motion field, derived from real-time computation of the binocular optical flow over the stereo images, is compared with the actual position of the target, and the relative error in the end-effector trajectory is continuously corrected. The paper outlines the general framework of the approach, shows how visual measures are obtained and discusses the synthesis of the controller along with its stability analysis. Real-time experiments are presented to show the applicability of the approach in real 3-D applications.
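
    The measurement side of such a scheme can be sketched briefly. Assuming rectified grayscale stereo frames and a known image region containing the end-effector (both assumptions, not details from the paper), dense optical flow averaged over that region gives the per-view motion measure that the controller would compare against the target position.

        import cv2
        import numpy as np

        def end_effector_flow(prev_left, left, prev_right, right, roi):
            """Mean optical flow (dx, dy) of the end-effector region in each view."""
            y0, y1, x0, x1 = roi
            measures = []
            for prev, curr in ((prev_left, left), (prev_right, right)):
                flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                measures.append(flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0))
            return measures   # [left-view (dx, dy), right-view (dx, dy)]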

    Active Estimation of Distance in a Robotic Vision System that Replicates Human Eye Movement

    Many visual cues, both binocular and monocular, provide 3D information. When an agent moves with respect to a scene, an important cue is the different motion of objects located at various distances. While motion parallax is evident for large translations of the agent, in most head/eye systems a small parallax also occurs during rotations of the cameras. A similar parallax is also present in the human eye. During a relocation of gaze, the shift in the retinal projection of an object depends not only on the amplitude of the movement, but also on the distance of the object with respect to the observer. This study proposes a method for estimating distance on the basis of the parallax that emerges from rotations of a camera. A pan/tilt system specifically designed to reproduce the oculomotor parallax present in the human eye was used to replicate the oculomotor strategy by which humans scan visual scenes. We show that the oculomotor parallax provides accurate estimation of distance during sequences of eye movements. In a system that actively scans a visual scene, challenging tasks such as image segmentation and figure/ground segregation greatly benefit from this cue. National Science Foundation (BIC-0432104, CCF-0130851).
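
    The underlying geometry is compact: if the camera's nodal point sits a distance r in front of the rotation axis, a rotation by theta translates the nodal point by roughly r*theta, so a point at distance D picks up an extra angular shift of about r*theta/D on top of the rotation itself. Inverting that small-angle model yields distance. The sketch below is a numeric illustration of this relation only, not the paper's estimator; the offset value is an assumption in the range quoted for the human eye.

        import numpy as np

        def estimate_distance(parallax_rad, theta_rad, r=0.0055):
            """Invert the small-angle parallax model: D ~= r * theta / parallax."""
            return r * theta_rad / parallax_rad

        theta = np.deg2rad(10.0)                 # a 10-degree gaze shift
        for D_true in (0.3, 0.5, 1.0):           # metres
            parallax = 0.0055 * theta / D_true   # synthetic, noise-free measurement
            print(D_true, estimate_distance(parallax, theta))  # recovers D_true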

    The Measurement of Eye Movements in Mild Traumatic Brain Injury: A Structured Review of an Emerging Area

    Mild traumatic brain injury (mTBI), or concussion, occurs following a direct or indirect force to the head that causes a change in brain function. Many neurological signs and symptoms of mTBI can be subtle and transient, and some can persist beyond the usual recovery timeframe, such as balance, cognitive or sensory disturbance that may pre-dispose to further injury in the future. There is currently no accepted definition or diagnostic criteria for mTBI, and therefore no single assessment has been developed or accepted as being able to identify those with an mTBI. Eye-movement assessment may be useful, as specific eye-movements and their metrics can be attributed to specific brain regions or functions, and eye-movement control involves a multitude of brain regions. Recently, research has focused on quantitative eye-movement assessments using eye-tracking technology for diagnosis and monitoring symptoms of an mTBI. However, the approaches taken to objectively measure eye-movements vary with respect to instrumentation, protocols and recognition of factors that may influence results, such as cognitive function or basic visual function. This review aimed to examine previous work that has measured eye-movements in those with mTBI, to inform the development of robust or standardized testing protocols. Medline/PubMed, CINAHL, PsychInfo and Scopus databases were searched. Twenty-two articles met inclusion/exclusion criteria and were reviewed; these examined saccades, smooth pursuits, fixations and nystagmus in mTBI compared to controls. Current methodologies for data collection, analysis and interpretation from eye-tracking technology in individuals following an mTBI are discussed. In brief, a wide range of eye-movement instruments and outcome measures were reported, but the validity and reliability of devices and metrics were insufficiently reported across studies. Interpretation of outcomes was complicated by poor study reporting of demographics, mTBI-related features (e.g., time since injury), and few studies considered the influence that cognitive or visual functions may have on eye-movements. The reviewed evidence suggests that eye-movements are impaired in mTBI, but future research is required to accurately and robustly establish findings. Standardization and reporting of eye-movement instruments, data collection procedures, processing algorithms and analysis methods are required. Recommendations also include comprehensive reporting of demographics, mTBI-related features, and confounding variables.

    An automated calibration method for non-see-through head mounted displays

    Accurate calibration of a head-mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside an HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
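
    Once marker centroids are re-expressed in HMD display pixels, the final step is standard camera calibration. The following is a self-contained synthetic sketch of that step only: it projects a planar marker grid through an assumed ground-truth intrinsic matrix and lets OpenCV recover it; the resolution, grid, and pose values are illustrative, not the paper's.

        import cv2
        import numpy as np

        rng = np.random.default_rng(1)
        K_true = np.float64([[1100, 0, 640], [0, 1100, 512], [0, 0, 1]])  # assumed

        # Planar calibration grid (stand-in for the tracked marker object).
        grid = np.zeros((48, 3), np.float32)
        grid[:, :2] = np.mgrid[0:8, 0:6].T.reshape(-1, 2) * 0.03   # 3 cm spacing

        obj_pts, img_pts = [], []
        for _ in range(8):                                # eight object positions
            rvec = rng.uniform(-0.3, 0.3, 3)
            tvec = np.float64([0.0, 0.0, 1.5]) + rng.uniform(-0.1, 0.1, 3)
            proj, _ = cv2.projectPoints(grid, rvec, tvec, K_true, None)
            obj_pts.append(grid)
            img_pts.append(proj.astype(np.float32))

        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, (1280, 1024), None, None)
        print(rms)       # reprojection error; K should approximate K_true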

    EyePACT: eye-based parallax correction on touch-enabled interactive displays

    The parallax effect describes the displacement between the perceived and detected touch locations on a touch-enabled surface. Parallax is a key usability challenge for interactive displays, particularly for those that require thick layers of glass between the screen and the touch surface to protect them from vandalism. To address this challenge, we present EyePACT, a method that compensates for input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and combines it with the detected touch location to predict the perceived touch location on the surface. We evaluate our method in two user studies in terms of parallax correction performance as well as multi-user support. Our evaluations demonstrate that EyePACT (1) significantly improves accuracy even with varying gap distances between the touch surface and the display, (2) adapts to different levels of parallax by producing significantly larger corrections with larger gap distances, and (3) maintains a significantly large distance between two users' fingers when interacting with the same object. These findings are promising for the development of future parallax-free interactive displays.
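
    The geometric core of such a correction is a ray-plane intersection: cast a ray from the eye through the detected touch point on the touch surface and see where it meets the display plane. The sketch below assumes a shared coordinate frame with the display at z = 0 and the touch surface at z = gap; the frame, units, and any further refinement in the published system are assumptions not shown here.

        import numpy as np

        def correct_touch(eye, touch_xy, gap):
            """Project the eye->touch ray across the gap onto the display (z = 0)."""
            eye = np.asarray(eye, dtype=float)                    # (x, y, z), z > gap
            touch = np.array([touch_xy[0], touch_xy[1], gap], dtype=float)
            s = eye[2] / (eye[2] - gap)                           # ray parameter at z = 0
            return (eye + s * (touch - eye))[:2]                  # perceived display point

        # Example: eye 60 cm away, 2 cm glass gap, all coordinates in metres.
        print(correct_touch(eye=(0.10, 0.35, 0.60), touch_xy=(0.20, 0.30), gap=0.02))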

    Eye movement control during visual pursuit in Parkinson's disease

    BACKGROUND: Prior studies of oculomotor function in Parkinson’s disease (PD) have either focused on saccades without considering smooth pursuit, or tested smooth pursuit while excluding saccades. The present study investigated the control of saccadic eye movements during pursuit tasks and assessed the quality of binocular coordination as potential sensitive markers of PD. METHODS: Observers fixated on a central cross while a target moved toward it. Once the target reached the fixation cross, observers began to pursue the moving target. To further investigate binocular coordination, the moving target was presented to both eyes (binocular condition) or to one eye only (dichoptic condition). RESULTS: The PD group made more saccades than age-matched normal control adults (NC) both during fixation and pursuit. The difference between left and right gaze positions increased over time during the pursuit period for PD but not for NC. The findings were not related to age, as the NC and young-adult control (YC) groups performed similarly on most of the eye movement measures, and were not correlated with classical measures of PD severity (e.g., Unified Parkinson’s Disease Rating Scale (UPDRS) score). DISCUSSION: Our results suggest that PD may be associated with impairment not only in saccade inhibition, but also in binocular coordination during pursuit, and these aspects of dysfunction may be useful in PD diagnosis or tracking of disease course. This work was supported in part by grants from the National Science Foundation (NSF SBE-0354378 to Arash Yazdanbakhsh and Bo Cao) and the Office of Naval Research (ONR N00014-11-1-0535 to Bo Cao, Chia-Chien Wu, and Arash Yazdanbakhsh). There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
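
    Both reported measures are straightforward to compute from calibrated gaze traces. The sketch below assumes gaze samples in degrees at a fixed sampling rate; the velocity threshold is a common convention for saccade detection, not the study's published value.

        import numpy as np

        def saccade_count(gaze_deg, fs=500.0, vel_thresh=30.0):
            """Count saccades in a 1-D gaze trace via a velocity threshold (deg/s)."""
            vel = np.abs(np.diff(gaze_deg)) * fs          # sample-to-sample velocity
            onsets = (vel[1:] >= vel_thresh) & (vel[:-1] < vel_thresh)
            return int(onsets.sum())

        def disconjugacy(left_deg, right_deg):
            """Left-right gaze position difference across the pursuit period."""
            return np.asarray(left_deg) - np.asarray(right_deg)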