4 research outputs found

    Robust image-based visual servoing using invariant visual information

    This paper deals with the use of invariant visual features for visual servoing. New features are proposed to control the 6 degrees of freedom of a robotic system, with better linearizing properties and robustness to noise than the state of the art in image-based visual servoing. We show in this paper that by using these features the behavior of image-based visual servoing in task space can be significantly improved. Several experimental results are provided to validate our proposal.
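    The paper's specific invariant features are not given in this abstract, but the image-based visual servoing framework it builds on follows the classic law v = -λ L⁺(s - s*), where L is the interaction matrix stacked from the current features and s* is the desired feature vector. A minimal sketch with ordinary normalized point features (not the paper's invariants; `lam` and the point depths are illustrative assumptions):

    ```python
    import numpy as np

    def point_interaction_matrix(x, y, Z):
        """Interaction (image Jacobian) matrix of one normalized image
        point (x, y) at depth Z, relating its motion to the 6-DoF
        camera velocity (vx, vy, vz, wx, wy, wz)."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
        ])

    def ibvs_velocity(points, desired, depths, lam=0.5):
        """Classic IBVS control law: v = -lam * pinv(L) @ (s - s*)."""
        L = np.vstack([point_interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(points, depths)])
        e = (np.asarray(points) - np.asarray(desired)).ravel()
        return -lam * np.linalg.pinv(L) @ e
    ```

    When the current and desired features coincide, the error is zero and the commanded 6-DoF velocity is zero; the paper's contribution is choosing features s for which this law behaves closer to linearly in task space.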

    Rotation Free Active Vision

    Incremental Structure from Motion (SfM) algorithms require, in general, precise knowledge of the camera linear and angular velocities in the camera frame for estimating the 3D structure of the scene. Since an accurate measurement of the camera's own motion may be a non-trivial task in several robotics applications (for instance when the camera is onboard a UAV), we propose in this paper an active SfM scheme fully independent of the camera angular velocity. This is achieved by considering, as visual features, some rotational invariants obtained from the projection of the perceived 3D points onto a virtual unit sphere (unified camera model). This feature set is then exploited for designing a rotation-free active SfM algorithm able to optimize online the direction of the camera linear velocity, improving the convergence of the structure estimation task. As a case study, we apply our framework to the depth estimation of a set of 3D points and discuss several simulations and experimental results illustrating the approach.
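    The core geometric idea can be sketched briefly. Projecting a 3D point onto a unit sphere centered at the camera, quantities such as the angle between two projections are unchanged by any camera rotation, since a rotation R maps each projection P/|P| to R(P/|P|) and preserves dot products. A minimal illustration (the function names are assumptions, not the paper's API):

    ```python
    import numpy as np

    def sphere_projection(P):
        """Project a 3D point (camera frame) onto the virtual unit sphere."""
        P = np.asarray(P, dtype=float)
        return P / np.linalg.norm(P)

    def pairwise_invariant(P1, P2):
        """Cosine of the angle between two sphere projections.
        Invariant under any rotation of the camera frame."""
        return sphere_projection(P1) @ sphere_projection(P2)
    ```

    Because such features do not move under pure camera rotation, an SfM estimator driven by them needs no measurement of the angular velocity, which is the property the paper exploits.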

    Human factors issues in telerobotic decommissioning of legacy nuclear facilities

    This thesis investigates the problems of enabling human workers to control remote robots to decommission contaminated nuclear facilities, which are hazardous for human workers to enter. The mainstream robotics literature predominantly reports novel mechanisms and novel control algorithms. In contrast, this thesis proposes experimental methodologies for objectively evaluating the performance of both a robot and its remote human operator when challenged with carrying out industrially relevant remote manipulation tasks. Initial experiments use a variety of metrics to evaluate the performance of human test subjects. Results show that: conventional telemanipulation is extremely slow and difficult; metrics for usability of such technology can be conflicting and hard to interpret; aptitude for telemanipulation varies significantly between individuals; however, such aptitude may be rendered predictable by using simple spatial awareness tests. Additional experiments suggest that autonomous robotics methods (e.g. vision-guided grasping) can significantly assist the operator. A novel approach to telemanipulation is proposed, in which an "orbital camera" enables the human operator to select arbitrary views of the scene, with the robot's motions transformed into the orbital view coordinate frame. This approach is useful for overcoming the severe depth perception problems of conventional fixed camera views. Finally, a novel computer vision algorithm is proposed for target tracking. Such an algorithm could be used to enable an unmanned aerial vehicle (UAV) to fixate on part of the workspace, e.g. a manipulated object, to provide the proposed orbital camera view.
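    The "motions transformed into the orbital view coordinate frame" step amounts to re-expressing the operator's velocity command, given in the currently selected view frame, in the robot's base frame before sending it to the controller. A minimal sketch of that frame change (the rotation `R_base_view` and function name are illustrative assumptions, not the thesis's implementation):

    ```python
    import numpy as np

    def view_to_base(v_view, R_base_view):
        """Re-express an operator translational velocity command, given in
        the orbital-camera view frame, in the robot base frame:
        v_base = R_base_view @ v_view."""
        return np.asarray(R_base_view, dtype=float) @ np.asarray(v_view, dtype=float)
    ```

    With this mapping, "push the stick forward" always means "away from me in the current view", regardless of which orbital viewpoint the operator has selected.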