
    Depth adaptive zooming visual servoing for a robot with a zooming camera

    To solve the visibility problem and keep the observed object in the field of view (FOV) during visual servoing, a depth adaptive zooming visual servoing strategy for a manipulator robot with a zooming camera is proposed. First, a zoom control mechanism is introduced into the robot visual servoing system; it dynamically adjusts the camera's FOV to keep all the feature points on the object visible and to obtain high local resolution of the object at the end of visual servoing. Second, an invariant visual servoing method is employed to drive the robot to the desired position under the camera's changing intrinsic parameters. Finally, a nonlinear depth adaptive estimation scheme in the invariant space, based on Lyapunov stability theory, is proposed to adaptively estimate the depth of the image features on the object. Three kinds of 4-DOF robot visual positioning simulation experiments are conducted; the results show that the proposed approach achieves higher positioning precision. © 2013 Xin et al.
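    The zoom-control idea in this abstract can be sketched as a simple focal-length selection rule. This is a minimal illustration, not the paper's method: it assumes a pinhole camera, and the function and parameter names (`adapt_focal_length`, `half_width_px`, `margin`) are hypothetical.

    ```python
    import numpy as np

    def adapt_focal_length(points_norm, half_width_px, f_min, f_max, margin=0.9):
        """Pick the largest focal length (strongest zoom-in) that keeps every
        normalized feature point (x, y) = (X/Z, Y/Z) inside the image with a
        safety margin. Under a pinhole model the pixel offset of a point is
        u = f * x, so visibility requires f * max|x| <= margin * half_width_px.
        """
        max_abs = float(np.max(np.abs(points_norm)))
        if max_abs < 1e-9:            # all points at the optical axis: zoom in fully
            return float(f_max)
        f = margin * half_width_px / max_abs
        return float(np.clip(f, f_min, f_max))
    ```

    Shrinking `f` when features drift outward widens the FOV (keeping the object visible); growing `f` once servoing converges increases the object's local resolution, which mirrors the trade-off the abstract describes.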

    Visual Servoing for UAVs


    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque, and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control, and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques, and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers which combine several sensors.

    Performance limitation analysis in visual servo systems: Bounding the location error introduced by image points matching

    Visual servoing consists of positioning a robot end-effector based on the matching of object features in the image. However, due to the presence of image noise, this matching can never be exact, hence introducing an error in the final location of the robot. This paper addresses the problem of estimating the worst-case location error introduced by image point matching. In particular, we propose strategies for computing upper and lower bounds on this error, according to several possible error measures, for a given image noise intensity and camera-object configuration. These bounds provide an admissible region for the sought worst-case location error, and hence allow one to establish performance limitations of visual servo systems. Some examples are reported to illustrate the proposed strategies and their results. © 2009 IEEE.
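    The bounding idea can be illustrated with a toy model. This sketch is not the paper's method: it assumes the "pose" is just a 2-D translation estimated by least squares, and sampling bounded noise realizations yields only a lower bound on the true worst-case error (an upper bound would require optimization over the whole noise set). All names here are hypothetical.

    ```python
    import numpy as np

    def sampled_worst_case_error(src, dst_clean, eps, n_samples=200, seed=0):
        """Monte Carlo lower bound on the worst-case location error.

        Toy model: the pose is a 2-D translation estimated as the mean of
        (dst - src); image noise is bounded by eps in the infinity norm.
        The maximum error over sampled noise realizations lower-bounds the
        true worst case over the admissible noise set.
        """
        rng = np.random.default_rng(seed)
        t_true = np.mean(dst_clean - src, axis=0)   # noise-free estimate
        worst = 0.0
        for _ in range(n_samples):
            noise = rng.uniform(-eps, eps, size=dst_clean.shape)
            t_hat = np.mean(dst_clean + noise - src, axis=0)
            worst = max(worst, float(np.linalg.norm(t_hat - t_true)))
        return worst
    ```

    In this linear toy case the error is also analytically bounded above by `eps * sqrt(2)` (each axis of the averaged noise stays within `eps`), so the sampled value sits inside an admissible region, mirroring the upper/lower-bound structure the abstract describes.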

    An Introduction to Model-Based Pose Estimation and 3-D Tracking Techniques


    Visual Servoing

    This chapter introduces visual servo control: the use of computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field, giving a general overview of the formulation of the visual servo control problem and describing the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking, controlling motion directly in the joint space, and extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
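    The classic image-based scheme mentioned in this abstract can be sketched as the standard control law v = -λ L⁺ e, where e is the image feature error and L the interaction matrix. A minimal numpy sketch, assuming normalized image coordinates and known point depths (the function names are illustrative):

    ```python
    import numpy as np

    def interaction_matrix(x, y, Z):
        """Interaction (image Jacobian) matrix for one normalized image
        point (x, y) at depth Z: relates point velocity in the image to
        the camera's 6-DOF spatial velocity (vx, vy, vz, wx, wy, wz)."""
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
        ])

    def ibvs_velocity(points, desired, depths, lam=0.5):
        """Image-based visual servo law v = -lam * pinv(L) * e, stacking
        one 2x6 interaction matrix per feature point."""
        e = (points - desired).reshape(-1)                     # image error
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(points, depths)])
        return -lam * np.linalg.pinv(L) @ e                    # camera velocity
    ```

    When the current features coincide with the desired ones the commanded velocity is zero, i.e. the desired pose is an equilibrium of the loop; the stability issues the chapter discusses arise away from that equilibrium, notably when depths are only estimated.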