183 research outputs found

    Conferring robustness to path-planning for image-based control

    Path-planning has been proposed in visual servoing for reaching the desired location while fulfilling various constraints. Unfortunately, the real trajectory can differ significantly from the reference trajectory due to uncertainties in the model used, with the consequence that some constraints may not be fulfilled, leading to failure of the visual servoing task. This paper proposes a new strategy for addressing this problem, in which robustness is conferred on the path-planning scheme by considering families of admissible models. To obtain these families, uncertainty in the form of random variables is introduced on the available image points and intrinsic parameters. Two families are considered: one obtained by generating a given number of admissible models corresponding to extreme values of the uncertainty, and one obtained by estimating the extreme values of the components of the admissible models. Each model in these families identifies a reference trajectory, which is parametrized by design variables common to all the models. The design variables are then determined by imposing that all the reference trajectories fulfill the required constraints. Discussions on the convergence and robustness of the proposed strategy are provided, in particular showing that satisfaction of the visibility and workspace constraints for the second family ensures satisfaction of these constraints for all models bounded by this family. The proposed strategy is illustrated through simulations and experiments. © 2011 IEEE.
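    The first family described above, built from extreme values of a bounded uncertainty, can be sketched as follows. This is an illustrative construction only: the function name, the single image point, and the uniform bound are assumptions, not taken from the paper.

```python
import itertools
import numpy as np

def extreme_model_family(points, bound):
    """Return the image-point sets obtained at every combination of
    +/-bound applied to each coordinate of the nominal points."""
    points = np.asarray(points, dtype=float)
    family = []
    for signs in itertools.product((-1.0, 1.0), repeat=points.size):
        perturbation = bound * np.array(signs).reshape(points.shape)
        family.append(points + perturbation)
    return family

nominal = [[100.0, 120.0]]                     # one nominal image point (pixels)
family = extreme_model_family(nominal, bound=2.0)
print(len(family))                             # 2^2 = 4 extreme models for one 2-D point
```

    Each member of this family would then induce its own reference trajectory, and the common design variables are chosen so that all of them satisfy the constraints.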

    Designing image trajectories in the presence of uncertain data for robust visual servoing path-planning

    Path-planning allows one to steer a camera to a desired location while taking into account constraints such as visibility, workspace, and joint limits. Unfortunately, the planned path can differ significantly from the real path due to uncertainty in the available data, with the consequence that some constraints may not be fulfilled by the real path even if they are satisfied by the planned path. In this paper we address the problem of robust path-planning, i.e. computing a path that satisfies the required constraints not only for the nominal model, as in traditional path-planning, but for a whole family of admissible models. Specifically, we consider an uncertain model where the point correspondences between the initial and desired views and the camera intrinsic parameters are affected by unknown random uncertainties with known bounds. The difficulty is that traditional path-planning schemes applied to different models lead to different paths rather than to a common, robust path. To solve this problem we propose a technique based on polynomial optimization in which the required constraints are imposed on a number of trajectories corresponding to admissible camera poses and parameterized by a common design variable. The planned image trajectory is then followed using an IBVS controller. Simulations carried out with all the typical uncertainties that characterize a real experiment illustrate the proposed strategy and provide promising results. © 2009 IEEE.

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile) which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers which combine several sensors.

    Depth adaptive zooming visual servoing for a robot with a zooming camera

    To solve the visibility problem and keep the observed object in the field of view (FOV) during visual servoing, a depth adaptive zooming visual servoing strategy for a manipulator robot with a zooming camera is proposed. Firstly, a zoom control mechanism is introduced into the robot visual servoing system. It dynamically adjusts the camera's field of view to keep all the feature points on the object within the FOV and to obtain high local resolution of the object at the end of visual servoing. Secondly, an invariant visual servoing method is employed to control the robot to the desired position under the changing intrinsic parameters of the camera. Finally, a nonlinear depth adaptive estimation scheme in the invariant space, based on Lyapunov stability theory, is proposed to adaptively estimate the depth of the image features on the object. Three kinds of 4-DOF visual positioning simulation experiments are conducted. The results show that the proposed approach achieves higher positioning precision. © 2013 Xin et al.
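    A minimal sketch of what an adaptive depth (inverse-depth) estimator of this kind looks like for a single point feature, in the spirit of the Lyapunov-based scheme summarized above. The model, gains, and variable names are assumptions for illustration, not the paper's exact law: a camera translating along its optical axis with velocity vz observes x = X/Z, and an observer adapts its estimate of the inverse depth theta = 1/Z from the observer error.

```python
dt, steps = 0.01, 500
X, Z, vz = 0.1, 2.0, 0.1          # 3-D point coordinate, true depth (m), camera speed (m/s)
k, gamma = 5.0, 50.0              # observer gain and adaptation gain (illustrative)
x_hat, theta_hat = 0.0, 0.25      # initial guesses (the true inverse depth starts at 0.5)

for _ in range(steps):
    x = X / Z                      # measured image feature
    e = x - x_hat                  # observer error, drives the adaptation
    x_hat += dt * (x * vz * theta_hat + k * e)   # feature observer
    theta_hat += dt * (gamma * x * vz * e)       # inverse-depth adaptation law
    Z -= dt * vz                   # camera approaches the point

final_error = abs(X / Z - x_hat)
```

    In this toy run the observer error shrinks and the inverse-depth estimate moves toward its true value; a full treatment would prove this with a Lyapunov function, as the paper does for its scheme.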

    Quoi de neuf en asservissement visuel depuis les JNRR'03 ?

    This survey article presents the advances achieved in France during the last four years in the field of visual servoing.

    Performance limitation analysis in visual servo systems: Bounding the location error introduced by image points matching

    Visual servoing consists of positioning a robot end-effector based on the matching of object features in the image. However, due to the presence of image noise, this matching can never be exact, hence introducing an error in the final location of the robot. This paper addresses the problem of estimating the worst-case location error introduced by image points matching. In particular, we propose strategies for computing upper and lower bounds on this error according to several possible measures, for a given image noise intensity and camera-object configuration. These bounds provide an admissible region for the sought worst-case location error, and hence allow one to establish performance limitations of visual servo systems. Examples are reported to illustrate the proposed strategies and their results. © 2009 IEEE.
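    The idea of bracketing a worst-case error with upper and lower bounds can be illustrated on a deliberately simple 1-D example (not the paper's setting): depth from stereo disparity, Z = f*B/d, with bounded pixel noise |n| <= b on the disparity. Sampling noise values yields a lower bound on the worst-case location error, while evaluating at the extreme noise values yields an upper bound (exact here, since the error is monotone in n). All numbers are illustrative assumptions.

```python
import random

f_px, B, d = 700.0, 0.12, 20.0    # focal length (px), baseline (m), disparity (px)
b = 0.5                           # disparity noise bound (px)
true_Z = f_px * B / d             # noise-free depth (m)

random.seed(0)
samples = [random.uniform(-b, b) for _ in range(10000)]
# Any sampled error is <= the worst case, so the max over samples is a lower bound.
lower = max(abs(f_px * B / (d + n) - true_Z) for n in samples)
# The error is monotone in n, so the extremes of the noise give the worst case.
upper = max(abs(f_px * B / (d + n) - true_Z) for n in (-b, b))
print(lower <= upper)             # the two bounds bracket the worst-case error
```

    The gap between the two bounds is the "admissible region" in which the true worst-case error must lie.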

    Safe cooperation between human operators and visually controlled industrial manipulators

    Industrial tasks can be improved substantially by making humans and robots collaborate in the same workspace. The main goal of this chapter is the development of a human-robot interaction system which enables this collaboration and guarantees the safety of the human operator. This system is composed of two subsystems: the human tracking system and the robot control system. The human tracking system deals with the precise real-time localization of the human operator in the industrial environment. It is composed of two systems: an inertial motion capture system and an Ultra-WideBand localization system. The robot control system is based on visual servoing. A safety behaviour which stops the normal path tracking of the robot is performed when the robot and the human are too close. This safety behaviour has been implemented through a multi-threaded software architecture in order to share information between both systems. Thereby, the localization measurements obtained by the human tracking system are processed by the robot control system to compute the minimum human-robot distance and determine whether the safety behaviour must be activated. This work was supported by the Spanish Ministry of Science and Innovation and the Spanish Ministry of Education through the projects DPI2005-06222 and DPI2008-02647 and the grant AP2005-1458.
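    The distance-triggered safety behaviour described above reduces to a minimum-distance test between tracked points on the robot and on the human. The sketch below is a hypothetical reduction of that logic; the threshold and point sets are illustrative, not from the chapter.

```python
import math

SAFETY_DISTANCE = 1.0  # metres; illustrative threshold

def min_distance(robot_points, human_points):
    """Minimum Euclidean distance between two sets of 3-D points."""
    return min(math.dist(r, h) for r in robot_points for h in human_points)

def safety_stop_required(robot_points, human_points):
    """True when normal path tracking must be stopped."""
    return min_distance(robot_points, human_points) < SAFETY_DISTANCE

robot = [(0.0, 0.0, 1.0), (0.5, 0.0, 1.2)]   # tracked robot points (m)
human = [(1.2, 0.0, 1.0)]                    # tracked human point (m)
print(safety_stop_required(robot, human))    # True: closest pair is about 0.73 m apart
```

    In the chapter this test runs in its own thread, consuming the fused inertial and Ultra-WideBand localization measurements in real time.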

    Visual Servoing

    This chapter introduces visual servo control, the use of computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking and controlling motion directly in the joint space, as well as extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
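    The image-based scheme introduced in this chapter rests on the classic control law v = -lambda * pinv(L) * (s - s*), where L is the interaction matrix of the image features. The sketch below uses the standard point-feature interaction matrix for normalized image coordinates (x, y) at depth Z; the function names are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 interaction matrix of a point feature (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) from feature errors."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

features = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]
v = ibvs_velocity(features, features, depths=[1.0] * 4)
# At the desired configuration the feature error is zero, so no motion is commanded.
```

    In practice the true depths Z are unknown and an estimate (often the desired depth) is used in L, which is one source of the stability issues the chapter discusses.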