
    Visual servoing sequencing able to avoid obstacles.

    Classical visual servoing approaches tend to constrain all degrees of freedom (DOF) of the robot during the execution of a task. In this article a new approach is proposed. The key idea is to control the robot with a very under-constrained task when it is far from the desired position, and to incrementally constrain the global task by adding further sub-tasks as the robot moves closer to the goal. As long as they are sufficient, the remaining DOF are used to avoid undesirable configurations, such as joint limits. Closer to the goal, when not enough DOF remain available for avoidance, an execution controller selects a task to be temporarily removed from the applied stack; the released DOF can then be used for joint-limit avoidance. A complete solution implementing this general idea is proposed, and experiments that prove the validity of the approach are also provided.
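    The incremental stacking described above can be sketched as follows. The task representation as (Jacobian, error) pairs and the distance thresholds are illustrative assumptions, not taken from the article:

```python
import numpy as np

def stacked_task(tasks, thresholds, distance):
    """Stack the sub-tasks whose activation threshold has been reached.

    tasks      : list of (J_i, e_i) pairs (task Jacobian, task error)
    thresholds : task i is activated once distance <= thresholds[i]
    distance   : current distance to the goal

    Far from the goal only a few rows are stacked, leaving redundant
    DOF free for avoidance; near the goal the stack constrains all DOF.
    """
    active = [(J, e) for (J, e), th in zip(tasks, thresholds)
              if distance <= th]
    J = np.vstack([J for J, _ in active])
    e = np.concatenate([e for _, e in active])
    return J, e

# Two 1-row tasks on a 2-DOF robot: far away, only the first is active.
tasks = [(np.array([[1.0, 0.0]]), np.array([0.1])),
         (np.array([[0.0, 1.0]]), np.array([0.2]))]
J_far, _ = stacked_task(tasks, [np.inf, 0.5], distance=2.0)
J_near, _ = stacked_task(tasks, [np.inf, 0.5], distance=0.2)
```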

    A Qualitative visual servoing: Application to the visibility constraint

    This paper describes an original control law called qualitative servoing. The particularity of this method is that no specific desired value is specified for the visual features involved in the control scheme: the features are only constrained to lie within a confidence interval, which gives the system more flexibility. While this formalism can be applied to several types of visual features, it is used in this paper to improve the on-line control of the visibility of a target. The principle is to reach a compromise between the classical positioning task and the visibility constraint. Experimental results obtained with a six-degrees-of-freedom robot arm demonstrate the performance of the proposed method.
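    The interval constraint can be sketched as a dead-zone error function: the feature produces no corrective motion anywhere inside its interval. The bounds below are illustrative, not taken from the paper:

```python
def interval_error(s, s_lo, s_hi):
    """Qualitative-servoing-style error for one visual feature.

    Instead of regulating s to a fixed desired value, the feature is
    only required to stay inside [s_lo, s_hi]; the error (and hence the
    control effort on this feature) is zero anywhere in that interval.
    """
    if s < s_lo:
        return s - s_lo   # negative: push the feature up
    if s > s_hi:
        return s - s_hi   # positive: push the feature down
    return 0.0            # inside the confidence interval: unconstrained

# A feature anywhere inside its interval produces no corrective motion.
e_inside = interval_error(0.4, 0.0, 1.0)
e_above = interval_error(1.2, 0.0, 1.0)
```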

    Control of Redundant Joint Structures Using Image Information During the Tracking of Non-Smooth Trajectories

    Visual information is increasingly being used in a great number of applications to guide joint structures. This paper proposes an image-based controller that guides a joint structure whose number of degrees of freedom exceeds the number required for the task at hand. The controller resolves the redundancy by combining two tasks: the primary task performs the guidance using image information, while the secondary task determines the most adequate posture, resolving the joint redundancy with respect to the task performed in the image space. The proposed method also employs a smoothing Kalman filter, both to detect the moment when abrupt changes occur in the tracked trajectory and to estimate and compensate for those changes. Furthermore, a direct visual control approach is proposed that integrates the visual information provided by this smoothing Kalman filter, which permits correct tracking even when the measurements are noisy. All the contributions are integrated in an application that requires tracking the faces of children with Asperger syndrome.
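    The primary/secondary task combination described above is classically implemented with a null-space projector, so the posture task cannot disturb the image task. This is a minimal sketch of that standard scheme (the paper's exact formulation may differ; the numbers are illustrative):

```python
import numpy as np

def redundant_control(J, e_dot, q_dot_secondary, lam=1.0):
    """Task-priority control for a redundant joint structure.

    Primary task: realize the image-feature velocity e_dot through the
    image Jacobian J (m x n, m < n). Secondary task: a joint-space
    velocity projected into the null space of J, so it resolves the
    redundancy without affecting the image-space task.
    """
    J_pinv = np.linalg.pinv(J)           # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J  # null-space projector of J
    return lam * (J_pinv @ e_dot) + N @ q_dot_secondary

# 2 image constraints on a 4-joint structure -> 2 redundant DOF.
J = np.array([[1.0, 0.0, 0.5, 0.0],
              [0.0, 1.0, 0.0, 0.5]])
e_dot = np.array([0.1, -0.2])
posture = np.array([0.0, 0.0, 0.3, -0.3])  # preferred-posture velocity
q_dot = redundant_control(J, e_dot, posture)
```

    Because the secondary motion lives in the null space of J, the product J @ q_dot still equals the commanded image-space velocity exactly.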

    Visual servoing path planning for cameras obeying the unified model

    This paper proposes a path-planning visual servoing strategy for a class of cameras that includes conventional perspective cameras, fisheye cameras and catadioptric cameras as special cases. Specifically, these cameras are modeled by adopting a unified model recently proposed in the literature, and the strategy consists of designing image trajectories for eye-in-hand robotic systems that allow the robot to reach a desired location while satisfying typical visual servoing constraints. To this end, the proposed strategy introduces the projection of the available image features onto a virtual plane and the computation of a feasible image trajectory through polynomial programming. The computed image trajectory is then tracked by an image-based visual servoing controller. Experimental results with a fisheye camera mounted on a 6-DOF robot arm illustrate the proposed strategy. © 2012 Taylor & Francis and The Robotics Society of Japan.

    Robotic execution for everyday tasks by means of external vision/force control

    In this article, we present an integrated manipulation framework for a service robot that allows it to interact with articulated objects in home environments through the coupling of vision and force modalities. We consider a robot that simultaneously observes its hand and the object to manipulate using an external camera (i.e. the robot head). Task-oriented grasping algorithms [1] are used to plan a suitable grasp on the object according to the task to perform. A new vision/force coupling approach [2], based on external control, is used first to guide the robot hand towards the grasp position and then to perform the task while taking external forces into account. The coupling of these two complementary sensor modalities provides the robot with robustness against uncertainties in models and positioning. A position-based visual servoing control law has been designed to continuously align the robot hand with the object being manipulated, independently of the camera position. This allows the camera to move freely while the task is being executed and makes the approach amenable to integration in current humanoid robots without the need for hand-eye calibration. Experimental results on a real robot interacting with different kinds of doors are presented.
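    A position-based visual servoing law of the kind described above typically commands an exponential decrease of the pose error between the hand frame and the grasp frame, both estimated in the camera frame. The sketch below is a generic textbook form, not the paper's exact law, and the gain and angle-axis error representation are assumptions:

```python
import numpy as np

def pbvs_velocity(t, t_star, err_axis, err_angle, lam=0.5):
    """Position-based visual servoing sketch.

    t, t_star : current / desired hand position (3-vectors), both
                expressed in the same (camera-independent) frame
    err_axis, err_angle : angle-axis form of the orientation error
    Returns the commanded translational and rotational velocities,
    each driving its error exponentially to zero.
    """
    v = -lam * (t - t_star)          # translational velocity
    w = -lam * err_angle * err_axis  # rotational velocity
    return v, w

# Hand 1 m off along x, orientation already correct:
v, w = pbvs_velocity(np.array([1.0, 0.0, 0.0]), np.zeros(3),
                     np.array([0.0, 0.0, 1.0]), 0.0)
```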

    The UJI librarian robot

    This paper describes the UJI Librarian Robot, a mobile manipulator able to autonomously locate a book in an ordinary library and grasp it from a bookshelf, using eye-in-hand stereo vision and force sensing. The robot is only provided with the book code, a library map and some knowledge about its logical structure, and it takes advantage of the spatio-temporal constraints and regularities of the environment by applying disparate techniques, such as stereo vision, visual tracking, probabilistic matching, motion estimation, multisensor-based grasping, visual servoing and hybrid control, in such a way that it exhibits robust and dependable performance. The system has been tested, and experimental results show that it is able to robustly locate and grasp a book in a reasonable time without human intervention.

    Treating Image Loss by Using the Vision/Motion Link:


    Avoiding joint limits with a low-level fusion scheme

    Joint-limit avoidance is a crucial issue in sensor-based control. In this paper we propose an avoidance strategy based on low-level data fusion. The joint positions of a robot arm are treated as features that are continuously added to the control scheme as they approach the joint limits, and removed when the position is safe again. We present an optimal tuning of the avoidance scheme that ensures the main task is disturbed as little as possible, and we propose additional strategies to handle the particular cases of an unsafe desired position and local minima. The control scheme is applied to the avoidance of joint limits while performing visual servoing. Both simulation and experimental results illustrate the validity of our approach.
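    The continuous addition and removal of joint features can be sketched as a smooth activation weight per joint: zero in the safe zone, ramping up inside a buffer near either limit. The buffer width and the linear ramp are assumptions for illustration, not the paper's tuning:

```python
def limit_weight(q, q_min, q_max, rho=0.1):
    """Activation weight of a joint-limit avoidance feature.

    The weight is 0 while the joint position q sits in the safe part of
    [q_min, q_max], and grows continuously to 1 as q enters the buffer
    zone of width rho * (q_max - q_min) near either limit (rho is an
    assumed tuning parameter).
    """
    buf = rho * (q_max - q_min)       # buffer width near each limit
    d = min(q - q_min, q_max - q)     # distance to the nearest limit
    if d >= buf:
        return 0.0                    # safe: feature removed
    return (buf - d) / buf            # ramps from 0 to 1 at the limit

# Joint in the middle of its range: no avoidance feature is active.
w_safe = limit_weight(0.0, -1.0, 1.0)
# Joint close to its upper limit: the feature is activated.
w_near = limit_weight(0.95, -1.0, 1.0)
```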

    Visual servoing based mobile robot navigation able to deal with complete target loss

    This paper combines reactive collision avoidance methods with image-based visual servoing control for mobile robot navigation in an indoor environment. The proposed strategy allows the mobile robot to reach a desired position, described by a natural visual target, among unknown obstacles. While the robot avoids the obstacles, the camera may lose its target, which makes visual servoing fail. We propose a strategy to deal with the loss of visual features by taking advantage of odometry data. Obstacles are detected by a laser range finder and their boundaries are modeled using B-spline curves. We validate our strategy in a real experiment of indoor mobile robot navigation in the presence of obstacles.
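    The image-based part of such a controller classically rests on the interaction matrix of a point feature, which relates camera velocity to image motion. This is the standard textbook model, shown here as a sketch rather than the paper's specific controller:

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) of one normalized image
    point (x, y) observed at depth Z: maps the 6-DOF camera velocity
    to the point's image velocity."""
    return np.array([
        [-1 / Z, 0.0, x / Z, x * y, -(1 + x * x), y],
        [0.0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical IBVS law: camera velocity = -lam * pinv(L) @ (s - s*)."""
    return -lam * np.linalg.pinv(L) @ (s - s_star)

# A point at the image center, 1 m deep, already at its desired place:
L = point_interaction_matrix(0.0, 0.0, 1.0)
v = ibvs_velocity(np.zeros(2), np.zeros(2), L)
```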

    Vision-based assistance for wheelchair navigation along corridors

    For people with motor impairments, steering a wheelchair can become a hazardous task: along corridors, joystick jerks induced by uncontrolled motions are a typical source of wall collisions. This paper describes a vision-based assistance solution for safe indoor semi-autonomous navigation. To this aim, the control process is based on a visual servoing scheme designed for wall avoidance. As the patient manually drives the wheelchair, a virtual guide is defined to progressively activate an automatic trajectory correction. The proposed solution does not require any knowledge of the environment. Experiments have been conducted along corridors with different configurations and illumination conditions; the results demonstrate the ability of the system to smoothly and adaptively assist people during their motions.
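    The progressive hand-over from the patient's joystick to the automatic correction can be sketched as a distance-dependent blend. The linear blending and the activation distances below are illustrative assumptions, not taken from the paper:

```python
def blended_command(u_manual, u_auto, wall_distance,
                    d_act=1.0, d_min=0.3):
    """Progressively activated trajectory correction (sketch).

    Far from the wall (wall_distance >= d_act) the patient's joystick
    command passes through unchanged; as the wheelchair approaches the
    wall, the automatic wall-avoidance command u_auto progressively
    takes over, reaching full authority at d_min.
    """
    if wall_distance >= d_act:
        alpha = 0.0                                   # pure manual
    elif wall_distance <= d_min:
        alpha = 1.0                                   # pure automatic
    else:
        alpha = (d_act - wall_distance) / (d_act - d_min)
    return (1.0 - alpha) * u_manual + alpha * u_auto

# Far from the wall: the joystick command is untouched.
u_far = blended_command(0.8, -0.8, wall_distance=2.0)
# Against the wall: the corrective command has full authority.
u_close = blended_command(0.8, -0.8, wall_distance=0.1)
```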