157 research outputs found

    Weakly Calibrated Stereoscopic Visual Servoing for Laser Steering: Application to Phonomicrosurgery.

    No full text
    This paper deals with the study of a weakly calibrated multiview visual servoing control law for microrobotic laser phonomicrosurgery of the vocal folds. It consists of the development of an endoluminal surgery system for laser ablation and resection of cancerous tissues. More specifically, this paper focuses on the control of the laser spot displacement during surgical interventions. To achieve this, a visual control law based on trifocal geometry is designed using two cameras and a laser source (virtual camera). The method is validated on a realistic test bench, and straight point-to-point trajectories are demonstrated.
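The image-space steering described above can be illustrated with a minimal proportional control loop that drives the observed laser spot toward a target point along a straight line. This is a hedged sketch only: the gain, pixel coordinates, and first-order spot dynamics are illustrative assumptions, not the paper's trifocal control law.

```python
import numpy as np

def laser_spot_step(s_cur, s_des, gain=0.5):
    """One iteration of a proportional image-space control law:
    command a displacement that drives the spot s_cur toward s_des."""
    error = s_cur - s_des
    return -gain * error  # commanded spot displacement in image space

# Simulate straight point-to-point convergence of the spot.
s = np.array([120.0, 80.0])        # current spot (pixels), illustrative
target = np.array([200.0, 200.0])  # desired spot (pixels), illustrative
for _ in range(50):
    s = s + laser_spot_step(s, target)
assert np.allclose(s, target, atol=1e-3)
```

Because the commanded displacement is always collinear with the error, the simulated spot trajectory is a straight segment, which is the behaviour the abstract reports on the test bench.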

    2 1/2 D Visual servoing with respect to unknown objects through a new estimation scheme of camera displacement

    Get PDF
    Abstract. Classical visual servoing techniques need strong a priori knowledge of the shape and dimensions of the observed objects. In this paper, we present how the 2 1/2 D visual servoing scheme we have recently developed can be used with unknown objects characterized by a set of points. Our scheme is based on the estimation of the camera displacement from two views, given by the current and desired images. Since vision-based robotic tasks generally need to be performed at video rate, we focus only on linear algorithms. Classical linear methods are based on the computation of the essential matrix. In this paper, we propose a different method, based on the estimation of the homography matrix related to a virtual plane attached to the object. We show that our method provides a more stable estimation when the epipolar geometry degenerates. This is particularly important in visual servoing to obtain a stable control law, especially near the convergence of the system. Finally, experimental results confirm the improvement in the stability, robustness, and behaviour of our scheme with respect to classical methods. Keywords: visual servoing, projective geometry, homography
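The core linear step the abstract refers to, estimating a homography from point correspondences in two views, can be sketched with the standard Direct Linear Transform (DLT). This is a generic textbook construction under the assumption of noise-free correspondences, not the authors' specific estimation scheme.

```python
import numpy as np

def estimate_homography(p1, p2):
    """Estimate H (up to scale) with the Direct Linear Transform from
    point correspondences p1[i] <-> p2[i], each an (N, 2) array, N >= 4."""
    A = []
    for (x, y), (u, v) in zip(p1, p2):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography vector spans the null space of A: take the
    # right-singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

# Synthetic check: points mapped by a known homography are recovered.
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.7]], float)
proj = np.column_stack([pts, np.ones(5)]) @ H_true.T
pts2 = proj[:, :2] / proj[:, 2:]
H_est = estimate_homography(pts, pts2)
H_est /= H_est[2, 2]  # fix the scale for comparison
assert np.allclose(H_est, H_true, atol=1e-6)
```

In a 2 1/2 D scheme the recovered homography would then be decomposed into the rotation and scaled translation used by the control law; that decomposition is omitted here.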

    Image Based Visual Servoing: Estimated Image Jacobian by Using Fundamental Matrix VS Analytic Jacobian

    Get PDF
    This paper describes a comparative study of the performance of the image Jacobian estimated by taking into account the epipolar geometry of a two-camera system versus the well-known analytic image Jacobian used in most visual servoing applications. An Image Based Visual Servoing architecture is used for controlling a 3 d.o.f. articulated system with two cameras in an eye-to-hand configuration. Tests in static and dynamic cases were carried out and showed that the Jacobian estimated using the properties of epipolar geometry is as good and as robust against noise as the analytic Jacobian. This is considered an advantage because, in contrast to the analytic Jacobian, the estimated Jacobian does not require laborious preparatory work prior to the control task.
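For reference, the analytic image Jacobian the abstract compares against has a well-known closed form for a point feature, and the resulting IBVS velocity command uses its pseudoinverse. The sketch below shows that classic form in normalized coordinates; the depths, gain, and feature values are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Analytic image Jacobian (interaction matrix) of a point feature
    (x, y) in normalized image coordinates at depth Z, relating the
    6-dof camera velocity screw to the feature velocity (classic form)."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Stack per-point Jacobians and compute the camera velocity screw
    v = -gain * L^+ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Sanity check: at the desired configuration the commanded velocity is zero.
s = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1)]
v = ibvs_velocity(s, s, depths=[1.0, 1.0, 1.0])
assert np.allclose(v, np.zeros(6))
```

The epipolar-geometry-based alternative studied in the paper replaces this analytic matrix with one estimated online from the fundamental matrix, avoiding the prior calibration the analytic form needs.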

    Intuitive human interaction with an arm robot for severely handicapped people - A one click approach.

    Get PDF
    Assistance to disabled people is still a domain in which much progress needs to be made. The more severe the handicap, the more complex the devices become, implying increased efforts to simplify the interaction between users and these devices. In this document we propose a solution to reduce the interaction between a user and a robotic arm. The system is equipped with two cameras: one fixed on top of the wheelchair (eye-to-hand) and the other mounted on the end effector of the robotic arm (eye-in-hand). The two cameras cooperate to reduce the grasping task to one click. The method is generic: it requires neither marks on the object, nor a geometric model, nor a database. It thus provides a tool applicable to any kind of graspable object. The paper first gives an overview of existing grasping tools for disabled people and then proposes a novel approach toward intuitive human-machine interaction.

    Image based visual servoing using bitangent points applied to planar shape alignment

    Get PDF
    We present visual servoing strategies based on bitangents for aligning planar shapes. To acquire bitangents we use the convex hull of a curve. Bitangent points are employed in the construction of a feature vector used in visual control. Experimental results obtained on a 7 DOF Mitsubishi PA10 robot verify the proposed method.
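The convex-hull step mentioned above can be sketched with Andrew's monotone chain algorithm, a standard O(n log n) construction. This is a generic building block, not the authors' bitangent-extraction pipeline; the sample points are illustrative.

```python
def convex_hull(points):
    """Andrew's monotone chain: returns hull vertices in CCW order.
    Hull edges are the natural candidates when searching a planar
    curve for bitangent lines."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints shared, drop duplicates

# The hull of a square plus an interior point is the four corners.
hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
assert sorted(hull) == [(0, 0), (0, 2), (2, 0), (2, 2)]
```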

    Fast and robust image feature matching methods for computer vision applications

    Get PDF
    Service robotic systems are designed to solve tasks such as recognizing and manipulating objects, understanding natural scenes, and navigating in dynamic and populated environments. It is immediately evident that such tasks cannot be modeled in all necessary detail as easily as industrial robot tasks; therefore, a service robotic system has to have the ability to sense and interact with the surrounding physical environment through a multitude of sensors and actuators. Environment sensing is one of the core problems that limit the deployment of mobile service robots, since existing sensing systems are either too slow or too expensive. Visual sensing is the most promising way to provide a cost-effective solution to the mobile robot sensing problem. It is usually achieved using one or several digital cameras placed on the robot or distributed in its environment. Digital cameras are information-rich sensors, are relatively inexpensive, and can be used to solve a number of key problems for robotics and other autonomous intelligent systems, such as visual servoing, robot navigation, object recognition, pose estimation, and much more. The key challenge in taking advantage of this powerful and inexpensive sensor is to come up with algorithms that can reliably and quickly extract and match the useful visual information necessary to automatically interpret the environment in real time. Although considerable research has been conducted in recent years on the development of algorithms for computer and robot vision problems, there are still open research challenges in the context of reliability, accuracy and processing time. The Scale Invariant Feature Transform (SIFT) is one of the most widely used methods and has recently attracted much attention in the computer vision community, due to the fact that SIFT features are highly distinctive and invariant to scale, rotation and illumination changes. 
In addition, SIFT features are relatively easy to extract and to match against a large database of local features. Generally, there are two main drawbacks of the SIFT algorithm. The first is that the computational complexity of the algorithm increases rapidly with the number of key-points, especially at the matching step, due to the high dimensionality of the SIFT feature descriptor. The second is that SIFT features are not robust to large viewpoint changes. These drawbacks limit the reasonable use of the SIFT algorithm for robot vision applications, since such applications often require real-time performance and must deal with large viewpoint changes. This dissertation proposes three new approaches to address the constraints faced when using SIFT features for robot vision applications: speeded-up SIFT feature matching, robust SIFT feature matching, and the inclusion of a closed-loop control structure in object recognition and pose estimation systems. The proposed methods are implemented and tested on the FRIEND II/III service robotic system. The achieved results are valuable for adapting the SIFT algorithm to robot vision applications.
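The matching step whose cost the abstract highlights is typically a nearest-neighbour search over high-dimensional descriptors, filtered with Lowe's ratio test. The sketch below illustrates that baseline with tiny 2-D toy descriptors; real SIFT descriptors are 128-dimensional, and the dissertation's speeded-up and robust variants are not reproduced here.

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.8):
    """Match two descriptor sets with the nearest-neighbour ratio test:
    accept a match only if the best distance is clearly smaller than
    the second best, which prunes ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# Two distinctive descriptors match; an ambiguous one is rejected.
d1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
d2 = np.array([[1.0, 0.05], [0.05, 1.0], [0.55, 0.45], [0.45, 0.55]])
assert ratio_test_match(d1, d2) == [(0, 0), (1, 1)]
```

The brute-force loop above is O(n * m) in the descriptor counts, which is exactly why the matching step dominates the runtime as the number of key-points grows.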

    One Click Focus with Eye-in-hand/Eye-to-hand Cooperation

    Get PDF
    A critical assumption of many multi-view control systems is the initial visibility of the regions of interest from all the views. An initialization step is proposed for a hybrid eye-in-hand/eye-to-hand grasping system to overcome this requirement. In this paper, the object of interest is assumed to be within the eye-to-hand field of view, whereas it may not be within the eye-in-hand one. The object model is unknown and no database is used. The object lies in a complex scene with a cluttered background. A method to automatically focus on the object of interest is presented, tested and validated on a multi-view robotic system.

    Visual servoing of mobile robots using non-central catadioptric cameras

    Get PDF
    This paper presents novel contributions on image-based control of a mobile robot using a general catadioptric camera model. A catadioptric camera is usually made up of a combination of a conventional camera and a curved mirror, resulting in an omnidirectional sensor capable of providing 360° panoramic views of a scene. Modeling such cameras has been the subject of significant research interest in the computer vision community, leading to a deeper understanding of the image properties and also to different models for different types of configurations. Visual servoing applications using catadioptric cameras have essentially relied on central cameras and the corresponding unified projection model; so far, more general models have been used in only a few cases. In this paper we address the problem of visual servoing using the so-called radial model. The radial model can be applied to many camera configurations, and in particular to non-central catadioptric systems with mirrors that are symmetric around an axis coinciding with the optical axis. In this case, we show that the radial model can be used with a non-central catadioptric camera to allow effective image-based visual servoing (IBVS) of a mobile robot. Using this model, which is valid for a large set of catadioptric cameras (central or non-central), new visual features are proposed to control the degrees of freedom of a mobile robot moving on a plane. In addition to several simulation results, a set of experiments was carried out on a Robot Operating System (ROS)-based platform, which validates the applicability, effectiveness and robustness of the proposed method for image-based control of a non-holonomic robot.
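At its core, image-based control of a planar robot maps a feature error through a feature Jacobian to the two robot inputs (linear and angular velocity). The sketch below shows that generic closed loop with a constant 2x2 Jacobian as a stand-in; the paper's radial-model features and their actual Jacobian are more involved and are not reproduced here.

```python
import numpy as np

def mobile_ibvs_step(s, s_des, J, gain=0.5):
    """One step of image-based control for a planar robot: map the
    feature error through the (assumed known) 2x2 feature Jacobian
    to the robot inputs (v, omega). Illustrative stand-in only."""
    error = s - s_des
    return -gain * np.linalg.solve(J, error)

# Simulate convergence of two scalar features under a constant Jacobian.
J = np.array([[1.0, 0.2], [0.1, 1.5]])  # hypothetical feature Jacobian
s = np.array([0.4, -0.3])               # current features (illustrative)
s_des = np.zeros(2)                     # desired features
for _ in range(100):
    u = mobile_ibvs_step(s, s_des, J)
    s = s + J @ u                       # first-order feature dynamics
assert np.allclose(s, s_des, atol=1e-6)
```

With a well-conditioned Jacobian this loop contracts the feature error geometrically; the non-holonomic constraint and the radial-model feature choice are what make the paper's actual design non-trivial.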

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Get PDF
    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications which have been developed by Spanish researchers in order to implement these mono-sensor controllers and multi-sensor controllers combining several sensors.
