43 research outputs found

    Weakly Calibrated Stereoscopic Visual Servoing for Laser Steering: Application to Phonomicrosurgery.

    No full text
    This paper studies a weakly calibrated multiview visual servoing control law for microrobotic laser phonomicrosurgery of the vocal folds, as part of the development of an endoluminal surgery system for laser ablation and resection of cancerous tissue. More specifically, the paper focuses on the control of the laser spot displacement during surgical interventions. To achieve this, a visual control law based on trifocal geometry is designed using two cameras and a laser source (treated as a virtual camera). The method is validated on a realistic test bench, and straight point-to-point trajectories are demonstrated.
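The abstract does not give the control law itself; as a rough illustration of how image-based visual servoing drives a feature (here, a laser spot) toward a target, below is a minimal sketch of the classical law v = -λ L⁺ (s - s*). The interaction matrix, gain, and feature vectors are hypothetical placeholders, not the paper's trifocal formulation.

```python
import numpy as np

def ibvs_step(s, s_star, L, gain=0.5):
    """One iteration of the classical IBVS law v = -gain * L^+ (s - s*),
    where s is the measured feature vector (e.g. laser-spot image
    coordinates), s_star the desired one, and L an interaction matrix."""
    return -gain * np.linalg.pinv(L) @ (s - s_star)

# Toy run with a hypothetical identity interaction matrix: the feature
# error contracts by the gain at each step (exponential convergence).
s = np.array([1.0, 2.0])
s_star = np.zeros(2)
L = np.eye(2)
for _ in range(50):
    s = s + ibvs_step(s, s_star, L)
```

With an identity interaction matrix the update reduces to s ← (1 - gain)·s, which is the exponential convergence visual servoing papers typically report.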

    Preliminary variation on multiview geometry for vision-guided laser surgery.

    No full text
    This paper proposes to use multiview geometry to control an orientable laser beam for surgery. Two methods are proposed, based on the analogy between a scanning laser beam and a camera: the first uses one camera and the laser scanner as a virtual camera to form a virtual stereoscopic system, while the second uses two cameras to form a virtual trifocal system. Using the associated epipolar or trifocal geometry, two control laws are derived without any matrix inversion or estimation of the 3D scene. It is shown that the more geometry is used, the simpler the control becomes. As expected, these control laws show exponential convergence in simulation.
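As a small illustration of the epipolar constraint underlying the first (virtual-stereo) method, the sketch below evaluates the residual x₂ᵀ E x₁ that such a control law would drive to zero. The pure-translation setup, essential matrix, and point coordinates are assumed for the example; this is not the paper's control law.

```python
import numpy as np

def epipolar_error(E, x1, x2):
    """Residual of the epipolar constraint x2^T E x1 for homogeneous
    normalized image points; it is zero when x2 lies on the epipolar
    line induced by x1, so a servoing law can drive it to zero."""
    return float(x2 @ E @ x1)

# Assumed toy geometry: two identical normalized cameras separated by a
# pure translation t along x, so the essential matrix is E = [t]_x.
t = np.array([1.0, 0.0, 0.0])
E = np.array([[0.0,  0.0,   0.0],
              [0.0,  0.0, -t[0]],
              [0.0, t[0],   0.0]])
X = np.array([1.0, 2.0, 5.0])          # a 3-D point
x1 = X / X[2]                          # projection in camera 1
x2 = (X - t) / (X - t)[2]              # projection in camera 2
# epipolar_error(E, x1, x2) is ~0 for this consistent pair
```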

    Visual servoing of mobile robots using non-central catadioptric cameras

    Get PDF
    This paper presents novel contributions on image-based control of a mobile robot using a general catadioptric camera model. A catadioptric camera usually combines a conventional camera with a curved mirror, resulting in an omnidirectional sensor capable of providing 360° panoramic views of a scene. Modeling such cameras has been the subject of significant research interest in the computer vision community, leading to a deeper understanding of the image properties and to different models for different types of configurations. Visual servoing applications using catadioptric cameras have essentially relied on central cameras and the corresponding unified projection model; so far, more general models have been used in only a few cases. In this paper we address the problem of visual servoing using the so-called radial model. The radial model can be applied to many camera configurations, in particular to non-central catadioptric systems with mirrors that are symmetric around an axis coinciding with the optical axis. In this case, we show that the radial model can be used with a non-central catadioptric camera to allow effective image-based visual servoing (IBVS) of a mobile robot. Using this model, which is valid for a large set of catadioptric cameras (central or non-central), new visual features are proposed to control the degrees of freedom of a mobile robot moving on a plane. In addition to several simulation results, a set of experiments was carried out on a Robot Operating System (ROS)-based platform, validating the applicability, effectiveness and robustness of the proposed method for image-based control of a non-holonomic robot.
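A minimal sketch of the radial-model idea, assuming only that image points are measured relative to a distortion centre: for a catadioptric camera whose mirror is symmetric about the optical axis, the angular (azimuthal) component of an image point does not depend on the mirror profile, which is the property the radial model builds on. The feature below is illustrative, not the paper's exact feature set.

```python
import numpy as np

def radial_feature(p_img, centre):
    """Azimuthal coordinate of an image point about the distortion
    centre. For an axially symmetric catadioptric camera this angle is
    independent of the mirror shape, central or non-central."""
    d = p_img - centre
    return np.arctan2(d[1], d[0])

c = np.array([100.0, 100.0])   # assumed distortion centre (pixels)
# A point up-and-right of the centre lies at azimuth pi/4.
theta = radial_feature(np.array([101.0, 101.0]), c)
```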

    Technical report on Optimization-Based Bearing-Only Visual Homing with Applications to a 2-D Unicycle Model

    Full text link
    We consider the problem of bearing-based visual homing: given a mobile robot which can measure bearing directions with respect to known landmarks, the goal is to guide the robot toward a desired "home" location. We propose a control law based on the gradient field of a Lyapunov function, and give sufficient conditions for global convergence. We show that the well-known Average Landmark Vector method (for which no convergence proof was known) can be obtained as a particular case of our framework. We then derive a sliding mode control law for a unicycle model which follows this gradient field. Neither controller depends on range information. Finally, we also show how our framework can be used to characterize the sensitivity of a home location with respect to noise in the specified bearings. This is an extended version of the conference paper R. Tron and K. Daniilidis, "An optimization approach to bearing-only visual homing with applications to a 2-D unicycle model," IEEE International Conference on Robotics and Automation, 2014, containing additional proofs.
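The Average Landmark Vector method mentioned above is simple enough to sketch. Assuming a hypothetical planar landmark layout, the robot moves along the difference between the current ALV and the ALV recorded at home; no range information is used, only unit bearing directions.

```python
import numpy as np

def alv(pos, landmarks):
    """Average Landmark Vector: mean of the unit bearing vectors from
    pos to each landmark (bearings only, no range information)."""
    d = landmarks - pos
    return (d / np.linalg.norm(d, axis=1, keepdims=True)).mean(axis=0)

# Hypothetical layout: four landmarks around a home location at the origin.
landmarks = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, 0.0], [0.0, -5.0]])
home_alv = alv(np.zeros(2), landmarks)

pos = np.array([1.0, 0.5])
for _ in range(300):
    # The ALV difference acts as a descent direction toward home.
    pos = pos + (alv(pos, landmarks) - home_alv)
# pos converges to the home location (the origin)
```

For this symmetric layout the update is a contraction around home, which matches the exponential-style convergence the paper proves as a special case of its gradient framework.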

    Appearance-based Indoor Navigation by IBVS using Line Segments

    Get PDF
    Also presented at the IEEE Int. Conf. on Robotics and Automation, Stockholm, Sweden.

    A novel 1D trifocal tensor-based control for differential-drive robots

    Full text link

    Photometric visual servoing for omnidirectional cameras

    Get PDF
    2D visual servoing consists in using data provided by a vision sensor to control the motions of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach was originally developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras, a generalization that allows this kind of method to be applied to catadioptric and wide-field-of-view cameras. Several experiments have been carried out successfully: with a fisheye camera to control a 6-degrees-of-freedom (dof) robot, and with a catadioptric camera for a mobile robot navigation task.
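A one-dimensional analogue of photometric visual servoing, assuming a synthetic intensity profile and a single translational degree of freedom: the whole "image" is the feature, the error is the raw intensity difference, and a Gauss-Newton update drives it to zero. The real method handles full images, omnidirectional projection models, and up to 6 dof.

```python
import numpy as np

# 1-D analogue: the "image" is a sampled Gaussian intensity profile and
# the only degree of freedom is a translation x of the sensor.
u = np.linspace(-3.0, 3.0, 200)
intensity = lambda shift: np.exp(-(u - shift) ** 2)

I_star = intensity(0.7)        # desired image (true displacement 0.7)
x = 0.0                        # current estimate of the displacement
for _ in range(20):
    e = intensity(x) - I_star            # photometric error: raw intensity differences
    J = 2.0 * (u - x) * intensity(x)     # Jacobian of the image w.r.t. x
    x -= (J @ e) / (J @ J)               # Gauss-Newton update
# x converges to the true displacement 0.7
```

No features are tracked or matched anywhere: the alignment is recovered from intensities and their analytic gradient alone, which is the core appeal of the photometric approach.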

    Ego-motion estimation using rectified stereo and bilateral transfer function

    Get PDF
    We describe an ego-motion algorithm based on dense spatio-temporal correspondences, using semi-global stereo matching (SGM) and bilateral image warping in time. The main contribution is an improvement in the accuracy and robustness of such techniques, obtained by taking care of speed and numerical stability while employing twice the structure and data for the motion estimation task, in a symmetric way. In our approach we keep the tasks of structure and motion estimation separated, solved respectively by SGM and by our pose estimation algorithm. Concerning the latter, we show the benefits introduced by our rectified, bilateral formulation, which provides greater robustness to noise and disparity errors at the price of a moderate increase in computational complexity, itself reduced by an improved Gauss-Newton descent.
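The symmetric ("bilateral") residual idea can be sketched on a toy problem: estimating a 2-D translation between two point sets, with residuals from warping both forward and backward in time, solved by Gauss-Newton. The paper operates on dense SGM stereo data and full rigid-body poses; the setup below is purely illustrative.

```python
import numpy as np

def bilateral_translation(p_prev, p_curr, iters=10):
    """Gauss-Newton estimate of a 2-D translation t between two point
    sets, with a symmetric ("bilateral") residual: p_prev is warped
    forward by t AND p_curr backward by -t, and both residual sets are
    stacked, using twice the data symmetrically."""
    t = np.zeros(2)
    n = len(p_prev)
    for _ in range(iters):
        r_fwd = (p_prev + t) - p_curr    # forward-warp residuals (Jacobian +I)
        r_bwd = (p_curr - t) - p_prev    # backward-warp residuals (Jacobian -I)
        grad = r_fwd.sum(axis=0) - r_bwd.sum(axis=0)   # J^T r
        t -= grad / (2 * n)              # (J^T J)^{-1} = I / (2n)
    return t

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 1.0]])
t_est = bilateral_translation(pts, pts + np.array([0.3, -0.1]))
# t_est recovers (0.3, -0.1)
```

Because the translation problem is linear, the bilateral normal equations here solve exactly in one iteration; in the paper's nonlinear pose problem the same symmetric stacking is what buys robustness to noise and disparity errors.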