23 research outputs found

    Weakly Calibrated Stereoscopic Visual Servoing for Laser Steering: Application to Phonomicrosurgery.

    No full text
    This paper deals with the study of a weakly calibrated multiview visual servoing control law for microrobotic laser phonomicrosurgery of the vocal folds. It is part of the development of an endoluminal surgery system for laser ablation and resection of cancerous tissue. More specifically, the paper focuses on the control of the laser spot displacement during surgical interventions. To this end, a visual control law based on trifocal geometry is designed using two cameras and a laser source (treated as a virtual camera). The method is validated on a realistic test bench, and straight point-to-point trajectories are demonstrated.
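
    The abstract does not reproduce the control law itself; purely as an illustrative sketch of the weakly calibrated idea, the Python snippet below steers a simulated laser spot between image points with a proportional law driven by a finite-difference estimate of the image Jacobian. The spot model, gain and tolerance are assumptions made for the example, not the authors' trifocal formulation.

```python
import numpy as np

# Hypothetical spot model: image position of the laser spot as a function of the
# steering-mirror angles. In the paper this mapping is observed through two
# cameras; here it is an arbitrary smooth stand-in unknown to the controller.
def spot_position(mirror_angles):
    A = np.array([[420.0, 35.0], [-28.0, 390.0]])
    return A @ mirror_angles + 5.0 * mirror_angles**2

def estimate_jacobian(f, u, eps=1e-4):
    """Weakly calibrated: estimate the image Jacobian by finite differences."""
    J = np.zeros((2, 2))
    for i in range(2):
        du = np.zeros(2); du[i] = eps
        J[:, i] = (f(u + du) - f(u - du)) / (2 * eps)
    return J

def steer_to(target_px, u=np.zeros(2), gain=0.5, tol=0.5, max_iter=100):
    """Drive the spot to target_px with a proportional law u <- u - gain * J^+ e."""
    for _ in range(max_iter):
        e = spot_position(u) - target_px
        if np.linalg.norm(e) < tol:
            break
        J = estimate_jacobian(spot_position, u)
        u = u - gain * np.linalg.pinv(J) @ e
    return u, spot_position(u)

u, reached = steer_to(np.array([120.0, -80.0]))
print("final spot position:", reached)
```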

    A novel 1D trifocal tensor-based control for differential-drive robots

    Full text link

    Photometric visual servoing for omnidirectional cameras

    Get PDF
    2D visual servoing consists in using data provided by a vision sensor to control the motion of a dynamic system. Most visual servoing approaches have relied on geometric features that must be tracked and matched in the image acquired by the camera. Recent works have highlighted the interest of taking into account the photometric information of the entire image. This approach had previously been developed for images from perspective cameras. In this paper, we propose to extend the technique to central cameras. This generalization allows the method to be applied to catadioptric cameras and wide-field-of-view cameras. Several experiments have been carried out successfully with a fisheye camera controlling a 6-degree-of-freedom (dof) robot and with a catadioptric camera for a mobile robot navigation task.
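
    As a point of reference for the photometric idea — the feature vector is the whole image and the error is simply the intensity difference with the desired image — here is a minimal sketch for a conventional perspective camera; the paper's actual contribution, the extension to central and omnidirectional projection models, is not reproduced. The focal length, the constant depth Z and the toy images are assumptions.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (perspective model)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def photometric_velocity(I, I_star, Z, f=300.0, gain=1.0):
    """One photometric visual-servoing step: the feature is the whole image."""
    h, w = I.shape
    gy, gx = np.gradient(I)                  # pixel-wise image gradients
    L = np.zeros((h * w, 6))
    e = (I - I_star).ravel()
    k = 0
    for v in range(h):
        for u in range(w):
            # normalized coordinates for an assumed focal length f
            x, y = (u - w / 2.0) / f, (v - h / 2.0) / f
            Lp = point_interaction_matrix(x, y, Z)
            # gradients w.r.t. normalized coordinates are f times the pixel gradients
            L[k] = -(f * gx[v, u] * Lp[0] + f * gy[v, u] * Lp[1])
            k += 1
    return -gain * np.linalg.pinv(L) @ e     # 6-dof camera velocity command

# toy example: desired image vs. a slightly shifted current image
I_star = np.random.default_rng(0).random((32, 32))
I = np.roll(I_star, 1, axis=1)
print(photometric_velocity(I, I_star, Z=1.0))
```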

    Visual servoing of mobile robots using non-central catadioptric cameras

    Get PDF
    This paper presents novel contributions on image-based control of a mobile robot using a general catadioptric camera model. A catadioptric camera is usually made up of a conventional camera combined with a curved mirror, resulting in an omnidirectional sensor capable of providing 360° panoramic views of a scene. Modeling such cameras has been the subject of significant research interest in the computer vision community, leading to a deeper understanding of the image properties and to different models for different types of configurations. Visual servoing applications using catadioptric cameras have essentially relied on central cameras and the corresponding unified projection model; so far, more general models have been used in only a few cases. In this paper we address the problem of visual servoing using the so-called radial model. The radial model can be applied to many camera configurations and in particular to non-central catadioptric systems with mirrors that are symmetric around an axis coinciding with the optical axis. In this case, we show that the radial model can be used with a non-central catadioptric camera to allow effective image-based visual servoing (IBVS) of a mobile robot. Using this model, which is valid for a large set of catadioptric cameras (central or non-central), new visual features are proposed to control the degrees of freedom of a mobile robot moving on a plane. In addition to several simulation results, a set of experiments was carried out on a Robot Operating System (ROS)-based platform, validating the applicability, effectiveness and robustness of the proposed method for image-based control of a non-holonomic robot.

    Ego-motion estimation using rectified stereo and bilateral transfer function

    Get PDF
    We describe an ego-motion algorithm based on dense spatio-temporal correspondences, using semi-global stereo matching (SGM) and bilateral image warping in time. The main contribution is an improvement in the accuracy and robustness of such techniques, obtained by taking care of speed and numerical stability while employing twice the structure and data for the motion estimation task, in a symmetric way. In our approach the tasks of structure and motion estimation are kept separate, solved respectively by SGM and by our pose estimation algorithm. Concerning the latter, we show the benefits introduced by our rectified, bilateral formulation, which provides more robustness to noise and disparity errors at the price of a moderate increase in computational complexity, itself reduced by an improved Gauss-Newton descent.
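
    As a simplified illustration of the Gauss-Newton descent mentioned above, the sketch below aligns two 2-D point sets under a planar rigid motion; the paper's actual cost is a bilateral, 6-dof image-warping residual, which is not reproduced here, and the toy data are assumptions.

```python
import numpy as np

def transform(pose, pts):
    """Apply a planar rigid motion pose = (tx, ty, theta) to Nx2 points."""
    tx, ty, th = pose
    R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
    return pts @ R.T + np.array([tx, ty])

def gauss_newton_pose(src, dst, pose=np.zeros(3), iters=10):
    """Estimate the motion aligning src onto dst by Gauss-Newton on the
    point-to-point residual r_i = T(pose) src_i - dst_i."""
    for _ in range(iters):
        tx, ty, th = pose
        c, s = np.cos(th), np.sin(th)
        r = (transform(pose, src) - dst).ravel()          # stacked residuals
        J = np.zeros((2 * len(src), 3))
        J[0::2, 0] = 1.0                                   # d r_x / d tx
        J[1::2, 1] = 1.0                                   # d r_y / d ty
        J[0::2, 2] = -s * src[:, 0] - c * src[:, 1]        # d r_x / d theta
        J[1::2, 2] =  c * src[:, 0] - s * src[:, 1]        # d r_y / d theta
        delta = np.linalg.solve(J.T @ J, -J.T @ r)         # normal equations
        pose = pose + delta
    return pose

rng = np.random.default_rng(1)
src = rng.random((50, 2)) * 10
true_pose = np.array([0.4, -0.2, 0.1])
dst = transform(true_pose, src) + rng.normal(scale=0.01, size=src.shape)
print(gauss_newton_pose(src, dst))   # should be close to true_pose
```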

    Efficient and secure real-time mobile robots cooperation using visual servoing

    Get PDF
    This paper deals with the challenging problem of navigation in formation for a fleet of mobile robots. For that purpose, a secure approach based on visual servoing is used to control the linear and angular velocities of the robots. To construct our system, we develop the interaction matrix that relates the image moments to the robots' velocities, and we estimate the depth between each robot and the targeted object. This is done without any communication between the robots, which eliminates the influence of each robot's errors on the whole fleet. For a successful visual servoing, we propose a powerful mechanism to execute the robots' navigation safely, exploiting a robot accident reporting system built on a Raspberry Pi 3. In addition, in case of a problem, a robot accident detection and reporting testbed is used to send an accident notification in the form of a specific message. Experimental results are presented using nonholonomic mobile robots with on-board real-time cameras to show the effectiveness of the proposed method.
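
    The paper's interaction matrix between image moments and robot velocities is not given in the abstract; as a rough, hypothetical stand-in only, the sketch below computes low-order moments of a binary target mask and derives a simple proportional (v, ω) command from the area and centroid errors. The gains, desired area and mask are assumptions.

```python
import numpy as np

def image_moments(mask):
    """Zeroth- and first-order moments of a binary target mask."""
    v, u = np.nonzero(mask)
    m00 = len(u)                       # area, used here as a proxy for depth
    if m00 == 0:
        return 0.0, 0.0, 0.0
    return m00, u.mean(), v.mean()     # area, centroid x, centroid y

def follower_command(mask, desired_area, image_width, k_v=1e-3, k_w=5e-3):
    """Very simplified proportional law: forward speed from the area error,
    turn rate from the horizontal centroid offset."""
    m00, cx, _ = image_moments(mask)
    v = k_v * (desired_area - m00)             # target looks too small -> move closer
    w = -k_w * (cx - image_width / 2.0)        # target left of center -> turn left
    return v, w

# toy mask: a 20x20 blob offset to the right of a 120x160 image
mask = np.zeros((120, 160), dtype=bool)
mask[50:70, 100:120] = True
print(follower_command(mask, desired_area=600, image_width=160))
```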

    Visual Servoing

    Get PDF
    This chapter introduces visual servo control, the use of computer vision data in the servo loop to control the motion of a robot. We first describe the basic techniques that are by now well established in the field. We give a general overview of the formulation of the visual servo control problem, and describe the two archetypal visual servo control schemes: image-based and pose-based visual servo control. We then discuss performance and stability issues that pertain to these two schemes, motivating advanced techniques. Of the many advanced techniques that have been developed, we discuss 2.5-D, hybrid, partitioned, and switched approaches. Having covered a variety of control schemes, we deal with target tracking, controlling motion directly in the joint space, and extensions to under-actuated ground and aerial robots. We conclude by describing applications of visual servoing in robotics.
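
    For the image-based scheme described in the chapter, the archetypal control law is v = -λ L⁺ (s - s*), with s the current point features, s* the desired ones and L the interaction matrix. A minimal sketch, assuming normalized point coordinates, known depths and an arbitrary gain:

```python
import numpy as np

def interaction_matrix(points, Z):
    """Stacked interaction matrix for normalized image points (x, y) at depths Z."""
    L = []
    for (x, y), z in zip(points, Z):
        L.append([-1 / z, 0, x / z, x * y, -(1 + x * x), y])
        L.append([0, -1 / z, y / z, 1 + y * y, -x * y, -x])
    return np.array(L)

def ibvs_step(s, s_star, Z, gain=0.5):
    """Classic IBVS law: camera velocity v = -lambda * L^+ (s - s*)."""
    e = (s - s_star).ravel()
    L = interaction_matrix(s, Z)
    return -gain * np.linalg.pinv(L) @ e

# four points, current vs. desired normalized coordinates
s      = np.array([[0.12, 0.10], [-0.08, 0.11], [-0.09, -0.12], [0.10, -0.09]])
s_star = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
print(ibvs_step(s, s_star, Z=[1.0] * 4))   # 6-dof velocity (vx, vy, vz, wx, wy, wz)
```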

    Computationally-efficient visual inertial odometry for autonomous vehicle

    Get PDF
    This thesis presents the design, implementation, and validation of a novel nonlinear-filtering-based Visual Inertial Odometry (VIO) framework for robotic navigation in GPS-denied environments. The system tracks the vehicle's ego-motion at each time instant while capturing the benefits of both the camera information and the Inertial Measurement Unit (IMU). VIO demands considerable computational resources and processing time, which makes hardware implementation quite challenging for micro- and nano-robotic systems. In many cases, the VIO process selects a small subset of tracked features to reduce the computational cost. VIO estimation also suffers from the inevitable accumulation of error; this limitation makes the estimate gradually diverge and even fail to track the vehicle trajectory over long-term operation. Deploying optimization over the entire trajectory helps to minimize the accumulated errors, but increases the computational cost significantly. A hardware implementation can use a more powerful processor or specialized computing platforms, such as Field Programmable Gate Arrays, Graphics Processing Units and Application-Specific Integrated Circuits, to accelerate the execution; however, the computation still performs identical steps of similar complexity, processing data at a higher frequency increases energy consumption significantly, and developing advanced hardware systems is expensive and time-consuming. Consequently, developing an efficient algorithm is beneficial with or without hardware acceleration. The research described in this thesis proposes multiple solutions to accelerate the visual inertial odometry computation while maintaining estimation accuracy over long-term operation comparable to state-of-the-art algorithms. This research has resulted in three significant contributions. First, it involved the design and validation of a novel nonlinear sensor-fusion algorithm combining trifocal tensor geometry and a cubature Kalman filter; the combination handles the system nonlinearity effectively while significantly reducing the computational cost and system complexity. Second, it develops two solutions to address the error-accumulation issue: for standalone self-localization projects, the first solution applies a local optimization procedure to the measurement update, performing multiple corrections on a single measurement to optimize the latest filter state and covariance; for larger navigation projects, the second solution integrates VIO with additional pseudo-ranging measurements between the vehicle and multiple beacons in order to bound the accumulated errors. Third, it develops a novel parallel-processing VIO algorithm that speeds up the execution on a multi-core CPU by distributing the filtering computation across cores so that each feature measurement update is processed and optimized independently. The performance of the proposed framework is evaluated on publicly available self-localization datasets and compared with other open-source algorithms. The results illustrate that the proposed VIO framework improves computational efficiency without requiring specialized hardware computing platforms or advanced software libraries.
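
    As an illustration of the cubature Kalman filter used in the first contribution, the sketch below implements the standard CKF time update: 2n cubature points drawn from the Cholesky factor of the covariance, propagated through the process model, then re-averaged. The constant-velocity toy model stands in for the thesis's actual IMU-driven propagation and is an assumption.

```python
import numpy as np

def cubature_points(mean, cov):
    """Generate the 2n cubature points xi_i = mean +/- sqrt(n) * S[:, i]."""
    n = len(mean)
    S = np.linalg.cholesky(cov)
    pts = np.hstack([np.sqrt(n) * S, -np.sqrt(n) * S])   # n x 2n
    return mean[:, None] + pts

def ckf_predict(mean, cov, f, Q):
    """CKF time update: propagate cubature points through f and re-average."""
    X = cubature_points(mean, cov)
    Y = np.apply_along_axis(f, 0, X)          # propagate each column through f
    m = Y.mean(axis=1)
    P = (Y - m[:, None]) @ (Y - m[:, None]).T / Y.shape[1] + Q
    return m, P

# toy constant-velocity process model (stand-in for the VIO propagation)
dt = 0.1
def f(x):                                     # state: [px, py, vx, vy]
    return np.array([x[0] + dt * x[2], x[1] + dt * x[3], x[2], x[3]])

mean = np.array([0.0, 0.0, 1.0, 0.5])
cov = np.eye(4) * 0.01
m, P = ckf_predict(mean, cov, f, Q=np.eye(4) * 1e-4)
print(m)
print(P)
```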

    Intelligent stereo-visual mobile robot control and optimal process planning and scheduling – overview of research results within the project MISSION4.0

    Get PDF
    Within several work packages, the MISSION4.0 project covered the development of intelligent stereo-visual control of mobile robots, as well as optimal planning and scheduling of technological processes, based on artificial intelligence techniques, in particular convolutional neural networks and biologically inspired optimization algorithms. During two years of intensive scientific research, a new methodology was developed for autonomous navigation and intelligent control of in-house developed mobile robots named RAICO and DOMINO. Generating an optimal process schedule, within which intelligent in-plant transport by mobile robots is also executed, was another important goal of this research. This paper gives an overview of some of the key results of the MISSION4.0 project, such as those published in leading international and national scientific journals, published chapters in scientific monographs, papers presented and printed in the proceedings of prestigious conferences held abroad and in the region, verified technical solutions, and open-access datasets.
