
    Combining Differential Kinematics and Optical Flow for Automatic Labeling of Continuum Robots in Minimally Invasive Surgery

    The segmentation of continuum robots in medical images is of interest for analyzing surgical procedures or for controlling the robots. However, automatically segmenting continuous, flexible shapes is not an easy task: conventional approaches are not adapted to the specificities of these instruments, such as imprecise kinematic models, while deep-learning techniques have shown promising capabilities but require many manually labeled images. In this article we propose a novel approach for segmenting continuum robots in endoscopic images that requires neither a prior on the instrument's visual appearance nor manual annotation of images. The method combines the kinematic and differential kinematic models of the robot with an analysis of optical flow in the images. A cost function aggregating information from the acquired image, the optical flow and the robot encoders is optimized using particle swarm optimization, yielding estimated pose parameters of the continuum instrument and a mask delineating the instrument in the image. In addition, temporal consistency is assessed in order to improve the stochastic optimization and to reject outliers. The proposed approach has been tested on the robotic instruments of a flexible endoscopy platform, both on benchtop acquisitions and on an in vivo video. The results show that the technique correctly segments the instruments, without a prior and under challenging conditions. The obtained segmentation can serve several applications, for instance providing automatic labels for machine learning techniques.
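    As a rough illustration of the optimization step described above, the sketch below runs a generic particle swarm optimization over a low-dimensional pose vector. The cost function, the 6-D parameterization and all names are placeholders for illustration: the actual cost in the paper aggregates image, optical-flow and encoder information, which is not reproduced here.

```python
# Minimal particle swarm optimization sketch for fitting pose parameters.
# The quadratic toy cost stands in for the paper's image/flow/encoder cost.
import numpy as np

def pso_minimize(cost, dim, n_particles=40, n_iters=100,
                 bounds=(-1.0, 1.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()                                    # personal best positions
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()           # global best position
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        improved = costs < pbest_cost
        pbest[improved] = x[improved]
        pbest_cost[improved] = costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, float(pbest_cost.min())

# Hypothetical 6-D pose vector (e.g. base orientation plus bending parameters).
pose, residual = pso_minimize(lambda p: float(np.sum(p ** 2)), dim=6)
```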

    Distortion and instability compensation with deep learning for rotational scanning endoscopic optical coherence tomography

    Optical Coherence Tomography (OCT) is increasingly used in endoluminal procedures since it provides high-speed, high-resolution imaging. Images obtained with a proximal scanning endoscopic OCT system suffer from significant distortion and instability due to motor rotation irregularity, friction between the rotating probe and the outer sheath, and synchronization issues. Online compensation of these artefacts is essential to ensure image quality suitable for real-time assistance during diagnosis or minimally invasive treatment. In this paper, we propose a new online correction method that tackles B-scan distortion, video stream shaking and drift in endoscopic OCT, all of which are linked to A-line-level image shifting. The proposed computational approach integrates a Convolutional Neural Network (CNN) to improve the estimation of the azimuthal shift of each A-line. To suppress the accumulated error of this integral estimation, we introduce a second CNN branch that estimates a dynamic overall orientation angle. We train the network with semi-synthetic OCT videos obtained by intentionally adding rotational distortion to real OCT scans. The results show that networks trained on this semi-synthetic data generalize to stabilize real OCT videos; the efficacy of the algorithm is demonstrated on both ex vivo and in vivo data, where strong scanning artifacts are successfully corrected.
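    To make the correction step more concrete, here is a schematic sketch of how per-A-line shift estimates and a global orientation estimate might be combined to re-bin the columns of a B-scan. The network itself is not shown, and the function names, sign conventions and nearest-neighbour re-binning are assumptions made for illustration, not the authors' implementation.

```python
# Schematic B-scan correction, assuming a network already produced per-A-line
# azimuthal shift estimates (d_theta) and a global orientation estimate for
# the frame. Integrating the differential shifts and re-anchoring them with
# the global angle gives each A-line's corrected azimuthal position.
import numpy as np

def correct_bscan(bscan, d_theta, global_angle):
    """bscan: (depth, n_alines); d_theta: per-A-line shifts (in A-line units);
    global_angle: overall rotation offset of the frame (in A-line units)."""
    n = bscan.shape[1]
    drift = np.cumsum(d_theta)                   # accumulated azimuthal error
    drift = drift + (global_angle - drift.mean())  # re-anchor to limit drift
    corrected_pos = (np.arange(n) + drift) % n
    # Nearest-neighbour re-binning onto a uniform azimuthal grid (linear
    # interpolation would be smoother).
    idx = np.round(corrected_pos).astype(int) % n
    out = np.zeros_like(bscan)
    out[:, idx] = bscan
    return out

# Illustrative usage with random data standing in for a real frame.
frame = np.random.rand(512, 1024)            # depth x A-lines
shifts = 0.05 * np.random.randn(1024)        # per-A-line shift estimates
stable = correct_bscan(frame, shifts, global_angle=2.0)
```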

    A Novel Telemanipulated Robotic Assistant for Surgical Endoscopy: Preclinical Application to ESD

    Objective: Minimally invasive surgical interventions in the gastrointestinal tract, such as Endoscopic Submucosal Dissection (ESD), are very difficult for surgeons to perform with standard flexible endoscopes. Robotic flexible systems have been identified as a solution to improve manipulation, but only a few such systems have reached preclinical trials so far, and novel robotic tools are therefore required. Methods: We developed a telemanipulated robotic device, called STRAS, which aims to assist surgeons during intraluminal surgical endoscopy. It is a modular system, based on a flexible endoscope and flexible instruments, which provides 10 degrees of freedom (DoFs). The modularity makes it easy to set up the robot and to navigate towards the operating area. The robot can then be teleoperated using master interfaces specifically designed to intuitively control all available DoFs. The capabilities of STRAS have been tested in laboratory conditions and during preclinical experiments. Results: We report twelve colorectal ESDs performed in pigs, in which large lesions were successfully removed. Dissection speeds are compared with those obtained in similar conditions with the manual Anubiscope™ platform from Karl Storz, and show significant improvements (p = 0.01). Conclusion: These experiments show that STRAS (v2) provides sufficient DoFs, workspace and force to perform ESD, that it allows a single surgeon to perform all the surgical tasks, and that performance is improved with respect to manual systems. Significance: The concepts developed for STRAS are validated and could provide surgeons with new tools that improve comfort, ease and performance in intraluminal surgical endoscopy.

    Contributions to computer assisted suturing in minimally invasive surgery

    Suturing is a common but difficult task in laparoscopic surgery. The motions of the surgical instruments are limited by the trocar constraint, and the view of the scene obtained through an endoscopic camera is reduced to 2D images. Consequently, it is difficult for surgeons to plan the movements of the suturing needle. In practice, the stitching task is performed by multiple trials and involves undesirable deformations of the tissue. In order to help surgeons plan the motions of a circular needle through thin tissue, we propose to study the kinematics of the needle and the needle-holder in laparoscopic surgery. We first investigate in which cases deformation-free paths exist between two points on the surface of the tissue. We show that simple conditions on the trocar position and the needle grasp guarantee the existence of such paths. We then describe a practical planning method that generates paths with minimal tissue deformation. Path planning requires 3D information such as the position of the needle in the needle-holder and the position of the trocar with respect to the tissue. We propose to obtain this information using a color endoscopic camera, with fast and simple image processing techniques that extract visual cues from the images in real time. The 3D data are then reconstructed with a virtual visual servoing scheme that iteratively minimizes the reprojection error in the images. Finally, we propose augmented-reality tools to assist the surgeon during stitching. We also show, by controlling a medical robot with a 2D visual servoing scheme, that semi-autonomous needle passing is feasible in laboratory conditions. This exploratory work opens the path towards a complete computer-aided suturing system.
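    The pose reconstruction step can be illustrated with a small sketch of iterative reprojection-error minimization. The version below uses Gauss-Newton steps with a numerical Jacobian over an axis-angle-plus-translation pose, rather than the interaction-matrix formulation of classical virtual visual servoing; the intrinsics, point sets and all names are illustrative assumptions, not the thesis implementation.

```python
# Pose refinement by iterative minimization of the reprojection error.
import numpy as np

def rodrigues(r):
    """Axis-angle vector -> 3x3 rotation matrix."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def project(pose, pts3d, K):
    """Project 3-D model points with pose = (rx, ry, rz, tx, ty, tz)."""
    R, t = rodrigues(pose[:3]), pose[3:]
    cam = pts3d @ R.T + t                      # points in camera frame
    uv = cam @ K.T                             # homogeneous image coordinates
    return uv[:, :2] / uv[:, 2:3]

def refine_pose(pose, pts3d, pts2d, K, n_iters=20, eps=1e-6):
    """Gauss-Newton refinement of the 6-D pose against observed 2-D points."""
    for _ in range(n_iters):
        err = (project(pose, pts3d, K) - pts2d).ravel()
        # Numerical Jacobian of the reprojection error w.r.t. the pose.
        J = np.zeros((err.size, 6))
        for j in range(6):
            d = np.zeros(6)
            d[j] = eps
            J[:, j] = ((project(pose + d, pts3d, K) - pts2d).ravel() - err) / eps
        step, *_ = np.linalg.lstsq(J, -err, rcond=None)
        pose = pose + step
    return pose
```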

