
    Estimation of tool-tissue forces in robot-assisted minimally invasive surgery using neural networks

    A new algorithm is proposed to estimate the tool-tissue force interaction in robot-assisted minimally invasive surgery without requiring external force sensing. The proposed method uses the motor currents of the surgical instrument together with neural network methods to estimate the interaction force. Offline and online testing was conducted to assess the feasibility of the developed algorithm. The results show that the method is promising for online estimation of tool-tissue forces and could thus enable haptic feedback in robotic surgery.
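
    The abstract does not specify the network architecture; below is a minimal sketch of the underlying idea, assuming a small feed-forward regressor in PyTorch, with the number of motors and layer widths chosen purely for illustration:

```python
# Minimal sketch (not the paper's exact architecture): a small feed-forward
# network that regresses tool-tissue force from instrument motor currents.
# The input size, layer widths, and 3-axis output are assumptions.
import torch
import torch.nn as nn

class CurrentToForceNet(nn.Module):
    def __init__(self, n_motors: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_motors, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # estimated force vector (Fx, Fy, Fz)
        )

    def forward(self, motor_currents: torch.Tensor) -> torch.Tensor:
        return self.net(motor_currents)

# Offline training would use logged (current, force) pairs; online use is a
# single forward pass per control cycle.
model = CurrentToForceNet()
currents = torch.randn(8, 4)       # a batch of motor-current readings
force_estimate = model(currents)   # shape: (8, 3)
```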

    Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach

    Robotic-assisted minimally invasive surgeries have gained considerable popularity over conventional procedures, as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and might reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts by extracting the geometry of motion of the heart's surface, minimizing an energy functional to recover its 3D deformable structure. A deep network based on an LSTM-RNN architecture is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our force estimation solution avoids the drawbacks usually associated with force sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues, reporting an average root-mean-square error of 0.02 N.
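
    A minimal sketch of the sequence-learning stage, assuming the recovered surface geometry has been reduced to a fixed-length feature vector per frame; the feature and hidden sizes below are illustrative, not the paper's values:

```python
# Sketch: an LSTM maps a sequence of per-frame geometric feature vectors to
# an applied-force estimate. Feature extraction from the recovered 3D
# structure is assumed to happen upstream.
import torch
import torch.nn as nn

class GeometryToForceLSTM(nn.Module):
    def __init__(self, feat_dim: int = 32, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # scalar force magnitude

    def forward(self, geom_seq: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(geom_seq)       # (batch, time, hidden)
        return self.head(out[:, -1])       # force at the last time step

model = GeometryToForceLSTM()
seq = torch.randn(2, 25, 32)               # 25 frames of geometric features
print(model(seq).shape)                     # torch.Size([2, 1])
```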

    V-ANFIS for Dealing with Visual Uncertainty for Force Estimation in Robotic Surgery

    Accurate and robust estimation of applied forces in Robotic-Assisted Minimally Invasive Surgery is a very challenging task. Many vision-based solutions attempt to estimate the force by measuring the surface deformation after contact with the surgical tool. However, visual uncertainty due to tool occlusion is a major concern and can strongly affect the precision of the results. In this paper, a novel design of an adaptive neuro-fuzzy inference strategy with a voting step (V-ANFIS) is used to accommodate this loss of information. Experimental results show a significant accuracy improvement, from 50% to 77%, with respect to other proposals.
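
    The fuzzy-inference machinery itself is beyond a short sketch, but the voting step can be illustrated in isolation: several estimators, each seeing a partially occluded view, predict a discrete force level, and the majority wins. Everything here is a hypothetical stand-in for the paper's ensemble:

```python
# Illustrative sketch of only the voting step: a majority vote over the
# discrete force levels predicted by several independent estimators.
from collections import Counter

def vote(force_level_predictions: list[int]) -> int:
    """Return the most common predicted force level among ensemble members."""
    counts = Counter(force_level_predictions)
    return counts.most_common(1)[0][0]

# e.g., five estimators disagree because the tool occludes part of the surface
print(vote([2, 2, 3, 2, 1]))  # -> 2
```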

    Toward Force Estimation in Robot-Assisted Surgery using Deep Learning with Vision and Robot State

    Knowledge of interaction forces during teleoperated robot-assisted surgery could be used to enable force feedback to human operators and evaluate tissue handling skill. However, direct force sensing at the end-effector is challenging because it requires biocompatible, sterilizable, and cost-effective sensors. Vision-based deep learning using convolutional neural networks is a promising approach for providing useful force estimates, though questions remain about generalization to new scenarios and real-time inference. We present a force estimation neural network that uses RGB images and robot state as inputs. Using a self-collected dataset, we compared the network to variants that included only a single input type, and evaluated how they generalized to new viewpoints, workspace positions, materials, and tools. We found that vision-based networks were sensitive to shifts in viewpoints, while state-only networks were robust to changes in workspace. The network with both state and vision inputs had the highest accuracy for an unseen tool, and was moderately robust to changes in viewpoints. Through feature removal studies, we found that using only position features produced better accuracy than using only force features as input. The network with both state and vision inputs outperformed a physics-based baseline model in accuracy. It showed comparable accuracy but faster computation times than a baseline recurrent neural network, making it better suited for real-time applications.
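
    A minimal sketch of a two-stream network of this kind, with a convolutional encoder for RGB frames and an MLP for robot state fused by concatenation; the layer sizes, input resolution, and 14-dimensional state vector are assumptions, not the authors' exact design:

```python
# Sketch: fuse a CNN image embedding with an MLP state embedding, then
# regress a 3-axis force vector from the concatenated features.
import torch
import torch.nn as nn

class VisionStateForceNet(nn.Module):
    def __init__(self, state_dim: int = 14):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> (batch, 32)
        )
        self.state_mlp = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU())
        self.head = nn.Linear(32 + 32, 3)            # force vector

    def forward(self, rgb, state):
        feats = torch.cat([self.cnn(rgb), self.state_mlp(state)], dim=1)
        return self.head(feats)

model = VisionStateForceNet()
rgb = torch.randn(4, 3, 84, 84)
state = torch.randn(4, 14)             # e.g., joint positions/velocities
print(model(rgb, state).shape)         # torch.Size([4, 3])
```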

    A recurrent convolutional neural network approach for sensorless force estimation in robotic surgery

    Providing force feedback as relevant information in current Robot-Assisted Minimally Invasive Surgery systems constitutes a technological challenge due to the constraints imposed by the surgical environment. In this context, force estimation techniques represent a potential solution, making it possible to sense the interaction forces between the surgical instruments and soft tissues. Specifically, if visual feedback is available for observing soft-tissue deformation, it can be used to estimate the forces applied to these tissues. To this end, a force estimation model based on Convolutional Neural Networks and Long Short-Term Memory networks is proposed in this work. This model is designed to process both the spatiotemporal information present in video sequences and the temporal structure of tool data (the surgical tool-tip trajectory and its grasping status). A series of analyses is carried out to reveal the advantages of the proposal and the challenges that remain for real applications. This work focuses on two surgical task scenarios, referred to as pushing and pulling tissue. For these two scenarios, different input data modalities and their effect on force estimation quality are investigated: tool data, video sequences, and a combination of both. The results suggest that force estimation quality is better when both the tool data and the video sequences are processed by the neural network model. Moreover, this study reveals the need for a loss function designed to promote the modeling of both the smooth and the sharp details found in force signals. Finally, the results show that modeling forces in pulling tasks is more challenging than in the simpler pushing actions.
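
    One plausible (assumed) reading of such a loss combines a mean-squared term for the smooth trend with an L1 term on the temporal difference of the signal to preserve sharp transitions; this is an illustration, not the paper's published loss:

```python
# Sketch of a combined loss for force-signal regression: MSE captures the
# smooth trend, while L1 on first differences penalizes missed sharp edges.
import torch
import torch.nn.functional as F

def force_loss(pred: torch.Tensor, target: torch.Tensor, alpha: float = 0.5):
    """pred, target: (batch, time) force sequences."""
    mse = F.mse_loss(pred, target)                 # smooth component
    d_pred = pred[:, 1:] - pred[:, :-1]            # temporal gradient
    d_target = target[:, 1:] - target[:, :-1]
    sharp = F.l1_loss(d_pred, d_target)            # sharp transitions
    return mse + alpha * sharp
```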

    Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model

    Providing force feedback as a feature in current Robot-Assisted Minimally Invasive Surgery systems remains a challenge. In recent years, Vision-Based Force Sensing (VBFS) has emerged as a promising approach to address this problem. Existing methods have been developed in a Supervised Learning (SL) setting. Nonetheless, most video sequences related to robotic surgery are not provided with ground-truth force data, which is easily acquired only in a controlled environment. A powerful approach to processing unlabeled video sequences and finding a compact representation for each video frame relies on an Unsupervised Learning (UL) method. Afterward, a model trained in an SL setting can take advantage of the available ground-truth force data. In the present work, UL and SL techniques are used to investigate a model in a Semi-Supervised Learning (SSL) framework, consisting of an encoder network and a Long Short-Term Memory (LSTM) network. First, a Convolutional Auto-Encoder (CAE) is trained to learn a compact representation of each RGB frame in a video sequence. To facilitate the reconstruction of the high and low frequencies found in images, this CAE is optimized using an adversarial framework and an L1 loss, respectively. Thereafter, the encoder network of the CAE is connected in series with an LSTM network, and the two are trained jointly to minimize the difference between ground-truth and estimated force data. Datasets addressing the force estimation task are scarce; therefore, the experiments have been validated on a custom dataset. The results suggest that the proposed approach is promising.
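
    A minimal sketch of the unsupervised stage, showing only a CAE with the L1 reconstruction loss; the adversarial term is omitted, and the layer sizes and 64x64 input resolution are assumptions:

```python
# Sketch: a convolutional auto-encoder compresses each RGB frame to a
# compact code, trained with an L1 reconstruction loss.
import torch
import torch.nn as nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

cae = ConvAutoEncoder()
frames = torch.rand(2, 3, 64, 64)
recon, code = cae(frames)
l1_loss = nn.functional.l1_loss(recon, frames)  # reconstruction objective
# After pretraining, the (flattened) `code` would feed the LSTM, which is
# then trained jointly with the encoder on the labeled force data.
```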

    Sight to touch: 3D diffeomorphic deformation recovery with mixture components for perceiving forces in robotic-assisted surgery

    Robotic-assisted minimally invasive surgical systems suffer from one major limitation: the lack of interaction-force feedback. The restricted sense of touch hinders surgeons' performance and reduces their dexterity and precision during a procedure. In this work, we present a sensory substitution approach that relies on visual stimuli to transmit the tool-tissue interaction forces to the operating surgeon. Our approach combines a 3D diffeomorphic deformation mapping with a generative model to precisely label the force level. The main highlights of our approach are that the use of diffeomorphic transformation ensures anatomical structure preservation and that the label assignment is based on a parametric form of several mixture elements. We performed experiments on both ex-vivo and in-vivo datasets and provide careful numerical results evaluating our approach. The results show that our solution has an error measure of less than 1 mm in all directions and an average labeling error of 2.05%. It is also applicable to other scenarios that require force feedback, such as microsurgery, knot tying, or needle-based procedures.
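
    The labeling idea can be illustrated in isolation: fit a mixture model to a per-frame deformation statistic and assign the force level from the most likely component. The diffeomorphic deformation recovery itself is out of scope here, and the data below are synthetic:

```python
# Sketch of mixture-based force-level labeling on a synthetic scalar
# deformation statistic; stands in for the paper's parametric mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic per-frame deformation magnitudes drawn from three force levels
deformation = np.concatenate([
    rng.normal(0.5, 0.1, 100),   # light contact
    rng.normal(2.0, 0.3, 100),   # moderate force
    rng.normal(4.0, 0.4, 100),   # high force
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(deformation)
levels = gmm.predict(deformation)   # mixture component per frame
# Note: component indices are arbitrary; order them by gmm.means_ to map
# components onto increasing force levels.
print(np.bincount(levels))          # frames assigned to each component
```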