
    Toward Force Estimation in Robot-Assisted Surgery using Deep Learning with Vision and Robot State

    Knowledge of interaction forces during teleoperated robot-assisted surgery could be used to enable force feedback to human operators and evaluate tissue handling skill. However, direct force sensing at the end-effector is challenging because it requires biocompatible, sterilizable, and cost-effective sensors. Vision-based deep learning using convolutional neural networks is a promising approach for providing useful force estimates, though questions remain about generalization to new scenarios and real-time inference. We present a force estimation neural network that uses RGB images and robot state as inputs. Using a self-collected dataset, we compared the network to variants that included only a single input type, and evaluated how they generalized to new viewpoints, workspace positions, materials, and tools. We found that vision-based networks were sensitive to shifts in viewpoints, while state-only networks were robust to changes in workspace. The network with both state and vision inputs had the highest accuracy for an unseen tool, and was moderately robust to changes in viewpoints. Through feature removal studies, we found that using only position features produced better accuracy than using only force features as input. The network with both state and vision inputs outperformed a physics-based baseline model in accuracy. It showed comparable accuracy but faster computation times than a baseline recurrent neural network, making it better suited for real-time applications.
    Comment: 7 pages, 6 figures, submitted to ICRA 202
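The core architectural idea in this abstract — combining vision features with robot state before regressing force — can be sketched as a late-fusion forward pass. This is a hypothetical pure-Python toy, not the paper's network: the real model uses a CNN over RGB images, and all shapes, weights, and function names below are illustrative assumptions.

```python
# Hedged sketch: late fusion of vision features and robot-state features for
# force regression. In the actual paper a CNN produces the vision features;
# here both inputs are plain vectors and the head is a single linear layer.

def linear(x, w, b):
    # y[i] = sum_j w[i][j] * x[j] + b[i]
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(w, b)]

def fused_force_estimate(vision_feat, robot_state, w, b):
    # Concatenate the two feature vectors, then apply one regression head.
    x = vision_feat + robot_state
    return linear(x, w, b)

# Toy example: 2 vision features + 2 state features -> 3-axis force estimate.
w = [[0.1, 0.0, 0.5, 0.0],
     [0.0, 0.2, 0.0, 0.5],
     [0.1, 0.1, 0.1, 0.1]]
b = [0.0, 0.0, 0.0]
f = fused_force_estimate([1.0, 2.0], [0.5, -0.5], w, b)  # [0.35, 0.15, 0.3]
```

The fusion point (feature concatenation) is what lets the network fall back on state features when the viewpoint shifts, which matches the robustness pattern the abstract reports.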

    Development sensorized 3D-printed realistic phantom to scale for surgical training with a daVinci robot

    The increase of surgical procedures using robotic technology in the last decade demands a high number of surgeons capable of teleoperating advanced and complex systems while safely and effectively taking advantage of Robot-Assisted Surgery benefits. Currently, training plans rely on Virtual Reality and simulated environments to achieve a scalable, cost-effective, and comprehensive establishment of robotic surgical skills.
This work focuses on the development of a clinical scenario through sensors that assist the surgeon during their training with the daVinci® system, implemented in a 3D-printed physical environment. This research aims to obtain a segmented model, 3D printing the model to simulate the real clinical scenario, thus familiarizing the surgeon with the interaction of organs and tissues with the robot. Additionally, sensors are implemented to assist the surgeon during training. To demonstrate the effectiveness of the assistance during the training sessions and the validity of the exercises in the simulated operation, a study was conducted with twelve volunteers. Both visual assistance and the use of 3D phantoms prove to be an optimal alternative for learning the required skills in robotic surgery, representing a significant step forward towards personalized training for each surgeon.
Castillo Rosique, P. (2023). Development sensorized 3D-printed realistic phantom to scale for surgical training with a daVinci robot. Universitat Politècnica de València. http://hdl.handle.net/10251/19804

    Dynamic Modeling of the da Vinci Research Kit Arm for the Estimation of Interaction Wrench

    The commercialized version of the da Vinci robot currently lacks haptic feedback at the master arm, so the surgeon has no haptic sense and relies only on visual feedback. For this reason, in recent research activities using the da Vinci Research Kit (dVRK) platform, reflecting the interaction force between the slave tool and the environment back to the master manipulator is a topic of high interest. In this work a sensorless, model-based approach for contact force and torque estimation is presented and validated. The dynamics of the dVRK slave arm is modeled and its parameters are identified. The idea is to take the joint torques obtained from the measured motor currents and subtract the torques resulting from the dynamics of the robot arm. The resulting torques are then due only to the external forces and torques acting on the tool, from which the external wrench is obtained through the inverse transpose of the Jacobian matrix. The accuracy of this method is assessed by comparing the estimated wrench to the one measured by a force/torque sensor (ATI mini 45). It is shown that the external wrench is well estimated compared to the measured one.
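The estimation pipeline the abstract describes — subtract the model-predicted joint torques from the current-derived ones, then map the residual through the inverse transpose of the Jacobian — can be illustrated on a planar 2-link arm. This is a minimal sketch, not the dVRK model: the link lengths, the 2-DOF geometry, and a zeroed dynamics term are all assumptions for illustration.

```python
import math

def jacobian_2link(q1, q2, l1=0.3, l2=0.25):
    # Geometric Jacobian of a planar 2-link arm (tip x/y velocity wrt joint rates).
    # Hypothetical link lengths; the dVRK arm has a different kinematic structure.
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def estimate_tip_force(tau_measured, tau_model, J):
    # External joint torques = torques from motor currents - model dynamics torques.
    tau_ext = [tm - td for tm, td in zip(tau_measured, tau_model)]
    # Solve J^T F = tau_ext for the tip force F (2x2 case, Cramer's rule).
    a, b = J[0][0], J[1][0]  # first row of J^T
    c, d = J[0][1], J[1][1]  # second row of J^T
    det = a * d - b * c
    fx = (tau_ext[0] * d - b * tau_ext[1]) / det
    fy = (a * tau_ext[1] - c * tau_ext[0]) / det
    return fx, fy

J = jacobian_2link(0.0, math.pi / 2)
# In this pose a pure +x tip force of 1 N produces joint torques J^T [1, 0],
# i.e. [-0.25, -0.25]; with the dynamics torques already removed (zeros here),
# the estimator should recover that force.
fx, fy = estimate_tip_force([-0.25, -0.25], [0.0, 0.0], J)  # (1.0, 0.0)
```

The same structure scales to 6 DOF by replacing the 2x2 solve with a full linear solve of J^T F = tau_ext; the hard part in practice, as the abstract notes, is identifying the dynamic model that produces `tau_model`.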

    Interpretable task planning and learning for autonomous robotic surgery with logic programming

    This thesis addresses the long-term goal of full (supervised) autonomy in surgery, characterized by dynamic environmental (anatomical) conditions, unpredictable workflow of execution, and workspace constraints. The scope is to reach autonomy at the level of sub-tasks of a surgical procedure, i.e. repetitive yet tedious operations (e.g., dexterous manipulation of small objects such as needle and wire for suturing in a constrained environment). This will help reduce execution time, hospital costs, and surgeon fatigue during the whole procedure, while further improving recovery time for patients. A novel framework for autonomous surgical task execution is presented in the first part of this thesis, based on answer set programming (ASP), a logic programming paradigm, for task planning (i.e., coordination of elementary actions and motions). Logic programming allows surgical task knowledge to be encoded directly, representing a plan reasoning methodology rather than a set of pre-defined plans. This solution introduces several key advantages, such as reliable, human-interpretable plan generation and real-time monitoring of the environment and the workflow for ready adaptation and failure recovery. Moreover, an extended review of logic programming for robotics is presented, motivating the choice of ASP for surgery and providing a useful guide for robotic designers. In the second part of the thesis, a novel framework based on inductive logic programming (ILP) is presented for surgical task knowledge learning and refinement. ILP enables fast learning from very few examples, a common constraint in surgery. In addition, a novel action identification algorithm is proposed, based on automatic environmental feature extraction from videos, dealing for the first time with small and noisy datasets collecting different workflows of execution under environmental variations. This allows a systematic methodology for unsupervised ILP to be defined.
All the results in this thesis are validated on a non-standard version of the benchmark ring transfer training task for surgeons, which mimics some of the challenges of real surgery, e.g. constrained bimanual motion in a small workspace.
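The distinction the thesis draws — encoding a reasoning methodology rather than storing fixed plans — can be illustrated with a tiny rule-based action selector for the ring transfer task. This is a hedged pure-Python analogy, not ASP: the thesis uses an ASP solver, and the fact names and rules below are hypothetical.

```python
# Hedged sketch: interpretable rule-based action selection in the spirit of
# ASP task planning (pure Python stand-in; the thesis uses an ASP solver).
# The next action is derived at run time from observed environment facts,
# so the system adapts and recovers instead of replaying a fixed sequence.

RULES = [
    # (condition over the current fact set, elementary action) - hypothetical
    (lambda f: "ring_on_peg" in f and "ring_grasped" not in f, "reach_ring"),
    (lambda f: "ring_grasped" in f and "at_target_peg" not in f, "move_to_target"),
    (lambda f: "ring_grasped" in f and "at_target_peg" in f, "release_ring"),
]

def next_action(facts):
    # First rule whose condition holds fires; each firing is human-readable.
    for cond, action in RULES:
        if cond(facts):
            return action
    return "wait"  # no rule fires: keep monitoring (failure-recovery hook)

act = next_action({"ring_grasped", "at_target_peg"})  # "release_ring"
```

Because each decision traces back to a named rule and the facts that triggered it, the generated plan is inspectable step by step, which is the interpretability advantage the abstract claims for logic programming over opaque learned policies.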