
    GRACE: Online Gesture Recognition for Autonomous Camera-Motion Enhancement in Robot-Assisted Surgery

    Camera navigation in minimally invasive surgery has changed significantly since the introduction of robotic assistance. Robotic surgeons are subject to an increased cognitive workload due to the asynchronous control of tools and camera, which also leads to interruptions in the workflow. Camera-motion automation has been proposed as a possible solution, but it still lacks situation awareness. We propose an online surgical Gesture Recognition for Autonomous Camera-motion Enhancement (GRACE) system to introduce situation awareness into autonomous camera navigation. A recurrent neural network is used in combination with a tool-tracking system to offer gesture-specific camera motion during a robot-assisted suturing task. GRACE was integrated with a research version of the da Vinci surgical system, and a user study involving 10 participants was performed to evaluate the benefits introduced by situation awareness in camera motion, with respect to both a state-of-the-art autonomous system (S) and the current clinical approach (P). Results show GRACE improving completion time by a median reduction of 18.9 s (8.1%) with respect to S and 65.1 s (21.1%) with respect to P. Workload reduction was confirmed by a statistically significant difference in the NASA Task Load Index with respect to S (p < 0.05). Reduction of motion sickness, a common issue related to the continuous camera motion of autonomous systems, was assessed by a post-experiment survey (p < 0.01).
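
    The abstract does not give implementation details; the sketch below is only a rough illustration of how an online gesture classifier over streaming tool kinematics might look, assuming an LSTM whose recurrent state is carried between frames. The feature dimension, number of gesture classes, and layer sizes are placeholders, not values from the paper.

    ```python
    import torch
    import torch.nn as nn

    class GestureLSTM(nn.Module):
        """Online surgical-gesture classifier over streaming tool kinematics
        (hypothetical sketch; dimensions are assumptions)."""
        def __init__(self, n_features=19, n_gestures=10, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, n_gestures)

        def forward(self, x, state=None):
            # x: (batch, time, n_features); state carries (h, c) between calls
            out, state = self.lstm(x, state)
            return self.head(out), state

    # Streaming use: feed one kinematic frame at a time and keep the recurrent state,
    # so the current gesture estimate is available online to drive camera motion.
    model = GestureLSTM()
    state = None
    frame = torch.randn(1, 1, 19)            # one sample from the tool tracker
    logits, state = model(frame, state)
    gesture = logits[0, -1].argmax().item()  # current gesture class estimate
    ```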

    Learning Deep Nets for Gravitational Dynamics with Unknown Disturbance through Physical Knowledge Distillation: Initial Feasibility Study

    Learning high-performance deep neural networks for dynamic modeling of high Degree-Of-Freedom (DOF) robots remains challenging due to sampling complexity. Unknown system disturbances caused by unmodeled dynamics (such as internal compliance and cables) further exacerbate the problem. In this paper, a novel framework characterized by both high data efficiency and disturbance-adapting capability is proposed to address the problem of modeling gravitational dynamics with deep nets for feedforward gravity-compensation control of high-DOF master manipulators with unknown disturbance. In particular, Feedforward Deep Neural Networks (FDNNs) are learned from both prior knowledge of an existing analytical model and observations of the robot system through Knowledge Distillation (KD). Through extensive experiments on high-DOF master manipulators with significant disturbance, we show that our method surpasses a standard Learning-from-Scratch (LfS) approach in terms of data efficiency and disturbance adaptation. Our initial feasibility study demonstrates the potential to outperform the analytical teacher model as the training data increase.
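
    As a hedged illustration of the distillation idea described above (not the authors' actual training code), a student network predicting gravity torques can be supervised jointly by measured torques and by an analytical teacher model. The network sizes, 7-DOF assumption, blending weight `alpha`, and the stand-in `analytical_gravity` function are all assumptions for the sketch.

    ```python
    import torch
    import torch.nn as nn

    # Hypothetical feedforward student net: joint positions -> gravity torques (7-DOF assumed).
    student = nn.Sequential(nn.Linear(7, 128), nn.ReLU(),
                            nn.Linear(128, 128), nn.ReLU(),
                            nn.Linear(128, 7))

    def analytical_gravity(q):
        """Stand-in for the analytical teacher model tau_g(q); a real implementation
        would come from the manipulator's rigid-body parameters."""
        return torch.zeros_like(q)

    def distillation_loss(q, tau_measured, alpha=0.5):
        # Blend supervision from measured torques (captures unmodeled disturbance)
        # with the analytical teacher's prediction (provides data efficiency).
        tau_student = student(q)
        loss_data = nn.functional.mse_loss(tau_student, tau_measured)
        loss_teacher = nn.functional.mse_loss(tau_student, analytical_gravity(q))
        return alpha * loss_data + (1 - alpha) * loss_teacher

    # One hypothetical training step on a batch of (q, tau) samples.
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    q = torch.randn(32, 7)
    tau = torch.randn(32, 7)
    opt.zero_grad()
    distillation_loss(q, tau).backward()
    opt.step()
    ```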

    An Open-Source Research Kit for the da Vinci® Surgical System

    We present a telerobotics research platform that provides complete access to all levels of control via open-source electronics and software. The electronics employ an FPGA to enable a centralized-computation, distributed-I/O architecture in which all control computations are implemented in a familiar development environment (a Linux PC) and low-latency I/O is performed over an IEEE-1394a (FireWire) bus at speeds of up to 400 Mbit/s. The mechanical components are obtained from retired first-generation da Vinci® Surgical Systems. The system is currently installed at 11 research institutions, with additional installations underway, thereby creating a research community around a common open-source hardware and software platform.
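
    The following sketch only illustrates the centralized-computation, distributed-I/O pattern described above; it is not the platform's actual API. The `FirewireNode` class, its methods, and the cycle period are hypothetical placeholders.

    ```python
    import time

    class FirewireNode:
        """Hypothetical stand-in for one distributed I/O board that exposes
        encoder feedback and motor-current commands over the bus."""
        def read_positions(self):
            return [0.0] * 7

        def write_currents(self, currents):
            pass

    def control_loop(node, controller, n_cycles=1000, period_s=0.001):
        # Centralized computation on the PC: each cycle reads all feedback over
        # the bus, runs the control law, and writes commands back to the board.
        for _ in range(n_cycles):
            q = node.read_positions()
            i_cmd = controller(q)
            node.write_currents(i_cmd)
            time.sleep(period_s)

    control_loop(FirewireNode(), controller=lambda q: [0.0] * len(q))
    ```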

    ARssist: augmented reality on a head-mounted display for the first assistant in robotic surgery

    In robot-assisted laparoscopic surgery, the first assistant (FA) is responsible for tasks such as robot docking, passing necessary materials, manipulating hand-held instruments, and helping with trocar planning and placement. The performance of the FA is critical to the outcome of the surgery. The authors introduce ARssist, an augmented-reality application based on an optical see-through head-mounted display, to help the FA perform these tasks. ARssist offers (i) real-time three-dimensional rendering of the robotic instruments, hand-held instruments, and endoscope based on a hybrid tracking scheme, and (ii) real-time stereo endoscopy that is configurable to suit the FA's hand–eye coordination when operating based on endoscopy feedback. ARssist has the potential to help the FA perform these tasks more efficiently and hence improve the outcome of robot-assisted laparoscopic surgeries.
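
    A hybrid tracking scheme of the kind mentioned above typically chains transforms from several sources into a single rendering pose. The sketch below is a minimal, assumed illustration of that composition; the frame names, registration step, and identity values are placeholders, not details from the paper.

    ```python
    import numpy as np

    def compose(*transforms):
        """Chain 4x4 homogeneous transforms left to right."""
        out = np.eye(4)
        for t in transforms:
            out = out @ t
        return out

    # Hypothetical hybrid tracking chain: marker tracking gives the HMD-to-robot-base
    # transform, while robot forward kinematics gives base-to-instrument.
    T_hmd_marker = np.eye(4)        # from the HMD's marker tracker
    T_marker_base = np.eye(4)       # one-time registration / hand-eye result
    T_base_instrument = np.eye(4)   # from the robot's joint encoders (forward kinematics)

    # Pose used to render the virtual instrument overlay in the HMD display frame.
    T_hmd_instrument = compose(T_hmd_marker, T_marker_base, T_base_instrument)
    ```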

    Models and Algorithms for the Collision of Rigid and Deformable Bodies

    In this report, we describe models and algorithms designed to produce efficient and physically consistent dynamic simulations. These models and algorithms have been implemented in a single framework that models both deformations and contacts through visco-elastic relations. Since this model of interaction (known as "penalty-based") is much debated, we present a comparative study of two contact models: "penalty" and "impulse". The penalty-based model is said to have two major drawbacks: determining the visco-elastic parameters, and choosing the computation time step. We present a solution to both problems based on physical concepts. Finally, we present results comparing real data, impulse-based simulation, and penalty-based simulation.
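
    For readers unfamiliar with the penalty model discussed above, a minimal sketch follows: a visco-elastic contact force proportional to penetration depth and its rate, applied along the contact normal. The stiffness and damping values are arbitrary placeholders; choosing them (and a stable time step) is exactly the difficulty the report addresses.

    ```python
    import numpy as np

    def penalty_contact_force(depth, depth_rate, normal, k=1e4, c=50.0):
        """Visco-elastic ("penalty") contact force.

        depth      : penetration depth (m), > 0 when the bodies overlap
        depth_rate : time derivative of the penetration depth (m/s)
        normal     : unit contact normal pointing out of the penetrated body
        k, c       : hypothetical stiffness and damping parameters
        """
        if depth <= 0.0:
            return np.zeros(3)
        magnitude = k * depth + c * depth_rate
        return max(magnitude, 0.0) * np.asarray(normal, dtype=float)

    # Example: 2 mm interpenetration closing at 0.1 m/s along +z.
    f = penalty_contact_force(0.002, 0.1, [0.0, 0.0, 1.0])
    ```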

    Scene Modeling And Augmented Virtuality Interface For Telerobotic Satellite Servicing

    Teleoperation in extreme environments can be hindered by limitations in telemetry and in operator perception of the remote environment. Often, the primary mode of perception is visual feedback from remote cameras, which do not always provide suitable views and are subject to telemetry delays. To address these challenges, we propose to build a model of the remote environment and provide an augmented virtuality visualization system that augments the model with projections of real camera images. The approach is demonstrated in a satellite servicing scenario, with a multisecond round-trip telemetry delay between the operator on Earth and the satellite in orbit. The scene modeling enables both virtual fixtures to assist the human operator and augmented virtuality visualization that allows the operator to teleoperate a virtual robot from a convenient virtual viewpoint, with the delayed camera images projected onto the three-dimensional model. Experiments on a ground-based telerobotic platform, with software-created telemetry delays, indicate that the proposed method leads to better teleoperation performance, with 30% better blade alignment and a 50% reduction in task execution time compared to the baseline case in which visualization is restricted to the available camera views.
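
    Projecting delayed camera images onto a scene model, as described above, amounts to mapping model points into the image plane of a calibrated camera. The sketch below is a minimal, assumed illustration using a pinhole model; the intrinsics, camera pose, and frame names are placeholders, not values from the paper.

    ```python
    import numpy as np

    def project_to_image(point_world, T_cam_world, K):
        """Project a 3-D model point into a camera image (pinhole model).

        Used here to look up which pixel of a (possibly delayed) camera frame
        should be painted onto the environment model at that point.
        """
        p_h = np.append(point_world, 1.0)       # homogeneous world point
        p_cam = (T_cam_world @ p_h)[:3]         # point in the camera frame
        if p_cam[2] <= 0:
            return None                          # behind the camera, not visible
        u, v, w = K @ p_cam
        return np.array([u / w, v / w])          # pixel coordinates

    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])        # hypothetical intrinsics
    T_cam_world = np.eye(4)                      # camera pose from the scene model
    pixel = project_to_image(np.array([0.1, 0.0, 1.5]), T_cam_world, K)
    ```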