1,612 research outputs found

    Deep Learning Guided Autonomous Surgery: Guiding Small Needles into Sub-Millimeter Scale Blood Vessels

    Full text link
    We propose a general strategy for autonomous guidance and insertion of a needle into a retinal blood vessel. The main challenges underpinning this task are the accurate placement of the needle tip on the target vein and a careful needle insertion maneuver that avoids double-puncturing the vein, while dealing with challenging kinematic constraints and depth-estimation uncertainty. Following how surgeons perform this task purely based on visual feedback, we develop a system that relies solely on monocular visual cues by combining data-driven kinematic and contact estimation, visual servoing, and model-based optimal control. By relying on known kinematic models as well as deep-learning-based perception modules, the system can localize the surgical needle tip and detect needle-tissue interactions and venipuncture events. The outputs from these perception modules are then combined with a motion-planning framework that uses visual servoing and optimal control to cannulate the target vein while respecting kinematic constraints that consider the safety of the procedure. We demonstrate that we can reliably and consistently perform needle insertion in the domain of retinal surgery, specifically retinal vein cannulation. Using cadaveric pig eyes, we demonstrate that our system can navigate to target veins within 22 µm XY accuracy and perform the entire procedure in less than 35 seconds on average, and all 24 trials performed on 4 pig eyes were successful. A preliminary comparison study against a human operator shows that our system is consistently more accurate and safer, especially during safety-critical needle-tissue interactions. To the best of the authors' knowledge, this work is the first demonstration of autonomous retinal vein cannulation in a clinically relevant setting using animal tissues.
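    The abstract describes a loop that couples deep-learning perception (tip localization, puncture detection) with image-space visual servoing. The sketch below illustrates that structure only in outline; the perception stubs, gains, pixel-to-micron scale, and function names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a perception-plus-control loop of the kind described above:
# a learned module localizes the needle tip and flags venipuncture, and a simple
# image-space visual-servoing law drives the tip toward the target vein.
import numpy as np

def detect_needle_tip(frame):
    """Placeholder for the learned tip detector; returns (u, v) pixel coordinates."""
    return np.array([120.0, 95.0])

def venipuncture_detected(frame):
    """Placeholder for the learned needle-tissue interaction / puncture classifier."""
    return False

def servo_step(frame, target_px, gain=0.5, px_to_um=1.1):
    """One visual-servoing update: pixel error -> XY velocity command (um/s)."""
    tip_px = detect_needle_tip(frame)
    error_px = target_px - tip_px              # image-space error
    if venipuncture_detected(frame):
        return np.zeros(2), True               # stop the insertion on a puncture event
    return gain * px_to_um * error_px, False   # proportional velocity command

if __name__ == "__main__":
    frame = np.zeros((480, 640))               # dummy monocular frame
    target = np.array([128.0, 96.0])           # target vein location in pixels
    cmd, done = servo_step(frame, target)
    print("XY velocity command (um/s):", cmd, "puncture:", done)
```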

    Development and evaluation of hand-held robotic technology for safe and successful peripheral intravenous catheterization on pediatric patients

    Get PDF
    Peripheral IntraVenous Catheterization (PIVC) is often required in hospitals to fulfil urgent needs of blood sampling or fluid/medication administration. Despite the importance of a high success rate, conventional PIVC suffers from low insertion accuracy, especially on young pediatric patients. On average, each pediatric patient is subjected to 2.1 attempts before venous access is obtained, with around 50% failure on the first attempt. The risks of such multiple attempts can be severe and life-threatening, as they can cause serious extravasation injuries. Given the levels of precision and controllability needed for PIVC, robotic systems show good potential to effectively assist the operation and improve its success rate. This study therefore aims to provide such robotic assistance by focusing on the most challenging and error-prone parts of the operation. To understand the difficulties of pediatric PIVC, a survey of specialists was conducted at the beginning of this research. The feedback from this survey indicates an urgent need for a hand-held robot that assists catheter insertion control to precisely access the target vein. To achieve this goal, a novel venipuncture detection system based on sensing the electrical impedance of the contacting tissue at the needle tip was proposed and developed. Several ex-vivo and in-vivo experiments were then conducted to assess this detection system, and the results show that it can be highly effective at detecting venipuncture. Subsequently, based on this venipuncture detection system, four different hand-held robots were developed to provide different levels of autonomy and assistance while executing a PIVC insertion:
    1. SVEI, short for 'Smart Venous Enter Indicator', is the simplest device, with no actuation. The user performs the whole PIVC operation, and the device only indicates venipuncture by lighting up an LED.
    2. SAID, short for 'Semi-Autonomous Intravenous access Device', integrates a motor to control the catheter insertion. The user holds the device still, targets it at a vein site, and then activates it; the device inserts the catheter automatically and stops when venipuncture is detected.
    3. SDOP, short for 'Smart hand-held Device for Over-puncture Prevention', integrates a latch-based disengage mechanism to prevent over-puncture during PIVC. The user keeps the conventional way of operation and performs the insertion manually; at the moment of venipuncture, the device automatically activates the disengage mechanism to stop further advancement of the catheter.
    4. CathBot, short for 'hand-held roBot for peripheral intravenous Catheterization', uses a crank-slider mechanism and a solenoid actuator to convert the complicated intravenous catheterization motion into a simple linear forward motion. The user only needs to push the device's handle forward, and the device completes the whole PIVC insertion procedure automatically.
    All the devices were characterized to ensure they satisfy the design specifications. A series of comparative experiments was then conducted to assess each of them. In the first experiment, 25 naïve subjects were invited to perform 10 trials of PIVC on a realistic baby-arm phantom. The subjects were divided into 5 groups, and each group was asked to perform PIVC with one device only (SVEI, SAID, SDOP, CathBot, or a regular IV catheter).
The experimental results show that all devices provide the needed assistance to significantly facilitate the operation and improve success rates compared to the conventional method. People with no prior PIVC experience achieved considerably high success rates in robot-assisted PIVC (86% with SVEI, 80% with SAID, 78% with SDOP, and 84% with CathBot) compared to the control group (12%), who used a regular IV catheter. Moreover, all 5 subjects using SVEI, 3 of 5 using SAID, 2 of 5 using SDOP, and 4 of 5 using CathBot successfully catheterized the baby-arm phantom on their first attempt, while no subject in the control group succeeded on the first attempt. Since SVEI showed the best results, it was selected for a second round of evaluation. In the second experiment, clinicians, including both PIVC experts and general clinicians, were invited to perform PIVC on a realistic baby-arm phantom with 3 trials using SVEI and 3 trials in the conventional way. The results demonstrate that SVEI can bring substantial benefits to both specialists and general clinicians: the average success rate improved significantly from 48.3% to 71.7% when SVEI was used. All experts achieved better or equal results with SVEI compared to the conventional method, and 9 of 12 non-experts also performed better or equally when SVEI was used. Finally, subjective feedback acquired through post-trial questionnaires showed that all devices were highly rated in terms of usability. Overall, the results of this doctoral research support continued investment in the technology to bring the hand-held robotic devices closer to clinical use.
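    All four devices in this thesis trigger on the same signal: the impedance at the needle tip drops sharply when the tip enters blood, which is far more conductive than skin and subcutaneous tissue. The following sketch shows one plausible way to detect that drop; the sampling, window size, impedance magnitudes, and thresholds are assumptions for illustration, not the detector actually implemented in SVEI, SAID, SDOP, or CathBot.

```python
# Illustrative impedance-drop venipuncture detector: flag vessel entry when the
# windowed mean impedance falls well below the tissue-contact baseline.
import numpy as np

def detect_venipuncture(impedance_ohm, drop_ratio=0.4, window=5):
    """Return the sample index where the windowed mean impedance falls below
    drop_ratio * baseline (a puncture-like drop), or None if it never does."""
    z = np.asarray(impedance_ohm, dtype=float)
    baseline = z[:window].mean()                        # tissue-contact baseline
    for i in range(window, len(z)):
        if z[i - window + 1:i + 1].mean() < drop_ratio * baseline:
            return i                                    # e.g. light the LED / stop the motor here
    return None

if __name__ == "__main__":
    # synthetic trace: tissue-like impedance (~50 kOhm) then blood-like (~10 kOhm) after sample 40
    trace = np.r_[np.full(40, 50e3), np.full(20, 10e3)] + np.random.randn(60) * 1e3
    print("venipuncture detected at sample:", detect_venipuncture(trace))
```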

    Augmented reality based real-time subcutaneous vein imaging system

    Get PDF
    A novel augmented-reality-based system for 3D reconstruction and fast imaging of subcutaneous veins is presented. The study was performed to reduce the failure rate and time required in intravenous injection by providing augmented vein structures that are back-projected and superimposed on the skin surface of the hand. Images of the subcutaneous veins are captured by two industrial cameras with additional reflective near-infrared illumination. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and fused for display with the reconstructed veins. The veins and skin surface are both reconstructed in 3D space. Results show that the structures can be precisely back-projected onto the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, vein-matching accuracy, feature-point distance error, processing time, skin-reconstruction accuracy, and augmented display. All experiments are validated with sets of real vein data. The system produces good imaging and augmented-reality results at high speed.
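    The pipeline above reduces to segment, match under the epipolar constraint, and triangulate. A minimal sketch of those steps follows, written with OpenCV as an assumed tool; the threshold-based segmentation stands in for the paper's multiple-feature clustering, and the camera matrices and points are dummy values rather than the calibrated stereo rig.

```python
# Minimal stereo-vein sketch: crude NIR segmentation plus triangulation of matched points.
import cv2
import numpy as np

def segment_veins(nir_image):
    """Stand-in for the multiple-feature clustering segmentation: veins appear
    dark under NIR, so threshold the inverted, smoothed image (Otsu)."""
    blur = cv2.GaussianBlur(nir_image, (5, 5), 0)
    _, mask = cv2.threshold(255 - blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

def triangulate(pts_left, pts_right, P_left, P_right):
    """Triangulate matched vein points (N x 2 each) to 3D from the two camera
    projection matrices."""
    X_h = cv2.triangulatePoints(P_left, P_right,
                                pts_left.T.astype(np.float64),
                                pts_right.T.astype(np.float64))
    return (X_h[:3] / X_h[3]).T                      # N x 3 Euclidean points

if __name__ == "__main__":
    mask = segment_veins((np.random.rand(480, 640) * 255).astype(np.uint8))  # dummy NIR frame
    P_left = np.hstack([np.eye(3), np.zeros((3, 1))])                        # dummy calibration
    P_right = np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])
    pts_l, pts_r = np.array([[320.0, 240.0]]), np.array([[310.0, 240.0]])    # one matched pair
    print("3D point:", triangulate(pts_l, pts_r, P_left, P_right))
```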

    Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach

    Get PDF
    Robotic-assisted minimally invasive surgeries have gained considerable popularity over conventional procedures as they offer many benefits to both surgeons and patients. Nonetheless, they still suffer from some limitations that affect their outcome. One of them is the lack of force feedback, which restricts the surgeon's sense of touch and might reduce precision during a procedure. To overcome this limitation, we propose a novel force estimation approach that combines a vision-based solution with supervised learning to estimate the applied force and provide the surgeon with a suitable representation of it. The proposed solution starts with extracting the geometry of motion of the heart's surface by minimizing an energy functional to recover its 3D deformable structure. A deep network, based on an LSTM-RNN architecture, is then used to learn the relationship between the extracted visual-geometric information and the applied force, and to find an accurate mapping between the two. Our proposed force estimation solution avoids the drawbacks usually associated with force sensing devices, such as biocompatibility and integration issues. We evaluate our approach on phantom and realistic tissues and report an average root-mean-square error of 0.02 N.
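    The core of the method is a recurrent regression from a sequence of visual-geometric features to an applied force. The sketch below shows that mapping in PyTorch under assumed dimensions (feature size, hidden size, 3-axis force output); it is an illustration of the idea, not the authors' exact network.

```python
# Hedged sketch of a neuro-recurrent force estimator: per-frame geometry features
# of the deforming surface go through an LSTM, and the last hidden state is
# regressed to a force vector.
import torch
import torch.nn as nn

class ForceLSTM(nn.Module):
    def __init__(self, feat_dim=64, hidden=128, force_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, force_dim)     # hidden state -> force estimate

    def forward(self, feats):                        # feats: (batch, time, feat_dim)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                 # force at the last frame

if __name__ == "__main__":
    model = ForceLSTM()
    feats = torch.randn(8, 20, 64)                   # 8 clips, 20 frames of geometry features
    force = torch.randn(8, 3)                        # ground-truth forces from a reference sensor
    loss = nn.MSELoss()(model(feats), force)         # supervised regression loss
    loss.backward()
    print("training loss:", float(loss))
```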

    Optically Sensorized Tendons for Articulate Robotic Needles

    Get PDF
    This study proposes an optically sensorized tendon composed of a 195 µm diameter, high-strength, polarization-maintaining (PM) fiber Bragg grating (FBG) optical fiber, which resolves the cross-sensitivity issue of conventional FBGs. The bare fiber tendon is locally reinforced with a 250 µm diameter Kevlar bundle, enhancing the level of force transmission and enabling high-curvature tendon routing. The performance of the sensorized tendons is explored in terms of strength (higher than 13 N for the bare PM-FBG fiber tendon, up to 40 N for the Kevlar-reinforced tendon under tensile loading), strain sensitivity (0.127 percent strain per newton for the bare PM-FBG fiber tendon, 0.04 percent strain per newton for the Kevlar-reinforced tendon), temperature stability, and friction-independent sensing behavior. Subsequently, the tendon is instrumented within an 18 Ga articulate NiTi cannula and evaluated under static and dynamic loading conditions, and within phantoms of varying stiffness for tissue-stiffness estimation. The results from this series of experiments validate the effectiveness of the proposed tendon as a bi-modal sensing and actuation component for robot-assisted minimally invasive surgical instruments.
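    A short worked example of how the reported strain sensitivities (0.127 percent strain per newton bare, 0.04 percent strain per newton Kevlar-reinforced) convert a Bragg wavelength shift into tendon tension. The nominal Bragg wavelength and photo-elastic coefficient used here are typical textbook values assumed for illustration, not values taken from the paper.

```python
# Tendon tension from an FBG wavelength shift, using the abstract's sensitivities.
PHOTOELASTIC = 0.22          # assumed effective photo-elastic coefficient of silica
LAMBDA_BRAGG_NM = 1550.0     # assumed nominal Bragg wavelength

def tension_from_shift(delta_lambda_nm, percent_strain_per_newton):
    """Estimate tendon tension (N) from a measured Bragg wavelength shift (nm)."""
    strain = delta_lambda_nm / (LAMBDA_BRAGG_NM * (1.0 - PHOTOELASTIC))   # axial strain
    return 100.0 * strain / percent_strain_per_newton                     # strain% / (%/N)

if __name__ == "__main__":
    shift_nm = 1.5  # example measured shift
    print("bare PM-FBG tendon:  %.2f N" % tension_from_shift(shift_nm, 0.127))
    print("Kevlar-reinforced:   %.2f N" % tension_from_shift(shift_nm, 0.04))
```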

    Ultrasound Guidance in Perioperative Care

    Get PDF

    The Role of Visualization, Force Feedback, and Augmented Reality in Minimally Invasive Heart Valve Repair

    Get PDF
    New cardiovascular techniques have been developed to address the unique requirements of high-risk, elderly surgical patients with heart valve disease by avoiding both sternotomy and cardiopulmonary bypass. However, these technologies pose new challenges in visualization, force application, and intracardiac navigation. Force feedback and augmented reality (AR) can be applied to minimally invasive mitral valve repair and transcatheter aortic valve implantation (TAVI) techniques to potentially surmount these challenges. Our study demonstrated shorter operative times with three-dimensional (3D) visualization compared to two-dimensional (2D) visualization; however, both experts and novices applied significantly more force to cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery. Subsequently, using an innovative robotics-assisted surgical system design, we determined that direct haptic feedback may improve both expert and trainee performance with robotics-assisted techniques. We determined that during robotics-assisted mitral valve annuloplasty, the use of either visual or direct force feedback resulted in a significant decrease in forces applied to cardiac tissue compared to robotics-assisted mitral valve annuloplasty without force feedback. We presented NeoNav, an AR-enhanced echocardiography intracardiac guidance system for NeoChord off-pump mitral valve repair. Our study demonstrated superior tool navigation accuracy, significantly shorter navigation times, and reduced potential for injury with AR-enhanced intracardiac navigation for off-pump transapical mitral valve repair with neochordae implantation. In addition, we applied the NeoNav system as a safe and inexpensive alternative imaging modality for TAVI guidance. We found that our proposed AR guidance system may achieve similar or better results than the current standard of care, contrast-enhanced fluoroscopy, while eliminating the use of nephrotoxic contrast and ionizing radiation. These results suggest that the addition of force feedback and AR image guidance can improve both surgical performance and safety during minimally invasive robotics-assisted and beating-heart valve surgery, respectively.

    Phlebot: The Robotic Phlebotomist

    Get PDF
    Phlebotomy is a routine task, performed over a billion times annually in the United States alone, that is essential for proper diagnosis and treatment. We designed and constructed Phlebot, a robotic assistive device that uses near-infrared imaging and force feedback to guide a needle into a forearm vein for blood sample collection or intravenous catheterization. Through initial validation on phantoms, we show that it is feasible to automate phlebotomy reliably. We envision the device to be a first major step toward more affordable point-of-care testing and diagnostic healthcare systems. In the long term, we expect that Phlebot will expedite healthcare delivery and drastically reduce needlestick injuries, instances of hemolysis, and infections caused by blood-borne pathogens.
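    The abstract only says the device combines near-infrared imaging with force feedback; one common way a force signal can gate needle advance is the build-up-then-drop signature seen at vessel puncture. The sketch below illustrates that generic idea under assumed thresholds and sampling; it is not Phlebot's actual control logic.

```python
# Illustrative force-signature gate for needle advance: stop the insertion axis
# when the force drops suddenly after exceeding a minimum peak (puncture-like event).
import numpy as np

def puncture_index(force_n, min_peak_n=0.8, drop_n=0.3):
    """Return the sample index where force drops by drop_n after exceeding
    min_peak_n, or None if no puncture-like signature appears."""
    peak = 0.0
    for i, f in enumerate(np.asarray(force_n, dtype=float)):
        peak = max(peak, f)
        if peak >= min_peak_n and peak - f >= drop_n:
            return i                      # command the insertion motor to stop here
    return None

if __name__ == "__main__":
    ramp = np.linspace(0.0, 1.2, 60)                  # force builds as the vein wall deforms
    trace = np.r_[ramp, np.full(20, 0.4)]             # sudden relaxation after puncture
    print("stop advance at sample:", puncture_index(trace))
```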