
    A sensorized modular training platform to reduce vascular damage in endovascular surgery

    Purpose: Endovascular interventions require intense practice to develop sufficient dexterity in catheter handling within the human body. We therefore present a modular training platform featuring 3D-printed vessel phantoms with patient-specific anatomy and integrated piezoresistive impact force sensing of instrument interaction at clinically relevant locations, enabling feedback-based skill training to detect and reduce damage to the delicate vascular wall. Methods: The platform was fabricated and then evaluated in a user study with medical (n=10) and non-medical (n=10) users. The users had to navigate a guidewire and catheter through a course of three modules, including an aneurysmatic abdominal aorta, while impact force and completion time were recorded. Finally, a questionnaire was administered. Results: The platform supported more than 100 runs, in which it proved capable of distinguishing between users of different experience levels. Medical experts in vascular and visceral surgery performed strongly on the platform. Medical students were shown to improve completion time and impact force over five runs. The platform was well received and rated as promising for medical education, despite higher perceived friction compared with real human vessels. Conclusion: We investigated an authentic patient-specific training platform with integrated sensor-based feedback for individual skill training in endovascular surgery. The presented phantom-manufacturing method is easily applicable to arbitrary patient-specific imaging data. Further work will address the implementation of smaller vessel branches, as well as real-time feedback and camera imaging, to further improve the training experience.
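
    The feedback loop described above — sampling impact force at the sensorized locations and recording completion time per run — could be logged along the following lines. This is a minimal illustrative sketch, not the authors' implementation: the sampling rate, the contact-force threshold, and the simulated read_impact_force() stub are all assumptions standing in for the real piezoresistive sensor I/O.

```python
import time
import random  # stands in for real sensor I/O in this sketch

# Hypothetical sampling parameters -- not taken from the paper.
SAMPLE_HZ = 100
FORCE_THRESHOLD_N = 0.5  # assumed threshold for a "wall contact" event


def read_impact_force() -> float:
    """Placeholder for one piezoresistive sensor reading (in newtons).

    A real platform would read an ADC over serial/I2C here; this stub
    returns synthetic values so the sketch runs stand-alone.
    """
    return max(0.0, random.gauss(0.2, 0.3))


def record_run(duration_s: float = 5.0) -> dict:
    """Record one training run: per-sample forces plus completion time."""
    samples = []
    start = time.monotonic()
    while time.monotonic() - start < duration_s:
        samples.append(read_impact_force())
        time.sleep(1.0 / SAMPLE_HZ)
    elapsed = time.monotonic() - start
    contacts = sum(f > FORCE_THRESHOLD_N for f in samples)
    return {
        "completion_time_s": elapsed,
        "peak_force_n": max(samples),
        "mean_force_n": sum(samples) / len(samples),
        "contact_events": contacts,
    }


if __name__ == "__main__":
    print(record_run(duration_s=2.0))
```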

    Comparative validation of machine learning algorithms for surgical workflow and skill analysis with the HeiChole benchmark

    Purpose: Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems. These systems could increase the safety of the operation through context-sensitive warnings and semi-autonomous robotic assistance, or improve the training of surgeons via data-driven feedback. In surgical workflow analysis, up to 91% average precision has been reported for phase recognition on an open, single-center video dataset. In this work, we investigated the generalizability of phase recognition algorithms in a multicenter setting, including more difficult recognition tasks such as surgical action and surgical skill. Methods: To achieve this goal, a dataset with 33 laparoscopic cholecystectomy videos from three surgical centers with a total operation time of 22 h was created. Labels included framewise annotation of seven surgical phases with 250 phase transitions, 5514 occurrences of four surgical actions, 6980 occurrences of 21 surgical instruments from seven instrument categories, and 495 skill classifications in five skill dimensions. The dataset was used in the 2019 international Endoscopic Vision Challenge, sub-challenge for surgical workflow and skill analysis, in which 12 research teams trained and submitted their machine learning algorithms for recognition of phase, action, instrument and/or skill assessment. Results: F1-scores for phase recognition ranged from 23.9% to 67.7% (n = 9 teams) and for instrument presence detection from 38.5% to 63.8% (n = 8 teams), but for action recognition only from 21.8% to 23.3% (n = 5 teams). The average absolute error for skill assessment was 0.78 (n = 1 team). Conclusion: Surgical workflow and skill analysis are promising technologies to support the surgical team, but as our comparison of machine learning algorithms shows, there is still room for improvement. This novel HeiChole benchmark can be used for comparable evaluation and validation of future work. In future studies, it is of utmost importance to create more open, high-quality datasets to allow the development of artificial intelligence and cognitive robotics in surgery.
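
    The phase-recognition F1-scores quoted above are computed framewise over whole videos. Below is a minimal sketch of such an evaluation, assuming integer phase labels per frame and macro-averaging over the phases present; the benchmark's exact averaging scheme may differ.

```python
import numpy as np
from sklearn.metrics import f1_score


def framewise_phase_f1(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Macro-averaged F1 over the phase labels that occur in the data.

    HeiChole annotates seven phases; averaging over whichever labels
    occur is one plausible convention among several.
    """
    return f1_score(y_true, y_pred, average="macro")


# Toy example: 10 frames of ground-truth phases vs. a prediction that
# switches phases slightly early or late at the transitions.
truth = np.array([0, 0, 0, 1, 1, 2, 2, 2, 3, 3])
pred = np.array([0, 0, 1, 1, 1, 2, 2, 3, 3, 3])
print(f"framewise macro F1: {framewise_phase_f1(truth, pred):.3f}")
```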

    Deep learning for semantic segmentation of organs and tissues in laparoscopic surgery

    Semantic segmentation of organs and tissue types is an important sub-problem in image-based scene understanding for laparoscopic surgery and a prerequisite for context-aware assistance and cognitive robotics. Deep learning (DL) approaches are prominently applied to the segmentation and tracking of laparoscopic instruments. This work compares different combinations of neural networks, loss functions, and training strategies applied to the semantic segmentation of different organs and tissue types in human laparoscopic images, in order to investigate their applicability as components in cognitive systems. TernausNet-11 trained with the Soft-Jaccard loss and a pretrained, trainable encoder performs best with regard to segmentation quality (78.31% mean Intersection over Union [IoU]) and inference time (28.07 ms) on a single GTX 1070 GPU.
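
    The Soft-Jaccard loss named above is a differentiable relaxation of the IoU used as a training objective. The following is a sketch of one common formulation in PyTorch; the study's exact variant (e.g. a combination with cross-entropy) may differ.

```python
import torch


def soft_jaccard_loss(probs: torch.Tensor, target: torch.Tensor,
                      eps: float = 1e-7) -> torch.Tensor:
    """Differentiable IoU relaxation: 1 - |P*T| / (|P| + |T| - |P*T|).

    probs  -- predicted class probabilities in [0, 1], shape (N, C, H, W)
    target -- one-hot ground truth of the same shape
    """
    dims = (0, 2, 3)  # sum over batch and spatial dims, keep classes
    intersection = (probs * target).sum(dim=dims)
    union = probs.sum(dim=dims) + target.sum(dim=dims) - intersection
    # Mean of per-class (1 - soft IoU); eps guards against empty classes.
    return (1.0 - (intersection + eps) / (union + eps)).mean()


# Toy usage: 2 classes, 4x4 images, random logits.
logits = torch.randn(1, 2, 4, 4)
target = torch.nn.functional.one_hot(
    torch.randint(0, 2, (1, 4, 4)), num_classes=2
).permute(0, 3, 1, 2).float()
loss = soft_jaccard_loss(logits.softmax(dim=1), target)
print(loss.item())
```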