
    A learning robot for cognitive camera control in minimally invasive surgery

    Background: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. The majority of surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance, but they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons.
    Methods: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. A VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon's learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR.
    Results: The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s, and finally 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%.
    Conclusions: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.

    Multicamera 3D Viewpoint Adjustment for Robotic Surgery via Deep Reinforcement Learning

    While robot-assisted minimally invasive surgery (RMIS) procedures afford a variety of benefits over open surgery and manual laparoscopic operations (including increased tool dexterity; reduced patient pain, incision size, trauma, and recovery time; and lower infection rates [1]), lack of spatial awareness remains an issue. Typical laparoscopic imaging can lack sufficient depth cues, and haptic feedback, if provided, rarely reflects realistic tissue-tool interactions. This work is part of a larger ongoing research effort to reconstruct 3D surfaces using multiple viewpoints in RMIS to increase visual perception. The manual placement and adjustment of multicamera systems in RMIS are nonideal and prone to error [2], and other autonomous approaches focus on tool tracking and do not consider reconstruction of the surgical scene [3,4,5]. The group's previous work investigated a novel, context-aware autonomous camera positioning method [6], which incorporated both tool location and scene coverage for multiple camera viewpoint adjustments. In this paper, the authors expand upon this prior work by implementing a streamlined deep reinforcement learning approach between optimal viewpoints calculated using the prior method [6], which encourages discovery of otherwise unobserved and additional camera viewpoints. Combining the framework and robustness of the previous work with the efficiency and additional viewpoints of the augmentations presented here results in improved performance and scene coverage, promising for real-time implementation.
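    The abstract above describes selecting camera viewpoints via deep reinforcement learning. As a hedged illustration only (the paper's actual states, actions, and reward are not given here), the following toy sketch shows the underlying RL idea with tabular Q-learning over a few discrete candidate viewpoints and an invented reward that prefers the viewpoint nearest the tool:

```python
import random

# Hypothetical sketch: tabular Q-learning over a small set of discrete
# camera viewpoints. States, actions, and the reward model are invented
# for illustration and do not reproduce the paper's method.
VIEWPOINTS = [0, 1, 2, 3]          # candidate camera poses
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def reward(view, tool_pos):
    """Toy reward: prefer the viewpoint closest to the tool."""
    return -abs(view - tool_pos)

def step(q, state, tool_pos):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.choice(VIEWPOINTS)
    else:
        action = max(VIEWPOINTS, key=lambda a: q[(state, a)])
    r = reward(action, tool_pos)
    best_next = max(q[(action, a)] for a in VIEWPOINTS)
    # standard Q-learning update
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])
    return action

random.seed(0)
q = {(s, a): 0.0 for s in VIEWPOINTS for a in VIEWPOINTS}
state = 0
for _ in range(500):
    state = step(q, state, tool_pos=2)

# after training, the greedy choice from any state tracks the tool position
best = max(VIEWPOINTS, key=lambda a: q[(0, a)])
```

    In the real system the state would come from images and the reward from scene coverage and tool visibility, but the update rule is the same.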

    Artificial intelligence and automation in endoscopy and surgery

    Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures, leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.

    ์ž„์ƒ์ˆ ๊ธฐ ํ–ฅ์ƒ์„ ์œ„ํ•œ ๋”ฅ๋Ÿฌ๋‹ ๊ธฐ๋ฒ• ์—ฐ๊ตฌ: ๋Œ€์žฅ๋‚ด์‹œ๊ฒฝ ์ง„๋‹จ ๋ฐ ๋กœ๋ด‡์ˆ˜์ˆ  ์ˆ ๊ธฐ ํ‰๊ฐ€์— ์ ์šฉ

    Doctoral dissertation, Seoul National University Graduate School, Interdisciplinary Program in Bioengineering, August 2020 (advisor: Hee Chan Kim). This paper presents deep learning-based methods for improving the performance of clinicians. Novel methods were applied to the following two clinical cases and the results were evaluated. In the first study, a deep learning-based polyp classification algorithm was developed to improve the clinical performance of endoscopists during colonoscopy diagnosis. Colonoscopy is the main method for diagnosing adenomatous polyps, which can develop into colorectal cancer, and hyperplastic polyps. The classification algorithm was developed using a convolutional neural network (CNN) trained with colorectal polyp images taken by narrow-band imaging colonoscopy. The proposed method is built around automatic machine learning (AutoML), which searches for the optimal CNN architecture for colorectal polyp image classification and trains the weights of that architecture. In addition, the gradient-weighted class activation mapping technique was used to overlay the probabilistic basis of the prediction result on the polyp location to aid endoscopists visually. To verify the improvement in diagnostic performance, the efficacy of endoscopists with varying proficiency levels was compared with and without the aid of the proposed polyp classification algorithm. The results confirmed that, on average, diagnostic accuracy was significantly improved and diagnosis time was shortened in all proficiency groups. In the second study, a surgical instrument tracking algorithm for robotic surgery video was developed, and a model was proposed for quantitatively evaluating a surgeon's skill based on the acquired motion information of the surgical instruments. The movement of surgical instruments is the main component of surgical skill evaluation.
    Therefore, the focus of this study was to develop an automatic surgical instrument tracking algorithm and to overcome the limitations of previous methods. An instance segmentation framework was developed to solve the instrument occlusion issue, and a tracking framework composed of a tracker and a re-identification algorithm was developed to maintain the identity of the surgical instruments being tracked in the video. In addition, algorithms for detecting the instrument tip position and the arm indicator were developed to capture the movement of devices specific to robotic surgery video. The performance of the proposed method was evaluated by measuring the difference between the predicted tip position and the ground-truth position of the instruments using root mean square error, area under the curve, and Pearson's correlation analysis. Furthermore, motion metrics were calculated from the movement of the surgical instruments, and a machine learning-based robotic surgical skill evaluation model was developed from these metrics. These models were used to evaluate clinicians, and the results were similar across the developed evaluation models, the Objective Structured Assessment of Technical Skill (OSATS), and the Global Evaluative Assessment of Robotic Surgery (GEARS) evaluation methods. In this study, deep learning technology was applied to colorectal polyp images for polyp classification and to robotic surgery videos for surgical instrument tracking, and the resulting improvement in clinical performance was evaluated and verified.
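    The gradient-weighted class activation mapping (Grad-CAM) technique mentioned above can be sketched with a few lines of NumPy. This is a minimal illustration of the core computation only: `feature_maps` and `gradients` stand in for a CNN's last convolutional activations and the gradient of the predicted class score with respect to them, and the arrays here are synthetic, not real model outputs.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from conv activations and their gradients.

    feature_maps, gradients: arrays of shape (C, H, W).
    """
    # channel weights: global-average-pool the gradients over space
    weights = gradients.mean(axis=(1, 2))                       # shape (C,)
    # weighted sum of feature maps, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0)
    # normalise to [0, 1] so the map can be overlaid on the input image
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

rng = np.random.default_rng(0)
fmaps = rng.random((8, 7, 7))   # synthetic activations
grads = rng.random((8, 7, 7))   # synthetic gradients
heatmap = grad_cam(fmaps, grads)
```

    Upsampling `heatmap` to the input resolution and blending it over the polyp image gives the visual explanation described in the abstract.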

    Artificial Intelligence for Emerging Technology in Surgery: Systematic Review and Validation

    Surgery is a high-risk form of therapy and is associated with post-trauma complications of longer hospital stay, estimated blood loss and long duration of surgery. Reports have suggested that over 2.5% of patients die during or after an operation. This paper presents a systematic review of previous research on artificial intelligence (AI) in surgery, analyzing the results with suitable software to validate the research by obtaining the same or contrary results. Six published research articles from across three continents have been reviewed. These articles have been re-validated using software including SPSS and MedCalc to obtain statistical features such as the mean, standard deviation, significance level, and standard error. From the significance values, the experiments are then classified according to the null (p > 0.05) or alternative (p < 0.05) hypothesis. The results obtained from the analysis suggest a significant difference in operating time, docking time, staging time, and estimated blood loss, but no significant difference in length of hospital stay, recovery time, or lymph nodes harvested between robot-assisted surgery using AI and conventional surgery. From these evaluations, this research suggests that AI-assisted surgery improves over conventional surgery as a safer and more efficient system of surgery with minimal or no complications.
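    The classification step described above, splitting outcome metrics by the conventional p < 0.05 threshold, can be sketched as follows. The p-values below are illustrative placeholders, not the review's actual numbers:

```python
# Hedged sketch of the significance classification used in the review:
# metrics with p < 0.05 reject the null hypothesis (significant difference
# between robot-assisted and conventional surgery); the rest do not.
P_THRESHOLD = 0.05

results = {                      # metric -> p-value (placeholder values)
    "operating time": 0.01,
    "docking time": 0.03,
    "length of hospital stay": 0.40,
    "recovery time": 0.22,
}

def classify(p_values, threshold=P_THRESHOLD):
    """Split metrics into significant and non-significant sets."""
    significant = [m for m, p in p_values.items() if p < threshold]
    non_significant = [m for m, p in p_values.items() if p >= threshold]
    return significant, non_significant

sig, nonsig = classify(results)
```

    In practice the p-values themselves would come from the t-tests or equivalent procedures run in SPSS or MedCalc.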

    Deep Homography Prediction for Endoscopic Camera Motion Imitation Learning

    In this work, we investigate laparoscopic camera motion automation through imitation learning from retrospective videos of laparoscopic interventions. A novel method is introduced that learns to augment a surgeon's behavior in image space through object-motion-invariant image registration via homographies. Contrary to existing approaches, no geometric assumptions are made and no depth information is necessary, enabling immediate translation to a robotic setup. Deviating from the dominant approach in the literature, which consists of following a surgical tool, we do not handcraft the objective and no priors are imposed on the surgical scene, allowing the method to discover unbiased policies. In this new research field, significant improvements are demonstrated over two baselines on the Cholec80 and HeiChole datasets, showcasing an improvement of 47% over camera motion continuation. The method is further shown to predict camera motion correctly on the public motion classification labels of the AutoLaparo dataset. All code is made accessible on GitHub.
    Comment: Early accepted at MICCAI 202
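    The registration primitive the abstract relies on is the planar homography: a 3×3 matrix mapping pixel coordinates between views. A minimal NumPy sketch of applying such a matrix to points (the matrix below is a made-up translation, not a learned prediction):

```python
import numpy as np

def warp_points(H, points):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # to homogeneous
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]                   # back to 2D

H = np.array([[1.0, 0.0, 10.0],   # shift x by 10 px
              [0.0, 1.0,  5.0],   # shift y by 5 px
              [0.0, 0.0,  1.0]])
pts = np.array([[0.0, 0.0], [100.0, 50.0]])
moved = warp_points(H, pts)
```

    A predicted homography between consecutive frames encodes the camera motion directly in image space, which is why no depth information is required.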

    Soft Robot-Assisted Minimally Invasive Surgery and Interventions: Advances and Outlook

    Since the emergence of soft robotics around two decades ago, research interest in the field has escalated rapidly. It is fuelled by the industry's appreciation of the wide range of soft materials available that can be used to create highly dexterous robots with adaptability far beyond what can be achieved with rigid-component devices. The ability, inherent in soft robots, to compliantly adapt to the environment has significantly sparked interest from the surgical robotics community. This article provides an in-depth overview of recent progress and outlines the remaining challenges in the development of soft robotics for minimally invasive surgery.

    Gesture Recognition and Control for Semi-Autonomous Robotic Assistant Surgeons

    The next stage of robotics development is to introduce autonomy and cooperation with human agents in tasks that require high levels of precision and/or exert considerable physical strain. To guarantee the highest possible safety standards, the best approach is to devise a deterministic automaton that performs identically for each operation. Clearly, such an approach inevitably fails to adapt to changing environments or different human companions. In a surgical scenario, the highest variability occurs in the timing of the different actions performed within the same phase. This thesis explores the solutions adopted in pursuing automation in robotic minimally invasive surgery (R-MIS) and presents a novel cognitive control architecture that uses a multi-modal neural network, trained on a cooperative task performed by human surgeons, to produce an action segmentation that provides the required timing for actions, while maintaining full phase execution control via a deterministic Supervisory Controller and full execution safety via a velocity-constrained Model-Predictive Controller.

    25th International Congress of the European Association for Endoscopic Surgery (EAES) Frankfurt, Germany, 14-17 June 2017 : Oral Presentations

    Introduction: Ouyang has recently proposed hiatal surface area (HSA) calculation by multiplanar multislice computed tomography (MDCT) scan as a useful tool for planning treatment of hiatus defects with hiatal hernia (HH), with or without gastroesophageal reflux (MRGE). Preoperative upper endoscopy or barium swallow cannot predict the HSA and pillar conditions. The aim was to assess the efficacy of MDCT calculation of HSA for planning the best approach to hiatal defect treatment. Methods: We retrospectively analyzed 25 patients, candidates for laparoscopic antireflux surgery as primary surgery or for hiatus repair concomitant with or after bariatric surgery. Patients were analyzed preoperatively and after one-year follow-up by MDCT scan measurement of the esophageal hiatus surface. Five normal patients were enrolled as a control group. The intraoperative HSA calculation was performed after complete dissection of the area, which was considered a triangle. Postoperative CT scan was done after 12 months or any time reflux symptoms appeared. Results: (1) Mean HSA in control patients with no HH and no MRGE was cm2, and similar in non-complicated patients with previous LSG and cruroplasty. (2) Mean HSA in patients who were candidates for cruroplasty was 7.40 cm2. (3) Mean HSA in patients who were candidates for redo cruroplasty for recurrence was 10.11 cm2. Discussion: MDCT scan offers the possibility of obtaining an objective measurement of the HSA and its correlation with endoscopic findings and symptoms. The preoperative information allows discussing the proper technique with the patient when an HSA > 5 cm2 is detected. During follow-up, a correlation between symptoms and failure of cruroplasty can be assessed. Conclusions: MDCT scan seems to be an effective non-invasive method to plan hiatal defect treatment and to check for potential recurrence during follow-up. Future research should correlate imaging data with intraoperative findings in larger series.
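    The abstract treats the dissected hiatal area as a triangle, so the HSA reduces to a triangle-area computation from three measured points. A hedged sketch (coordinates in cm; the example values are invented for illustration, not taken from the study):

```python
def triangle_area(a, b, c):
    """Area of triangle a-b-c via the shoelace formula (cm^2 for cm input)."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

# e.g. pillars 4 cm apart along the base, apex 3.7 cm above the base line
hsa = triangle_area((0.0, 0.0), (4.0, 0.0), (2.0, 3.7))
```

    With these illustrative measurements the formula gives 7.4 cm2, on the order of the mean HSA reported for cruroplasty candidates.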