
    An Asynchronous Simulation Framework for Multi-User Interactive Collaboration: Application to Robot-Assisted Surgery

    The field of surgery is continually evolving, as there is always room for improvement in the post-operative health of the patient as well as the comfort of the Operating Room (OR) team. While the success of surgery is contingent upon the skills of the surgeon and the OR team, the use of specialized robots has been shown to improve surgery-related outcomes in some cases. These outcomes are currently measured using a wide variety of metrics that include patient pain and recovery, the surgeon's comfort, the duration of the operation and the cost of the procedure. Additional research is needed to better understand the optimal criteria for benchmarking surgical performance. Presently, surgeons are trained to perform robot-assisted surgeries using interactive simulators. However, in the absence of well-defined performance standards, these simulators focus primarily on the simulation of the operative scene and not on the complexities associated with the multiple inputs to a real-world surgical procedure. Because interactive simulators are typically designed for specific robots that perform a small number of tasks controlled by a single user, they are inflexible in terms of their portability to different robots and the inclusion of multiple operators (e.g., nurses, medical assistants). Additionally, while most simulators provide high-quality visuals, simplification techniques are often employed to avoid stability issues in physics computation, contact dynamics and multi-manual interaction. This study addresses the limitations of existing simulators by outlining the specifications required to develop techniques that mimic real-world interactions and collaboration. Moreover, this study focuses on the inclusion of distributed control, shared task allocation and assistive feedback (through machine learning and secondary and tertiary operators) alongside the primary human operator.
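The asynchronous, multi-operator interaction such a framework calls for can be sketched with Python's asyncio: operators submit commands on their own schedules while a fixed-rate physics loop consumes whatever has arrived. All names, rates and data structures below are illustrative assumptions, not details of the framework described in the abstract.

```python
import asyncio

# Minimal sketch of an asynchronous multi-operator simulation loop.
# SceneState, the operator roles and all timings are invented for illustration.

class SceneState:
    def __init__(self):
        self.tool_positions = {}  # operator name -> last commanded position

async def operator_task(state, name, commands, delay):
    # Each operator submits commands on its own schedule,
    # decoupled from the physics update rate.
    for cmd in commands:
        await asyncio.sleep(delay)
        state.tool_positions[name] = cmd

async def physics_loop(state, steps, dt=0.01):
    # Fixed-rate physics stepping consumes whatever commands have arrived.
    for _ in range(steps):
        await asyncio.sleep(dt)
        # ... collision/contact resolution over state.tool_positions ...
    return dict(state.tool_positions)

async def main():
    state = SceneState()
    ops = [
        operator_task(state, "surgeon", [(0, 0, 1), (0, 0, 2)], 0.02),
        operator_task(state, "assistant", [(1, 0, 0)], 0.03),
    ]
    result, *_ = await asyncio.gather(physics_loop(state, steps=10), *ops)
    return result

final = asyncio.run(main())
print(final)
```

Because the operators and the physics loop are independent coroutines, adding a third operator is a one-line change, which is the kind of flexibility the abstract argues single-user simulators lack.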

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotic systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    Image guided robotic assistance for the diagnosis and treatment of tumor

    The aim of this thesis is to demonstrate the feasibility and potential of introducing robotics and image guidance into the overall oncologic workflow, from diagnosis to treatment. The popularity of robotics in the operating room has grown in recent years. Currently the most popular system is the da Vinci telemanipulator (Intuitive Surgical), a master-slave platform for minimally invasive surgery used in several surgical fields such as urology, general surgery, gynecology and cardiothoracic surgery. An accurate study of this system, from a technological point of view, has been conducted, addressing its drawbacks and advantages. The da Vinci system creates an immersive operating environment for the surgeon by providing both high-quality stereo visualization and a human-machine interface that directly connects the surgeon's hands to the motion of the surgical tool tips inside the patient's body. It has undoubted advantages for the surgeon's work and the patient's health, at least for some interventions, while its very high cost leaves many doubts about its price-benefit ratio. In the robotic surgery field many researchers are working on the optimization and miniaturization of robot mechanics, while others are trying to obtain smart functionalities to realize robotic systems that, "knowing" the patient's anatomy from radiological images, can assist the surgeon in an active way. Regarding the second point, image-guided systems can be used to plan and control the motion of medical robots and to provide the surgeon with pre-operative and intra-operative images, with augmented reality visualization, to enhance his/her perceptual capacities and, as a consequence, improve the quality of treatments. To demonstrate this thesis, several prototypes have been designed, implemented and tested.
The development of image-guided medical devices comprising augmented reality, virtual navigation and robotic surgical features requires several problems to be addressed. The first are the choice of the robotic platform and of the image source to employ. An industrial anthropomorphic arm has been used as the testing platform. The idea of integrating industrial robot components into the clinical workflow has been supported by the da Vinci technical analysis. The algorithms and methods developed, regarding in particular robot calibration, are based on literature theories and on easy integration into the clinical scenario, and can be adapted to any anthropomorphic arm. In this way this work can be integrated with lightweight robots, for industrial or clinical use, able to work in close contact with humans, which will become numerous in the near future. Regarding the medical image source, it was decided to work with ultrasound imaging. Two-dimensional ultrasound imaging is widely used in clinical practice because it is not dangerous for the patient, inexpensive, compact and highly flexible, allowing users to study many anatomic structures. It is routinely used for diagnosis and as guidance in percutaneous treatments. However, the use of 2D ultrasound imaging has some disadvantages that demand great skill from the user: the clinician must mentally integrate many images to reconstruct a complete idea of the anatomy in 3D. Furthermore, freehand control of the probe makes it difficult to identify anatomic positions and orientations and to reposition the probe to reach a particular location. To overcome these problems, an image-guided system has been developed that fuses real-time 2D US images with routine CT or MRI 3D images, previously acquired from the patient, to enhance the clinician's orientation and probe guidance.
The implemented algorithms for robot calibration and US image guidance have been used to realize two applications responding to specific clinical needs: the first to speed up the execution of routine and very frequent procedures such as percutaneous biopsy or ablation; the second to improve a completely non-invasive type of treatment for solid tumors, HIFU (High-Intensity Focused Ultrasound). An ultrasound-guided robotic system has been developed to assist the clinician in executing complicated biopsies, or percutaneous ablations, in particular for deep abdominal organs. An integrated system was developed that provides the clinician with two types of assistance: a mixed-reality visualization allows accurate and easy planning of the needle trajectory and verification of target reaching, while the robot arm, equipped with a six-degree-of-freedom force sensor, allows precise positioning of the needle holder and lets the clinician adjust, by means of cooperative control, the planned trajectory to overcome needle deflection and target motion. The second application consists of an augmented reality navigation system for HIFU treatment. HIFU is a completely non-invasive method for the treatment of solid tumors, hemostasis and other vascular conditions in human tissues. The technology for HIFU treatments is still evolving and the systems available on the market have some limitations and drawbacks. A disadvantage emerging from our experience with the machinery available in our hospital (the JC200 therapeutic system by Haifu (HIFU) Tech Co., Ltd, Chongqing), which is similar to other analogous machines, is the long time required to perform the procedure, due to the difficulty of finding the target using the remote motion of an ultrasound probe under the patient. This problem has been addressed by developing an augmented reality navigation system that enhances US guidance during HIFU treatments, allowing easy target localization.
The system was implemented using an additional freehand ultrasound probe coupled with a localizer and CT-fused imaging. It offers a simple and economical solution for easy HIFU target localization. This thesis demonstrates the utility and usability of robots for the diagnosis and treatment of tumors; in particular, the combination of automatic positioning and cooperative control allows the surgeon and the robot to work in synergy. Further, the work demonstrates the feasibility and potential of a mixed-reality navigation system to facilitate target localization and consequently reduce session times, increase the number of possible diagnoses/treatments and decrease the risk of potential errors. The proposed solutions for the integration of robotics and image guidance into the overall oncologic workflow take into account currently available technologies, traditional clinical procedures and cost minimization.
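The cooperative control described above, in which the clinician adjusts the needle trajectory by pushing against the force-sensed robot arm, is commonly realized as admittance control: measured force is mapped to a commanded velocity. The sketch below is a minimal illustration of that idea; the gain and deadband values are invented, not the thesis's parameters.

```python
import numpy as np

# Illustrative admittance-style cooperative control step: the arm yields to
# forces measured at the six-axis sensor. Gain and deadband are made-up values.

def cooperative_velocity(force, gain=0.002, deadband=1.0):
    """Map a measured force (N) to a commanded Cartesian velocity (m/s).

    Forces below the deadband are ignored so that sensor noise does not
    move the needle holder.
    """
    f = np.asarray(force, dtype=float)
    if np.linalg.norm(f) < deadband:
        return np.zeros(3)
    return gain * f

# A 5 N push along x yields a slow compliant motion along x.
v = cooperative_velocity([5.0, 0.0, 0.0])
print(v)
```

The deadband is what lets the robot hold the planned trajectory rigidly until the clinician deliberately applies a corrective force.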

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Image-guided port placement for minimally invasive cardiac surgery

    Minimally invasive surgery is becoming popular for a number of interventions. The use of robotic surgical systems in coronary artery bypass intervention offers many benefits to patients, but is limited by remaining challenges in port placement. Choosing the entry ports for the robotic tools has a large impact on the outcome of the surgery, and can be assisted by pre-operative planning and intra-operative guidance techniques. In this thesis, pre-operative 3D computed tomography (CT) imaging is used to plan minimally invasive robotic coronary artery bypass (MIRCAB) surgery. Using a patient database, port placement optimization routines were implemented and validated. Computed port placement configurations approximated past expert-chosen configurations with an error of 13.7 ± 5.1 mm. Following optimization, statistical classification was used to assess patient candidacy for MIRCAB. Various pattern recognition techniques were used to predict MIRCAB success, and could be used in the future to reduce conversion rates to conventional open-chest surgery. Gaussian, Parzen window, and nearest neighbour classifiers all proved able to detect 'candidate' and 'non-candidate' MIRCAB patients. Intra-operative registration and laser projection of port placements was validated on a phantom and then evaluated in four patient cases. An image-guided laser projection system was developed to map port placement plans from pre-operative 3D images. Port placement mappings on the phantom setup were accurate to 2.4 ± 0.4 mm. In the patient cases, projections remained within 1 cm of computed port positions. Misregistered port placement mappings in the human trials were due mainly to the rigid-body registration assumption and could be improved by non-rigid techniques. Overall, this work presents an integrated approach for: 1) pre-operative port placement planning and classification of incoming MIRCAB patients; and 2) intra-operative guidance of port placement.
Effective translation of these techniques to the clinic will enable MIRCAB as a more efficacious and accessible procedure.
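The rigid-body registration whose limits the thesis notes is typically a least-squares (Kabsch) alignment of corresponding point sets via SVD. A minimal sketch with synthetic data follows; nothing here is taken from the thesis's implementation.

```python
import numpy as np

# Sketch of rigid-body (Kabsch) registration of the kind used to map
# pre-operative port plans onto the patient: find R, t minimizing ||Q - (R P + t)||.

def rigid_register(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t.
    P, Q: 3xN corresponding point sets."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    sign = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    D = np.diag([1.0, 1.0, sign])
    R = Vt.T @ D @ U.T
    t = q0 - R @ p0
    return R, t

# Synthetic check: recover a known 90-degree rotation about z plus a shift.
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
P = np.random.default_rng(0).random((3, 5))
Q = Rz @ P + np.array([[1.0], [2.0], [3.0]])
R, t = rigid_register(P, Q)
print(np.allclose(R, Rz), np.allclose(t, [[1.0], [2.0], [3.0]]))
```

The single (R, t) pair is exactly the rigid-body assumption the thesis identifies as the main source of misregistration on deformable chest anatomy, which is why non-rigid extensions are suggested.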

    Cable-driven parallel mechanisms for minimally invasive robotic surgery

    Minimally invasive surgery (MIS) has revolutionised surgery by providing faster recovery times, fewer post-operative complications, improved cosmesis and reduced pain for the patient. Surgical robotics is used to further decrease the invasiveness of procedures, by using yet smaller and fewer incisions or by using natural orifices as entry points. However, many robotic systems still suffer from technical challenges such as insufficient instrument dexterity and payload, leading to limited adoption in clinical practice. Cable-driven parallel mechanisms (CDPMs) have unique properties which can be used to overcome existing challenges in surgical robotics. These beneficial properties include high end-effector payloads, efficient force transmission and a large configurable instrument workspace. However, the use of CDPMs in MIS is largely unexplored. This research presents the first structured exploration of CDPMs for MIS and demonstrates the potential of this type of mechanism through the development of multiple prototypes: the ESD CYCLOPS, CDAQS, SIMPLE, neuroCYCLOPS and microCYCLOPS. One key challenge for MIS is the access method used to introduce CDPMs into the body. Three different access methods are presented by the prototypes. By focusing on the minimally invasive access methods by which CDPMs are introduced into the body, the thesis provides a framework which can be used by researchers, engineers and clinicians to identify future opportunities for CDPMs in MIS. Additionally, through user studies and pre-clinical studies, these prototypes demonstrate that this type of mechanism has several key advantages for surgical applications in which haptic feedback, safe automation or a high payload are required. These advantages, combined with the different access methods, demonstrate that CDPMs can have a key role in the advancement of MIS technology.
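The efficient force transmission of a CDPM rests on simple cable geometry: for a point-mass end effector, inverse kinematics reduces to the distances from the base anchors to the effector. The anchor layout below is invented for illustration and is not the geometry of any of the CYCLOPS prototypes.

```python
import numpy as np

# Toy inverse kinematics for a cable-driven parallel mechanism: each cable
# length is the distance from its base anchor to the end-effector attachment.

def cable_lengths(anchors, effector_pos):
    """anchors: Nx3 base anchor points; effector_pos: 3-vector for a
    point-mass end effector. Returns the N required cable lengths."""
    a = np.asarray(anchors, dtype=float)
    p = np.asarray(effector_pos, dtype=float)
    return np.linalg.norm(a - p, axis=1)

# Four anchors at the corners of a unit square; effector at the centre
# gives four equal cable lengths of sqrt(0.5).
anchors = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]]
L = cable_lengths(anchors, [0.5, 0.5, 0.0])
print(L)
```

Because each actuator only reels cable in or out along a straight line, payload scales with cable tension rather than joint motor torque, which is the property the thesis exploits.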

    Laparoscope arm automatic positioning for robot-assisted surgery based on reinforcement learning

    Compared with traditional laparoscopic surgery, the preoperative planning of robot-assisted laparoscopic surgery is more complex and more essential. Through an analysis of the surgical procedures and surgical environment, a preoperative planning algorithm for the laparoscope arm is proposed, based on an artificial pneumoperitoneum model and a lesion parametrization model, which ensures that the laparoscope arm satisfies both the distance principle and the direction principle. The algorithm is divided into two parts, determining the optimum incision and the optimum angle of laparoscope entry, so that the laparoscope provides a reasonable initial visual field. A set of parameters based on an actual scenario is given to illustrate the algorithm flow in detail. The preoperative planning algorithm offers significant improvements in planning time and quality for robot-assisted laparoscopic surgery. An improved method combining the preoperative planning algorithm with the deep deterministic policy gradient algorithm is applied to automatic laparoscope arm positioning for robot-assisted laparoscopic surgery. It takes a fixed-point position and lesion parameters as input, and outputs the optimum incision, the optimum angle and motor movements without kinematics. The proposed algorithm is verified through simulations in a virtual environment built with pyglet. The results validate the correctness, feasibility, and robustness of this approach.
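The distance and direction principles can be illustrated with a toy scoring function over candidate incision points: prefer incisions near a chosen working distance from the lesion whose viewing axis aligns with a desired direction. The planar abdominal-wall geometry, weights and preferred distance below are assumptions for illustration, not the paper's models.

```python
import numpy as np

# Hypothetical sketch of scoring candidate incision points against a
# distance principle (preferred working distance d_pref) and a direction
# principle (alignment of the viewing axis with view_dir). All values invented.

def plan_incision(candidates, lesion, view_dir, d_pref=0.15, w=0.5):
    c = np.asarray(candidates, dtype=float)
    lesion = np.asarray(lesion, dtype=float)
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    axes = lesion - c                          # laparoscope axes
    dists = np.linalg.norm(axes, axis=1)
    align = (axes / dists[:, None]) @ view_dir  # cosine of viewing angle
    # Lower score is better: distance error plus misalignment, weighted by w.
    score = w * np.abs(dists - d_pref) + (1 - w) * (1 - align)
    return int(np.argmin(score))

# Three candidate incisions on a simplified planar wall at z = 0.2 m;
# the middle one sits directly above the lesion and wins.
candidates = [[0.0, 0.0, 0.2], [0.1, 0.0, 0.2], [0.3, 0.0, 0.2]]
best = plan_incision(candidates, lesion=[0.1, 0.0, 0.05], view_dir=[0, 0, -1])
print(best)
```

In the paper's pipeline a learned policy rather than an explicit score produces the incision and entry angle, but the two principles it must satisfy are of this geometric form.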

    Optimal Port Placement And Automated Robotic Positioning For Instrumented Laparoscopic Biosensors

    OPTIMAL SURGICAL PORT PLACEMENT AND AUTOMATED ROBOTIC POSITIONING FOR RAMAN AND OTHER BIOSENSORS by BRADY KING January 2011 Advisors: Dr. Abhilash Pandya, Dr. Darin Ellis, Dr. Le Yi Wang, and Dr. Greg Auner Major: Computer Engineering Degree: Doctor of Philosophy Medical biosensors can provide new information during minimally invasive and robotic surgical procedures. However, these biosensors have significant physical limitations that make it difficult to find optimal port locations and to place them in vivo. This dissertation explores the application of robotics and virtual/augmented reality to biosensors to enable their optimal use in vivo. In the first study, human performance in the task of port placement was evaluated to determine whether computer intervention and assistance were needed. Using a virtual surgical environment, we presented a number of targets on one or more tissue surfaces. A human factors study conducted with 20 subjects analyzed each subject's placement of a port with the goal of scanning as many targets as possible with a biosensor. The study showed performance to be less than optimal, with significant degradation in several specific scenarios. In the second study, an automated intelligent port placement system for biosensor use was developed. Patient data was displayed in an environment in which a surgeon could indicate areas of interest. The system accounted for the biosensor's physical limitations and provided the best port location from which the biosensor could reach the targets on a collision-free path. The study showed that it is possible to find an optimal port location for proper biosensor data capture. In the final study, a surgical robot was investigated for potential use in holding and positioning a biosensor in vivo. A full control suite was developed for an AESOP 1000, enabling positioning of the biosensor without hand manipulation. It was found that the robot lacks the accuracy needed for proper biosensor utilization.
Specific causes of the inaccuracies were identified for analysis and consideration in future robotic platforms. Overall, the results show that the application of medical robotics and virtual/augmented reality can overcome the significant physical limitations inherent in biosensor design that currently limit their use in surgery. We conjecture that a complete system, with a more accurate robot, could be used in vivo. We believe that the results of the individual studies will lead to improvements in pre-operative port placement and robotic design.
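The port-selection idea of the second study, choosing the port from which a range-limited sensor can reach the most targets along collision-free straight paths, can be sketched as a small search. The spherical obstacle model and sensor range below are invented for illustration and are not the dissertation's constraint model.

```python
import numpy as np

# Toy port selection: score each candidate port by how many targets a
# straight, unobstructed sensor path reaches within the working range.

def segment_hits_sphere(p0, p1, center, radius):
    """True if the straight segment p0 -> p1 passes through the sphere."""
    d = p1 - p0
    t = np.clip(np.dot(center - p0, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(p0 + t * d - center) < radius

def best_port(ports, targets, obstacle_c, obstacle_r, max_range):
    ports = np.asarray(ports, dtype=float)
    targets = np.asarray(targets, dtype=float)
    counts = []
    for p in ports:
        n = 0
        for tgt in targets:
            in_range = np.linalg.norm(tgt - p) <= max_range
            blocked = segment_hits_sphere(p, tgt, obstacle_c, obstacle_r)
            n += int(in_range and not blocked)
        counts.append(n)
    return int(np.argmax(counts)), counts

# Port 0 is blocked by the obstacle for one target and out of range for the
# other, so port 1 wins.
ports = [[0.0, 0.0, 0.0], [0.4, 0.0, 0.0]]
targets = [[0.0, 0.0, 0.3], [0.4, 0.0, 0.3]]
idx, counts = best_port(ports, targets,
                        obstacle_c=np.array([0.0, 0.0, 0.15]),
                        obstacle_r=0.05, max_range=0.35)
print(idx, counts)
```

A real system would substitute patient anatomy for the sphere and the measured sensor limits for `max_range`, but the reachability-counting structure is the same.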

    Designing a robotic port system for laparo-endoscopic single-site surgery

    Current research and development in the field of surgical interventions aims to reduce invasiveness by using few incisions, or natural orifices in the body, to access the surgical site. For surgeries in the abdominal cavity, Laparo-Endoscopic Single-site Surgery (LESS) can be performed through a single incision in the navel, reducing blood loss and post-operative trauma and improving the cosmetic outcome. However, LESS entails less intuitive instrument control, impaired ergonomics, loss of depth and haptic perception, and restriction of instrument positioning by the single incision. Robot-assisted surgery addresses these shortcomings by introducing highly articulated, flexible robotic instruments, ergonomic control consoles with 3D visualization, and intuitive instrument control algorithms. The flexible robotic instruments are usually introduced into the abdomen via a rigid straight port, so that the positioning of the tools, and therefore the accessibility of anatomical structures, is still constrained by the incision location. To address this limitation, articulated ports for LESS have been proposed in recent research. However, these works focus on only a few of the aspects relevant to the surgery, so a design considering all requirements for LESS has not yet been proposed. This partly originates in the lack of anatomical data for specific applications. Further, no general design guidelines exist and only a few evaluation metrics have been proposed. To address these challenges, this thesis focuses on the design of an articulated robotic port for LESS partial nephrectomy. A novel approach is introduced that acquires the available abdominal workspace and is integrated into the surgical workflow. Based on several generated patient datasets and newly developed metrics, design parameter optimization is conducted.
By analyzing the surgical procedure, a comprehensive requirement list is established and applied to the design of a robotic system, proposing a tendon-driven continuum robot as the articulated port structure. In particular, the aspects of stiffening and sterile design are addressed. In various experimental evaluations, the reachability, the stiffness, and the overall design are assessed. The findings identify layer jamming as the superior stiffening method. Further, the articulated port is shown to enhance the accessibility of anatomical structures and to offer a design independent of patient and incision location.
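Tendon-driven continuum robots such as the proposed articulated port are commonly modeled with constant-curvature kinematics: each section is an arc described by a curvature, a bending-plane angle, and an arc length. The single-section forward kinematics sketched below is the textbook parametrization, not a model taken from the thesis.

```python
import numpy as np

# Standard single-section constant-curvature forward kinematics for a
# tendon-driven continuum robot (textbook parametrization, illustrative only).

def tip_position(kappa, phi, length):
    """Tip of one constant-curvature section rooted at the origin along +z.
    kappa: curvature (1/m), phi: bending-plane angle (rad), length: arc length (m).
    """
    if abs(kappa) < 1e-9:               # straight section, no bending
        return np.array([0.0, 0.0, length])
    r = 1.0 / kappa                     # bend radius
    x = r * (1.0 - np.cos(kappa * length))
    z = r * np.sin(kappa * length)
    return np.array([x * np.cos(phi), x * np.sin(phi), z])

# A quarter-circle bend (kappa * length = pi/2) of radius 0.1 m ends at
# (0.1, 0, 0.1) in the bending plane.
p = tip_position(kappa=10.0, phi=0.0, length=np.pi / 20)
print(p)
```

Tendon displacements map (approximately linearly) to kappa and phi, so reachability studies like the thesis's sweep these arc parameters over the acquired abdominal workspace.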
