419 research outputs found

    Design and realization of a master-slave system for reconstructive microsurgery


    Medical Robotics

    The first generation of surgical robots is already being installed in operating rooms around the world. Robotics is being introduced to medicine because it allows unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery, and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results, and practical experiences in this expanding area. Many chapters concern advanced research in this growing field. The book provides critical analysis of clinical trials and an assessment of the benefits and risks of applying these technologies. It is necessarily a small sample of the research activity on Medical Robotics going on around the globe, but it covers a good deal of what has been done in the field recently, and as such it serves as a valuable source for researchers interested in these subjects, whether or not they are currently "medical roboticists"

    Task Dynamics of Prior Training Influence Visual Force Estimation Ability During Teleoperation

    The lack of haptic feedback in Robot-assisted Minimally Invasive Surgery (RMIS) is a potential barrier to safe tissue handling during surgery. Bayesian modeling theory suggests that surgeons with experience in open or laparoscopic surgery can develop priors of tissue stiffness that translate to better force estimation abilities during RMIS compared to surgeons with no experience. To test whether prior haptic experience leads to improved force estimation ability in teleoperation, 33 participants were assigned to one of three training conditions: manual manipulation, teleoperation with force feedback, or teleoperation without force feedback, and learned to tension a silicone sample to a set of force values. They were then asked to perform the tension task, and a previously unencountered palpation task, to a different set of force values under teleoperation without force feedback. Compared to the teleoperation groups, the manual group had higher force error in the tension task outside the range of forces they had trained on, but showed better speed-accuracy functions in the palpation task at low force levels. This suggests that the dynamics of the training modality affect force estimation ability during teleoperation, with prior haptic experience accessible only if it was formed under the same dynamics as the task. Comment: 12 pages, 8 figures
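The Bayesian account of force estimation described above can be sketched numerically: a Gaussian prior over tissue stiffness (built up through haptic experience) is fused with a noisy observation, and the posterior stiffness converts observed displacement into a force estimate. All names and numeric values below are illustrative assumptions, not taken from the study.

```python
# Hypothetical sketch of Bayesian force estimation (all values illustrative).
# A Gaussian prior over tissue stiffness, formed from past haptic experience,
# is fused with a noisy stiffness observation; the posterior mean stiffness
# then converts an observed displacement into an estimated force.

def fuse_gaussian(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update: returns posterior mean and variance."""
    k = prior_var / (prior_var + obs_var)          # Kalman-style gain
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1.0 - k) * prior_var
    return post_mean, post_var

# Prior from training: stiffness ~ N(50 N/m, 15^2); noisy estimate: 80 N/m
stiff_mean, stiff_var = fuse_gaussian(50.0, 15.0 ** 2, 80.0, 25.0 ** 2)

displacement_m = 0.01                  # observed tissue displacement (10 mm)
force_estimate = stiff_mean * displacement_m   # F = k * x (linear tissue model)
```

The posterior lands between the prior and the observation, weighted by their variances, which mirrors how a strong prior formed under mismatched dynamics could bias force estimates.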

    Haptics-Enabled Teleoperation for Robotics-Assisted Minimally Invasive Surgery

    The lack of force feedback (haptics) in robotic surgery can be considered a safety risk: excessive forces applied to tissue and vessels can cause accidental tissue damage and puncturing of blood vessels, while insufficient applied force leads to inefficient control over the instruments. This project focuses on providing a satisfactory solution for introducing haptic feedback in robotics-assisted minimally invasive surgical (RAMIS) systems. The research addresses several key issues associated with the incorporation of haptics in a master-slave (teleoperated) robotic environment for minimally invasive surgery (MIS). In this project, we designed a haptics-enabled dual-arm (two masters, two slaves) robotic MIS testbed to investigate and validate various single-arm and dual-arm teleoperation scenarios. The most important feature of this setup is its capability to provide haptic feedback in all 7 degrees of freedom (DOF) required for RAMIS (3 translations, 3 rotations, and the pinch motion of the laparoscopic tool). The setup also enables evaluation of the effect of replacing haptic feedback with other sensory cues, such as visual representation of haptic information (sensory substitution), and of the hypothesis that surgical outcomes may be improved by substituting or augmenting haptic feedback with such cues
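The master-slave loop with force reflection and sensory substitution described above can be sketched as a single control tick. The gains and the bar-graph substitution scheme below are hypothetical placeholders, not details of the described testbed.

```python
# Minimal sketch of one tick of a bilateral teleoperation loop with direct
# force reflection and a visual sensory substitute (all gains hypothetical).

MOTION_SCALE = 0.2   # master-to-slave motion scaling (hand -> tool)
FORCE_SCALE = 0.5    # slave-to-master force reflection gain

def teleop_step(master_displacement, slave_contact_force):
    """Scale the master's motion down to the slave, reflect the measured
    contact force back to the master, and also quantize the force into a
    0-10 on-screen bar level as a visual substitute for haptics."""
    slave_command = MOTION_SCALE * master_displacement
    master_feedback_force = FORCE_SCALE * slave_contact_force
    bar_level = min(10, int(abs(slave_contact_force)))
    return slave_command, master_feedback_force, bar_level
```

With force feedback disabled, the same loop would simply drop `master_feedback_force` and rely on `bar_level` alone, which is the substitution scenario the testbed is built to evaluate.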

    Robot Assisted Object Manipulation for Minimally Invasive Surgery

    Robotic systems have an increasingly important role in facilitating minimally invasive surgical treatments. In robot-assisted minimally invasive surgery, surgeons remotely control instruments from a console to perform operations inside the patient. However, despite the advanced technological status of surgical robots, fully autonomous systems with decision-making capabilities are not yet available. In 2017, Yang et al. proposed a structure to classify research efforts toward the autonomy achievable with surgical robots. Six levels were identified: no autonomy, robot assistance, task autonomy, conditional autonomy, high autonomy, and full autonomy. All commercially available platforms in robot-assisted surgery are still at level 0 (no autonomy). Although increasing the level of autonomy remains an open challenge, its adoption could introduce multiple benefits, such as decreasing surgeons' workload and fatigue and ensuring a consistent quality of procedures. Ultimately, allowing surgeons to interpret the ample, intelligent information from the system will enhance surgical outcomes and reflect positively on both patients and society. Three main capabilities are required to introduce automation into surgery: the surgical robot must move with high precision, have motion planning capabilities, and understand the surgical scene. Depending on the type of surgery, other aspects, such as compliance and stiffness, might also play a fundamental role. This thesis addresses three technological challenges encountered when pursuing these goals, in the specific case of robot-object interaction. First, how to overcome the inaccuracy of cable-driven systems when executing fine, precise movements. Second, how to plan different tasks in dynamically changing environments. Lastly, how the understanding of a surgical scene can be used to solve more than one manipulation task. 
To address the first challenge, a control scheme relying on accurate calibration is implemented to execute the pick-up of a surgical needle. Regarding the planning of surgical tasks, two approaches are explored: learning from demonstration to pick and place a surgical object, and a gradient-based approach to trigger a smoother object repositioning phase during intraoperative procedures. Finally, to improve scene understanding, this thesis develops a simulation environment where multiple tasks can be learned based on the surgical scene and then transferred to the real robot. Experiments proved that automating the pick-and-place task for different surgical objects is possible. The robot successfully managed to autonomously pick up a suturing needle, position a surgical device for intraoperative ultrasound scanning, and manipulate soft tissue for intraoperative organ retraction. Although automation of surgical subtasks has been demonstrated in this work, several challenges remain open, such as the ability of the generated algorithms to generalise across different environment conditions and different patients

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. When gathering information from the available sensors, the richness of visual data makes it possible to build a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows for methods exploiting the totality of the data (dense approaches) or a reduced set obtained from feature extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown environments populated by obstacles is presented. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. On the other hand, sparse visual data are extracted in terms of geometric primitives in order to implement a visual servoing control scheme satisfying proper navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are taken into account to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant in other contexts as well. In the field of surgical robotics, having reliable data about unmeasurable quantities is both highly important and critical. In this manuscript, we present a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. 
The method exploits images acquired by the endoscope of the robot platform to extract relevant geometrical information and obtain projected measurements of the tool pose. The method has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and to provide ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose system inputs are theoretically arbitrary; there is no mechanism to actively adapt input trajectories to optimize specific requirements on the estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera
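The kind of Kalman-based observer described above can be illustrated in miniature. The sketch below tracks a single pose coordinate with a constant-velocity model and noisy position measurements; the actual observer in the manuscript estimates the full 3D needle pose from endoscopic projections, and all tuning values here are illustrative assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter on one pose coordinate, as a
# sketch of a passive Kalman-based observer (all noise values hypothetical).

def kalman_step(x, P, z, dt=0.04, q=1e-4, r=1e-2):
    """One predict/update cycle.
    x: state [position, velocity]; P: 2x2 covariance;
    z: noisy position measurement (e.g., derived from image features)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
    H = np.array([[1.0, 0.0]])               # only position is measured
    # Predict
    x = F @ x
    P = F @ P @ F.T + q * np.eye(2)
    # Update
    S = H @ P @ H.T + r                      # innovation covariance
    K = P @ H.T / S                          # Kalman gain (2x1)
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

A passive observer like this accepts whatever measurements arrive; the active sensing strategy described above instead shapes the camera trajectory so that those measurements minimize the worst-case estimation uncertainty.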

    Force-Sensing-Based Multi-Platform Robotic Assistance for Vitreoretinal Surgery

    Vitreoretinal surgery aims to treat disorders of the retina, vitreous body, and macula, such as retinal detachment, diabetic retinopathy, macular hole, epiretinal membrane, and retinal vein occlusion. Challenged by several technical and human limitations, vitreoretinal practice currently ranks amongst the most demanding fields in ophthalmic surgery. Of vitreoretinal procedures, membrane peeling is the most commonly performed, at over 0.5 million times annually, and among the most prone to complications. It requires extremely delicate tissue manipulation through various micron-scale maneuvers near the retina, despite the physiological hand tremor of the operator. In addition, to avoid injuries, the forces applied to the retina need to be kept at a very fine level, often well below the tactile sensory threshold of the surgeon. Retinal vein cannulation is another demanding procedure, in which therapeutic agents are injected into occluded retinal veins. The feasibility of this treatment is limited by challenges in identifying the moment of venous puncture, achieving cannulation, and maintaining it throughout the drug delivery period. Recent advancements in medical robotics have significant potential to address most of the challenges in vitreoretinal practice, and therefore to prevent traumas, lessen complications, minimize intra-operative surgeon effort, maximize surgeon comfort, and promote patient safety. This dissertation presents the development of novel force-sensing tools that can easily be used on various robotic platforms, along with robot control methods, to produce integrated assistive surgical systems that work in partnership with surgeons against the current limitations of vitreoretinal surgery, focusing specifically on membrane peeling and vein cannulation procedures. Integrating high-sensitivity force sensing into ophthalmic instruments enables precise quantitative monitoring of applied forces. 
Auditory feedback based on the measured forces can quickly inform (and warn) the surgeon during surgery and help prevent injury due to excessive forces. Using these tools on a robotic platform can attenuate the surgeon's hand tremor, which effectively improves tool manipulation accuracy. In addition, based on characteristic force signatures, the robotic system can precisely identify critical instants, such as the venous puncture in retinal vein cannulation, and actively guide the tool toward clinical targets, compensate for any involuntary motion of the surgeon, or generate additional motion that makes the surgical task easier. Experimental results using two distinct robotic platforms, the Steady-Hand Eye Robot and Micron, in combination with the force-sensing ophthalmic instruments, show significant performance improvements in artificial dry phantoms and ex vivo biological tissues
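The force-based auditory warning scheme described above can be sketched as a simple mapping from measured tool-tip force to an audio zone. The threshold value and zone names below are hypothetical; real safe-force limits are tissue-specific and would come from the instrument's calibration.

```python
# Sketch of threshold-based auditory feedback from a force-sensing tool.
# The limit and zone boundaries are illustrative assumptions.

SAFE_LIMIT_MN = 7.5   # hypothetical safe-force limit in millinewtons

def feedback_zone(force_mn):
    """Map a measured tool-tip force (mN) to an audio feedback zone."""
    if force_mn < 0.5 * SAFE_LIMIT_MN:
        return "silent"        # well within the safe range
    if force_mn < SAFE_LIMIT_MN:
        return "slow_beep"     # approaching the limit: caution
    return "fast_beep"         # excessive force: warn immediately
```

In a real system this mapping would run at the force sensor's sampling rate, so the warning tone changes within milliseconds of the force crossing a boundary.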

    Micro-motion controller

    Micro-motions in surgical applications are small motions in the range of a few millimeters; they are common in ophthalmic surgery, neurosurgery, and other procedures that require precise manipulation over short distances. Robotic surgery is replacing traditional open surgery at a rapid pace due to its clear health benefits; however, most robotic surgical tools use motion controllers designed to work over a large portion of the human body, involving motion of the entire human arm at the shoulder joint. The need to move such a large inertial mass results in undesirable and imprecise motion. This senior design project created a 2-axis micro-motion-capable platform that studies the most common linear, 2-D surgical micro-motion of pinched human fingers in damped and undamped states. Through a system of printed and modeled parts combined with motors and encoders, a microsurgical controller was developed that provides location-based output on a screen. Mechanical damping was introduced to investigate the potential stabilization of micro-motion in a surgeon's otherwise unsteady hand. The device is also intended to serve as a starter set for future biomedical device research projects in Santa Clara University's bioengineering department. Further developments of the microsurgical controller could include further scaling, the addition of a third axis, haptic feedback through the microcontroller, and component encasing to allow productization for use on an industrial robotic surgical device in clinical applications
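The damping idea above has a simple software analogue: a first-order low-pass filter on the encoder-derived fingertip position attenuates fast tremor at the cost of some lag, much as mechanical damping does. The filter below is an illustrative sketch, not the project's implementation, and the smoothing constant is hypothetical.

```python
# Illustrative first-order low-pass (damping-like) filter applied to
# encoder-derived position samples to suppress tremor (alpha hypothetical).

def damped_position(samples, alpha=0.1):
    """Exponential smoothing of raw position samples. Smaller alpha means
    stronger damping: fast tremor is attenuated, but tracking lag grows."""
    out, y = [], samples[0]
    for x in samples:
        y = y + alpha * (x - y)   # discrete first-order lag
        out.append(y)
    return out
```

Feeding the filter an alternating 0/1 "tremor" signal shows the attenuation: the smoothed output stays far below the raw 1.0 peaks.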