
    Haptics in Robot-Assisted Surgery: Challenges and Benefits

    Robotic surgery is transforming current surgical practice, not only by improving conventional surgical methods but also by introducing innovative robot-enhanced approaches that broaden the capabilities of clinicians. Being mainly human-machine collaborative systems, surgical robots are seen as media that transfer pre- and intra-operative information to the operator and reproduce the operator's motion, with appropriate filtering, scaling, or limitation, to physically interact with the patient. The field, however, is far from maturity and, more critically, is still a subject of controversy in medical communities. Limited or absent haptic feedback is reputed to be among the reasons that impede the further spread of surgical robots. In this paper, the objectives and challenges of deploying haptic technologies in surgical robotics are discussed, and a systematic review is performed of works that have studied the effects of providing haptic information to users in the major branches of robotic surgery. The survey attempts to encompass both classical works and state-of-the-art approaches, aiming to deliver a comprehensive and balanced overview both for researchers starting their work in this field and for experts.

    Factors of Micromanipulation Accuracy and Learning

    Micromanipulation refers to manipulation under a microscope in order to perform delicate procedures. It is difficult for humans to manipulate objects accurately under a microscope because tremor and imperfect perception limit performance. This project seeks to understand the factors affecting accuracy in micromanipulation and to propose learning strategies that improve accuracy. Psychomotor experiments were conducted using computer-controlled setups to determine how various feedback modalities and learning methods can influence micromanipulation performance. In a first experiment, the static and motion accuracy of surgeons, medical students and non-medical students were compared under different magnification levels and grip force settings. A second experiment investigated whether the non-dominant hand, placed close to the target, can contribute to accurate pointing of the dominant hand. A third experiment tested a training strategy for micromanipulation using unstable dynamics to magnify motion error, a strategy shown to decrease deviation in large arm movements. Two virtual reality (VR) modules were then developed to train needle grasping and needle insertion, two primitive tasks in a microsurgical suturing procedure. The modules provided the trainee with a stereoscopic visual display and information on grip, tool position and tool angles. Using the VR module, a study examining the effects of visual cues on training tool orientation was conducted. Results from these studies suggest that it is possible to learn and improve accuracy in micromanipulation using appropriate sensorimotor feedback and training.

    Safe Haptics-enabled Patient-Robot Interaction for Robotic and Telerobotic Rehabilitation of Neuromuscular Disorders: Control Design and Analysis

    Motivation: Current statistics show that the population of seniors and the incidence rate of age-related neuromuscular disorders are rapidly increasing worldwide. Improving medical care is likely to increase the survival rate but will result in even more patients needing Assistive, Rehabilitation and Assessment (ARA) services for extended periods, which will place a significant burden on the world's healthcare systems. In many cases, the only alternative is limited and often delayed outpatient therapy. The situation is worse for patients in remote areas. One potential solution is to develop technologies that provide efficient and safe means of in-hospital and in-home kinesthetic rehabilitation. In this regard, Haptics-enabled Interactive Robotic Neurorehabilitation (HIRN) systems have been developed. Existing Challenges: Although HIRN technologies offer specific advantages, several technical and control challenges remain, e.g., (a) the absence of direct physical interaction between therapists and patients; (b) questionable adaptability and flexibility with respect to the sensorimotor needs of patients; (c) limited accessibility in remote areas; and (d) guaranteeing patient-robot interaction safety while maximizing system transparency, especially when high control effort is needed for severely disabled patients, when the robot is to be used in a patient's home, or when the patient experiences involuntary movements. These challenges have provided the motivation for this research. Research Statement: In this project, a novel haptics-enabled telerobotic rehabilitation framework is designed, analyzed and implemented that can be used as a new paradigm for delivering motor therapy, giving therapists direct kinesthetic supervision over the robotic rehabilitation procedure. The system also allows for kinesthetic remote and ultimately in-home rehabilitation. To guarantee interaction safety while maximizing the performance of the system, a new framework for designing stabilizing controllers is developed, initially based on small-gain theory and then completed using strong passivity theory. The proposed control framework takes into account knowledge about the variable biomechanical capabilities of the patient's limb(s) in absorbing interaction forces and mechanical energy. The technique is generalized for use with classical rehabilitation robotic systems to realize patient-robot interaction safety while enhancing performance. In the next step, the proposed telerobotic system is studied as a training modality for classical HIRN systems. The goal is to first model and then regenerate the prescribed kinesthetic supervision of an expert therapist. To broaden the population of patients who can use the technology and HIRN systems, a new control strategy is designed for patients experiencing involuntary movements. As a last step, the outcomes of the proposed theoretical and technological developments are translated into the design of assistive mechatronic tools for patients with force and motion control deficits. This study shows that proper augmentation of haptic inputs can not only enhance the transparency and safety of robotic and telerobotic rehabilitation systems, but can also assist patients with force and motion control deficiencies.
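    The thesis builds its safety guarantees on small-gain and strong passivity arguments; the general flavour of such energy-based interaction-safety layers can be illustrated with a standard time-domain passivity observer/controller. The sketch below is an assumption-laden stand-in, not the controller developed in the thesis: it monitors the energy exchanged at the patient-robot port and injects just enough damping to keep the port passive.

        import numpy as np

        def passivity_observer_controller(forces, velocities, dt, e_init=0.0):
            """Time-domain passivity observer/controller sketch (illustrative only).

            Sign convention: f * v > 0 means energy is absorbed by the
            patient-robot port; the observed energy must stay non-negative.
            """
            e_obs = e_init              # observed port energy (J)
            damped = []
            for f, v in zip(forces, velocities):
                e_obs += f * v * dt     # energy exchanged at this step
                alpha = 0.0
                if e_obs < 0.0 and abs(v) > 1e-9:
                    alpha = -e_obs / (v * v * dt)   # damping that dissipates the deficit
                    e_obs = 0.0
                damped.append(f + alpha * v)        # passified force command
            return np.asarray(damped)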

    Computerized Evaluation of Microsurgery Skills Training

    The style of imparting medical training has evolved over the years. The traditional methods of teaching and practicing basic surgical skills under the apprenticeship model no longer occupy first place in modern, technically demanding surgical disciplines such as neurosurgery. Furthermore, legal and ethical concerns for patient safety, as well as cost-effectiveness, have forced neurosurgeons to master the microsurgical techniques necessary to accomplish the desired results. This has led to increased emphasis on the assessment of the clinical and surgical techniques of neurosurgeons. However, subjective assessment of microsurgical techniques such as micro-suturing under the apprenticeship model cannot be completely unbiased. A few initiatives using computer-based techniques have been made to introduce objective evaluation of surgical skills. This thesis presents a novel approach involving computerized evaluation of the different components of micro-suturing technique, to eliminate the bias of subjective assessment. The work involved the acquisition of cine clips of micro-suturing activity on synthetic material. Image processing and computer vision techniques were then applied to these videos to assess different characteristics of micro-suturing, viz. speed, dexterity and effectualness. In parallel, subjective grading of these characteristics was done by a senior neurosurgeon. A correlation and comparative study of both assessments was then carried out to analyze the efficacy of objective versus subjective evaluation.
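    The thesis defines its own image-processing pipeline and grading scheme. As a rough illustration of how speed and dexterity can be quantified once the tool tip has been tracked across video frames, the sketch below computes a few common motion metrics from a tip trajectory; the metric names and the economy-of-motion definition are assumptions, not the measures used in the thesis.

        import numpy as np

        def motion_metrics(tip_xy, fps):
            """Simple speed/dexterity proxies from a tracked tool-tip trajectory
            (N x 2 pixel coordinates). Illustrative only."""
            tip_xy = np.asarray(tip_xy, dtype=float)
            steps = np.diff(tip_xy, axis=0)                  # frame-to-frame displacement
            dists = np.linalg.norm(steps, axis=1)
            path_length = dists.sum()                        # total distance travelled
            speeds = dists * fps                             # instantaneous speed (px/s)
            straight = np.linalg.norm(tip_xy[-1] - tip_xy[0])
            economy = straight / path_length if path_length > 0 else 0.0
            return {
                "path_length_px": path_length,
                "mean_speed_px_s": speeds.mean(),
                "peak_speed_px_s": speeds.max(),
                "economy_of_motion": economy,                # 1.0 = perfectly straight path
            }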

    Shared control for natural motion and safety in hands-on robotic surgery

    Hands-on robotic surgery is where the surgeon controls the tool's motion by applying forces and torques to the robot holding the tool, allowing the robot-environment interaction to be felt through the tool itself. To further improve results, shared control strategies are used to combine the strengths of the surgeon with those of the robot. One such strategy is active constraints, which prevent motion into regions deemed unsafe or unnecessary. While research on active constraints for rigid anatomy is well established, limited work has been done on dynamic active constraints (DACs) for deformable soft tissue, particularly on strategies that handle multiple sensing modalities. In addition, attaching the tool to the robot imposes the end effector dynamics onto the surgeon, reducing dexterity and increasing fatigue. Current control policies on these systems compensate only for gravity, ignoring other dynamic effects. This thesis presents several research contributions to shared control in hands-on robotic surgery, which create a more natural motion for the surgeon and extend the use of DACs to point clouds. A novel null-space-based optimization technique has been developed which minimizes the end effector friction, mass, and inertia of redundant robots, creating a more natural motion, closer to the feeling of a tool unattached to the robot. By operating in the null space, the surgeon is left in full control of the procedure. A novel DACs approach has also been developed which operates on point clouds. This allows its application to various sensing technologies, such as 3D cameras or CT scans, and therefore to various surgeries. Experimental validation in point-to-point motion trials and a virtual reality ultrasound scenario demonstrates a reduction in work when maneuvering the tool and improvements in accuracy and speed when performing virtual ultrasound scans. Overall, the results suggest that these techniques could increase ease of use for the surgeon and improve patient safety.
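    The thesis formulates its dynamic active constraints directly on point clouds. A much-reduced version of that idea is a forbidden-region fixture that queries the nearest cloud point and pushes the tool away when it gets too close; the sketch below assumes a static cloud and a simple linear-stiffness repulsion, so it illustrates the concept rather than the DAC method developed in the thesis.

        import numpy as np
        from scipy.spatial import cKDTree

        class PointCloudConstraint:
            """Minimal forbidden-region fixture over a point cloud (illustrative)."""

            def __init__(self, cloud_xyz, margin=0.005, stiffness=500.0):
                self.tree = cKDTree(np.asarray(cloud_xyz, dtype=float))
                self.margin = margin          # activation distance (m), assumed value
                self.stiffness = stiffness    # repulsion stiffness (N/m), assumed value

            def force(self, tip_xyz):
                """Repulsive force on the tool tip, zero outside the margin."""
                dist, idx = self.tree.query(tip_xyz)
                if dist >= self.margin or dist == 0.0:
                    return np.zeros(3)
                away = (np.asarray(tip_xyz, dtype=float) - self.tree.data[idx]) / dist
                return self.stiffness * (self.margin - dist) * away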

    Master of Science

    Admittance-type robotic devices are commonly used to complete tasks that require a high degree of precision and accuracy because they appear nonbackdrivable to many disturbances from the environment. Admittance-type robots are controlled using admittance control: a human interacts directly with a force sensor mounted to the robot, and the robot is computer-controlled to move in response to the applied force. The experiment herein was conducted to determine under which operating conditions human velocity control is optimized for admittance devices under proportional-velocity control, and to determine the degradation in control under nonoptimal conditions. In this study, the desired velocity of the device was shown on a visual display. The desired velocity was shown with a scaling factor from the actual velocity of the device because the device often moved at velocities too slow to perceive visually. The admittance gain ka, desired velocity Vd, and visualization scale factor S were tuned to adjust the user's experience when interacting with an admittance device. We found that in velocity-tracking tasks, scaling the visual feedback has a significant effect on performance only for very slow desired velocities (0.1 mm/s), within the range of velocities tested here. In this thesis, we give evidence that there exists a range of velocities and forces within which humans optimally interact with admittance-type devices. We found that the optimal range of velocities is between 0.4 mm/s and 1.0 mm/s, inclusive, and the optimal range of forces is between 0.4 N and 4.0 N, inclusive. To ensure optimal velocity-control performance, the admittance gain should be selected such that the desired velocity and target force remain within their respective optimal ranges simultaneously. We also found that, on average, subjects moved faster than the desired velocity when it was 0.1 mm/s and slower than the desired velocity when it was higher than 0.4 mm/s. For each admittance gain there is a different threshold velocity at which velocity-control accuracy is optimal in the aggregate. If the device operates at a velocity faster or slower than the threshold velocity, the operator will tend to lag or lead the desired velocity, respectively.
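    The abstract describes proportional-velocity admittance control with an admittance gain ka, a desired velocity Vd and a visualization scale factor S. A minimal reading of that law, with saturation and units chosen purely for illustration, might look like the sketch below; the actual filtering and implementation details of the experimental setup are not specified here.

        import numpy as np

        def admittance_velocity_command(force_n, ka, v_max=2.0e-3):
            """Proportional-velocity admittance law: v = ka * f.
            The saturation limit v_max is an assumption, not a thesis value."""
            v = ka * force_n                       # commanded velocity (m/s)
            return float(np.clip(v, -v_max, v_max))

        def displayed_velocity(actual_v, scale_s):
            """Velocity shown to the user, scaled by S because the device often
            moves too slowly to perceive visually."""
            return scale_s * actual_v

        # Example: with ka = 1.0 mm/s per N, a 0.5 N push commands 0.5 mm/s,
        # inside the reported optimal ranges (0.4-1.0 mm/s, 0.4-4.0 N).
        v_cmd = admittance_velocity_command(force_n=0.5, ka=1.0e-3)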

    Robotically assisted eye surgery: a haptic master console

    Vitreo-retinal surgery encompasses the surgical procedures performed on the vitreous humor and the retina. A procedure typically consists of the removal of the vitreous humor, the peeling of a membrane and/or the repair of a retinal detachment. Operations are performed with needle-shaped instruments which enter the eye through surgeon-made scleral openings. An instrument is moved by hand in four degrees of freedom (three rotations and one translation) through this opening. Two rotations provide lateral instrument tip movement; the other two DoFs are the translation along the instrument axis (z) and the rotation about that axis. Actuation of, for example, a forceps can be considered a fifth DoF. Characteristically, the manipulation of delicate intraocular tissue with micrometer-range thickness is required. Today, eye surgery is performed with a maximum of two instruments simultaneously. The surgeon relies on visual feedback only, since instrument forces are below the human detection limit. A microscope provides the visual feedback and forces the surgeon to work in a static and non-ergonomic body posture. Although the surgeon's proficiency improves throughout his career, hand tremor may become a problem around his mid-fifties. Robotically assisted surgery with a master-slave system enhances dexterity. The slave, with instrument manipulators, is placed over the eye. The surgeon controls the instrument manipulators via haptic interfaces at the master. The master and slave are connected by electronic hardware and control software. Implementation of tremor filtering in the control software and downscaling of the hand motion allow prolongation of the surgeon's career. Furthermore, it becomes possible to do tasks such as intraocular cannulation which cannot be done in manually performed surgery. This thesis focuses on the master console. Eye surgery procedures were observed in the operating rooms of different hospitals to gain insight into the requirements for the master. The master console as designed has an adjustable frame, a 3D display and two haptic interfaces, each with a coarse adjustment arm. The console is mounted at the head of the operating table and is combined with the slave. It is compact, easy to place and allows the surgeon to have a direct view of and physical contact with the patient. Furthermore, it fits into today's manual surgery arrangement. Each haptic interface has the same five degrees of freedom as the instrument inside the eye. Through these interfaces, the surgeon can feel the augmented instrument forces. Downscaling of the hand motion results in more accurate instrument movement compared to manually performed surgery. Together with the visual feedback, it is as if the surgeon grasps the instrument near the tip inside the eye. The similarity between the hand motion and the motion of the instrument tip as seen on the display results in intuitive manipulation. Pre-adjustment of the interface is done via the coarse adjustment arm. Mode switching makes it possible to control three or more instrument manipulators with only two interfaces. Two one-degree-of-freedom master-slave systems with force feedback were built to derive the requirements for the haptic interface. Hardware-in-the-loop testing provided valuable insights and showed the possibility of force feedback without the use of force sensors. Two five-DoF haptic interfaces were realized for bimanual operation. Each DoF has a position encoder and a force feedback motor.
    A correct representation of the upscaled instrument forces is only possible if the disturbance forces are low. Actuators are therefore mounted to the fixed world or in the neighborhood of the pivoting point for a low contribution to the inertia. The use of direct drive for the two lateral-rotation DoFs and low-geared, backdrivable transmissions for the other three DoFs gives a minimum of friction. Disturbance forces are further minimized by a proper cable layout and actuator-amplifier combinations without torque ripple. The similarity in DoFs between vitreo-retinal eye surgery and minimally invasive surgery (MIS) enables the system to be used for MIS as well. Experiments in combination with a slave robot for laparoscopic and thoracoscopic surgery show that an instrument can be manipulated in a comfortable and intuitive way. User experience of surgeons and others was utilized to further improve the haptic interface. A parallel instead of a serial actuation concept for the lateral-rotation DoFs reduces the inertia, eliminates the flexible cable connection between frame and motor and allows the motor heat to be transferred directly to the frame. A newly designed module combining the z translation and the axial rotation integrates the actuation and suspension of the hand-held part of the interface and has a three times larger z range than in the first design of the haptic interface.
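    The abstract attributes two operations to the master-slave control software: tremor filtering and downscaling of the hand motion. The sketch below shows one conventional way to realize them, a low-pass filter followed by a scale factor; the 6 Hz cutoff (physiological tremor sits roughly at 8-12 Hz) and the 0.3 scale are illustrative assumptions, not values from the thesis.

        import numpy as np
        from scipy.signal import butter, lfilter

        def master_to_slave(master_positions, fs, scale=0.3, cutoff_hz=6.0):
            """Tremor filtering plus motion downscaling for master-slave control
            (illustrative sketch, assumed parameter values)."""
            b, a = butter(2, cutoff_hz / (0.5 * fs))       # 2nd-order low-pass filter
            filtered = lfilter(b, a, np.asarray(master_positions, dtype=float), axis=0)
            return scale * filtered                        # scaled slave setpoints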

    Vision-Based Autonomous Control in Robotic Surgery

    Robotic surgery has completely changed surgical procedures. Enhanced dexterity, ergonomics, motion scaling, and tremor filtering are well-known advantages introduced with respect to classical laparoscopy. In the past decade, robotics has played a fundamental role in Minimally Invasive Surgery (MIS), in which the da Vinci robotic system (Intuitive Surgical Inc., Sunnyvale, CA) is the most widely used platform for robot-assisted laparoscopic procedures. Robots also have great potential in microsurgical applications, where human limits are critical and sub-millimetric surgical gestures could benefit enormously from motion scaling and tremor compensation. However, surgical robots still lack advanced assistive control methods that could notably support the surgeon's activity and perform surgical tasks autonomously for a high quality of intervention. In this scenario, images are the main feedback the surgeon can use to operate correctly at the surgical site. Therefore, in view of increasing autonomy in surgical robotics, vision-based techniques play an important role and can arise from extending computer vision algorithms to surgical scenarios. Moreover, many surgical tasks could benefit from the application of advanced control techniques, allowing the surgeon to work under less stressful conditions and to perform surgical procedures with more accuracy and safety. The thesis starts from these topics, providing surgical robots with the ability to perform complex tasks and helping the surgeon to skillfully manipulate the robotic system to accomplish the above requirements. An increase in safety and a reduction in mental workload are achieved through the introduction of active constraints, which can prevent the surgical tool from crossing a forbidden region and, similarly, generate constrained motion to guide the surgeon along a specific path or to accomplish autonomous robotic tasks. This leads to the development of a vision-based method for robot-aided dissection that allows the control algorithm to autonomously adapt to environmental changes during the surgical intervention using stereo image processing. Computer vision is exploited to define a surgical tool collision avoidance method that uses Forbidden Region Virtual Fixtures by rendering a repulsive force to the surgeon. Advanced control techniques based on an optimization approach are developed, allowing multiple-task execution with task definitions encoded through Control Barrier Functions (CBFs) and enhancing a haptic-guided teleoperation system during suturing procedures. The proposed methods are tested on different robotic platforms, including the da Vinci Research Kit (dVRK) and a new microsurgical robotic platform. Finally, the integration of new sensors and instruments into surgical robots is considered, including a multi-functional tool for dexterous tissue manipulation and different visual sensing technologies.
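    The thesis encodes its tasks through Control Barrier Functions inside an optimization-based controller. The sketch below shows the bare CBF mechanism for a single-integrator tool-tip model and one spherical keep-out region, solved in closed form since there is only one constraint; the sphere, the integrator model and the gain alpha are assumptions, not the formulation used in the thesis.

        import numpy as np

        def cbf_safety_filter(x, u_nom, center, radius, alpha=5.0):
            """Filter a nominal velocity command so the tool tip stays outside a
            spherical forbidden region (assumes the tip starts outside it).

            Safe set:    h(x) = ||x - c||^2 - r^2 >= 0
            Constraint:  h_dot = 2 (x - c)^T u >= -alpha * h
            """
            d = np.asarray(x, float) - np.asarray(center, float)
            u_nom = np.asarray(u_nom, float)
            h = d @ d - radius ** 2
            a = 2.0 * d                       # gradient of h
            if a @ u_nom >= -alpha * h:
                return u_nom                  # nominal command already safe
            # Closed-form solution of: min ||u - u_nom||^2  s.t.  a^T u >= -alpha * h
            lam = (-alpha * h - a @ u_nom) / (a @ a)
            return u_nom + lam * a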

    A virtual hand assessment system for efficient outcome measures of hand rehabilitation

    Hand rehabilitation is an extremely complex and critical process in the field of medical rehabilitation, mainly because of the highly articulated functionality of the hand. Recent research has focused on employing new technologies, such as robotics and system control, to improve the precision and efficiency of the standard clinical methods used in hand rehabilitation. However, the designs of these devices have either been oriented toward a particular hand injury or depended heavily on subjective assessment techniques to evaluate progress. These limitations reduce the efficiency of hand rehabilitation devices by providing less effective results for restoring the lost functionality of dysfunctional hands. In this project, a novel technological solution and efficient hand assessment system is produced that can objectively measure the restoration outcome and dynamically evaluate its performance. The proposed system uses a data glove to measure the ranges of motion of the hand joints, and a virtual reality system to provide an illustrative and safe visual assistance environment that can self-adjust to the subject's performance. The system implements an original finger performance measurement method for analysing the various hand functionalities. This is achieved by extracting multiple features of the hand digits' motions, such as speed, consistency of finger movements and stability during hold positions. Furthermore, an advanced data glove calibration method was developed and implemented in order to accurately manipulate the virtual hand model and calculate the hand kinematic movements in compliance with the biomechanical structure of the hand. The experimental studies were performed on a controlled group of 10 healthy subjects (25 to 42 years of age). The results showed intra-subject reliability between the trials (average cross-correlation ρ = 0.7) and inter-subject repeatability across the subjects' performance (p > 0.01 for the sessions with real objects, with few departures in some of the virtual reality sessions). In addition, the finger performance values were found to be very efficient in detecting the multiple elements of finger performance, including the load effect on the forearm. Moreover, the electromyography measurements in the virtual reality sessions showed high sensitivity in detecting the tremor effect (the mean power frequency difference on the right extensor digitorum muscle was 176 Hz). Also, the finger performance values for the virtual reality sessions had the same average distance as the real-life sessions (RSQ = 0.07). The system, besides offering an efficient and quantitative evaluation of hand performance, was proven compatible with different hand rehabilitation techniques, where it can outline the parts primarily affected by the hand dysfunction. It can also be easily adjusted to comply with the subject's specifications and clinical hand assessment procedures to autonomously detect the classification task events and analyse them with high reliability. The developed system is also adaptable to disciplines other than hand rehabilitation, such as ergonomic studies, hand robot control, brain-computer interfaces and various fields involving hand control.
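    The abstract reports intra-subject reliability as an average cross-correlation of ρ = 0.7 between trials. One simple way to compute such a reliability figure from data-glove recordings is sketched below: correlate the joint-angle trajectories of two trials of the same task, joint by joint, and average. The resampling to a common normalized time base is an assumption, not necessarily the procedure used in the thesis.

        import numpy as np

        def trial_reliability(trial_a, trial_b):
            """Average per-joint correlation between two joint-angle recordings
            of shape (T, n_joints); illustrative sketch only."""
            a = np.asarray(trial_a, float)
            b = np.asarray(trial_b, float)
            t = np.linspace(0.0, 1.0, 200)                 # common normalized time base
            rhos = []
            for j in range(a.shape[1]):
                aj = np.interp(t, np.linspace(0, 1, len(a)), a[:, j])
                bj = np.interp(t, np.linspace(0, 1, len(b)), b[:, j])
                rhos.append(np.corrcoef(aj, bj)[0, 1])
            return float(np.mean(rhos))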
