
    Collaborative Robotics Strategies for Handling Non-Repetitive Micro-Drilling Tasks Characterized by Low Structural Mechanical Impedance

    Mechanical micro-drilling finds widespread use in applications ranging from advanced manufacturing to medical surgery. This dissertation develops techniques for programming robots to perform effective micro-drilling tasks, a goal that faces several challenges. Micro-drills suffer frequent breakage caused by variations in drill process parameters, and micro-drilling tasks require extremely low feed rates with almost no tolerance for feed-rate variation. Robot programming is further complicated because mathematical models that capture the complexities of the micro-drilling process and its sensitivity to parameter variations are difficult to obtain. An experimental approach is therefore adopted to identify the feasible parameter space through a systematic characterization of the tool-specimen interaction, which is crucial for understanding the robotic micro-drilling process. The diameter of the hole to be drilled in a material is a primary defining factor for micro-drilling; for the purposes of this dissertation, micro-drills are defined as having a diameter of 1 mm or less. The Sawyer and KUKA collaborative robots, which meet the demanding speed requirements, were chosen for this study. A regression analysis revealed a relationship between feed rate and reaction forces in the micro-drilling process that matched the underlying mathematical model of the tool-specimen interactions. This dissertation then addresses the destabilization in robotic micro-drilling caused by the low impedance of the collaborative robot’s cantilever structure. A semi-robotic method combining a force-controlled adaptive drill feed rate with a human-assisted impedance-enhancement strategy is developed to address this problem.
This approach is inspired by the human capability to stabilize unstable dynamics during contact-based tasks through selective control of arm mechanical impedance. A human-robot collaborative kinesthetic drilling mode was also developed using the selective compliance capability of the KUKA robot. Experimental results show that the Sawyer and KUKA robots can use the developed strategies to drill micro-holes with diameters as small as 0.6 mm and 0.2 mm, respectively. Finally, experiments involving drilling in different materials reveal the potential of the collaborative robotic micro-drilling approach in composite repair, micro-channels, dental drilling, and bone drilling.
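As a minimal sketch of the kind of force-controlled adaptive feed-rate loop described above: the controller slows the feed when the measured thrust force approaches a limit and recovers it slowly otherwise. The threshold, gain, and feed bounds below are illustrative assumptions, not values from the dissertation.

```python
# Hypothetical sketch of force-adaptive feed-rate control for micro-drilling.
# force_limit, gain, and the feed bounds are illustrative assumptions.

def adapt_feed_rate(feed_rate, thrust_force, force_limit=2.0, gain=0.05,
                    min_feed=0.001, max_feed=0.5):
    """Reduce feed rate when the axial thrust force nears the limit;
    recover it slowly when the force is safely below the limit.

    feed_rate    -- current feed rate in mm/s
    thrust_force -- measured axial reaction force in N
    """
    error = force_limit - thrust_force
    if error < 0:
        # Over the limit: back off proportionally to the overshoot.
        feed_rate += gain * error          # error is negative here
    else:
        # Below the limit: creep back toward the nominal feed.
        feed_rate += 0.1 * gain * error
    # Never stop entirely (a stalled micro-drill also breaks) and never
    # exceed the nominal maximum feed.
    return max(min_feed, min(feed_rate, max_feed))
```

In a real loop this update would run at the force-sensor rate, with the robot commanded to advance along the drill axis at the returned feed rate.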

    Robotic Assistant Systems for Otolaryngology-Head and Neck Surgery

    Recently, there has been a significant movement in otolaryngology-head and neck surgery (OHNS) toward minimally invasive techniques, particularly those utilizing natural orifices. While these techniques can reduce the risk of complications encountered with classic open approaches, such as scarring, infection, and damage to healthy tissue incurred in accessing the surgical site, significant challenges remain in both visualization and manipulation, including poor sensory feedback, reduced visibility, limited working area, and decreased precision due to long instruments. This work presents two robotic assistance systems that help to overcome different aspects of these challenges. The first is the Robotic Endo-Laryngeal Flexible (Robo-ELF) Scope, which assists surgeons in manipulating flexible endoscopes. Flexible endoscopes can provide superior visualization compared to microscopes or rigid endoscopes by allowing views not constrained by line-of-sight, yet they are seldom used in the operating room because of the difficulty of manually manipulating and stabilizing them precisely for long periods of time. The Robo-ELF Scope enables stable, precise robotic manipulation of flexible scopes and frees the surgeon’s hands to operate bimanually. It has been demonstrated and evaluated in human cadavers and is moving toward a human subjects study. The second is the Robotic Ear Nose and Throat Microsurgery System (REMS), which assists surgeons in manipulating rigid instruments and endoscopes. Manipulating rigid instruments involves two main challenges: reduced precision from hand tremor amplified by long instruments, and difficulty navigating through complex anatomy surrounded by sensitive structures. The REMS enables precise manipulation by allowing the surgeon to hold the surgical instrument while filtering unwanted movement such as hand tremor.
The REMS also enables augmented navigation by calculating the position of the instrument with high accuracy and combining this information with registered preoperative imaging data to enforce virtual safety barriers around sensitive anatomy. The REMS has been demonstrated and evaluated in user studies with synthetic phantoms and human cadavers.
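The virtual-safety-barrier idea above can be sketched very simply: a commanded tip motion is shortened so the instrument never enters a forbidden region around sensitive anatomy. The spherical region below is a simplifying assumption for illustration; the REMS derives its barriers from registered preoperative imaging.

```python
import numpy as np

# Illustrative sketch of a virtual safety barrier: clamp a commanded tip
# motion so the tool tip stays outside a forbidden sphere. The spherical
# shape is an assumption; real barriers come from segmented imaging data.

def constrain_motion(tip, step, center, radius):
    """Return a (possibly shortened) step keeping the tip outside the sphere."""
    target = tip + step
    d = np.linalg.norm(target - center)
    if d >= radius:
        return step                          # target is already safe
    # Push the target back onto the barrier surface along the
    # center-to-target direction.
    direction = (target - center) / max(d, 1e-9)
    safe_target = center + radius * direction
    return safe_target - tip
```

A full implementation would apply this check at the control rate and typically render a resisting force as the barrier is approached, rather than clamping abruptly.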

    Design and Control of Robotic Systems for Lower Limb Stroke Rehabilitation

    Lower extremity stroke rehabilitation exhausts considerable health care resources, is labor intensive, and provides mostly qualitative metrics of patient recovery. To overcome these issues, robots can assist patients in physically manipulating their affected limb and can measure the resulting motion. Currently available rehabilitation robots, however, assist only a limited set of training motions, are not portable for in-home or in-clinic use, are costly, and may not provide sufficient safety or performance. This thesis proposes incorporating a mobile drive base into lower extremity rehabilitation robots to create a portable, inherently safe system that provides assistance over a wide range of training motions. A set of rehabilitative motion tasks was established, and a six-degree-of-freedom (DOF) motion- and force-sensing system was designed to meet high-power, large-workspace, and affordability requirements. An admittance controller was implemented, and the feasibility of using this portable, low-cost system for movement assistance was shown through tests on a healthy individual. An improved version of the robot was then developed that added torque sensing and known joint elasticity for use with a flexible-joint impedance controller in future clinical testing.
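The admittance controller mentioned above maps a measured interaction force to commanded motion through virtual dynamics. A minimal single-axis sketch, with illustrative (not thesis) gains, integrates M·a + B·v = F so a constant push settles at the velocity F/B:

```python
# Minimal single-axis admittance control sketch: a measured force drives a
# virtual mass-damper (M * a + B * v = F) whose velocity is sent to the
# robot's motion controller. M, B, and dt are illustrative assumptions.

def admittance_step(v, force, M=5.0, B=20.0, dt=0.01):
    """Integrate the virtual dynamics one step; return the new commanded velocity."""
    a = (force - B * v) / M
    return v + a * dt

# A constant 10 N push converges to the steady-state velocity F / B = 0.5.
v = 0.0
for _ in range(2000):
    v = admittance_step(v, force=10.0)
```

Lowering the virtual mass M makes the limb feel lighter; raising the damping B makes motion slower and steadier, which is the main tuning trade-off in rehabilitation use.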

    Shared control for natural motion and safety in hands-on robotic surgery

    In hands-on robotic surgery, the surgeon controls the tool's motion by applying forces and torques to the robot holding the tool, allowing the robot-environment interaction to be felt through the tool itself. To further improve results, shared control strategies combine the strengths of the surgeon with those of the robot. One such strategy is active constraints, which prevent motion into regions deemed unsafe or unnecessary. While active constraints on rigid anatomy are well established, limited work has been done on dynamic active constraints (DACs) for deformable soft tissue, particularly on strategies that handle multiple sensing modalities. In addition, attaching the tool to the robot imposes the end-effector dynamics on the surgeon, reducing dexterity and increasing fatigue; current control policies on these systems compensate only for gravity, ignoring other dynamic effects. This thesis presents several contributions to shared control in hands-on robotic surgery, which create a more natural motion for the surgeon and extend the use of DACs to point clouds. A novel null-space-based optimization technique minimizes the end-effector friction, mass, and inertia of redundant robots, creating a more natural motion, closer to the feeling of a tool unattached to the robot. Because it operates in the null space, the surgeon is left in full control of the procedure. A novel DACs approach has also been developed which operates on point clouds, allowing its application to various sensing technologies, such as 3D cameras or CT scans, and therefore to various surgeries. Experimental validation in point-to-point motion trials and a virtual-reality ultrasound scenario demonstrates a reduction in work when maneuvering the tool and improvements in accuracy and speed when performing virtual ultrasound scans.
Overall, the results suggest that these techniques could increase ease of use for the surgeon and improve patient safety.
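The null-space mechanism described above can be illustrated compactly: a secondary joint velocity (e.g. one chosen to reduce apparent end-effector inertia) is projected through I − J⁺J so it produces pure self-motion and cannot disturb the surgeon's commanded tool motion. The small Jacobian below is an arbitrary example for a redundant arm, not one from the thesis.

```python
import numpy as np

# Sketch of null-space projection for a redundant robot: secondary joint
# velocities are filtered through N = I - pinv(J) @ J so they generate no
# end-effector motion. The 2x3 Jacobian is an arbitrary illustrative example.

def null_space_projector(J):
    """Return I - J^+ J, which maps joint velocities to self-motion only."""
    J_pinv = np.linalg.pinv(J)
    return np.eye(J.shape[1]) - J_pinv @ J

J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])            # 2D task space, 3 joints
N = null_space_projector(J)

q_dot_secondary = np.array([0.3, -0.2, 0.1])   # e.g. an inertia-shaping term
q_dot = N @ q_dot_secondary                    # self-motion component only
# J @ q_dot is (numerically) zero: the tool does not move.
```

This is why the surgeon remains in full control: whatever the optimization commands in the null space, the end-effector velocity demanded by the surgeon is unaffected.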

    Physical Diagnosis and Rehabilitation Technologies

    The book focuses on the diagnosis, evaluation, and assistance of gait disorders; all the papers have been contributed by research groups working on assistive robotics, instrumentation, and augmentative devices.

    Intuitive, iterative and assisted virtual guides programming for human-robot comanipulation

    For a very long time, automation was driven by the use of traditional industrial robots placed in cages and programmed to repeat more or less complex tasks at their highest speed and maximum accuracy. This robot-oriented solution depends heavily on hard automation, which requires pre-specified fixtures and time-consuming programming, hindering robots from becoming flexible and versatile tools. These robots have since evolved toward a new generation of small, inexpensive, inherently safe, and flexible systems that work hand in hand with humans. In these new collaborative workspaces the human can be included in the loop as an active decision-making agent: as a teacher and as a co-worker, the human can influence the robot's decision-making process. In this context, virtual guides are an important tool for assisting the human worker, reducing physical effort and cognitive overload during task accomplishment. However, the construction of virtual guides often requires expert knowledge and precise modeling of the task, restricting their usefulness to scenarios with unchanging constraints.
To overcome these challenges and enhance the flexibility of virtual guides programming, this thesis presents a novel approach that allows the worker to create virtual guides by demonstration, through an iterative method based on kinesthetic teaching and displacement splines. With this approach, the worker can iteratively modify the guides while being assisted by them, making the process more intuitive and natural and reducing operator strain. The approach allows local refinement of virtual guiding trajectories through physical interaction with the robot: the worker can move a specific Cartesian keypoint of the guide or re-demonstrate a portion of it. The approach also extends to 6D virtual guides, where displacement splines are defined via Akima interpolation (for translation) and quadratic interpolation of quaternions (for orientation). The worker can initially define a virtual guiding trajectory and then use the assistance in translation to concentrate solely on defining the orientation along the path. Two industrial scenarios with a collaborative robot (cobot) demonstrate that these innovations provide a novel and intuitive way to increase the human's comfort during human-robot comanipulation.
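A 6D guide of the kind described above can be sketched with off-the-shelf tools: Akima interpolation for the translational keypoints and quaternion interpolation for orientation. Note that SciPy's `Slerp` is used here as a stand-in for the quadratic quaternion interpolation in the thesis, and the keypoints are invented for illustration.

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator
from scipy.spatial.transform import Rotation, Slerp

# Sketch of a 6D virtual guide from demonstrated keypoints: Akima spline for
# translation, quaternion interpolation (here SciPy's Slerp, standing in for
# the thesis's quadratic quaternion scheme) for orientation. Keypoints are
# illustrative.

t_key = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
positions = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0],
                      [3, 1, 1], [4, 2, 1]], dtype=float)
rotations = Rotation.from_euler("z", [0, 15, 30, 45, 60], degrees=True)

pos_spline = Akima1DInterpolator(t_key, positions, axis=0)  # translation guide
ori_interp = Slerp(t_key, rotations)                        # orientation guide

t = 2.5
p = pos_spline(t)        # interpolated guide position at t
R = ori_interp(t)        # interpolated guide orientation at t
```

Iterative refinement then amounts to replacing one keypoint (or a demonstrated sub-segment of keypoints) and rebuilding the interpolants; Akima interpolation is a natural choice here because moving one keypoint only affects the spline locally.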

    Vision-Based Autonomous Control in Robotic Surgery

    Robotic surgery has profoundly changed surgical procedures. Enhanced dexterity, ergonomics, motion scaling, and tremor filtering are well-known advantages over classical laparoscopy. In the past decade, robotics has played a fundamental role in Minimally Invasive Surgery (MIS), in which the da Vinci robotic system (Intuitive Surgical Inc., Sunnyvale, CA) is the most widely used platform for robot-assisted laparoscopic procedures. Robots also have great potential in microsurgical applications, where human limits become critical and sub-millimetric surgical gestures can benefit enormously from motion scaling and tremor compensation. However, surgical robots still lack advanced assistive control methods that could substantially support the surgeon's activity and perform surgical tasks autonomously with high intervention quality. In this scenario, images are the main feedback the surgeon can use to operate correctly at the surgical site. In view of increasing autonomy in surgical robotics, vision-based techniques therefore play an important role and can be developed by extending computer vision algorithms to surgical scenarios. Moreover, many surgical tasks could benefit from advanced control techniques, allowing the surgeon to work under less stressful conditions and to perform surgical procedures with greater accuracy and safety. Starting from these topics, this thesis provides surgical robots with the ability to perform complex tasks, helping the surgeon manipulate the robotic system skillfully to meet the above requirements. An increase in safety and a reduction in mental workload are achieved through the introduction of active constraints, which can prevent the surgical tool from crossing a forbidden region and, similarly, generate constrained motion to guide the surgeon along a specific path or to accomplish autonomous robotic tasks.
This leads to the development of a vision-based method for robot-aided dissection that allows the control algorithm to adapt autonomously to environmental changes during the intervention using stereo image processing. Computer vision is also exploited to define a surgical-tool collision-avoidance method that uses Forbidden Region Virtual Fixtures, rendering a repulsive force to the surgeon. Advanced control techniques based on an optimization approach are developed, allowing the execution of multiple tasks encoded through Control Barrier Functions (CBFs) and enhancing a haptic-guided teleoperation system during suturing procedures. The proposed methods are tested on different robotic platforms, including the da Vinci Research Kit (dVRK) and a new microsurgical robotic platform. Finally, the integration of new sensors and instruments into surgical robots is considered, including a multi-functional tool for dexterous tissue manipulation and different visual sensing technologies.
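The CBF idea above can be shown in a deliberately reduced form: for a 1D system ẋ = u with safe set h(x) = x_max − x ≥ 0, a nominal control is filtered so that ḣ ≥ −α·h, which keeps the state from ever crossing the barrier. The full thesis formulation is an optimization over multiple tasks; the scalar gain and limit below are illustrative assumptions.

```python
# Illustrative 1D control barrier function filter for x_dot = u with the
# safe set h(x) = x_max - x >= 0. Enforcing h_dot >= -alpha * h reduces to
# u <= alpha * h. alpha and x_max are illustrative assumptions; the thesis
# applies CBFs to full multi-task surgical control.

def cbf_filter(x, u_nominal, x_max=1.0, alpha=5.0):
    """Return the safe control closest to u_nominal."""
    h = x_max - x
    u_max = alpha * h          # safety condition: h_dot = -u >= -alpha * h
    return min(u_nominal, u_max)

# Simulate: the filtered system approaches the barrier but never crosses it,
# even though the nominal control keeps pushing toward it.
x, dt = 0.0, 0.01
for _ in range(1000):
    x += cbf_filter(x, u_nominal=2.0) * dt
```

The same structure generalizes to the multi-dimensional case by solving a small quadratic program at each control step, with one linear constraint per barrier function.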

    AUGMENTED REALITY AND INTRAOPERATIVE C-ARM CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED ROBOTIC SURGERY

    Minimally invasive robotic-assisted surgery is a rapidly growing alternative to traditional open and laparoscopic procedures; nevertheless, challenges remain. The standard of care derives surgical strategies from preoperative volumetric data (i.e., computed tomography (CT) and magnetic resonance (MR) images) that benefit from the ability of multiple modalities to delineate different anatomical boundaries. However, preoperative images may not reflect a possibly highly deformed perioperative setup or intraoperative deformation. Additionally, in current clinical practice, relating preoperative plans to the surgical scene is a mental exercise whose accuracy depends heavily on the surgeon’s experience and is therefore subject to inconsistency. To address these fundamental limitations in minimally invasive robotic surgery, this dissertation combines a high-end robotic C-arm imaging system and a modern robotic surgical platform into an integrated intraoperative image-guided system. We performed deformable registration of preoperative plans to a perioperative cone-beam computed tomography (CBCT) scan acquired after the patient is positioned for intervention. From the registered surgical plans, we overlaid critical information onto the primary intraoperative visual source, the robotic endoscope, using augmented reality. Guidance afforded by this system not only fuses virtual medical information through augmented reality but also provides tool localization and other dynamically updated intraoperative behavior, presenting enhanced depth feedback and information to the surgeon. These techniques for guided robotic surgery required a streamlined approach to creating intuitive and effective human-machine interfaces, especially in visualization. Our software design principles create an inherently information-driven modular architecture incorporating robotics and intraoperative imaging through augmented reality.
The system's performance is evaluated using phantoms and preclinical in-vivo experiments for multiple applications, including transoral robotic surgery, robot-assisted thoracic interventions, and cochleostomy for cochlear implantation. The resulting functionality, proposed architecture, and implemented methodologies can be generalized to other C-arm-based image guidance systems for further extensions of robotic surgery.
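The registration step underpinning this guidance can be illustrated in its simplest form. Here a rigid point-based alignment (the Kabsch/Procrustes solution) stands in for the deformable registration of preoperative plans to CBCT described above; it recovers the rotation and translation that best align corresponding landmark sets.

```python
import numpy as np

# Simplified sketch of image-to-plan registration: rigid point-based
# alignment (Kabsch/Procrustes) as a stand-in for the deformable
# registration described above. src/dst are corresponding landmark sets
# (rows are 3D points).

def rigid_register(src, dst):
    """Least-squares R, t such that R @ src_i + t ~= dst_i."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                   # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Deformable registration relaxes the rigid-transform assumption (e.g. with a spline or diffeomorphic deformation field), but the rigid solution above is the usual initialization for such pipelines.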