
    Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study

    © 2019, The Author(s). This work presents a user-study evaluation of various visual and haptic feedback modes on a real telemanipulation platform. Of particular interest is the potential for haptic guidance virtual fixtures and 3D-mapping techniques to enhance efficiency and awareness in a simple teleoperated valve-turning task. An RGB-Depth camera gathers real-time color and geometric data of the remote scene, and the operator is presented with either a monocular color video stream, a 3D voxel-map representation of the remote scene, or the ability to place a haptic guidance virtual fixture to help complete the telemanipulation task. The efficacy of these feedback modes is then explored experimentally through a user study, and the modes are compared on objective and subjective metrics. Despite the simple task and the large number of evaluation metrics, results show that the haptic virtual fixture yielded significantly better collision avoidance than 3D visualization alone. The anticipated performance gains from moving from 2D to 3D visualization were also observed. The remaining comparisons yield exploratory inferences that inform directions for future focused, statistically powered studies.
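The guidance virtual fixture evaluated above can be thought of as a virtual spring pulling the tool toward a reference path. Below is a minimal sketch of that idea, assuming a straight-line guide and an illustrative stiffness; the function name and gain are hypothetical, not taken from the study:

```python
import numpy as np

def guidance_fixture_force(tool_pos, path_a, path_b, stiffness=200.0):
    """Attractive force pulling the tool toward the segment path_a-path_b.

    A simple stand-in for a haptic guidance virtual fixture: the force is
    proportional to the distance between the tool and its closest point
    on the guide path (here a straight segment), so the haptic device
    gently resists deviation from the guide.
    """
    ab = path_b - path_a
    # Parameter of the closest point on the segment, clamped to [0, 1]
    t = np.clip(np.dot(tool_pos - path_a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = path_a + t * ab
    return stiffness * (closest - tool_pos)

# Tool hovering 0.1 m off a guide that runs along the x-axis:
force = guidance_fixture_force(np.array([0.5, 0.1, 0.0]),
                               np.array([0.0, 0.0, 0.0]),
                               np.array([1.0, 0.0, 0.0]))
# force points back toward the guide: [0, -20, 0]
```

A real implementation would render this force on the master haptic device; stiffness would be tuned so the fixture guides without overpowering the operator.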

    Monitoring companion for industrial robotic processes

    For system integrators, optimizing complex industrial robotic applications (e.g. robotised welding) is a difficult and time-consuming task. The procedure becomes tedious, and often very hard to complete, when the operator cannot access the robotic system once it is in operation, perhaps because the installation is far away or because of the operational environment. In these circumstances, as an alternative to physically visiting the installation site, the system integrator may rely on additional nearby sensors to remotely acquire the necessary process information. While it is hard to completely replace this trial-and-error approach, it is possible to provide a more effective way to gather process information that can be reused across several robotic installations. This thesis investigates the use of a "monitoring robot" in addition to the task robot(s) that belong to the industrial process to be optimized. The monitoring robot can be equipped with several different sensors and can be moved into close proximity of any installed task robot, so that it can collect information from that process during and/or after operation without interfering. The thesis reviews related work in industry and in the field of teleoperation to identify the most important challenges in remote monitoring and teleoperation. From the background investigation it is clear that two very important issues are: i) the nature of the teleoperator's interface; and ii) the efficiency of the shared control between the human operator and the monitoring system. In order to investigate these two issues efficiently, it was necessary to create experimental scenarios that operate independently from any application scenario, so an abstract problem domain was created. This way the monitoring system's control and interface can be evaluated in a context that presents challenges typical of a remote monitoring task without being application-domain specific.
Therefore the validity of the proposed approach can be assessed from a generic and, therefore, more powerful and widely applicable perspective. The monitoring framework developed in this thesis is described, covering both the shared-control design choices based on virtual fixtures (VF) and the implementation in a 3D visualization environment. The monitoring system is evaluated through a usability study with participants, which assesses the system's performance, acceptance, and ease of use in a static monitoring task, accompanied by user-filled TLX questionnaires. Since future work will apply this system to real robotic welding scenarios, the thesis finally reports some preliminary work in such an application.

    CO-ROBOTIC ULTRASOUND IMAGING: A COOPERATIVE FORCE CONTROL APPROACH

    Ultrasound (US) imaging remains one of the most commonly used imaging modalities in medical practice due to its low cost and safety. However, 63-91% of ultrasonographers develop musculoskeletal disorders due to the effort required to perform imaging tasks. Robotic ultrasound (RUS), the application of robotic systems to assist ultrasonographers in ultrasound scanning procedures, has been proposed in the literature and recently deployed in clinical settings using limited degree-of-freedom (DOF) systems. One example is breast-scanning systems, which allow one-DOF translation of a large ultrasound array to capture patients' breast scans and minimize sonographer effort while preserving the desired clinical outcome. Recently, the robotics industry has evolved to provide lightweight, compact, accurate, and cost-effective manipulators. We leverage this development to provide ultrasonographers with a full 6-DOF system that offers force assistance to facilitate US image acquisition. Admittance robot control allows for smooth human-machine interaction in a desired task. In the case of RUS, force control can assist sonographers in facilitating and even improving the imaging results of typical procedures. We propose a new system setup for collaborative force control in US applications. This setup consists of the 6-DOF UR5 industrial robot and a six-axis force sensor attached to the robot tooltip, which in turn has a US probe attached to it through a custom-designed probe attachment mechanism. Additionally, an independent one-axis load cell is placed inside this attachment device and used to measure the contact force between the probe and the patient's anatomy in real time, independently of any other forces. As the sonographer guides the US probe, the robot collaborates with the hand motions, following the path of the user.
When imaging, the robot can assist the sonographer by augmenting the forces they apply, thereby lessening the physical effort required as well as the resulting strain. Additional benefits include force and velocity limiting for patient safety and robot motion constraints for particular imaging tasks. Initial results of a user study show the feasibility of implementing the presented robot-assisted system in a clinical setting.
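The admittance control and velocity-limiting ideas described above can be sketched in a few lines. This is a generic 1-DOF mass-damper admittance law with a safety clamp, not the controller actually deployed on the UR5; all gains and limits are illustrative:

```python
import numpy as np

def admittance_step(f_measured, f_desired, velocity, dt,
                    mass=2.0, damping=20.0, v_max=0.05):
    """One step of a 1-DOF admittance law: m*dv/dt + b*v = f_err.

    The robot renders a virtual mass-damper, so the probe yields to the
    sonographer's hand while regulating contact force toward f_desired.
    A hard velocity limit is applied for patient safety. Gains and limits
    here are illustrative, not the values used on the actual system.
    """
    f_err = f_measured - f_desired
    accel = (f_err - damping * velocity) / mass
    v_new = velocity + accel * dt
    return float(np.clip(v_new, -v_max, v_max))

# With a constant 5 N force error the unconstrained steady-state velocity
# would be f_err / damping = 0.25 m/s; the safety clamp holds it at 0.05.
v = 0.0
for _ in range(1000):
    v = admittance_step(f_measured=5.0, f_desired=0.0, velocity=v, dt=0.001)
```

In a full 6-DOF system the same law runs per axis (or in matrix form) on the wrench from the six-axis sensor, with the independent load cell supplying the contact-force component.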

    Multi-robot cooperative platform : a task-oriented teleoperation paradigm

    This thesis proposes the study and development of a teleoperation system based on multi-robot cooperation under the task-oriented teleoperation paradigm: the Multi-Robot Cooperative Paradigm, MRCP. In standard teleoperation, the operator uses master devices to control the remote slave robot arms, which reproduce the desired movements and perform the task. With the developed work, the operator can virtually manipulate an object, and MRCP automatically generates the arm commands needed to perform the task. The operator does not have to resolve situations arising from restrictions the slave arms may have. The research is therefore aimed at improving the accuracy of teleoperation tasks in complex environments, particularly in the field of robot-assisted minimally invasive surgery. This field demands patient safety, and its workspace imposes many restrictions on teleoperation. MRCP can be defined as a platform composed of several robots that cooperate automatically to perform a teleoperated task, creating a robotic system with increased capabilities (workspace volume, accessibility, dexterity, etc.). The cooperation is based on transferring the task between robots when necessary to enable smooth task execution. The MRCP control evaluates the suitability of each robot to continue with the ongoing task and the optimal time to transfer the task from the currently selected robot to the best candidate to continue it. From the operator's point of view, MRCP provides an interface that enables teleoperation through the task-oriented paradigm: operator orders are translated into task actions instead of robot orders. This thesis is structured as follows: the first part reviews current solutions for the teleoperation of complex tasks and compares them with those proposed in this research.
The second part of the thesis presents and reviews in depth the different evaluation criteria used to determine the suitability of each robot to continue executing a task, considering the configuration of the robots and emphasizing the criterion of dexterity and manipulability. The study reviews the control algorithms required to enable task-oriented telemanipulation; the proposed teleoperation paradigm is transparent to the operator. The thesis then presents and analyses several experimental results using MRCP in the field of minimally invasive surgery. These experiments study the effectiveness of MRCP in various tasks requiring the cooperation of two hands. A representative task is used: suturing with a minimally invasive surgical technique. The analysis is done in terms of execution time, economy of movement, quality, and patient safety (potential damage produced by undesired interaction between the tools and the vital tissues of the patient). The final part of the thesis proposes the implementation of different virtual aids and restrictions (guided teleoperation based on haptic, visual, and audio feedback; protection of restricted workspace regions; etc.) using the task-oriented teleoperation paradigm. A framework is defined for implementing and applying a basic set of virtual aids and constraints within a virtual simulator for laparoscopic abdominal surgery. The set of experiments has validated the developed work. The study revealed the influence of virtual aids on the learning process of laparoscopic techniques. It also demonstrated an improvement in learning curves, which paves the way for its adoption as a methodology for training new surgeons.
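The dexterity and manipulability criterion the abstract emphasizes is commonly quantified with Yoshikawa's manipulability measure. The sketch below, using a hypothetical planar two-link arm rather than the surgical robots of the thesis, shows how such a score could rank configurations:

```python
import numpy as np

def manipulability(J):
    """Yoshikawa's manipulability measure w = sqrt(det(J @ J.T)).

    A standard criterion for ranking robot configurations: w tends to
    zero as the arm approaches a kinematic singularity, so the robot
    best suited to continue a task is the one keeping the largest w
    along the remaining motion.
    """
    return float(np.sqrt(max(np.linalg.det(J @ J.T), 0.0)))

def planar_2link_jacobian(q1, q2, l1=1.0, l2=1.0):
    """Position Jacobian of a planar two-link arm (illustrative model)."""
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

# A bent elbow is far from singular; a fully straightened arm is singular.
w_bent = manipulability(planar_2link_jacobian(0.3, np.pi / 2))
w_straight = manipulability(planar_2link_jacobian(0.3, 0.0))
```

A task-transfer policy in the spirit of MRCP would hand the task to another robot once the active robot's measure drops below that of the best candidate.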

    Robotic Assistant Systems for Otolaryngology-Head and Neck Surgery

    Recently, there has been a significant movement in otolaryngology-head and neck surgery (OHNS) toward minimally invasive techniques, particularly those utilizing natural orifices. While these techniques can reduce the complications of classic open approaches, such as scarring, infection, and damage to the healthy tissue traversed to access the surgical site, significant challenges remain in both visualization and manipulation, including poor sensory feedback, reduced visibility, limited working area, and decreased precision due to long instruments. This work presents two robotic assistance systems that help overcome different aspects of these challenges. The first is the Robotic Endo-Laryngeal Flexible (Robo-ELF) Scope, which assists surgeons in manipulating flexible endoscopes. Flexible endoscopes can provide superior visualization compared to microscopes or rigid endoscopes by allowing views not constrained by line-of-sight. However, they are seldom used in the operating room because manually manipulating and stabilizing them precisely for long periods is difficult. The Robo-ELF Scope enables stable, precise robotic manipulation of flexible scopes and frees the surgeon's hands for bimanual operation. It has been demonstrated and evaluated in human cadavers and is moving toward a human-subjects study. The second is the Robotic Ear Nose and Throat Microsurgery System (REMS), which assists surgeons in manipulating rigid instruments and endoscopes. Manipulating rigid instruments poses two main challenges: reduced precision from hand tremor amplified by long instruments, and difficulty navigating through complex anatomy surrounded by sensitive structures. The REMS enables precise manipulation by allowing the surgeon to hold the surgical instrument while the robot filters unwanted movement such as hand tremor.
The REMS also enables augmented navigation by calculating the position of the instrument with high accuracy and combining this information with registered preoperative imaging data to enforce virtual safety barriers around sensitive anatomy. The REMS has been demonstrated and evaluated in user studies with synthetic phantoms and human cadavers.
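Tremor filtering of the kind described above exploits the fact that physiological tremor (roughly 8-12 Hz) lies well above the frequency of deliberate surgical motion. The REMS itself is a hands-on cooperative robot, so this first-order low-pass filter is only a generic illustration of the separation, with invented cutoff and sample rates:

```python
import math

class LowPassFilter:
    """First-order low-pass filter used to suppress hand tremor.

    Physiological tremor lies roughly in the 8-12 Hz band, above the
    frequencies of deliberate surgical motion, so a low cutoff passes the
    intended movement while attenuating tremor. Cutoff and sample rate
    here are illustrative choices.
    """
    def __init__(self, cutoff_hz=2.0, sample_hz=1000.0):
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        self.alpha = (1.0 / sample_hz) / (rc + 1.0 / sample_hz)
        self.state = 0.0

    def step(self, x):
        # Exponential smoothing toward the new sample
        self.state += self.alpha * (x - self.state)
        return self.state

# A 10 Hz tremor of unit amplitude is strongly attenuated by a 2 Hz cutoff.
lpf = LowPassFilter()
out = [lpf.step(math.sin(2 * math.pi * 10 * t / 1000.0)) for t in range(1000)]
```

At 10 Hz with a 2 Hz cutoff the gain is about 1/sqrt(1 + (10/2)^2) ≈ 0.2, i.e. roughly 14 dB of attenuation, while slow intended motion passes nearly unchanged.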

    ReachMAN to help sub-acute patients training reaching and manipulation

    Conventional rehabilitation after stroke, consisting of one-to-one practice with a therapist, is labor-intensive and subjective. Furthermore, there is evidence that increased training would benefit the motor function of stroke survivors, though available resources do not allow it. Training with dedicated robotic devices promises to address these problems and to promote motivation through therapeutic games. The goal of this project is to develop a simple robotic system to assist rehabilitation that could easily be integrated into existing hospital environments and rehabilitation centers. A study was first carried out to analyze the kinematics of hand movements while performing representative activities of daily living. Results showed that the movements were confined to one plane, so they can be trained using a robot with fewer degrees of freedom (DOF). Hence ReachMAN, a compact 3-DOF robot based on an endpoint approach, was developed to train reaching, forearm pronosupination, and grasping, independently or simultaneously. ReachMAN's exercises were implemented as software-based games, facilitating active participation from patients. Visual, haptic, and performance feedback were provided to increase motivation, and tunable levels of difficulty were provided to suit each patient's ability. A pilot study with three subjects was first conducted to evaluate the potential use of ReachMAN as a rehabilitation tool and to determine suitable settings for training. Following positive results from the pilot study, a clinical study was initiated to investigate the effect of rehabilitation using ReachMAN. Preliminary results from 6 subjects show an increase in patients' upper-limb motor activity, range of movement, and smoothness, and a reduction in movement duration. Subjects reported being motivated by the robot training and felt that the robot helped their recovery.
The results of this thesis suggest that a compact and simple robot such as ReachMAN can be used to enhance recovery in sub-acute stroke patients.
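Movement smoothness, one of the outcome measures reported above, is often quantified with a jerk-based metric. The abstract does not specify which metric was used, so the following sketches one common choice, a dimensionless squared-jerk cost, on synthetic reaching profiles:

```python
import numpy as np

def normalized_jerk(positions, dt):
    """Dimensionless squared-jerk cost: lower values mean smoother movement.

    Integrates squared jerk over the movement and normalizes by duration
    and amplitude so that recordings of different lengths and extents are
    comparable. One of several smoothness measures used in the motor
    rehabilitation literature; the exact metric in the study may differ.
    """
    jerk = np.gradient(np.gradient(np.gradient(positions, dt), dt), dt)
    duration = dt * (len(positions) - 1)
    amplitude = np.max(positions) - np.min(positions)
    return float(np.sum(jerk ** 2) * dt * duration ** 5 / amplitude ** 2)

# A minimum-jerk reach scores lower (smoother) than the same reach with
# an 8 Hz oscillation superimposed.
t = np.linspace(0.0, 1.0, 501)
smooth = 10 * t**3 - 15 * t**4 + 6 * t**5        # minimum-jerk profile
jittery = smooth + 0.01 * np.sin(2 * np.pi * 8 * t)
dt = t[1] - t[0]
```

Because jerk is the third derivative, even a small superimposed oscillation raises the cost by orders of magnitude, which is what makes the metric sensitive to recovery of smooth reaching.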

    Survey on Additive Manufacturing, Cloud 3D Printing and Services

    Cloud Manufacturing (CM) is the concept of using manufacturing resources in a service-oriented way over the Internet. Recent developments in Additive Manufacturing (AM) are making it possible to utilise resources ad hoc as a replacement for traditional manufacturing resources when spontaneous problems arise in established manufacturing processes. To be of use in these scenarios, AM resources must follow strict principles of transparency and service composition in adherence to the Cloud Computing (CC) paradigm. With this review we provide an overview of CM, AM, and relevant domains, and present the historical development of scientific research in these fields, starting from 2002. Part of this work is also a meta-review of the domain to further detail its development and structure.

    Robotic manipulators for single access surgery

    This thesis explores the development of cooperative robotic manipulators for enhancing surgical precision and patient outcomes in single-access surgery and, specifically, Transanal Endoscopic Microsurgery (TEM). During these procedures, surgeons manipulate a heavy set of instruments via a mechanical clamp inserted into the patient's body through a surgical port, resulting in imprecise movements, increased patient risk, and longer operating times. Therefore, an articulated robotic manipulator with passive joints is initially introduced, featuring built-in position and force sensors in each joint and electronic joint brakes for instant lock/release capability. The articulated manipulator concept is further improved with motorised joints, evolving into an active tool holder. The joints allow the incorporation of advanced robotic capabilities such as ultra-lightweight gravity compensation and hands-on kinematic reconfiguration, which can optimise the placement of the tool holder in the operating theatre. Owing to the enhanced sensing capabilities, the application of the active robotic manipulator was further explored in conjunction with advanced image-guidance approaches such as endomicroscopy. Recent advances in probe-based optical imaging, such as confocal endomicroscopy, are making inroads into clinical use. However, the challenging manipulation of imaging probes hinders their practical adoption. Therefore, a combination of the fully cooperative robotic manipulator with a high-speed scanning endomicroscopy instrument is presented, simplifying the incorporation of optical-biopsy techniques into routine surgical workflows. Finally, another embodiment of a cooperative robotic manipulator is presented as an input interface to control a highly articulated robotic instrument for TEM.
This master-slave interface alleviates the drawbacks of traditional master-slave devices, e.g., the use of clutching mechanisms to compensate for the mismatch between slave and master workspaces, and the lack of intuitive manipulation feedback (e.g. joint limits) to the user. To address these drawbacks, a joint-space robotic manipulator is proposed that emulates the kinematic structure of the flexible robotic instrument under control.

    Intuitive, iterative and assisted virtual guides programming for human-robot comanipulation

    For a very long time, automation was driven by the use of traditional industrial robots placed in cages, programmed to repeat more or less complex tasks at their highest speed and with maximum accuracy. This robot-oriented solution depends heavily on hard automation, which requires pre-specified fixtures and time-consuming programming, hindering robots from becoming flexible and versatile tools. These robots have since evolved toward a new generation of small, inexpensive, inherently safe, and flexible systems that work hand in hand with humans. In these new collaborative workspaces the human can be included in the loop as an active agent. As a teacher and as a co-worker, the human can influence the decision-making process of the robot. In this context, virtual guides are an important tool used to assist the human worker by reducing physical effort and cognitive overload during task accomplishment. However, the construction of virtual guides often requires expert knowledge and modeling of the task. These limitations restrict the usefulness of virtual guides to scenarios with unchanging constraints.
To overcome these challenges and enhance the flexibility of virtual-guide programming, this thesis presents a novel approach that allows the worker to create virtual guides by demonstration through an iterative method based on kinesthetic teaching and displacement splines. Thanks to this approach, the worker is able to iteratively modify the guides while being assisted by them, making the process more intuitive and natural while reducing operator strain. Our approach allows local refinement of virtual guiding trajectories through physical interaction with the robot: a specific Cartesian keypoint of the guide can be moved, or a portion can be re-demonstrated. We also extended our approach to 6D virtual guides, where displacement splines are defined via Akima interpolation (for translation) and quadratic interpolation of quaternions (for orientation). The worker can initially define a virtual guiding trajectory and then use the assistance in translation to concentrate solely on defining the orientation along the path. We demonstrated that these innovations provide a novel and intuitive solution that increases the human's comfort during human-robot comanipulation in two industrial scenarios with a collaborative robot (cobot).
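The 6D guide construction described above — Akima splines for translation and quaternion interpolation for orientation — can be sketched with off-the-shelf SciPy tools. Note the substitution: SciPy provides spherical linear interpolation (Slerp) rather than the quadratic quaternion interpolation of the thesis, and all keyframes and names below are illustrative:

```python
import numpy as np
from scipy.interpolate import Akima1DInterpolator
from scipy.spatial.transform import Rotation, Slerp

def build_6d_guide(times, positions, quats_xyzw):
    """Turn demonstrated keyframes into a smooth 6D virtual guide.

    Translation uses Akima splines (one per axis), which avoid the
    overshoot of ordinary cubic splines between keypoints; orientation
    uses spherical linear interpolation of quaternions. Slerp stands in
    here for the quadratic quaternion interpolation of the thesis.
    """
    pos_spline = Akima1DInterpolator(np.asarray(times),
                                     np.asarray(positions), axis=0)
    rot_interp = Slerp(times, Rotation.from_quat(quats_xyzw))

    def guide(t):
        # Returns (position, quaternion xyzw) of the guide at time t
        return pos_spline(t), rot_interp(t).as_quat()

    return guide

# Five demonstrated keyframes (positions in metres, yaw about z in degrees)
times = [0.0, 1.0, 2.0, 3.0, 4.0]
positions = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.2, 0.1, 0.0],
             [0.3, 0.1, 0.1], [0.4, 0.2, 0.1]]
quats = [Rotation.from_euler('z', a, degrees=True).as_quat()
         for a in (0, 10, 20, 30, 40)]
guide = build_6d_guide(times, positions, quats)
pos, quat = guide(1.0)   # the guide passes through the demonstrated keyframes
```

Iterative refinement then amounts to replacing a few keyframes from a new partial demonstration and rebuilding the interpolants.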

    Image-guided robots for dot-matrix tumor ablation

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2010. Cataloged from PDF version of thesis. Includes bibliographical references (p. 203-208). Advances in medical imaging now provide detailed images of solid tumors inside the body, and miniaturized energy-delivery systems enable tumor destruction through local heating powered by a thin electrode. However, the use of thermal ablation as a first line of treatment is limited by the difficulty of accurately realizing a desired treatment with the limited region of active heating around an electrode. The purpose of this research is to identify and quantify the current limitations of image-guided interventional procedures and subsequently develop a procedure and devices that enable accurate and efficient execution of image-based interventional plans, and thus ablation of a tumor of any shape with minimal damage to surrounding tissue. Current limitations of probe placement for ablation therapy were determined by a detailed retrospective study of 50 representative CT-guided procedures. On average, 21 CT scans were performed per procedure (range 11-38), with the majority devoted to needle orientation and insertion (54% of scans on average) and trajectory planning (19% on average). A regression analysis showed that smaller and deeper lesions were associated with a higher number of CT scans for needle orientation and insertion, highlighting the difficulty of targeting. Another challenge identified was repositioning the instrument's distal tip within tissue. The first robot is a patient-mounted device that aligns an instrument along a desired trajectory via two motor-actuated concentric, crossed, and partially nested hoops. A carriage rides in the hoops and grips and inserts an instrument via a two degree-of-freedom friction drive. An image-based point-and-click user interface relates clicks on the medical images to robot commands.
Mounting directly on the patient provides a sufficiently stable and safe platform for actuation and eliminates the need to compensate for chest motion, thereby reducing cost and complexity compared to other devices. Phantom experiments in a realistic clinical setting demonstrated a mean targeting accuracy of 3.5 mm with an average of five CT scans. The second robot repositions the distal tip of a medical instrument to adjacent points within tissue. The steering mechanism is based on the concept of substantially straightening a pre-curved Nitinol stylet by retracting it into a concentric outer cannula, and re-deploying it at different axial and rotational cannula positions. The proximal end of the cannula is attached to the distal end of a screw-spline that enables it to be translated and rotated with respect to the casing. Translation of the stylet relative to the cannula is achieved with a second concentric, nested, smaller-diameter screw that is constrained to rotate with the cannula. The robot mechanism is compatible with CT imaging and light enough to be supported on a patient's chest or attached to standard stereotactic frames. Targeting experiments in a gelatin phantom demonstrated a mean error of 1.8 mm between the stylet tip and the position predicted by a kinematic model. Ultimately, these types of systems are envisioned being used together as part of a highly dexterous patient-mounted positioning platform that can accurately perform ablation of large and irregularly shaped tumors inside medical imaging machines, offering the potential to replace expensive and traumatic surgeries with minimally invasive outpatient procedures. By Conor James Walsh, Ph.D.
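The pre-curved-stylet steering described above is commonly modeled with constant-curvature kinematics: the deployed portion of the stylet springs back to a circular arc whose plane is set by the cannula roll angle. The sketch below is a generic model of that idea, not the thesis' calibrated kinematics; the curvature and dimensions are invented:

```python
import numpy as np

def stylet_tip(cannula_depth, cannula_angle, stylet_extension,
               curvature=1.0 / 0.05):
    """Constant-curvature forward model of a pre-curved stylet.

    Inside the cannula the stylet is held straight; the deployed portion
    of length s springs back to a circular arc of curvature k in a plane
    selected by the cannula roll angle. Illustrative parameters only.
    """
    s, k = stylet_extension, curvature
    # In-plane arc: advance along the cannula axis (z), deflect along local x
    x_local = (1.0 - np.cos(k * s)) / k
    z_local = np.sin(k * s) / k
    # Roll the bending plane about the cannula axis, add the insertion depth
    return np.array([x_local * np.cos(cannula_angle),
                     x_local * np.sin(cannula_angle),
                     cannula_depth + z_local])

# 20 mm of stylet deployed (50 mm radius of curvature), rolled 90 degrees:
tip = stylet_tip(cannula_depth=0.1, cannula_angle=np.pi / 2,
                 stylet_extension=0.02)
```

Inverting such a model over the three inputs (insertion depth, roll, extension) is what lets the robot reach adjacent ablation points without a new percutaneous puncture.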