
    Medical robots for MRI guided diagnosis and therapy

    Magnetic Resonance Imaging (MRI) provides the capability of imaging tissue with fine resolution and superior soft-tissue contrast compared with conventional ultrasound and CT imaging, which makes it an important tool for clinicians to perform more accurate diagnosis and image-guided therapy. Medical robotic devices that combine high-resolution anatomical images with real-time navigation are ideal for precise and repeatable interventions. Despite these advantages, the MR environment imposes constraints on mechatronic devices operating within it. This thesis presents a study on the design and development of robotic systems for particular MR interventions, in which MR-compatibility testing of mechatronic components, actuation control, kinematics and workspace analysis, and the mechanical and electrical design of the robots have been investigated. Two types of robotic systems have therefore been developed and evaluated with respect to these aspects. (i) A device for MR-guided transrectal prostate biopsy: the system was designed from components proven to be MR compatible, actuated by pneumatic and ultrasonic motors, and tracked by optical position sensors and fiducial markers. Clinical trials have been performed with the device on three patients, and the reported results demonstrate its capability to perform needle positioning under MR guidance, with a procedure time of around 40 minutes and no compromised image quality, meeting the system specifications. (ii) Limb-positioning devices to facilitate the magic angle effect for diagnosis of tendinous injuries: two systems were designed for lower- and upper-limb positioning respectively, actuated and tracked by methods similar to those of the first device. A group of volunteers was recruited to verify the functionality of the systems. The results demonstrate a clear enhancement of image quality, with an increase in signal intensity of up to 24 times in the tendon tissue caused by the magic angle effect, showing the feasibility of applying the proposed devices in clinical diagnosis.
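
    The magic angle effect exploited by the limb-positioning devices has a simple quantitative basis: the residual dipolar coupling in highly ordered collagen scales with (3*cos^2(theta) - 1), where theta is the angle between the tendon fibres and the main field B0, so the coupling vanishes and the tendon signal rises when theta is near 54.7 degrees. The short numerical sketch below is an illustration, not material from the thesis, and assumes only standard NumPy.

        import numpy as np

        # The residual dipolar coupling in ordered collagen scales with
        # (3*cos(theta)**2 - 1), where theta is the angle between the tendon
        # fibre direction and B0.  The coupling vanishes, and the measured
        # signal increases, at the "magic angle".
        magic_angle_deg = np.degrees(np.arccos(np.sqrt(1.0 / 3.0)))
        print(f"magic angle = {magic_angle_deg:.2f} deg")   # ~54.74 deg

        # Relative strength of the dipolar term at a few fibre orientations.
        for theta_deg in (0.0, 20.0, 40.0, 54.74, 70.0, 90.0):
            theta = np.radians(theta_deg)
            factor = abs(3.0 * np.cos(theta) ** 2 - 1.0) / 2.0
            print(f"theta = {theta_deg:5.2f} deg -> |3cos^2(theta)-1|/2 = {factor:.3f}")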

    The Hand-Held Force Magnifier: Surgical Tools to Augment the Sense of Touch

    Modern surgeons routinely perform procedures with noisy, sub-threshold, or obscured visual and haptic feedback, either due to the necessary approach, or because the systems on which they are operating are exceedingly delicate. For example, in cataract extraction, ophthalmic surgeons must peel away thin membranes in order to access and replace the lens of the eye. Elsewhere, dissection is now commonly performed with energy-delivering tools rather than sharp blades, and damage to deep structures is possible if tissue contact is not well controlled. Surgeons compensate for their lack of tactile sensibility by relying solely on visual feedback, observing tissue deformation and other visual cues through surgical microscopes or cameras. Using visual information alone can make a procedure more difficult, because cognitive mediation is required to convert visual feedback into motor action. We call this the “haptic problem” in surgery because the human sensorimotor loop is deprived of critical tactile afferent information, increasing the chance of intraoperative injury and requiring extensive training before clinicians reach independent proficiency. Tools that enhance the surgeon’s direct perception of tool-tissue forces can therefore potentially reduce the risk of iatrogenic complications and improve patient outcomes. Towards this end, we have developed and characterized a new robotic surgical tool, the Hand-Held Force Magnifier (HHFM), which amplifies forces at the tool tip so that they may be readily perceived by the user, a paradigm we call “in-situ” force feedback. In this dissertation, we describe the development of successive generations of HHFM prototypes and the evaluation of a proposed human-in-the-loop control framework using the methods of psychophysics. Using these techniques, we have verified that our tool can reduce sensory perception thresholds, augmenting the user’s abilities beyond what is normally possible. Further, we have created models of human motor control in surgically relevant tasks such as membrane puncture, which have been shown to be sensitive to push-pull direction and handedness effects. Force augmentation has also been shown to improve force control in isometric force-generation tasks. Finally, in support of future psychophysics work, we have developed an inexpensive, high-bandwidth, single-axis haptic renderer using a commercial audio speaker.
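
    The core idea of in-situ force feedback, amplifying tip forces so that they cross the user's perception threshold, can be summarized in a few lines of control code. The sketch below is an illustration under assumed parameters, not the actual HHFM controller; the gain, the force limit, and the hardware I/O callbacks (read_tip_force_N, command_handle_force_N) are hypothetical names introduced for the example.

        import time

        GAIN = 10.0    # assumed amplification factor applied to the measured tip force
        F_MAX = 2.0    # assumed saturation limit for the rendered force, in newtons

        def amplified_force(f_tip):
            """Scale the measured tip force and clamp it to the actuator's safe range."""
            f_cmd = GAIN * f_tip
            return max(-F_MAX, min(F_MAX, f_cmd))

        def control_loop(read_tip_force_N, command_handle_force_N, rate_hz=1000):
            """Read the tip force, amplify it, and render it at the handle at a fixed rate."""
            period = 1.0 / rate_hz
            while True:
                command_handle_force_N(amplified_force(read_tip_force_N()))
                time.sleep(period)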

    Contact Point Detection Using a Six-Axis Force Sensor for Assembly Tasks


    Improvement of a Multi-Body Collision Computation Framework and Its Application to Robot (Self-)Collision Avoidance

    One of the fundamental demands on robotic systems is safe interaction with their environment. In order to fulfill that condition, collisions both with obstacles and with the robot's own structure have to be avoided. This problem has been addressed before at the German Aerospace Center (DLR) through the use of different algorithms. In this work, a novel solution is presented that differentiates itself from previous implementations by its geometry-independent, flexible thread structure and computationally robust nature. As a first step towards self-collision avoidance, collision detection must be handled. Along this line, the Robotics and Mechatronics Center of the DLR developed its own version of the Voxmap-Pointshell (VPS) Algorithm. This penalty-based collision computation algorithm uses two types of haptic data structures, voxelmaps and pointshells, for each pair of potentially colliding objects in order to detect contact points and compute forces between interfering virtual objects. Prior to the work presented here, a framework for multi-body collision detection already existed; however, it was neither designed nor optimized to handle mechanisms. This thesis presents a framework that handles collision detection, force computation and physics processing of multi-body virtual realities in real time, integrating the DLR VPS Algorithm implementation. Due to the high number of available robots and mechanisms, a method that is both robust and generic enough to accommodate forthcoming developments is desirable. In this work, an input configuration file detailing the mechanism's structure, based on the Denavit-Hartenberg convention, is used so that any type of robotic system or virtual object can use this method without loss of validity. Experiments to prove the validity of this work have been performed both on DLR's HUG simulator and on DLR's HUG haptic device, which is composed of two DLR-KUKA Light Weight Robots (LWRs).
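
    For orientation, the penalty-based force computation at the heart of a voxmap-pointshell scheme can be sketched compactly. The code below is a simplified illustration of the general VPS idea, not DLR's implementation; the occupancy grid, the stiffness value and the half-voxel penetration-depth approximation are assumptions made for the example.

        import numpy as np

        def vps_force(voxmap, voxel_size, points, normals, centre, stiffness=500.0):
            """Sum penalty forces for pointshell points that fall inside occupied voxels.

            voxmap     : 3-D boolean occupancy grid of the static object
            voxel_size : edge length of one voxel, in metres
            points     : (N, 3) pointshell surface points, in the voxmap frame
            normals    : (N, 3) inward-pointing unit normals of those points
            centre     : (3,) reference point of the moving object, for the torque
            """
            force = np.zeros(3)
            torque = np.zeros(3)
            idx = np.floor(points / voxel_size).astype(int)
            for p, n, (i, j, k) in zip(points, normals, idx):
                inside = (0 <= i < voxmap.shape[0] and
                          0 <= j < voxmap.shape[1] and
                          0 <= k < voxmap.shape[2] and
                          voxmap[i, j, k])
                if inside:
                    # Penetration depth is crudely approximated by half a voxel edge;
                    # a full VPS implementation interpolates distances stored in the voxmap.
                    f = stiffness * (voxel_size / 2.0) * n
                    force += f
                    torque += np.cross(p - centre, f)
            return force, torque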

    Robotically assisted eye surgery: a haptic master console

    Vitreo-retinal surgery encompasses the surgical procedures performed on the vitreous humor and the retina. A procedure typically consists of the removal of the vitreous humor, the peeling of a membrane and/or the repair of a retinal detachment. Operations are performed with needle-shaped instruments which enter the eye through scleral openings made by the surgeon. An instrument is moved by hand in four degrees of freedom (three rotations and one translation) through this opening. Two rotations (φ and ψ) provide lateral movement of the instrument tip. The other two DoFs (z and θ) are the translation along and rotation about the instrument axis. Actuation of, for example, a forceps can be considered a fifth DoF. Characteristically, the manipulation of delicate intraocular tissue with a thickness in the micrometer range is required. Today, eye surgery is performed with a maximum of two instruments simultaneously. The surgeon relies on visual feedback only, since instrument forces are below the human detection limit. A microscope provides the visual feedback. It forces the surgeon to work in a static and non-ergonomic body posture. Although the surgeon’s proficiency improves throughout his career, hand tremor may become a problem around his mid-fifties. Robotically assisted surgery with a master-slave system enhances dexterity. The slave with instrument manipulators is placed over the eye. The surgeon controls the instrument manipulators via haptic interfaces at the master. The master and slave are connected by electronic hardware and control software. Implementation of tremor filtering in the control software and downscaling of the hand motion allow prolongation of the surgeon’s career. Furthermore, it becomes possible to perform tasks such as intraocular cannulation which cannot be done in manually performed surgery. This thesis focuses on the master console. Eye surgery procedures are observed in the operating rooms of different hospitals to gain insight into the requirements for the master. The master console as designed has an adjustable frame, a 3D display and two haptic interfaces, each with a coarse adjustment arm. The console is mounted at the head of the operating table and is combined with the slave. It is compact, easy to place and allows the surgeon to have a direct view of and physical contact with the patient. Furthermore, it fits in today’s manual surgery arrangement. Each haptic interface has the same five degrees of freedom as the instrument inside the eye. Through these interfaces, the surgeon can feel the augmented instrument forces. Downscaling of the hand motion results in a more accurate instrument movement compared to manually performed surgery. Together with the visual feedback, it is as if the surgeon grasps the instrument near the tip inside the eye. The similarity between hand motion and motion of the instrument tip as seen on the display results in intuitive manipulation. Pre-adjustment of the interface is done via the coarse adjustment arm. Mode switching enables control of three or more instrument manipulators with only two interfaces. Two one-degree-of-freedom master-slave systems with force feedback are built to derive the requirements for the haptic interface. Hardware-in-the-loop testing provides valuable insights and shows the possibility of force feedback without the use of force sensors. Two five-DoF haptic interfaces are realized for bimanual operation. Each DoF has a position encoder and a force feedback motor. A correct representation of the upscaled instrument forces is only possible if the disturbance forces are low. Actuators are therefore mounted to the fixed world or in the neighborhood of the pivoting point for a low contribution to the inertia. The use of direct drive for φ and ψ, and low-geared, backdrivable transmissions for the other three DoFs, gives a minimum of friction. Disturbance forces are further minimized by a proper cable layout and actuator-amplifier combinations without torque ripple. The similarity in DoFs between vitreo-retinal eye surgery and minimally invasive surgery (MIS) enables the system to be used for MIS as well. Experiments in combination with a slave robot for laparoscopic and thoracoscopic surgery show that an instrument can be manipulated in a comfortable and intuitive way. User experience of surgeons and others is utilized to improve the haptic interface further. A parallel instead of a serial actuation concept for the φ and ψ DoFs reduces the inertia, eliminates the flexible cable connection between frame and motor, and allows the heat of the motor to be transferred directly to the frame. A newly designed z-θ module combines the actuation and suspension of the hand-held part of the interface and has a three times larger z range than in the first design of the haptic interface.
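
    The two control-software features mentioned above, tremor filtering and downscaling of the hand motion, can be illustrated with a short sketch. The code below uses assumed numbers (scale factors, cut-off frequency, sample rate) and is not the console's actual control software; physiological hand tremor lies roughly in the 8-12 Hz band, so a first-order low-pass filter with a cut-off of a few hertz already attenuates it.

        class MotionScaler:
            """Filter and scale master motion; scale measured tip forces back up."""

            def __init__(self, motion_scale=0.2, force_scale=30.0,
                         cutoff_hz=3.0, sample_rate_hz=1000.0):
                self.motion_scale = motion_scale   # 1 mm at the hand -> 0.2 mm at the tip
                self.force_scale = force_scale     # tip forces magnified at the master
                dt = 1.0 / sample_rate_hz
                rc = 1.0 / (2.0 * 3.141592653589793 * cutoff_hz)
                self.alpha = dt / (rc + dt)        # first-order low-pass coefficient
                self._filtered = 0.0

            def slave_setpoint(self, master_position):
                """Low-pass filter the master position (removing tremor) and scale it down."""
                self._filtered += self.alpha * (master_position - self._filtered)
                return self.motion_scale * self._filtered

            def master_force(self, tip_force):
                """Scale the measured instrument-tip force up for display at the master."""
                return self.force_scale * tip_force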

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities in today's society, including communication, education, art, entertainment, commerce and science, would change forever if we learned how to capture, manipulate and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we can communicate information in a physically based language that has never been explored before. Owing to constant improvement in haptic technology and increasing levels of research into and development of haptics-related algorithms, protocols and devices, haptics technology is believed to have a promising future.

    Nonlinear control of a seven degrees-of-freedom exoskeleton robot arm

    Advances in the field of robotics have allowed robotic devices to be increasingly integrated into the rehabilitation of physical disabilities. This research work falls within the field of rehabilitation robotics; it presents the development of the ETS-MARSE robot, a seven-degrees-of-freedom exoskeleton designed to be worn on the human arm. The developments include the study and implementation of a relatively novel nonlinear control approach, as well as different rehabilitation schemes. One of the characteristics of a rehabilitation robot is that it deals with a wide range of patients who have different biomechanical and physiological conditions. The implementation of the nonlinear control technique known as Virtual Decomposition Control addresses this issue through its internal parameter adaptation, which provides robust behavior across the different characteristics of the robot's users. Moreover, this technique reduces the complexity of high-degree-of-freedom robots through its innovative subsystem decomposition, all while ensuring asymptotic stability of the system and excellent trajectory tracking. Among the different rehabilitation schemes we can mention passive, active-assistive and active rehabilitation. The first follows predefined trajectories and relies on the efficiency of the controller. The other two schemes require understanding the user's intention of movement and taking action in order to guide, restrain, correct or follow it. For this purpose, we present an approach that utilizes a force sensor as the human-robot interface in order to transform, via an admittance function, the forces that the user exerts on the robot end-effector (handle) into motion, and thus execute active-assistive or active rehabilitation. Finally, among the main developments of this work, an approach is presented that removes the need for a force sensor to perform some active rehabilitation tasks. By means of a nonlinear observer, the interaction forces are estimated and the user's intention of movement is followed. Experimental results show the effectiveness of all the proposed approaches. All tests involving humans were performed with healthy subjects. Trajectory tracking of the robot is executed in joint space; some trajectories are given in Cartesian space and transformed to joint space by means of the pseudoinverse of the Jacobian. However, this option is limited; a mandatory next step to improve many functionalities of the robot is to solve its inverse kinematics. Among other developments in progress is an approach to processing electromyographic signals in order to obtain information from the robot's users. First results of this methodology are presented. Teleoperation and haptic capabilities are also in the initial stage of development.
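
    The two mappings mentioned above, an admittance function that converts measured handle forces into motion and the Jacobian pseudoinverse that carries Cartesian commands into joint space, can be sketched briefly. The following is an illustrative example with placeholder mass, damping and Jacobian values, not the ETS-MARSE controller.

        import numpy as np

        def admittance_step(f_meas, v_prev, dt, mass=2.0, damping=20.0):
            """Discretised admittance M*dv/dt + D*v = f: measured force -> Cartesian velocity."""
            dv = (f_meas - damping * v_prev) / mass
            return v_prev + dv * dt

        def joint_velocity(J, v_cartesian):
            """Map a Cartesian velocity to joint velocities via the Jacobian pseudoinverse."""
            return np.linalg.pinv(J) @ v_cartesian

        # Example with an arbitrary 6x7 Jacobian standing in for a 7-DoF arm:
        J = np.random.default_rng(0).standard_normal((6, 7))
        f = np.array([1.0, 0.0, 0.0, 0.0, 0.0, 0.0])     # 1 N push along x at the handle
        v = admittance_step(f_meas=f, v_prev=np.zeros(6), dt=0.001)
        qdot = joint_velocity(J, v)                       # (7,) joint velocity vector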

    Virtual reality based upper extremity stroke rehabilitation system.

    Some studies suggest that the use of virtual reality technologies as an assistive technology, in combination with conventional therapies, can achieve improved results in post-stroke rehabilitation. Despite the wealth of ongoing research into building virtual reality based systems for upper extremity rehabilitation, there is still a strong need for a training platform that provides whole-arm rehabilitation. In order to be practical, such a system should ideally be low cost (affordable for an average individual or household) and require minimal therapist involvement. This research outlines some of the applications of virtual reality that have undergone clinical trials with patients suffering from upper extremity functional motor deficits. Furthermore, this thesis presents the design, development, implementation and feasibility testing of a Virtual Reality-based Upper Extremity Stroke Rehabilitation System. Motion sensing technology has been used to capture the real-time movement data of the upper extremity, and a virtual reality glove has been used to track the flexion/extension of the fingers. A virtual room has been designed with an avatar of the human arm to allow a variety of training tasks to be accomplished. An interface has been established to incorporate the real-time data from the hardware into a virtual scene running on a PC. Three different training scenes depicting real-world scenarios have been designed. These have been used to analyze the motion patterns of the users while executing the tasks in the virtual environment simulation. A usability study with healthy volunteers performing the training tasks has been undertaken to study the ease of use, ease of learning and improvement in motivation in the virtual environment. Moreover, this system, costing approximately 2,725 pounds, would provide home-based rehabilitation of the whole arm, augmenting conventional therapy. Statistical analysis of the data and the evaluation studies with self-report methodologies suggest the feasibility of the system for post-stroke rehabilitation in a home environment.

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery time. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results and practical experiences in this expanding area. Many chapters in the book concern advanced research in this growing area. The book provides critical analysis of clinical trials and an assessment of the benefits and risks of applying these technologies. This book is certainly a small sample of the research activity on medical robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it serves as a valuable source for researchers interested in the subjects involved, whether they are currently “medical roboticists” or not.