
    Anthropomorphic Robot Design and User Interaction Associated with Motion

    Though in its original concept a robot was conceived to have a somewhat human-like shape, most robots now in use have specific industrial purposes and do not closely resemble humans. Nevertheless, robots that resemble the human form in some way have continued to be introduced. They are called anthropomorphic robots. Because the user interface to all robots is now highly mediated, the form of the user interface is not necessarily connected to the robot's form, human or otherwise. Consequently, the unique way the design of anthropomorphic robots affects their user interaction is through their general appearance and the way they move. These robots' human-like appearance acts as a kind of generalized predictor that gives their operators, and those with whom they may directly work, the expectation that they will behave to some extent like a human. This expectation is especially prominent for interactions with social robots, which are built to enhance it. Interaction with them may often be mainly cognitive because they are not necessarily kinematically intricate enough for complex physical interaction; their body movement, for example, may be limited to simple wheeled locomotion. An anthropomorphic robot with human form, however, can be kinematically complex and designed, for example, to reproduce the details of human limb, torso, and head movement. Because of the mediated nature of robot control, there remains in general no necessary connection between the specific form of the user interface and the anthropomorphic form of the robot. But the robot's anthropomorphic kinematics and dynamics imply that the impact of its design shows up in the way the robot moves. The central finding of this report is that the control of this motion is a basic design element through which the anthropomorphic form can affect user interaction. In particular, designers of anthropomorphic robots can take advantage of the inherent human-like movement to 1) improve the user's direct manual control over robot limbs and body positions, 2) improve users' ability to detect anomalous robot behavior that could signal malfunction, and 3) enable users to better infer the intent of robot movement. These three benefits of anthropomorphic design are inherent implications of the anthropomorphic form, but they need to be recognized by designers as part of anthropomorphic design and explicitly enhanced to maximize their beneficial impact. Examples of such enhancements are provided in this report. If implemented, these benefits of anthropomorphic design can help reduce the risk of Inadequate Design of Human and Automation Robotic Integration (HARI) associated with the HARI-01 gap by providing efficient and dexterous operator control over robots and by improving the operator's ability to detect malfunctions and understand the intention of robot movement.

    Autofluorescence lifetime augmented reality as a means for real-time robotic surgery guidance in human patients.

    Due to the loss of tactile feedback, the assessment of tumor margins during robotic surgery is based only on visual inspection, which is neither significantly sensitive nor specific. Here we demonstrate time-resolved fluorescence spectroscopy (TRFS) as a novel technique to complement the visual inspection of oral cancers during transoral robotic surgery (TORS) in real time and without the need for exogenous contrast agents. TRFS enables identification of cancerous tissue by its distinct autofluorescence signature, which is associated with the alteration of tissue structure and biochemical profile. A prototype TRFS instrument was integrated synergistically with the da Vinci Surgical robot, and the combined system was validated in swine and human patients. Label-free and real-time assessment and visualization of tissue biochemical features during a robotic surgery procedure, as demonstrated here, has the potential to improve intraoperative decision making not only during TORS but also during other robotic procedures, without modification of conventional clinical protocols.
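    The identification principle summarized above rests on the shape of the fluorescence decay measured at each tissue point, characterized by its lifetime. As a rough illustration of how a lifetime can be extracted from a time-resolved decay, the sketch below fits a mono-exponential model to a measured curve; the abstract does not describe the actual TRFS analysis pipeline, so the model, function names and parameters here are assumptions.

```python
# Hypothetical sketch: estimating a fluorescence lifetime from a time-resolved
# decay by fitting a mono-exponential model. The real TRFS analysis
# (e.g. multi-exponential or deconvolution-based) is not specified in the
# abstract; this is an illustrative assumption only.
import numpy as np
from scipy.optimize import curve_fit

def mono_exponential(t, amplitude, tau, offset):
    """I(t) = A * exp(-t / tau) + background."""
    return amplitude * np.exp(-t / tau) + offset

def estimate_lifetime(time_ns, intensity):
    """Fit a single-exponential decay and return the lifetime tau (ns)."""
    p0 = (intensity.max(), 1.0, intensity.min())  # rough initial guess
    params, _ = curve_fit(mono_exponential, time_ns, intensity, p0=p0)
    return params[1]  # tau in nanoseconds

# Example with synthetic data: a 4 ns decay sampled over 25 ns.
t = np.linspace(0.0, 25.0, 250)
decay = 100.0 * np.exp(-t / 4.0) + 2.0 + np.random.normal(0.0, 0.5, t.size)
print(f"estimated lifetime: {estimate_lifetime(t, decay):.2f} ns")
```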

    Robotically assisted eye surgery: a haptic master console

    Vitreo-retinal surgery encompasses the surgical procedures performed on the vitreous humor and the retina. A procedure typically consists of the removal of the vitreous humor, the peeling of a membrane and/or the repair of a retinal detachment. Operations are performed with needle-shaped instruments which enter the eye through surgeon-made scleral openings. An instrument is moved by hand in four degrees of freedom (three rotations and one translation) through this opening. Two rotations provide lateral instrument tip movement; the other two DoFs are the translation (z) and the rotation along the instrument axis. Actuation of, for example, a forceps can be considered a fifth DoF. Characteristically, the manipulation of delicate intraocular tissue with a thickness in the micrometer range is required. Today, eye surgery is performed with a maximum of two instruments simultaneously. The surgeon relies on visual feedback only, since instrument forces are below the human detection limit. A microscope provides the visual feedback and forces the surgeon to work in a static and non-ergonomic body posture. Although the surgeon's proficiency improves throughout his career, hand tremor may become a problem around his mid-fifties.

    Robotically assisted surgery with a master-slave system enhances dexterity. The slave with instrument manipulators is placed over the eye. The surgeon controls the instrument manipulators via haptic interfaces at the master. The master and slave are connected by electronic hardware and control software. Implementation of tremor filtering in the control software and downscaling of the hand motion allow prolongation of the surgeon's career. Furthermore, it becomes possible to do tasks like intraocular cannulation which cannot be done in manually performed surgery.

    This thesis focuses on the master console. Eye surgery procedures are observed in the operating rooms of different hospitals to gain insight into the requirements for the master. The master console as designed has an adjustable frame, a 3D display and two haptic interfaces, each with a coarse adjustment arm. The console is mounted at the head of the operating table and is combined with the slave. It is compact, easy to place and allows the surgeon to have a direct view on, and physical contact with, the patient. Furthermore, it fits in today's manual surgery arrangement. Each haptic interface has the same five degrees of freedom as the instrument inside the eye. Through these interfaces, the surgeon can feel the augmented instrument forces. Downscaling of the hand motion results in more accurate instrument movement compared to manually performed surgery. Together with the visual feedback, it is as if the surgeon grasps the instrument near the tip inside the eye. The similarity between the hand motion and the motion of the instrument tip as seen on the display results in intuitive manipulation. Pre-adjustment of the interface is done via the coarse adjustment arm. Mode switching enables the control of three or more instrument manipulators with only two interfaces.

    Two one-degree-of-freedom master-slave systems with force feedback are built to derive the requirements for the haptic interface. Hardware-in-the-loop testing provides valuable insights and shows the possibility of force feedback without the use of force sensors. Two five-DoF haptic interfaces are realized for bimanual operation. Each DoF has a position encoder and a force feedback motor. A correct representation of the upscaled instrument forces is only possible if the disturbance forces are low. Actuators are therefore mounted to the fixed world or in the neighborhood of the pivoting point for a low contribution to the inertia. The use of direct drive for the two lateral rotations and of low-geared, backdrivable transmissions for the other three DoFs gives a minimum of friction. Disturbance forces are further minimized by a proper cable layout and actuator-amplifier combinations without torque ripple. The similarity in DoFs between vitreo-retinal eye surgery and minimally invasive surgery (MIS) enables the system to be used for MIS as well. Experiments in combination with a slave robot for laparoscopic and thoracoscopic surgery show that an instrument can be manipulated in a comfortable and intuitive way. User experience of surgeons and others is utilized to improve the haptic interface further. A parallel instead of a serial actuation concept for the two lateral-rotation DoFs reduces the inertia, eliminates the flexible cable connection between frame and motor and allows the heat of the motor to be transferred directly to the frame. A newly designed module for the translation and rotation along the instrument axis combines the actuation and suspension of the hand-held part of the interface and has a three times larger z range than in the first design of the haptic interface.
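    The abstract above attributes the accuracy gain of the master-slave system to downscaling of the hand motion and tremor filtering in the control software. The sketch below is a minimal, assumed illustration of that idea, not the controller actually built in the thesis: master handle positions are low-pass filtered to attenuate tremor and scaled down before being sent to the slave.

```python
# Hypothetical sketch of master-to-slave motion mapping with downscaling and
# tremor filtering. The actual control software of the haptic master console
# is not described in the abstract; the scale factor, filter cutoff and
# sample rate below are illustrative assumptions.
import math

class MasterSlaveMapper:
    def __init__(self, scale=0.2, cutoff_hz=8.0, sample_rate_hz=1000.0):
        self.scale = scale  # downscale hand motion, e.g. 5:1
        # First-order low-pass (exponential smoothing) coefficient,
        # chosen to attenuate physiological tremor (~8-12 Hz).
        rc = 1.0 / (2.0 * math.pi * cutoff_hz)
        dt = 1.0 / sample_rate_hz
        self.alpha = dt / (rc + dt)
        self.filtered = None

    def update(self, master_position):
        """Return the slave setpoint for one control cycle.

        master_position: tuple of master-handle coordinates, e.g. (x, y, z) in mm.
        """
        if self.filtered is None:
            self.filtered = list(master_position)
        # Low-pass filter each axis, then apply the motion scale factor.
        self.filtered = [f + self.alpha * (p - f)
                         for f, p in zip(self.filtered, master_position)]
        return tuple(self.scale * f for f in self.filtered)

# Example: one control cycle with the master handle at (10.0, -2.0, 5.0) mm.
mapper = MasterSlaveMapper()
print(mapper.update((10.0, -2.0, 5.0)))
```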

    Recent Advancements in Augmented Reality for Robotic Applications: A Survey

    Robots are expanding from industrial applications to daily life, in areas such as medical robotics, rehabilitative robotics, social robotics, and mobile/aerial robotics systems. In recent years, augmented reality (AR) has been integrated into many robotic applications, including medical, industrial, human–robot interaction, and collaboration scenarios. In this work, AR for both medical and industrial robot applications is reviewed and summarized. For medical robot applications, we investigated the integration of AR in (1) preoperative and surgical task planning; (2) image-guided robotic surgery; (3) surgical training and simulation; and (4) telesurgery. AR for industrial scenarios is reviewed in (1) human–robot interactions and collaborations; (2) path planning and task allocation; (3) training and simulation; and (4) teleoperation control/assistance. In addition, the limitations and challenges are discussed. Overall, this article serves as a valuable resource for those working in the field of AR and robotics research, offering insights into the recent state of the art and prospects for improvement.

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented: surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced: experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living. The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
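    In the robotic scrub nurse study described above, instrument delivery follows gaze selection by the surgeon. One common way to turn raw gaze data into a selection is dwell-time detection; the sketch below illustrates that idea under assumed screen regions and thresholds and is not the implementation used in the thesis.

```python
# Hypothetical sketch of dwell-time gaze selection for a gaze-guided robotic
# scrub nurse. The region layout, dwell threshold and eye-tracker interface
# are illustrative assumptions, not the thesis's actual design.

DWELL_TIME_S = 1.0  # gaze must rest on an item this long to select it

# Screen regions (x_min, y_min, x_max, y_max) associated with instruments,
# in normalised screen coordinates.
INSTRUMENT_REGIONS = {
    "scalpel": (0.0, 0.0, 0.2, 0.2),
    "forceps": (0.8, 0.0, 1.0, 0.2),
}

def region_at(gaze_x, gaze_y):
    """Return the instrument whose region contains the gaze point, if any."""
    for name, (x0, y0, x1, y1) in INSTRUMENT_REGIONS.items():
        if x0 <= gaze_x <= x1 and y0 <= gaze_y <= y1:
            return name
    return None

def detect_selection(gaze_samples):
    """gaze_samples: iterable of (timestamp_s, x, y).
    Returns the first instrument fixated for at least DWELL_TIME_S."""
    current, dwell_start = None, None
    for t, x, y in gaze_samples:
        name = region_at(x, y)
        if name != current:
            current, dwell_start = name, t  # gaze moved to a new region
        elif name is not None and t - dwell_start >= DWELL_TIME_S:
            return name
    return None

# Example: the surgeon fixates the forceps region for just over a second.
samples = [(i * 0.1, 0.9, 0.1) for i in range(12)]
print(detect_selection(samples))  # -> "forceps"
```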

    Robot Autonomy for Surgery

    Autonomous surgery involves having surgical tasks performed by a robot operating under its own will, with partial or no human involvement. There are several important advantages of automation in surgery, which include increased precision of care due to sub-millimeter robot control, real-time utilization of biosignals for interventional care, improvements to surgical efficiency and execution, and computer-aided guidance under various medical imaging and sensing modalities. While these methods may displace some tasks of surgical teams and individual surgeons, they also present new capabilities in interventions that are too difficult for, or go beyond the skills of, a human. In this chapter, we provide an overview of robot autonomy in commercial use and in research, and present some of the challenges faced in developing autonomous surgical robots.

    Medical Robotics

    The first generation of surgical robots is already being installed in a number of operating rooms around the world. Robotics is being introduced to medicine because it allows for unprecedented control and precision of surgical instruments in minimally invasive procedures. So far, robots have been used to position an endoscope, perform gallbladder surgery and correct gastroesophageal reflux and heartburn. The ultimate goal of the robotic surgery field is to design a robot that can be used to perform closed-chest, beating-heart surgery. The use of robotics in surgery will undoubtedly expand over the coming decades. Minimally Invasive Surgery (MIS) is a revolutionary approach in surgery. In MIS, the operation is performed with instruments and viewing equipment inserted into the body through small incisions created by the surgeon, in contrast to open surgery with large incisions. This minimizes surgical trauma and damage to healthy tissue, resulting in shorter patient recovery times. The aim of this book is to provide an overview of the state of the art and to present new ideas, original results and practical experiences in this expanding area. Many chapters in the book concern advanced research in this growing area. The book provides critical analysis of clinical trials and an assessment of the benefits and risks of applying these technologies. This book is certainly a small sample of the research activity on Medical Robotics going on around the globe as you read it, but it surely covers a good deal of what has been done in the field recently, and as such it works as a valuable source for researchers interested in the subjects involved, whether they are currently “medical roboticists” or not.

    Computer- and robot-assisted Medical Intervention

    Medical robotics includes assistive devices used by the physician in order to make his/her diagnostic or therapeutic practices easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims and its different components, and describes the place of robots in that context. The evolution of general design and control paradigms in the development of medical robots is presented, and issues specific to this application domain are discussed. A view of existing systems, ongoing developments and future trends is given, and a case study is detailed. Other types of robotic help in the medical environment (such as assisting a handicapped person, rehabilitating a patient or replacing damaged/suppressed limbs or organs) are outside the scope of this chapter.
    Comment: Handbook of Automation, Shimon Nof (Ed.) (2009) 000-00

    Cable-driven parallel robot for transoral laser phonosurgery

    Transoral laser phonosurgery (TLP) is a common surgical procedure in otolaryngology. Currently, two techniques are commonly used: free beam and fibre delivery. For free beam delivery, in combination with laser scanning techniques, accurate laser pattern scanning can be achieved. However, a line of sight to the target is required. A suspension laryngoscope is adopted to create a straight working channel for the scanning laser beam, which could introduce lesions to the patient, and the manipulability and ergonomics are poor. In the fibre delivery approach, a flexible fibre is used to transmit the laser beam, and the distal tip of the laser fibre can be manipulated by a flexible robotic tool. The issues related to the limitation of the line of sight can thus be avoided. However, the laser scanning function is currently lost in this approach, and the performance is inferior to that of the laser scanning technique in the free beam approach. A novel cable-driven parallel robot (CDPR), LaryngoTORS, has been developed for TLP. By using a curved laryngeal blade, a straight suspension laryngoscope is no longer necessary, which is expected to be less traumatic to the patient. Semi-autonomous free path scanning can be executed, and high precision and high repeatability of the free path can be achieved. The performance has been verified in various bench and ex vivo tests. The technical feasibility of the LaryngoTORS robot for TLP was considered and evaluated in this thesis. The LaryngoTORS robot has demonstrated the potential to offer an acceptable and feasible solution for real-world clinical applications of TLP. Furthermore, the LaryngoTORS robot can be combined with fibre-based optical biopsy techniques. Experiments with probe-based confocal laser endomicroscopy (pCLE) and hyperspectral fibre-optic sensing were performed, and the LaryngoTORS robot demonstrates the potential to be used for fibre-based optical biopsy of the larynx.
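    For a cable-driven parallel robot such as the one described above, scan-path commands ultimately reduce to cable-length setpoints. The sketch below shows the basic inverse kinematics of a planar CDPR, where each cable length is the distance from a fixed anchor to the end-effector attachment point; the anchor layout and dimensions are illustrative assumptions and do not describe the LaryngoTORS geometry.

```python
# Hypothetical sketch of planar cable-driven parallel robot (CDPR) inverse
# kinematics: cable length = distance from fixed anchor to the end-effector
# attachment point. Anchor positions are illustrative assumptions and do not
# describe the LaryngoTORS geometry.
import math

# Fixed cable anchor points on the frame, in millimetres (x, y).
ANCHORS = [(0.0, 0.0), (60.0, 0.0), (60.0, 40.0), (0.0, 40.0)]

def cable_lengths(end_effector_xy):
    """Return the cable length from each anchor to the end-effector point."""
    ex, ey = end_effector_xy
    return [math.hypot(ax - ex, ay - ey) for ax, ay in ANCHORS]

def scan_path(waypoints):
    """Convert a laser scan path (list of (x, y) points) into per-cable
    length setpoints, one set per waypoint."""
    return [cable_lengths(p) for p in waypoints]

# Example: a short straight scan across the workspace centre.
path = [(30.0 + dx, 20.0) for dx in (-2.0, -1.0, 0.0, 1.0, 2.0)]
for point, lengths in zip(path, scan_path(path)):
    print(point, [round(l, 2) for l in lengths])
```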