    Remote Access and Computerized User Control of Robotic Micromanipulators

    Nano- and micromanipulators are critical research tools in numerous fields, including micro-manufacturing and disease study. Despite their importance, nano- and micromanipulation systems remain inaccessible to many groups due to price and lack of portability. An intuitive and remotely accessible manipulation system helps mitigate this access problem. Previously, the optimal control hardware for single-probe manipulation and the effect of latency on user performance were not well understood. Remote access demands full computerization; graphical user interfaces with networking capabilities were developed to fulfill this requirement and allow the use of numerous hardware controllers. Virtual environments were created to simulate the use of a manipulator with full parametric control and measurement capabilities. Users completed simulated tasks with each device and were surveyed about their perceptions. User performance with a commercial manipulator controller was exceeded by performance with both a computer mouse and a pen tablet. Latency was imposed within the virtual environment to study its effects and to establish guidelines as to which latency ranges are acceptable for long-range remote manipulation. User performance began to degrade noticeably at 100 ms and severely at 400 ms, and performance with the mouse degraded the least as latency increased. A computer vision system for analyzing carbon nanotube arrays was developed so that its computation time could be compared to acceptable system latency. The system characterizes the arrays to a high degree of accuracy, and most of the measurement types are obtainable fast enough for real-time analysis.
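    As a rough illustration of how latency can be imposed between an input device and a simulated manipulator, the sketch below buffers incoming commands and releases them only after a configurable delay. The class, command format, and default delay are assumptions for illustration, not the implementation described in this abstract.

```python
import collections
import time

class LatencyBuffer:
    """Delay teleoperation commands to emulate a long-range remote link.

    Illustrative sketch only; the command format and delay handling of the
    actual system are not specified in the abstract above.
    """

    def __init__(self, latency_s=0.100):
        # 0.100 s matches the point at which user performance reportedly
        # begins to degrade noticeably; adjust to study other ranges.
        self.latency_s = latency_s
        self._queue = collections.deque()  # (release_time, command) pairs

    def push(self, command):
        """Accept a command from the mouse, pen tablet, or hardware controller."""
        self._queue.append((time.monotonic() + self.latency_s, command))

    def pop_ready(self):
        """Return every command whose imposed latency has elapsed."""
        ready, now = [], time.monotonic()
        while self._queue and self._queue[0][0] <= now:
            ready.append(self._queue.popleft()[1])
        return ready
```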

    Control of Cable Robots for Construction Applications

    Robotic Assisted Microsurgery (RAMS): Application in Plastic Surgery

    Doctor of Philosophy

    In this dissertation, we present methods for intuitive telemanipulation of manipulators that use piezoelectric stick-slip actuators (PSSAs). Commercial micro/nano-manipulators, which utilize PSSAs to achieve high precision over a large workspace, are typically controlled by a human operator at the joint level, leading to unintuitive and time-consuming telemanipulation. Prior work has considered the use of computer-vision feedback to close a control loop for improved performance, but computer-vision feedback is not a viable option for many end users. We discuss how open-loop models of the micro/nano-manipulator can be used to achieve desired end-effector movements, and we explain the process of obtaining open-loop models. We propose a rate-control telemanipulation method that utilizes the obtained model, and we experimentally quantify the effectiveness of the method using a common commercial manipulator (the Kleindiek MM3A). The utility of open-loop control methods for PSSAs with a human in the loop depends directly on the accuracy of the open-loop models of the manipulator. Prior research has shown that modeling of piezoelectric actuators is not a trivial task, as they are known to suffer from nonlinearities that degrade their performance. We study the effect of static (non-inertial) loads on a prismatic and a rotary PSSA, and obtain a model relating the step size of the actuator to the load. The actuator-specific parameters of the model are calibrated by taking measurements in specific configurations of the manipulator. Results comparing the obtained model to experimental data are presented. PSSAs have properties that make them desirable over traditional DC-motor actuators for use in retinal surgery. We present a telemanipulation system for retinal surgery that uses a full range of existing disposable instruments. The system uses a PSSA-based manipulator that is compact and light enough that it could reasonably be made head-mounted to passively compensate for head movements. Two mechanisms are presented that enable the system to use existing disposable actuated instruments, and an instrument adapter enables quick change of instruments during surgery. A custom stylus for a haptic interface enables intuitive and ergonomic telemanipulation of actuated instruments. Experimental results with a force-sensitive phantom eye show that telemanipulated surgery results in reduced forces on the retina compared to manual surgery, and that training with the system results in improved performance. Finally, we evaluate operator efficiency with different haptic-interface kinematics for telemanipulated retinal surgery. Surgical procedures of the retina require precise manipulation of instruments inserted through trocars in the sclera. Telemanipulated robotic systems have been developed to improve retinal surgery, but there is not a unique mapping of the motions of the surgeon's hand to the lower-dimensional motions of the instrument through the trocar. We study operator performance during a precision positioning task on a force-sensing phantom retina, reminiscent of telemanipulated retinal surgery, with three common haptic-interface kinematics implemented in software on a PHANTOM Premium 6DOF haptic interface. Results from a study with 12 human subjects show that overall performance is best with the kinematics that represent a compact and inexpensive option, and that subjects' subjective preference agrees with the objective performance results.
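    The rate-control idea described above can be pictured as follows: a task-space rate command is mapped through an inverse Jacobian and a calibrated, load-dependent step-size model into whole stick-slip steps per control cycle. This is a sketch under assumptions; the function names, the linear load model, and all numeric values are illustrative, not the dissertation's actual model.

```python
import numpy as np

def steps_for_rate_command(v_desired, dt, jacobian_inv, step_size_fn, load):
    """Open-loop rate control sketch for a PSSA-driven micromanipulator.

    Converts a desired end-effector velocity into integer step counts per
    joint for one control cycle of length dt. Hypothetical illustration.
    """
    q_dot = jacobian_inv @ np.asarray(v_desired, dtype=float)  # joint rates
    dq = q_dot * dt                                            # desired joint motion this cycle
    sizes = np.array([step_size_fn(j, load) for j in range(len(dq))])
    return np.round(dq / sizes).astype(int)                    # stick-slip steps to command

def example_step_size(joint_index, load, s0=2e-6, k=1e-8):
    """Assumed linear shrinkage of step size with static load (not calibrated data)."""
    return max(s0 - k * load, 1e-7)
```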

    HERO Glove

    Non-repetitive manipulation tasks that are easy for humans to perform are difficult for autonomous robots to execute. The Haptic Exoskeletal Robot Operator (HERO) Glove is a system designed for users to remotely control robot manipulators whilst providing sensory feedback to the user. This realistic haptic feedback is achieved through the use of toroidal air-filled actuators that stiffen around the user's fingers. Tactile sensor data is sent from the robot to the HERO Glove, where it is used to vary the pressure in the toroidal actuators to simulate the sense of touch. Curvature sensors and inertial measurement units are used to capture the glove's pose to control the robot.
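    The tactile-to-pressure mapping could look something like the minimal sketch below, which linearly scales a fingertip tactile reading into a target actuator pressure. The ranges, units, and linear form are assumptions; the glove's actual transfer function is not given in the abstract.

```python
def tactile_to_pressure(tactile_reading, t_max=10.0, p_min=0.0, p_max=50.0):
    """Map a robot fingertip tactile reading (0..t_max, arbitrary units) to a
    target pressure (p_min..p_max kPa, assumed) for the toroidal actuator
    around the corresponding finger of the operator."""
    clipped = min(max(tactile_reading, 0.0), t_max)
    return p_min + (p_max - p_min) * (clipped / t_max)

# Example: drive one actuator per finger from the latest tactile frame.
def update_glove(tactile_frame, set_pressure):
    for finger, reading in tactile_frame.items():
        set_pressure(finger, tactile_to_pressure(reading))
```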

    Robotic manipulators for single access surgery

    This thesis explores the development of cooperative robotic manipulators for enhancing surgical precision and patient outcomes in single-access surgery and, specifically, Transanal Endoscopic Microsurgery (TEM). During these procedures, surgeons manipulate a heavy set of instruments via a mechanical clamp inserted into the patient's body through a surgical port, resulting in imprecise movements, increased patient risks, and increased operating time. Therefore, an articulated robotic manipulator with passive joints is initially introduced, featuring built-in position and force sensors in each joint and electronic joint brakes for instant lock/release capability. The articulated manipulator concept is further improved with motorised joints, evolving into an active tool holder. The joints allow the incorporation of advanced robotic capabilities such as ultra-lightweight gravity compensation and hands-on kinematic reconfiguration, which can optimise the placement of the tool holder in the operating theatre. Due to the enhanced sensing capabilities, the application of the active robotic manipulator was further explored in conjunction with advanced image guidance approaches such as endomicroscopy. Recent advances in probe-based optical imaging, such as confocal endomicroscopy, are making inroads into clinical use. However, the challenging manipulation of imaging probes hinders their practical adoption. Therefore, a combination of the fully cooperative robotic manipulator with a high-speed scanning endomicroscopy instrument is presented, simplifying the incorporation of optical biopsy techniques into routine surgical workflows. Finally, another embodiment of a cooperative robotic manipulator is presented as an input interface to control a highly articulated robotic instrument for TEM. This master-slave interface alleviates the drawbacks of traditional master-slave devices, such as the use of clutching mechanics to compensate for the mismatch between master and slave workspaces and the lack of intuitive manipulation feedback (e.g., joint limits) to the user. To address those drawbacks, a joint-space robotic manipulator is proposed that emulates the kinematic structure of the flexible robotic instrument under control.
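    Gravity compensation of the kind mentioned above can be sketched as the joint torques that cancel each link's weight, so the tool holder feels weightless during hands-on repositioning. The sketch assumes a z-up world frame and a user-supplied function returning each link's centre-of-mass Jacobian; it is not the thesis's implementation.

```python
import numpy as np

def gravity_compensation_torques(q, link_masses, com_jacobians_fn, g=9.81):
    """Joint torques that hold the arm against gravity at configuration q.

    com_jacobians_fn(q) is assumed to return one 3xN linear-velocity
    Jacobian per link, evaluated at that link's centre of mass.
    """
    tau = np.zeros(len(q))
    for m, J_com in zip(link_masses, com_jacobians_fn(q)):
        tau += J_com.T @ np.array([0.0, 0.0, m * g])  # counteract weight m*g along -z
    return tau
```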

    Development of Object-Based Teleoperator Control for Unstructured Applications

    For multi-fingered end effectors in unstructured applications, the main issues are control in the presence of uncertainties and providing grasp stability and object manipulability. The concept suggested in this thesis is object-based teleoperator control, which provides an intuitive way to control the robot in terms of the grasped object and reduces the operator's conceptual constraints. The general control law is developed using a hierarchical control structure, i.e., the human interface / gross motion control level in teleoperation control and fine motion control / object grasp stability in autonomous control. The gross motion control is required to provide the position/orientation of the Super Object (SO) and sufficient grasping force to the fine motion control. Impedance control is applied to the gross motion control to respond to environmental forces. The fine motion control consists of serially connecting the finger in position control and the Fingertip Actuation System (FAS) in force control. The FAS has a higher-bandwidth response than the finger actuation system and operates near the center of its joint range. The finger motion controller attempts not only to track the displacement of the FAS but also to provide an FAS centering action. Simulation experiments in both gross and fine motion control are performed. The integrated gross / fine motion control is implemented using a planar configuration of the PUMA 560. The results show that the desired contact force can be maintained in the direction of FAS motion. A mathematical proof of system stability and the extension to spatial systems are required to complete the research.
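    The gross-motion impedance control mentioned above can be written, in sketch form, as a Cartesian mass-spring-damper imposed on the Super Object frame, so that environmental forces deflect the commanded motion compliantly. The acceleration-level formulation and the gain matrices are illustrative assumptions rather than the thesis's control law.

```python
import numpy as np

def impedance_acceleration(x, x_d, v, v_d, f_ext, M, B, K):
    """Acceleration command from a Cartesian impedance law:
        M a = K (x_d - x) + B (v_d - v) + f_ext
    so the grasped object behaves like a mass-spring-damper under
    environmental forces. Sketch only; M, B, K are placeholder gains."""
    return np.linalg.solve(M, K @ (x_d - x) + B @ (v_d - v) + f_ext)
```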

    Robocatch: Design and Making of a Hand-Held Spillage-Free Specimen Retrieval Robot for Laparoscopic Surgery

    Specimen retrieval is an important step in laparoscopy, a minimally invasive surgical procedure performed to diagnose and treat a myriad of medical pathologies in fields ranging from gynecology to oncology. Specimen retrieval bags (SRBs) are used to facilitate this task while minimizing contamination of neighboring tissues and port sites in the abdominal cavity. This manual surgical procedure requires the usage of multiple ports, creating a traffic of simultaneous operations of multiple instruments in a limited shared workspace. The skill-demanding nature of this procedure makes it time-consuming, leading to surgeons' fatigue and operational inefficiency. This thesis presents the design and making of RoboCatch, a novel hand-held robot that aids a surgeon in performing spillage-free retrieval of operative specimens in laparoscopic surgery. The proposed design significantly modifies and extends conventional instruments that are currently used by surgeons for the retrieval task: the core instrumentation of RoboCatch comprises a webbed three-fingered grasper and atraumatic forceps that are concentrically situated in a folded configuration inside a trocar. The specimen retrieval task is achieved in six stages: 1) the trocar is introduced into the surgical site through an instrument port, 2) the three webbed fingers slide out of the tube and simultaneously unfold in an umbrella-like fashion, 3) the forceps slide toward, and grasp, the excised specimen, 4) the forceps retract the grasped specimen into the center of the surrounding grasper, 5) the grasper closes to achieve secured containment of the specimen, and 6) the grasper, along with the contained specimen, is manually removed from the abdominal cavity. The resulting reduction in the number of active ports reduces obstruction of the port site and increases the procedure's efficiency. The design process was initiated by acquiring crucial parameters from surgeons and creating a design table, which informed the CAD modeling of the robot structure and the selection of actuation units and fabrication material. The robot prototype was first examined in CAD simulation and then fabricated using an Objet30 Prime 3D printer. Physical validation experiments were conducted to verify the functionality of the robot's different mechanisms. Further, specimen retrieval experiments were conducted with porcine meat samples to test the feasibility of the proposed design. Experimental results revealed that the robot was capable of retrieving specimens with masses ranging from 1 gram to 50 grams. The making of RoboCatch represents a significant step toward advancing the frontiers of hand-held robots for performing specimen retrieval tasks in minimally invasive surgery.
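    The six-stage retrieval sequence lends itself to a simple sequencer; the sketch below encodes the stages named in the abstract as an enumeration and advances through them in order. The stage names are paraphrases and the hold-at-end behaviour is an assumption, not RoboCatch's actual control software.

```python
from enum import Enum, auto

class RetrievalStage(Enum):
    """The six stages of the RoboCatch retrieval task, in order."""
    INSERT_TROCAR = auto()     # 1) introduce trocar through the instrument port
    UNFOLD_GRASPER = auto()    # 2) webbed fingers slide out and unfold
    GRASP_SPECIMEN = auto()    # 3) forceps slide toward and grasp the specimen
    RETRACT_SPECIMEN = auto()  # 4) forceps retract specimen into the grasper
    CLOSE_GRASPER = auto()     # 5) grasper closes to contain the specimen
    WITHDRAW = auto()          # 6) grasper and specimen removed manually

def next_stage(stage):
    order = list(RetrievalStage)
    i = order.index(stage)
    return order[min(i + 1, len(order) - 1)]  # hold at WITHDRAW once complete
```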

    Adaptive physical human-robot interaction (PHRI) with a robotic nursing assistant.

    Recently, more and more robots are being investigated for future applications in healthcare. For instance, in nursing assistance, seamless Human-Robot Interaction (HRI) is very important for sharing workspaces and workloads between medical staff, patients, and robots. In this thesis we introduce a novel robot, the Adaptive Robot Nursing Assistant (ARNA), and its underlying components. ARNA has been designed specifically to assist nurses with day-to-day tasks such as walking patients, pick-and-place item retrieval, and routine patient health monitoring. Adaptive HRI in nursing applications creates a positive user experience and increases nurse productivity and task completion rates, as reported by experimentation with human subjects. ARNA has been designed to include interface devices such as tablets, force sensors, pressure-sensitive robot skins, LIDAR, and an RGBD camera. These interfaces are combined with adaptive controllers and estimators within a proposed framework that contains multiple innovations. A research study was conducted on methods of deploying an ideal Human-Machine Interface (HMI), in this case a tablet-based interface. The initial study indicates that a traded-control level of autonomy is ideal for tele-operation of ARNA by a patient. The proposed method of using the HMI devices makes the performance of the robot similar for both skilled and unskilled workers. A neuro-adaptive controller (NAC), which contains several neural networks to estimate and compensate for system nonlinearities, was implemented on the ARNA robot. By linearizing the system, a cross-over usability condition is met through which humans find it more intuitive to learn to use the robot in any location of its workspace. A novel Base-Sensor Assisted Physical Interaction (BAPI) controller is introduced in this thesis, which utilizes a force-torque sensor at the base of the ARNA robot manipulator to detect full-body collisions and make interaction safer. Finally, a human-intent estimator (HIE) is proposed to estimate human intent while the robot and user are physically collaborating during certain tasks such as adaptive walking. A NAC with the HIE module was validated on a PR2 robot through user studies. Its implementation on the ARNA robot platform can be easily accomplished, as the controller is model-free and can learn robot dynamics online. A new framework, Directive Observer and Lead Assistant (DOLA), is proposed for ARNA, which enables the user to interact with the robot in two modes: physically, by direct push-guiding, and remotely, through a tablet interface. In both cases, the human is being “observed” by the robot, then guided and/or advised during interaction. If the user has trouble completing the given tasks, the robot adapts its repertoire to lead the user toward completing goals. The proposed framework incorporates interface devices as well as adaptive control systems in order to facilitate a higher-performance interaction between the user and the robot than was previously possible. The ARNA robot was deployed and tested in a hospital environment at the School of Nursing of the University of Louisville. The user-experience tests were conducted with the help of healthcare professionals, and several metrics, including completion time, completion rate, and level of user satisfaction, were collected to shed light on the performance of various components of the proposed framework. The results indicate an overall positive response towards the use of such an assistive robot in the healthcare environment.
The analysis of the gathered data is included in this document. To summarize, this research study makes the following contributions: (i) conducting user-experience studies with the ARNA robot in patient-sitter and walker scenarios to evaluate both physical and non-physical human-machine interfaces; (ii) evaluating and validating the Human Intent Estimator (HIE) and Neuro-Adaptive Controller (NAC); (iii) proposing the novel Base-Sensor Assisted Physical Interaction (BAPI) controller; (iv) building simulation models for packaged tactile sensors and validating them with experimental data; and (v) describing the Directive Observer and Lead Assistance (DOLA) framework for ARNA using adaptive interfaces.
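    The core idea behind the BAPI controller, detecting whole-body collisions from the base force-torque sensor, can be pictured as a residual test: compare the measured base wrench with the wrench expected from the manipulator's own motion and payload, and flag a collision when the residual grows too large. The residual model and the numeric threshold in the sketch below are assumptions, not values from the dissertation.

```python
import numpy as np

def bapi_collision_detected(measured_wrench, expected_wrench, force_threshold=15.0):
    """Flag a full-body collision from the base force-torque sensor.

    measured_wrench and expected_wrench are 6-vectors [Fx, Fy, Fz, Tx, Ty, Tz];
    the 15 N force threshold is an illustrative placeholder.
    """
    residual = np.asarray(measured_wrench) - np.asarray(expected_wrench)
    return float(np.linalg.norm(residual[:3])) > force_threshold
```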

    I-Support: A robotic platform of an assistive bathing robot for the elderly population

    In this paper we present a prototype integrated robotic system, the I-Support bathing robot, that aims at supporting new aspects of assisted daily-living activities in a real-life scenario. The paper focuses on describing and evaluating key novel technological features of the system, with the emphasis on cognitive human–robot interaction modules and their evaluation through a series of clinical validation studies. The I-Support project as a whole has envisioned the development of an innovative, modular, ICT-supported service robotic system that assists frail seniors to safely and independently complete an entire sequence of physically and cognitively demanding bathing tasks, such as properly washing their back and their lower limbs. A variety of innovative technologies have been researched and a set of advanced modules for sensing, cognition, actuation and control have been developed and seamlessly integrated to enable the system to adapt to the target population's abilities. These technologies include: human activity monitoring and recognition, adaptation of a motorized chair for safe transfer of the elderly in and out of the bathing cabin, a context awareness system that provides full environmental awareness, as well as a prototype soft robotic arm and a set of user-adaptive robot motion planning and control algorithms. This paper focuses in particular on the multimodal action recognition system, developed to monitor, analyze and predict user actions in real time with a high level of accuracy and detail, which are then interpreted as robotic tasks. In the same framework, the analysis of the human actions that became available through the project's multimodal audio–gestural dataset has led to the successful modeling of Human–Robot Communication, achieving an effective and natural interaction between users and the assistive robotic platform. In order to evaluate the I-Support system, two multinational validation studies were conducted under realistic operating conditions at two clinical pilot sites. Some of the findings of these studies are presented and analyzed in the paper, showing good results in terms of: (i) high acceptability regarding the system's usability by this particularly challenging target group, the elderly end-users, and (ii) overall task effectiveness of the system in different operating modes.
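    The step from recognised multimodal commands to robotic tasks can be pictured as a confidence-gated lookup, as in the sketch below. The command vocabulary, task names, and confidence threshold are invented for illustration and are not the I-Support project's actual interface.

```python
# Hypothetical mapping from fused audio-gestural commands to bathing tasks.
COMMAND_TO_TASK = {
    ("gesture:point_back", "speech:wash"): "wash_back",
    ("gesture:point_legs", "speech:wash"): "wash_legs",
    ("speech:stop",): "pause_motion",
}

def interpret(recognised_events, confidence, threshold=0.8):
    """Return a robot task only when recognition confidence is high enough;
    otherwise return None so the system can ask the user to repeat."""
    if confidence < threshold:
        return None
    return COMMAND_TO_TASK.get(tuple(sorted(recognised_events)))
```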