
    StandAlone Surgical Haptic Arm (SASHA)

    A standalone surgical arm for performing Minimally Invasive Robotic Surgery (MIRS) with standard da Vinci Si tools has been developed. Force feedback is provided by torque sensors that measure the forces acting on the tool tip. The mechanical arm, together with a control system capable of driving the arm and reporting force information to the user via haptic feedback, has been designed and fabricated. This arm will serve as a platform for research on the performance of telesurgery as a function of various haptic mappings and artificial latencies.
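    As a minimal illustration of the kind of force estimation described above, the sketch below maps joint torque sensor readings to an estimated tool-tip wrench through the manipulator statics relation tau = J^T f. The Jacobian, torque values, and function name are illustrative assumptions, not SASHA's actual model.

```python
import numpy as np

def estimate_tip_wrench(jacobian, joint_torques):
    """Estimate the Cartesian wrench at the tool tip from joint torque readings.

    Uses the manipulator statics relation tau = J^T * f, solved in a
    least-squares sense so it also handles non-square Jacobians.
    jacobian: 6 x N Jacobian at the current configuration.
    joint_torques: length-N vector of torque sensor readings (N*m).
    Returns a 6-vector [fx, fy, fz, mx, my, mz].
    """
    wrench, *_ = np.linalg.lstsq(jacobian.T, joint_torques, rcond=None)
    return wrench

# Illustrative call with a made-up Jacobian and torque readings.
J = np.eye(6)                                    # placeholder Jacobian
tau = np.array([0.2, -0.1, 0.05, 0.0, 0.01, 0.0])
print(estimate_tip_wrench(J, tau))
```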

    High latency unmanned ground vehicle teleoperation enhancement by presentation of estimated future through video transformation

    Long-distance, high latency teleoperation tasks are difficult, highly stressful for teleoperators, and prone to over-corrections, which can lead to loss of control. At higher latencies, or when teleoperating at higher vehicle speed, the situation becomes progressively worse. To explore potential solutions, this research work investigates two 2D visual feedback-based assistive interfaces (sliding-only and sliding-and-zooming windows) that apply simple but effective video transformations to enhance teleoperation. A teleoperation simulator that can replicate teleoperation scenarios affected by high and adjustable latency has been developed to explore the effectiveness of the proposed assistive interfaces. Three image comparison metrics have been used to fine-tune and optimise the proposed interfaces. An operator survey was conducted to evaluate and compare performance with and without the assistance. The survey showed that a 900 ms latency increases task completion time by up to 205% for an on-road and 147% for an off-road driving track. Further, the overcorrection-induced oscillations increase by up to 718% with this level of latency. The survey also showed that the sliding-only video transformation reduces task completion time by up to 25.53%, and the sliding-and-zooming transformation reduces it by up to 21.82%. The sliding-only interface reduces the oscillation count by up to 66.28%, and the sliding-and-zooming interface reduces it by up to 75.58%. The qualitative feedback from the participants also shows that both types of assistive interfaces offer better visual situational awareness, comfort, and controllability, and significantly reduce the impact of latency and intermittency on the teleoperation task.
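    A minimal sketch of the sliding-and-zooming idea, assuming a simple constant-velocity prediction: the last received frame is shifted by the heading change and zoomed by the forward travel expected during the latency interval. The pixel-scaling constants, function name, and use of OpenCV are illustrative assumptions, not the interfaces evaluated in the study.

```python
import numpy as np
import cv2  # opencv-python

def predicted_view(frame, yaw_rate, speed, latency_s,
                   px_per_rad=500.0, zoom_per_m=0.02):
    """Warp the latest (delayed) frame towards the vehicle's estimated current
    viewpoint: slide horizontally for the predicted heading change and zoom
    about the image centre for the predicted forward travel during the latency.
    """
    h, w = frame.shape[:2]
    dx = -yaw_rate * latency_s * px_per_rad        # predicted heading -> pixel shift
    zoom = 1.0 + speed * latency_s * zoom_per_m    # predicted travel -> mild zoom
    M = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), 0.0, zoom)
    M[0, 2] += dx                                  # add the sliding offset
    return cv2.warpAffine(frame, M, (w, h))

# Illustrative use: 900 ms latency, turning at 0.3 rad/s, driving at 2 m/s.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
assisted = predicted_view(frame, yaw_rate=0.3, speed=2.0, latency_s=0.9)
```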

    Elicitation of trustworthiness requirements for highly dexterous teleoperation systems with signal latency

    Introduction: Teleoperated robotic manipulators allow us to bring human dexterity and cognition to hard-to-reach places on Earth and in space. In long-distance teleoperation, however, the finite speed of light results in an unavoidable and perceivable signal delay. The resultant disconnect between command, action, and feedback means that systems often behave unexpectedly, reducing operators' trust in their systems. If we are to widely adopt telemanipulation technology in high-latency applications, we must identify and specify what would make these systems trustworthy. Methods: In this requirements elicitation study, we present the results of 13 interviews with expert operators of remote machinery from four different application areas (nuclear reactor maintenance, robot-assisted surgery, underwater exploration, and ordnance disposal), exploring which features, techniques, or experiences lead them to trust their systems. Results: We found that across all applications except surgery, the top-priority requirement for developing trust is that operators must have a comprehensive engineering understanding of the systems' capabilities and limitations. The remaining requirements can be summarized into three areas: improving situational awareness, facilitating operator training and familiarity, and easing the operator's cognitive load. Discussion: While the inclusion of technical features to assist the operators was welcomed, these were given lower priority than non-technical, user-centric approaches. The signal delays in the participants' systems ranged from none perceived to 1 min, and included examples of successful dexterous telemanipulation for maintenance tasks with a 2 s delay. As this is comparable to Earth-to-orbit and Earth-to-Moon delays, the requirements discussed could be transferable to telemanipulation tasks in space.

    A 360 VR and Wi-Fi Tracking Based Autonomous Telepresence Robot for Virtual Tour

    This study proposes a novel mobile robot teleoperation interface that demonstrates the applicability of a robot-aided remote telepresence system with a virtual reality (VR) device to a virtual tour scenario. To improve realism and provide an intuitive replica of the remote environment for the user interface, the implemented system automatically moves a mobile robot (viewpoint) while displaying a 360-degree live video streamed from the robot to a VR device (Oculus Rift). Upon the user choosing a destination location from a given set of options, the robot generates a route based on a shortest path graph and travels along that route using a wireless signal tracking method based on measuring the direction of arrival (DOA) of radio signals. This paper presents an overview of the system and architecture, and discusses its implementation aspects. Experimental results show that the proposed system is able to move to the destination stably using the signal tracking method, and that at the same time, the user can remotely control the robot through the VR interface.
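    A minimal sketch of the route-planning step, assuming the environment is modelled as a small waypoint graph with edge costs: Dijkstra's algorithm returns the route the robot would then follow using the DOA tracking. The node names and costs are invented for illustration, not the system's actual map.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a waypoint graph: graph[node] -> list of (neighbour, cost).

    Returns the list of waypoints from start to goal, or None if unreachable.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return None

# Illustrative waypoint graph: tour locations as nodes, corridor lengths (m) as costs.
graph = {
    "lobby":    [("corridor", 5.0)],
    "corridor": [("lobby", 5.0), ("gallery", 8.0), ("lab", 14.0)],
    "gallery":  [("corridor", 8.0), ("lab", 4.0)],
    "lab":      [("gallery", 4.0), ("corridor", 14.0)],
}
print(shortest_path(graph, "lobby", "lab"))  # ['lobby', 'corridor', 'gallery', 'lab']
```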

    Human factors issues in telerobotic decommissioning of legacy nuclear facilities

    This thesis investigates the problem of enabling human workers to control remote robots for the decommissioning of contaminated nuclear facilities that are too hazardous for humans to enter. The mainstream robotics literature predominantly reports novel mechanisms and novel control algorithms. In contrast, this thesis proposes experimental methodologies for objectively evaluating the performance of both a robot and its remote human operator when challenged with industrially relevant remote manipulation tasks. Initial experiments use a variety of metrics to evaluate the performance of human test subjects. Results show that conventional telemanipulation is extremely slow and difficult; that usability metrics for such technology can be conflicting and hard to interpret; and that aptitude for telemanipulation varies significantly between individuals, although it can be made predictable using simple spatial awareness tests. Additional experiments suggest that autonomous robotics methods (e.g. vision-guided grasping) can significantly assist the operator. A novel approach to telemanipulation is proposed in which an "orbital camera" enables the human operator to select arbitrary views of the scene, with the robot's motions transformed into the orbital view coordinate frame. This approach helps overcome the severe depth perception problems of conventional fixed camera views. Finally, a novel computer vision algorithm is proposed for target tracking. Such an algorithm could enable an unmanned aerial vehicle (UAV) to fixate on part of the workspace, e.g. a manipulated object, to provide the proposed orbital camera view.
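    A minimal sketch of the coordinate-frame idea behind the orbital camera, assuming a yaw-only orbit: operator translation commands expressed in the currently selected view are rotated into the robot base frame, so "forward on screen" stays consistent as the viewpoint changes. The function names and the single-axis simplification are assumptions; the thesis' actual mapping is likely richer.

```python
import numpy as np

def rotation_z(yaw):
    """Rotation about the vertical axis by yaw radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def command_in_base_frame(v_view, orbit_azimuth):
    """Map an operator translation command expressed in the orbital view frame
    into the robot base frame, so the motion matches what the operator sees
    regardless of where the virtual camera has been placed.
    """
    return rotation_z(orbit_azimuth) @ v_view

# Illustrative use: with the camera orbited 90 degrees around the scene,
# "forward" on screen becomes a sideways motion in the robot base frame.
v_cmd_view = np.array([0.1, 0.0, 0.0])  # m/s forward in the selected view
print(command_in_base_frame(v_cmd_view, np.deg2rad(90)))
```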

    Combining Differential Kinematics and Optical Flow for Automatic Labeling of Continuum Robots in Minimally Invasive Surgery

    The segmentation of continuum robots in medical images can be of interest for analyzing surgical procedures or for controlling them. However, the automatic segmentation of continuous and flexible shapes is not an easy task. On the one hand, conventional approaches are not adapted to the specificities of these instruments, such as imprecise kinematic models; on the other hand, techniques based on deep learning have shown interesting capabilities but need many manually labeled images. In this article we propose a novel approach for segmenting continuum robots in endoscopic images that requires no prior on the instrument's visual appearance and no manual annotation of images. The method relies on the combination of the robot's kinematic and differential kinematic models with an analysis of optical flow in the images. A cost function aggregating information from the acquired image, from optical flow, and from the robot encoders is optimized using particle swarm optimization, providing estimated parameters of the pose of the continuum instrument and a mask defining the instrument in the image. In addition, temporal consistency is assessed in order to improve the stochastic optimization and reject outliers. The proposed approach has been tested on the robotic instruments of a flexible endoscopy platform, both for benchtop acquisitions and for an in vivo video. The results show the ability of the technique to correctly segment the instruments without a prior and in challenging conditions. The obtained segmentation can be used for several applications, for instance providing automatic labels for machine learning techniques.
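    A minimal sketch of the particle swarm optimization step over pose parameters, with a toy quadratic cost standing in for the real term that aggregates image, optical flow, and encoder information. The hyper-parameters, bounds, and cost function are illustrative assumptions, not those of the paper.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimisation of cost(x) within per-dimension bounds.

    bounds: array of shape (dim, 2) giving [low, high] for each pose parameter.
    Returns the best parameter vector found.
    """
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(bounds)))
    v = np.zeros_like(x)
    p_best, p_cost = x.copy(), np.array([cost(p) for p in x])
    g_best = p_best[np.argmin(p_cost)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        costs = np.array([cost(p) for p in x])
        improved = costs < p_cost
        p_best[improved], p_cost[improved] = x[improved], costs[improved]
        g_best = p_best[np.argmin(p_cost)].copy()
    return g_best

# Toy stand-in for the image/flow/encoder cost: a quadratic bowl around a "true" pose.
true_pose = np.array([0.3, -0.2, 1.1])
cost = lambda p: float(np.sum((p - true_pose) ** 2))
print(pso_minimize(cost, np.array([[-2.0, 2.0]] * 3)))  # approaches true_pose
```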

    Minimally invasive robotic surgery: force and torque analysis

    Minimally invasive surgery and the incorporation of robotics into these procedures offer significant advantages for the patient, the surgeon, and healthcare systems. However, the commercial devices currently available lack force and tactile feedback, which would help the surgeon identify tissues and consequently reduce errors during surgical procedures; the development of systems that provide this kind of feedback has therefore become a topic of worldwide interest. This article reviews the state of the art of commercial and experimental systems developed in this area, and also presents sensors and mathematical models used to calculate forces and torques in minimally invasive surgery.
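    As a minimal worked example of force propagation in this setting, one of the simplest models treats the instrument shaft as a lever pivoting at the trocar, so a torque measured about that pivot maps to a transverse tip force as F_tip = tau / L_inside. The numbers and function below are illustrative, not a model from any of the reviewed systems.

```python
def tip_force_from_pivot_torque(pivot_torque_nm, inside_length_m):
    """Simplest lever model: the shaft pivots at the trocar (fulcrum), so a torque
    measured about that pivot corresponds to a transverse force at the tool tip of
    F_tip = tau / L_inside, where L_inside is the shaft length inside the patient.
    Friction and trocar reaction forces are ignored in this sketch.
    """
    return pivot_torque_nm / inside_length_m

# Illustrative numbers: 0.45 N*m about the pivot with 0.15 m of shaft inside
# the patient gives an estimated 3 N transverse force at the tool tip.
print(tip_force_from_pivot_torque(0.45, 0.15))  # 3.0
```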