374 research outputs found

    Exodex Adam—A Reconfigurable Dexterous Haptic User Interface for the Whole Hand

    Get PDF
    Applications for dexterous robot teleoperation and immersive virtual reality are growing. Haptic user input devices need to allow the user to intuitively command and seamlessly “feel” the environment they work in, whether it is virtual or a remote site reached through an avatar. We introduce the DLR Exodex Adam, a reconfigurable, dexterous, whole-hand haptic input device. The device comprises multiple modular, three-degrees-of-freedom (3-DOF) robotic fingers, whose placement on the device can be adjusted to optimize manipulability for different user hand sizes. Additionally, the device is mounted on a 7-DOF robot arm to increase the user’s workspace. Exodex Adam uses a front-facing interface, with robotic fingers coupled to two of the user’s fingertips, the thumb, and two points on the palm. Including the palm, as opposed to only the fingertips as is common in existing devices, enables accurate tracking of the whole hand without additional sensors such as a data glove or motion capture. By providing “whole-hand” interaction with omnidirectional force feedback at the attachment points, we enable the user to experience the environment with the complete hand instead of only the fingertips, thus realizing deeper immersion. Interaction using Exodex Adam can range from palpation of objects and surfaces to manipulation using both power and precision grasps, all while receiving haptic feedback. This article details the concept and design of the Exodex Adam, as well as use cases where it is deployed with different command modalities. These include mixed-media interaction in a virtual environment, gesture-based telemanipulation, and robotic hand–arm teleoperation using adaptive model-mediated teleoperation. Finally, we share the insights gained during our development process and use case deployments.
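    Each finger module is a small serial kinematic chain, so commanding and tracking an attachment point reduces to standard forward kinematics. Below is a minimal sketch for a hypothetical 3-DOF finger (base yaw plus two pitch joints); the joint layout and link lengths are illustrative assumptions, not the actual Exodex Adam geometry.

        import numpy as np

        def finger_tip_position(q, l1=0.05, l2=0.04):
            """Forward kinematics of a hypothetical 3-DOF finger module:
            base yaw q[0] followed by two pitch joints q[1], q[2].
            Link lengths l1, l2 are illustrative values in metres."""
            yaw, p1, p2 = q
            # Planar reach and height of the two-link chain in the finger plane.
            r = l1 * np.cos(p1) + l2 * np.cos(p1 + p2)
            z = l1 * np.sin(p1) + l2 * np.sin(p1 + p2)
            # The base yaw rotates the finger plane about the mounting axis.
            return np.array([r * np.cos(yaw), r * np.sin(yaw), z])

    Reconfiguring a module on the base then amounts to composing this chain with the module's mounting pose.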

    Design and modeling of a stair climber smart mobile robot (MSRox)

    Full text link

    Haptic feedback in teleoperation in Micro- and Nano-Worlds

    No full text
    Robotic systems have been developed to handle very small objects, but their use remains complex and requires long training. Simulators, such as molecular simulators, can provide access to large amounts of raw data, but only highly trained users can interpret the results of such systems. Haptic feedback in teleoperation, which provides force feedback to an operator, appears to be a promising solution for interaction with such systems, as it allows intuitiveness and flexibility. However, several issues arise when implementing teleoperation schemes at the micro- and nanoscale, owing to the complex force fields that must be transmitted to users and the scaling differences between the haptic device and the manipulated objects. Major advances in this technology have been made in recent years. This chapter reviews the main systems in this area and highlights how some fundamental issues in teleoperation for micro- and nano-scale applications have been addressed. The chapter considers three types of teleoperation: (1) direct (manipulation of real objects); (2) virtual (use of simulators); and (3) augmented (combining real robotic systems and simulators). Remaining issues that must be addressed for further advances in teleoperation for micro- and nanoworlds are also discussed, including: (1) comprehension of the phenomena that dictate the behavior of very small objects (< 500 micrometers); and (2) the design of intuitive 3-D manipulation systems. Design guidelines for realizing an intuitive haptic feedback teleoperation system at the micro-nanoscale are proposed.
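    The scaling problem the chapter raises can be made concrete with the classic scaled bilateral coupling: operator motions are scaled down on their way to the tool, and measured environment forces are scaled up on their way back to the haptic device. The sketch below is a minimal illustration under assumed gains; the gain values, names, and saturation limit are not from the chapter.

        import numpy as np

        # Scaled bilateral coupling for micro/nano teleoperation (illustrative).
        POS_SCALE = 1e-6     # 1 mm of hand motion maps to 1 nm of tool motion
        FORCE_SCALE = 1e9    # nanonewton-range forces rendered as newtons

        def slave_setpoint(master_pos):
            """Map operator hand position (m) to tool position setpoint (m)."""
            return POS_SCALE * np.asarray(master_pos)

        def haptic_force(env_force, f_max=5.0):
            """Map measured environment force (N) back to the haptic device,
            saturating so the rendered force stays within device limits."""
            return np.clip(FORCE_SCALE * np.asarray(env_force), -f_max, f_max)

    Because the product of the two gains amplifies the loop, the choice of scaling factors directly affects the stability of the coupled system, which is one of the fundamental issues the chapter discusses.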

    Modularized Robotic Skin with Embedded Fiber-Optic Force Sensors for Remote and Autonomous Robot Operation

    Get PDF
    Master's thesis -- Seoul National University Graduate School, Department of Mechanical Engineering, College of Engineering, August 2021. Advisor: Yong-Lae Park. Robots have been used to replace human workers for dangerous and difficult tasks that require human-like dexterity. To perform sophisticated tasks, force and tactile sensing is one of the key requirements for achieving dexterous manipulation. Robots equipped with a sensitive skin that can play the role of mechanoreception in animals will be able to perform tasks with high levels of dexterity. In this research, we propose a modularized robotic skin that is capable of not only localizing external contacts but also estimating the magnitudes of the contact forces. In order to acquire three key pieces of information about a contact, namely the contact location in the horizontal and vertical directions and the magnitude of the force, each skin module requires three degrees of freedom in sensing. In the proposed skin, force sensing is achieved by a custom-designed triangular beam structure. A force applied to the outer surface of the skin module is transmitted to the beam structure underneath, and bending of the beam is detected by fiber-optic strain sensors called fiber Bragg gratings. The proposed skin shows resolutions of 1.45 N for force estimation and 1.85 mm and 1.91 mm for contact localization in the horizontal and vertical directions, respectively. We also demonstrate applications of the proposed skin for remote and autonomous operation of commercial robotic arms equipped with an array of the skin modules. Contents: 1. Introduction; 2. Design (Skin Module; Skin Array); 3. Modeling (FBG Sensing Principle and Temperature Compensation; Estimation of Beam Force and Deflection; Estimation of Spring Force; Estimation of Contact Locations and Force); 4. Experiments (Experimental Setup; Initialization; Parameter Optimization; Result); 5. Application (Remote Robot Manipulation; Autonomous Robot Control); 6. Discussion; 7. Conclusion; 8. Appendix (Beam Deflection).
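    The FBG principle mentioned above turns contact sensing into a wavelength measurement: strain on a grating shifts its reflected Bragg wavelength, and a strain-free reference grating lets temperature effects be subtracted out. The sketch below is a minimal illustration of that pipeline with a linear calibration fit; the photo-elastic constant, the reference-grating scheme, and the linear contact map are generic assumptions, not the thesis's calibrated model.

        import numpy as np

        PE = 0.22  # typical effective photo-elastic coefficient of silica fibre

        def strain_from_shift(d_lambda, d_lambda_ref, lambda0):
            """Strain on a sensing grating from its Bragg wavelength shift (nm),
            compensating temperature with a co-located strain-free reference
            grating: d_lambda / lambda0 = (1 - PE) * strain + thermal term."""
            return (d_lambda - d_lambda_ref) / (lambda0 * (1.0 - PE))

        def fit_contact_map(strains, targets):
            """Least-squares linear map from a module's grating strains (N x 3)
            to [force, x, y] targets (N x 3) gathered in calibration presses;
            apply afterwards as strains @ coeffs."""
            coeffs, *_ = np.linalg.lstsq(strains, targets, rcond=None)
            return coeffs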

    Exploring Robot Teleoperation in Virtual Reality

    Get PDF
    This thesis presents research on VR-based robot teleoperation, focusing on remote-environment visualisation in virtual reality, the effect of the reconstruction scale on the human operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often contain distortions and occlusions, making it difficult to represent object textures accurately. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point-cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point-cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual-world scaling on teleoperation. The first investigated rate-mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual-world scale. The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared the virtual-world scales operators chose in supervised control at the beginning and end of a 3-day experiment. The results showed that as operators improved at the task they, as a group, settled on a different virtual-world scale, and that participants' prior video-gaming experience also affected the scale they chose. Finally, a visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, and showed that their visual priorities shift as they improve. The study also demonstrated that operators' prior video-gaming experience affects both their ability to teleoperate the robot and their visual attention behaviours.
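    The constant versus variable mapping compared in the first study can be summarised in a few lines: joystick deflection commands end-effector velocity, and in the variable case the velocity gain is tied to the current virtual-world scale, so an enlarged reconstruction slows the robot down for fine work. The sketch below is an illustrative reading of that idea; the gain law and values are assumptions, not the thesis's implementation.

        import numpy as np

        V_MAX = 0.10  # m/s at full joystick deflection (illustrative)

        def rate_command(joystick, world_scale=1.0, variable=True):
            """joystick: 3-vector in [-1, 1]; world_scale > 1 means the VR
            reconstruction is enlarged. Variable mapping scales the speed
            gain down as the world is scaled up; constant mapping ignores
            the scale."""
            gain = V_MAX / world_scale if variable else V_MAX
            return gain * np.asarray(joystick)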

    Design and Quantitative Assessment of Teleoperation-Based Human–Robot Collaboration Method for Robot-Assisted Sonography

    Get PDF
    Tele-echography has emerged as a promising and effective solution, leveraging the expertise of sonographers and the autonomy of robots to perform ultrasound scanning for patients residing in remote areas, without the need for in-person visits by the sonographer. Designing effective and natural human-robot interfaces for tele-echography remains challenging, with patient safety being a critical concern. In this article, we develop a teleoperation system for robot-assisted sonography with two different interfaces, a haptic device-based interface and a low-cost 3D Mouse-based interface, which can achieve continuous and intuitive telemanipulation by a leader device with a small workspace. To achieve compliant interaction with patients, we design impedance controllers in Cartesian space to track the desired position and orientation for these two teleoperation interfaces. We also propose comprehensive evaluation metrics for robot-assisted sonography, including subjective and objective evaluation, to assess tele-echography interfaces and control performance. We evaluate ergonomic performance based on estimated muscle fatigue and the acquired ultrasound image quality, and conduct user studies based on the NASA Task Load Index to evaluate the performance of the two human-robot interfaces. Tracking performance and a quantitative comparison of the two teleoperation interfaces are evaluated on a Franka Emika Panda robot. The results and findings provide guidance on human-robot collaboration design and implementation for robot-assisted sonography. Note to Practitioners: Robot-assisted sonography has demonstrated efficacy in medical diagnosis during clinical trials. However, deploying fully autonomous robots for ultrasound scanning remains challenging due to various constraints in practice, such as patient safety, dynamic tasks, and environmental uncertainties. Semi-autonomous or teleoperation-based robot sonography represents a promising approach for practical deployment. Previous work has produced various expensive teleoperation interfaces but lacks user studies to guide teleoperation interface selection. In this article, we present two typical teleoperation interfaces and implement a continuous and intuitive teleoperation control system. We also propose a comprehensive evaluation metric for assessing their performance. Our findings show that the haptic device outperforms the 3D Mouse, based on operators' feedback and acquired image quality. However, the haptic device requires more learning time and effort in the training stage. Furthermore, the developed teleoperation system offers a solution for shared control and human-robot skill transfer. Our results provide valuable guidance for designing and implementing human-robot interfaces for robot-assisted sonography in practice.
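    A Cartesian impedance controller of the kind described above makes the probe behave like a spring-damper around the teleoperated setpoint, which is what keeps contact with the patient compliant. The translational sketch below is illustrative only: the stiffness and damping values are assumptions, and the article's controller also tracks orientation and maps the wrench to joint torques through the arm Jacobian.

        import numpy as np

        K = np.diag([400.0, 400.0, 200.0])  # N/m, softer along the probe axis
        D = np.diag([40.0, 40.0, 20.0])     # N*s/m

        def impedance_force(x_des, x, v_des, v):
            """Restoring force toward the teleoperated Cartesian setpoint;
            low stiffness normal to the skin limits the contact force."""
            return K @ (x_des - x) + D @ (v_des - v)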

    Shared-Control Teleoperation Paradigms on a Soft Growing Robot Manipulator

    Full text link
    Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how the control of a task should be divided between the human and robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement the interaction paradigms. We implemented and tested six different paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as control is gradually given to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, we observed that the improvement in performance is highly dependent on the expertise of the human operator.
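    Splitting control along degrees of freedom, as the results describe, can be pictured as a per-DOF switch between the human command and the autonomous command, with an optional haptic cue driven by the robot's position error. The sketch below is a generic illustration of that split; the mask, gains, and saturation are assumptions rather than the paper's paradigms.

        import numpy as np

        def blend_command(human_cmd, auto_cmd, robot_mask):
            """Per-DOF shared control: robot_mask[i] = 1 hands DOF i to the
            autonomous controller, 0 keeps it with the human operator."""
            m = np.asarray(robot_mask, dtype=float)
            return m * np.asarray(auto_cmd) + (1.0 - m) * np.asarray(human_cmd)

        def haptic_cue(pos_error, k=20.0, f_max=3.0):
            """Guidance force proportional to the soft robot's tip position
            error, saturated to the haptic device's force limit."""
            return np.clip(-k * np.asarray(pos_error), -f_max, f_max)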

    A Cognitive Robot Control Architecture for Autonomous Execution of Surgical Tasks

    Get PDF
    Research on medical robotics is starting to address the autonomous execution of surgical tasks, with no human intervention beyond supervision and task configuration. This paper addresses the complete automation of a surgical robot by combining advanced sensing, cognition, and control capabilities, developed according to a rigorous assessment of surgical requirements, formal specification of robotic system behavior, and software design and implementation based on solid tools and frameworks. In particular, the paper focuses on the cognitive control architecture and its development process, based on formal modeling and verification methods as best practices to ensure safe and reliable behavior. A full implementation of the proposed architecture has been tested on an experimental setup including a novel robot specifically designed for surgical applications but adaptable to different selected tasks (e.g., needle insertion, wound suturing).
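    Formal modeling of the kind the paper relies on typically starts from an explicit task automaton whose transition relation can be model-checked, with any unmodeled event treated as a fault that returns control to the supervisor. The sketch below illustrates that pattern for a needle-insertion-like task; the states, events, and fault handling are generic assumptions, not the paper's verified models.

        from enum import Enum, auto

        class State(Enum):
            IDLE = auto()
            APPROACH = auto()
            INSERT = auto()
            RETRACT = auto()
            FAULT = auto()

        # Explicit transition relation: any event outside it is a fault,
        # so the supervisor can halt the robot and hand back control.
        TRANSITIONS = {
            (State.IDLE, "start"): State.APPROACH,
            (State.APPROACH, "at_target"): State.INSERT,
            (State.INSERT, "depth_reached"): State.RETRACT,
            (State.RETRACT, "clear"): State.IDLE,
        }

        def step(state, event):
            return TRANSITIONS.get((state, event), State.FAULT)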

    Importance and applications of robotic and autonomous systems (RAS) in railway maintenance sector: a review

    Get PDF
    Maintenance, which is critical for safe, reliable, high-quality, and cost-effective service, plays a dominant role in the railway industry. This paper therefore examines the importance and applications of Robotic and Autonomous Systems (RAS) in railway maintenance. More than 70 research publications describing RAS developments in railway maintenance, covering systems either in practice or under investigation, are analysed. It has been found that the majority of the RAS developed are for rolling-stock maintenance, followed by railway track maintenance. Further, there is growing interest in and demand for robotics and autonomous systems in the railway maintenance sector, largely due to increased competition, rapid expansion, and ever-increasing costs.