
    Shared-Control Teleoperation Paradigms on a Soft Growing Robot Manipulator

    Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how the control of a task should be divided between the human and robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement interaction paradigms. We implemented and tested six different paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as the control is gradually given to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, we observed that the improvement in performance is highly dependent on the expertise of the human operator. Comment: 15 pages, 14 figures.
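As an illustrative sketch only (not the authors' implementation; the function and variable names are assumptions), splitting control along degrees of freedom while human and robot act simultaneously can look like this: each DOF is assigned to either the operator or the automation, and the combined command is assembled per axis.

```python
# Hedged sketch of per-DOF shared control: the robot owns some axes,
# the human owns the rest, and both act at the same time.
import numpy as np

def blend_commands(human_cmd, robot_cmd, robot_dofs):
    """Combine commands: DOFs listed in robot_dofs come from the
    automation, all other DOFs come from the human operator."""
    cmd = np.array(human_cmd, dtype=float)
    for i in robot_dofs:
        cmd[i] = robot_cmd[i]
    return cmd

# Hypothetical 3-DOF example (pan, tilt, growth length): the robot
# automates the growth/retraction axis (index 2).
human = [0.2, -0.1, 0.5]   # operator joystick input
robot = [0.0, 0.0, 0.8]    # autonomous correction
print(blend_commands(human, robot, robot_dofs=[2]).tolist())  # [0.2, -0.1, 0.8]
```

Gradually adding automation, as in the six paradigms above, then corresponds to growing the set of robot-owned DOFs from empty (full teleoperation) to all axes.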

    Information-Driven Gas Distribution Mapping for Autonomous Mobile Robots

    The ability to sense airborne pollutants with mobile robots provides a valuable asset for domains such as industrial safety and environmental monitoring. Oftentimes, this involves detecting how certain gases are spread out in the environment, commonly referred to as a gas distribution map, to subsequently take actions that depend on the collected information. Since the majority of gas transducers require physical contact with the analyte to sense it, the generation of such a map usually involves slow and laborious data collection from all key locations. In this regard, this paper proposes an efficient exploration algorithm for 2D gas distribution mapping with an autonomous mobile robot. Our proposal combines a Gaussian Markov random field estimator based on gas and wind flow measurements, devised for very sparse sample sizes and indoor environments, with a partially observable Markov decision process to close the robot’s control loop. The advantage of this approach is that the gas map is not only continuously updated, but can also be leveraged to choose the next location based on how much information it provides. The exploration consequently adapts to how the gas is distributed during run time, leading to an efficient sampling path and, in turn, a complete gas map with a relatively low number of measurements. Furthermore, it also accounts for wind currents in the environment, which improves the reliability of the final gas map even in the presence of obstacles or when the gas distribution diverges from an ideal gas plume. Finally, we report various simulation experiments to evaluate our proposal against a computer-generated fluid dynamics ground truth, as well as physical experiments in a wind tunnel. Partial funding for open access charge: Universidad de Málaga.
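A minimal sketch of the information-driven idea (assumed names, not the authors' code, and using posterior variance as a simple stand-in for the paper's information measure): the estimator's uncertainty map can be queried to pick the next measurement location.

```python
# Hedged sketch: choose the next sampling cell as the unvisited grid
# cell with the highest predicted uncertainty, a simple proxy for
# "how much information a location provides".
import numpy as np

def next_measurement_cell(variance_map, visited):
    """Pick the unvisited grid cell with the largest posterior variance."""
    masked = variance_map.copy()
    for (r, c) in visited:
        masked[r, c] = -np.inf     # never revisit a sampled cell
    r, c = np.unravel_index(np.argmax(masked), masked.shape)
    return int(r), int(c)

# Toy 3x3 uncertainty map; cell (0, 1) was already sampled.
variance = np.array([[0.1, 0.9, 0.3],
                     [0.4, 0.2, 0.8],
                     [0.5, 0.6, 0.7]])
print(next_measurement_cell(variance, visited={(0, 1)}))  # (1, 2)
```

In the paper's full method the map is a Gaussian Markov random field updated from gas and wind measurements, and the decision is closed through a POMDP rather than this one-step greedy rule.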

    Limited Information Shared Control and its Applications to Large Vehicle Manipulators

    This dissertation addresses the cooperative (shared) control of a mobile working machine consisting of a utility vehicle and one or more hydraulic manipulators. Such machines are used for road maintenance tasks. The manipulator's working environment is unstructured, which makes determining a reference trajectory difficult or impossible. This work therefore proposes an approach that automates only the vehicle, while the human operator remains part of the system and controls the manipulator. Such partial automation of the overall system leads to a special class of human-machine interaction that has not yet been studied in the literature: cooperative control between two subsystems in which the automation has no information about the human-controlled subsystem. This work therefore presents a systematic approach to shared control with limited information that can support the human operator without measuring the references or the system states of the manipulator. In addition, a systematic design concept for limited-information shared control is presented. For this design method, two new subclasses of so-called potential games are introduced, which enable a systematic computation of the parameters of the developed cooperative controller without manual tuning. Finally, the developed shared-control concept is applied to the example of a large mobile working machine in order to identify and evaluate its benefits. After analysis in simulations, the practical applicability of the method is examined in three experiments with human subjects on a simulator.
The results show the superiority of the developed shared-control concept over manual control and non-cooperative control with respect to both objective performance and the subjective ratings of the participants. This dissertation thus shows that shared control of mobile working machines, using the developed theoretical concepts, is both helpful and practically applicable.
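The key structural property described above, that the automation acts without any manipulator information, can be illustrated with a deliberately simplified sketch (all names and dynamics are assumptions, not the dissertation's controller): the vehicle controller reads only vehicle-side measurements, while the manipulator is driven purely by the human input.

```python
# Hedged sketch of limited-information shared control: the automated
# vehicle subsystem tracks a path reference using only its own state;
# the manipulator state never enters the vehicle controller.
def vehicle_controller(vehicle_pos, path_ref, kp=0.8):
    """Proportional controller for the automated vehicle subsystem."""
    return kp * (path_ref - vehicle_pos)

def step(vehicle_pos, manip_angle, path_ref, human_cmd, dt=0.1):
    """Advance both subsystems one step. Note the vehicle update
    uses no manipulator information at all."""
    vehicle_pos += dt * vehicle_controller(vehicle_pos, path_ref)
    manip_angle += dt * human_cmd          # manipulator driven by the human
    return vehicle_pos, manip_angle

pos, ang = 0.0, 0.0
for _ in range(50):
    pos, ang = step(pos, ang, path_ref=1.0, human_cmd=0.2)
print(round(pos, 3), round(ang, 2))  # 0.985 1.0
```

The dissertation's contribution is to make such a split *cooperative* by design, using subclasses of potential games to compute the controller parameters systematically rather than the fixed hand-tuned gain used here.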

    Simulation Builder, Analysis, and Development (SimBAD) Toolkit for Human Spaceflight Operation Training Using the Spacecraft Simulation Platform

    As the scope of human spaceflight continues to expand, the Human Systems Integration (HSI) developed to support complex missions must be robust and efficient. The associated risk has been outlined in the Human Research Roadmap (HRR) as the “Risk of Adverse Outcomes Due to Inadequate Human Systems Integration Architecture” [1], short name HSIA. One of the most critical elements of any human spaceflight mission is training, which prepares flight operations teams with the resources necessary to carry out that mission. As more distant destinations such as the Moon or Mars are targeted for human spaceflight, ensuring crews have the tools they need to overcome new types of challenges will be a significant focus when developing new training infrastructures. Given the nature of such missions, there are several knowledge gaps associated with HSIA that motivate investigating how training should be carried out. This research focuses on studying these gaps and using the findings to create a conceptual demonstration of a tool that can assist the training infrastructure supporting future spaceflight missions. This tool is the Simulation Builder, Analysis, and Development (SimBAD) tool, a User Interface (UI) that utilizes the Space Collaborative Real-time Analysis and Flight Toolkit to build virtual training environments. Four main objectives motivate the development of this tool: improved collaboration between groups in the flight operations team; a training framework capable of being packaged on board a spacecraft; a framework that accounts for dynamic mission parameters; and a heightened level of autonomy for crews on missions. These objectives have been driven by the findings from an examination of current spaceflight training methods, previous research on training for future missions, and elements of the HSIA risk that pertain to training.
The SimBAD tool was designed with features, motivated by these objectives, that effectively create a virtual training facility. These features allow the user to control the environments, systems, procedures, events, and evaluations that are constructed together inside a virtual simulation. Giving users this control, along with access to the environment through Virtual Reality (VR), is the overall method by which this thesis argues the objectives of the concept are met. Whether the objectives are met is determined, and results for analysis are created, through a demonstration of the concept. For this research, the demonstration consists of several scenarios constructed in simulations using the SimBAD tool. The first is a simulation of IntraVehicular Activities (IVA) procedures being executed on board the International Space Station (ISS), which demonstrates that the tool is able to account for dynamic mission parameters; the second is a simulation of two users inside a Mars habitat performing a comms check procedure, which demonstrates the capability for improved collaboration between groups on the flight operations team. The UI and VR platform demonstrate that the tool can be packaged on board a spacecraft and can increase the autonomy the crew has during their mission. The elements of SimBAD establish a closed-loop infrastructure as a virtual training facility that offers improvements to HSI for human spaceflight, particularly for future exploration missions, by offering functionality for the construction of simulated scenarios with procedure capability, dynamic event scripting, and simulation evaluation.

    Exploring Robot Teleoperation in Virtual Reality

    This thesis presents research on VR-based robot teleoperation, focusing on remote environment visualisation in virtual reality, the effects of the remote environment reconstruction scale on the human operator's ability to control the robot, and the operator's visual attention patterns when teleoperating a robot from virtual reality. A VR-based robot teleoperation framework was developed; it is compatible with various robotic systems and cameras, allowing teleoperation and supervised control with any ROS-compatible robot and visualisation of the environment through any ROS-compatible RGB and RGBD cameras. The framework includes mapping, segmentation, tactile exploration, and non-physically-demanding VR interface navigation and controls through any Unity-compatible VR headset and controllers or haptic devices. Point clouds are a common way to visualise remote environments in 3D, but they often have distortions and occlusions, making it difficult to accurately represent objects' textures. This can lead to poor decision-making during teleoperation if objects are inaccurately represented in the VR reconstruction. A study using an end-effector-mounted RGBD camera with OctoMap mapping of the remote environment was conducted to explore the remote environment with fewer point cloud distortions and occlusions while using relatively little bandwidth. Additionally, a tactile exploration study proposed a novel method for visually presenting information about objects' materials in the VR interface, to improve the operator's decision-making and address the challenges of point cloud visualisation. Two studies were conducted to understand the effect of dynamic virtual world scaling on teleoperation flow. The first study investigated the use of rate mode control with constant and variable mapping of the operator's joystick position to the speed (rate) of the robot's end-effector, depending on the virtual world scale.
The results showed that variable mapping allowed participants to teleoperate the robot more effectively, but at the cost of increased perceived workload. The second study compared how operators used the virtual world scale in supervised control, comparing participants' chosen scale at the beginning and end of a 3-day experiment. The results showed that as operators became better at the task they, as a group, used a different virtual world scale, and that participants' prior video gaming experience also affected the scale they chose. Similarly, the visual attention study investigated how operators' visual attention changes as they become better at teleoperating a robot using the framework. The results revealed the most important objects in the VR-reconstructed remote environment, as indicated by operators' visual attention patterns, as well as how their visual priorities shifted as they became better at teleoperating the robot. The study also demonstrated that operators' prior video gaming experience affects their ability to teleoperate the robot and their visual attention behaviours.
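The constant-versus-variable rate mapping compared in the first study can be sketched as follows (an illustrative toy only; the function, parameter names, and gain values are assumptions, not the thesis implementation):

```python
# Hedged sketch of rate mode control: joystick deflection maps to
# end-effector speed, with the gain either fixed (constant mapping)
# or scaled by the current virtual world scale (variable mapping).
def end_effector_rate(joystick, world_scale, variable_mapping=True,
                      base_gain=0.05):
    """Return commanded end-effector speed from joystick input in [-1, 1]."""
    gain = base_gain * world_scale if variable_mapping else base_gain
    return gain * joystick

# With variable mapping, zooming out (larger scale) makes the same
# deflection move the arm faster; zooming in allows finer motion.
print(end_effector_rate(1.0, 2.0))   # 0.1
print(end_effector_rate(1.0, 0.5))   # 0.025
```

The study's trade-off is visible even in this toy: scale-dependent gain speeds up coarse motions at large scales, which plausibly improves task effectiveness while demanding more of the operator.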

    A Common Digital Twin Platform for Education, Training and Collaboration

    The world is in transition driven by digitalization; industrial companies and educational institutions are adopting Industry 4.0 and Education 4.0 technologies enabled by digitalization. Furthermore, digitalization and the availability of smart devices and virtual environments have evolved to produce a generation of digital natives. These digital natives, whose smart devices have surrounded them since birth, have developed a new way to process information; instead of reading literature and writing essays, the digital native generation uses search engines, discussion forums, and online video content to study and learn. The evolved learning process of the digital native generation challenges the educational and industrial sectors to create natural training, learning, and collaboration environments for digital natives. Digitalization provides the tools to overcome the aforementioned challenge; extended reality and digital twins enable high-level user interfaces that are natural for digital natives and their interaction with physical devices. Simulated training and education environments enable a risk-free way of training safety aspects, programming, and controlling robots. To create a more realistic training environment, digital twins enable interfacing virtual and physical robots to train and learn on real devices utilizing the virtual environment. This thesis proposes a common digital twin platform for education, training, and collaboration. The proposed solution enables the teleoperation of physical robots from distant locations, enabling location- and time-independent training and collaboration in robotics. In addition to teleoperation, the proposed platform supports social communication, video streaming, and resource sharing for efficient collaboration and education. The proposed solution enables research collaboration in robotics by allowing collaborators to utilize each other’s equipment independent of the distance between the physical locations.
Sharing of resources saves time and travel costs. Social communication provides the possibility to exchange ideas and discuss research. Students and trainees can utilize the platform to learn new skills in robot programming, control, and safety aspects. Cybersecurity is considered from the planning phase to the implementation phase: only cybersecure methods, protocols, services, and components are used to implement the presented platform. Securing the low-level communication layer of the digital twins is essential for the safe teleoperation of the robots. Cybersecurity is the key enabler of the proposed platform, and after implementation, periodic vulnerability scans and updates maintain cybersecurity. This thesis discusses solutions and methods for securing an online digital twin platform. In conclusion, the thesis presents a common digital twin platform for education, training, and collaboration. The presented solution is cybersecure and accessible using mobile devices. The proposed platform, digital twin, and extended reality user interfaces contribute to the transitions to Education 4.0 and Industry 4.0.

    Neural Dynamics of Delayed Feedback in Robot Teleoperation: Insights from fNIRS Analysis

    As robot teleoperation increasingly becomes integral to executing tasks in distant, hazardous, or inaccessible environments, the challenge of operational delays remains a significant obstacle. These delays are inherent in signal transmission and processing and can adversely affect the operator's performance, particularly in tasks requiring precision and timeliness. While current research has made strides in mitigating these delays through advanced control strategies and training methods, a crucial gap persists in understanding the neurofunctional impacts of these delays and the efficacy of countermeasures from a cognitive perspective. Our study narrows this gap by leveraging functional Near-Infrared Spectroscopy (fNIRS) to examine the neurofunctional implications of simulated haptic feedback on cognitive activity and motor coordination under delayed conditions. In a human-subject experiment (N=41), we manipulated sensory feedback to observe its influence on the responses of various brain regions of interest (ROIs) during teleoperation tasks. The fNIRS data provided a detailed assessment of cerebral activity, particularly in ROIs implicated in time perception and the execution of precise movements. Our results reveal that certain conditions, which provided immediate simulated haptic feedback, significantly optimized neural functions related to time perception and motor coordination, and improved motor performance. These findings provide empirical evidence about the neurofunctional basis of the enhanced motor performance with simulated synthetic force feedback in the presence of teleoperation delays. Comment: Submitted to Frontiers in Human Neuroscience.

    User trust here and now but not necessarily there and then - A Design Perspective on Appropriate Trust in Automated Vehicles (AVs)

    Automation may carry out functions previously conducted only by humans. In the past, interaction with automation was primarily designed for, and used by, users with special training (for example, pilots in aviation or operators in the process industry), but as automation has developed and matured, it has also become more available to users who have no additional training on automation, such as users of automated vehicles (AVs). However, before we can reap the benefits of AV use, users must first trust the vehicles. According to earlier studies on trust in automation (TiA), user trust is a precondition for the use of automated systems, not only because it is essential to user acceptance, but also because it is a prerequisite for a good user experience. Furthermore, user trust should be appropriate in relation to the actual performance of the AV; that is, calibrated to the capabilities and limitations of the AV. Otherwise, it may lead to misuse or disuse of the AV.
The issue of how to design for appropriate user trust was approached from a user-centred design perspective based on earlier TiA theories and was addressed in four user studies using mixed-method research designs. The four studies involved three types of AVs: an automated car, an automated public transport bus, and an automated delivery bot for last-mile deliveries (LMD) of parcels. The users ranged from ordinary car drivers, bus drivers, and public transport commuters to logistics personnel.
The findings show that user trust in the AVs was primarily affected by information relating to the performance of the AV: factors such as how predictable, reliable, and capable the AV was perceived to be in conducting a task, how appropriate the behaviour of the AV was perceived to be for conducting the task, and whether or not the user understood why the AV behaved as it did when conducting the task.
Secondly, it was found that contextual aspects also influenced user trust in AVs. These primarily related to the users' perception of risk, for themselves and for others, as well as their perception of task difficulty. That is, user trust was affected by the perceived risk to oneself, but also by the possible risks the AV could impose on others, e.g. other road users. The perception of task difficulty influenced user trust in situations where a task was perceived as (too) easy, where the user could not judge the trustworthiness of the AV, or where the AV increased the task difficulty for the user, thus adding to negative outcomes. Therefore, AV-related trust factors and contextual aspects are important to consider when designing for appropriate user trust in different types of AVs operating in different domains.
However, a more in-depth cross-study analysis and consequent synthesis found that when designing for appropriate user trust, the factors and aspects mentioned earlier should be considered but should not be the focus. They are effects, that is, the user's interpretation of information originating from the behaviour of the AV in a particular context, which in turn are the consequence of the following design variables: (I) Who, i.e. the AV; (II) What the AV does; (III) by What Means the AV does something; (IV) When the AV does something; (V) Why the AV does something; and (VI) Where the AV does something, as well as the interplay between them. Furthermore, it was found that user trust is affected by the interdependency between (II) What the AV does and (VI) Where the AV does it; these were always assessed together by the user, in turn affecting user trust. From these findings a tentative Framework of Trust Analysis & Design was developed. The framework can be used as a ‘tool-for-thought’ and accounts for the activity conducted by the AV, the context, and their interdependence, which ultimately affect user trust.