
    Measuring the impact of haptic feedback in collaborative robotic scenarios

    In recent years, the interaction of a human operator with teleoperated robotic systems has improved considerably. One of the factors behind this improvement is the addition of force feedback to complement the visual feedback provided by traditional graphical user interfaces. However, the users of these systems performing tasks in isolated and safe environments are often inexperienced and occasional users. In addition, there is no common framework for assessing the usability of these systems, owing to the heterogeneity of applications and tasks; there is therefore a need for new usability assessment methods that are not domain specific. This study addresses this issue by proposing a measure of usability that includes five variables: user efficiency, user effectiveness, mental workload, perceived usefulness, and perceived ease of use. The empirical analysis shows that the integration of haptic feedback improves the usability of these systems for non-expert users, even though the differences are not statistically significant; further, the results suggest that mental workload is higher when haptic feedback is added. The analysis also reveals significant differences between participants depending on gender. Open-access publication funded by the Consorcio de Bibliotecas Universitarias de Castilla y León (BUCLE), under Operational Programme 2014ES16RFOP009 FEDER 2014-2020 DE CASTILLA Y LEÓN, Action: 20007-CL - Apoyo Consorcio BUCLE.
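    The abstract does not give an aggregation formula for the five usability variables. A minimal sketch, assuming equally weighted scores normalized to [0, 1] (the weighting, normalization, and all names below are assumptions, not the study's instrument):

    ```python
    from dataclasses import dataclass

    @dataclass
    class UsabilityScores:
        """One participant's normalized scores, each in [0, 1]."""
        efficiency: float        # task time relative to a reference time
        effectiveness: float     # fraction of task goals achieved
        mental_workload: float   # e.g. a normalized NASA-TLX rating (higher = more load)
        usefulness: float        # perceived-usefulness questionnaire score
        ease_of_use: float       # perceived ease-of-use questionnaire score

    def usability_index(s: UsabilityScores) -> float:
        """Aggregate the five variables into a single score in [0, 1].

        Mental workload counts against usability, so it enters inverted.
        Equal weights are a placeholder; the study does not specify any.
        """
        return (s.efficiency + s.effectiveness + (1.0 - s.mental_workload)
                + s.usefulness + s.ease_of_use) / 5.0

    # Example: a participant in the haptic-feedback condition.
    print(usability_index(UsabilityScores(0.7, 0.9, 0.6, 0.8, 0.75)))
    ```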

    Tele-operation and Human Robots Interactions


    An augmented reality interface for multi-robot tele-operation and control

    This thesis presents a seamlessly controlled human multi-robot system comprised of semi-autonomous ground and aerial robots for source localization tasks. The system combines augmented reality interface capabilities with a human supervisor's ability to control multiple robots. It uses advanced path planning algorithms to ensure that obstacles are avoided and that operators are free for higher-level tasks. A sensor-fused AR view is displayed to help users pinpoint source information and support the goals of the mission. The thesis presents a preliminary Human Factors evaluation of this system in which several interface conditions are tested for source detection tasks.
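    The abstract mentions advanced path planning for obstacle avoidance without naming the algorithm. As an illustrative sketch only, a minimal grid-based A* planner (the occupancy grid, 4-connectivity, and all names are assumptions, not the thesis's planner):

    ```python
    import heapq

    def astar(grid, start, goal):
        """Minimal A* on a 4-connected occupancy grid (True = obstacle).

        Returns a list of (row, col) cells from start to goal, or None.
        Manhattan distance is an admissible heuristic for 4-connectivity.
        """
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
        open_set = [(h(start), 0, start, None)]
        came_from, g_cost = {}, {start: 0}
        while open_set:
            _, g, cur, parent = heapq.heappop(open_set)
            if cur in came_from:
                continue  # already expanded with a cheaper cost
            came_from[cur] = parent
            if cur == goal:
                path = []
                while cur is not None:
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            r, c = cur
            for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                nr, nc = nxt
                if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                    if g + 1 < g_cost.get(nxt, float("inf")):
                        g_cost[nxt] = g + 1
                        heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
        return None

    # Example: plan around a single obstacle cell.
    grid = [[False, False, False],
            [False, True,  False],
            [False, False, False]]
    print(astar(grid, (0, 0), (2, 2)))
    ```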

    Hand-Gesture Based Programming of Industrial Robot Manipulators

    Nowadays, industrial robot manipulators and manufacturing processes are associated as never before. Robot manipulators execute repetitive tasks with increased accuracy and speed, features necessary for industries that need to manufacture products in large quantities while reducing production time. Although robot manipulators play a significant role in enhancing productivity within industries, their programming process is an important drawback. Traditional programming methodologies require robot programming experts and are time consuming. This thesis work aims to develop an application for programming industrial robot manipulators without traditional programming methodologies, exploiting the intuitiveness of human hand gestures. The development of input devices for intuitive Human-Machine Interactions provides the possibility to capture such gestures. Hence, robot manipulator programming experts can be replaced by task experts, and the integration of intuitive means of interaction can also reduce the required programming time. The components used to capture the operators' hand gestures are a data glove and a precise hand-tracking device. The robot manipulator imitates the motion that the human operator performs with the hand, in terms of position. Inverse kinematics are applied to enable programming of robot manipulators independently of their structure and manufacturer, and the possibility of optimizing the programmed robot paths is investigated. Finally, a Human-Machine Interface contributes to the programming process by presenting important information about the process and the status of the integrated components.
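    The thesis applies inverse kinematics so the manipulator can imitate the operator's hand position regardless of structure and manufacturer. A minimal sketch of the idea for a planar 2-link arm (real industrial manipulators have six or more joints and typically need numerical solvers; this closed-form case is illustration only):

    ```python
    import math

    def two_link_ik(x, y, l1, l2):
        """Analytic inverse kinematics for a planar 2-link arm.

        Given a target (x, y) and link lengths l1, l2, return one
        (shoulder, elbow) joint solution in radians, or None if the
        target is out of reach.
        """
        d2 = x * x + y * y
        # Law of cosines gives the elbow angle.
        c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
        if not -1.0 <= c2 <= 1.0:
            return None  # target unreachable
        elbow = math.acos(c2)  # elbow-down solution
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    # Example: map a tracked hand position to joint angles.
    print(two_link_ik(0.5, 0.3, 0.4, 0.4))
    ```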

    Development and evaluation of a collision avoidance system for supervisory control of a micro aerial vehicle

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 195-108). Recent technological advances have enabled Unmanned Aerial Vehicles (UAVs) and Micro Aerial Vehicles (MAVs) to become increasingly prevalent in a variety of domains. From military surveillance to disaster relief to search-and-rescue tasks, these systems have the capacity to assist in difficult or dangerous tasks and to potentially save lives. To enable operation by minimally trained personnel, the control interfaces require increased usability in order to maintain safety and mission effectiveness. In particular, as these systems are used in the real world, the operator must be able to navigate around obstacles in unknown and unstructured environments. In order to address this problem, the Collision and Obstacle Detection and Alerting (CODA) display was designed and integrated into a smartphone-based MAV control interface. The CODA display uses a combination of visual and haptic alerts to warn the operator of potential obstacles in the environment, helping the operator navigate more effectively and avoid collisions. To assess the usability of this system, a within-subjects experiment was conducted in which participants used the mobile interface to pilot a MAV both with and without the assistance of the CODA display. The task consisted of navigating through a simulated indoor environment and locating visual targets. Metrics for the two conditions examined performance, control strategies, and subjective feedback from each participant. Overall, the addition of the CODA display resulted in higher performance, lowering the crash rate and decreasing the amount of time required to complete the tasks. Despite increasing the complexity of the interface, adding the CODA display did not significantly impact usability, and participants preferred operating the MAV with the CODA display. These results demonstrate that the CODA display provides the basis for an effective alerting tool to assist with MAV operation for exploring unknown environments. Future work should explore expansion to three-dimensional sensing and alerting capabilities as well as validation in an outdoor environment. By Kimberly F. Jackson. S.M.
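    The CODA display's alerting logic is described only at a high level. A minimal sketch of graded proximity alerting (the sector model, thresholds, and names are assumptions, not the thesis's implementation):

    ```python
    def coda_alert(distances_m, warn_at=2.0, alarm_at=0.75):
        """Map range-sensor readings to graded obstacle alerts.

        distances_m: dict of sector name -> closest obstacle distance (m).
        Returns (sector, level) pairs; an 'alarm' would drive both the
        visual overlay and the haptic (vibration) channel, a 'warn' the
        visual channel only. Thresholds here are illustrative.
        """
        alerts = []
        for sector, d in distances_m.items():
            if d <= alarm_at:
                alerts.append((sector, "alarm"))
            elif d <= warn_at:
                alerts.append((sector, "warn"))
        return alerts

    # Example: obstacle close on the left, another farther ahead.
    print(coda_alert({"front": 1.6, "left": 0.5, "right": 4.0}))
    ```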

    DSAAR: distributed software architecture for autonomous robots

    Dissertation presented at the Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa for the degree of Master in Electrical Engineering. This dissertation presents a software architecture called the Distributed Software Architecture for Autonomous Robots (DSAAR), which is designed to provide fast development and prototyping of multi-robot systems. The DSAAR building blocks allow engineers to focus on the behavioural model of robots and collectives. This architecture is of special interest in domains where several human, robot, and software agents have to interact continuously; thus, fast prototyping and reusability are a must. DSAAR tries to meet these requirements, working towards an advanced solution to the n-humans and m-robots problem with a set of good design practices and development tools. This dissertation also focuses on Human-Robot Interaction, mainly on the subject of teleoperation. In teleoperation, human judgement is an integral part of the process, heavily influenced by the telemetry data received from the remote environment, so the speed at which commands are given and telemetry data is received is of crucial importance. Using the DSAAR architecture, a teleoperation approach is proposed. This approach was designed to provide all entities present in the network with a shared reality, where every entity is an information source, in an approach similar to a distributed blackboard. This solution was designed to achieve real-time response as well as the most complete possible perception of the robot's surroundings. Experimental results obtained with the physical robot suggest that the system is able to guarantee close interaction between users and the robot.
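    The shared-reality store at the heart of this approach is described as a distributed blackboard. A minimal single-process sketch of the data structure (networking and replication are omitted; the class and method names are assumptions, not DSAAR's API):

    ```python
    import threading
    import time

    class Blackboard:
        """Shared store in the spirit of a distributed blackboard:
        every entity (human, robot, software agent) posts timestamped
        entries that all others can read. A real system would
        synchronize this store across hosts.
        """
        def __init__(self):
            self._lock = threading.Lock()
            self._entries = {}

        def post(self, source, key, value):
            """Publish a value under (source, key) with a timestamp."""
            with self._lock:
                self._entries[(source, key)] = (value, time.time())

        def read(self, source, key):
            """Return (value, timestamp) or None if never posted."""
            with self._lock:
                return self._entries.get((source, key))

    # Example: a robot posts telemetry; the operator station reads it.
    bb = Blackboard()
    bb.post("robot1", "pose", (1.2, 3.4, 0.1))
    print(bb.read("robot1", "pose"))
    ```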

    Adaptive Shared Autonomy between Human and Robot to Assist Mobile Robot Teleoperation

    Mobile robot teleoperation is widely used when it is impractical or infeasible for a human to be present, yet human decision-making is still required. Without assistance, controlling the robot is stressful and error-prone for the human because of time delay and the absence of situation awareness; on the other hand, despite recent achievements, a fully autonomous robot cannot yet execute tasks independently based on current models of perception and control. Therefore, both the human and the robot must remain in the control loop to contribute intelligence to task execution at the same time. This means that the human should share autonomy with the robot during operation. The challenge, however, is to coordinate these two sources of intelligence, from the human and from the robot, in the best way to ensure safe and efficient task execution in teleoperation. This thesis therefore proposes a novel strategy: it models user intent as a contextual task to complete an action primitive and provides the operator with appropriate motion assistance upon recognizing that task. In this way, the robot copes intelligently with ongoing tasks on the basis of contextual information, relieves the operator's workload, and improves task performance. To implement this strategy and account for uncertainties in sensing and processing environmental information and user input (i.e., the contextual information), a probabilistic shared-autonomy framework is introduced to recognize, with uncertainty measures, the contextual task the operator is performing with the robot and to offer the operator appropriate task-execution assistance based on these measures. Since the way an operator performs a task is implicit, manually modeling the motion pattern of task execution is non-trivial, so a set of data-driven approaches is used to derive the patterns of different task executions from human demonstrations and to adapt to the operator's needs in an intuitive way over the long term. The practicality and scalability of the proposed approaches are demonstrated through extensive experiments both in simulation and on a real robot. With the proposed approaches, the operator can be actively and appropriately assisted by increasing the robot's cognitive capability and autonomy flexibility.
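    The probabilistic recognition of the contextual task can be illustrated with a single Bayesian belief update. A minimal sketch (the task set, observation model, and function names are placeholders; the thesis learns its motion models from human demonstrations):

    ```python
    def update_intent(prior, likelihoods):
        """One Bayesian update of the belief over contextual tasks.

        prior: dict task -> P(task); likelihoods: dict task ->
        P(observed user input | task). Returns the normalized posterior.
        """
        posterior = {t: prior[t] * likelihoods.get(t, 1e-9) for t in prior}
        z = sum(posterior.values()) or 1.0
        return {t: p / z for t, p in posterior.items()}

    # Example: joystick input that fits "pass doorway" better than "dock".
    belief = {"pass_doorway": 0.5, "dock": 0.5}
    belief = update_intent(belief, {"pass_doorway": 0.8, "dock": 0.2})
    print(belief)  # motion assistance would follow the most probable task
    ```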

    Visuo-haptic Command Interface for Control-Architecture Adaptable Teleoperation

    Robotic teleoperation is the commanding of a remote robot. Depending on the operator's involvement required by a teleoperation task, the remote site is more or less autonomous. On the operator site, input and display devices record and present control-related information from and to the operator, respectively. Kinaesthetic devices stimulate haptic senses, conveying information through the sensing of displacement, velocity and acceleration within muscles, tendons and joints. These devices have been shown to excel in tasks with low autonomy, while touchscreen-based devices are beneficial in highly autonomous tasks. However, neither performs reliably across the full range. This thesis examines the feasibility of the 'Motion Console Application for Novel Virtual, Augmented and Avatar Systems' (Motion CANVAAS), which unifies the input/display capabilities of kinaesthetic and visual touchscreen-based devices in order to bridge this gap. This work describes the design, construction, and development of the Motion CANVAAS and conducts an initial validation. The Motion CANVAAS was evaluated via two pilot studies, each based on a different virtual environment: a modified Tetris application and a racing-kart simulator. The target research variables were the coupling of input/display capabilities and the effect of the application-specific kinaesthetic feedback. Both studies proved the concept to be a viable solution as a haptic input/output device and indicated potential advantages over current solutions. On the other hand, some of the system's limitations were identified. With the insight gained from this work, both the benefits and the limitations will be addressed in future research. Additionally, a full user study will be conducted to shed light on the capabilities and performance of the device in teleoperation over a broad range of autonomy.

    Virtual reality aided vehicle teleoperation

    This thesis describes a novel approach to vehicle teleoperation. Vehicle teleoperation is the human mediated control of a vehicle from a remote location. Typical methods for providing updates of the world around the vehicle use vehicle mounted video cameras. This methodology suffers from two problems: lag and limited field of view. Lag is the amount of time it takes for a signal to travel from the operator's location to the vehicle. This lag causes the images from the camera and commands from the operator to be delayed. This behavior is a serious problem when the vehicle is approaching an obstacle. If the delay is long enough, the vehicle might crash into an obstacle before the operator knows that it is there. To complicate matters, most cameras provide only a small arc of visibility around the vehicle that leaves a significant blind spot. Therefore, hazards close to the vehicle might not be visible to the operator, such as a rock behind and to the left of the vehicle. In that case, if the vehicle were maneuvered sharply to the left, it might impact the rock. Virtual reality has been used to attack these two problems. A simulation of the vehicle is used to predict its positional response to inputs. This response is then displayed in a virtual world that mimics the operational environment. A dynamics algorithm called the wagon tongue method is used by a computer at the remote site to correct for inaccuracies between the simulated vehicle position and the actual vehicle position. The wagon tongue method eliminates the effect of the average lag value. Synchronization code is used to ensure that the vehicle executes commands with the same amount of time between them as when the operator issued them. This system behavior eliminates the effects of lag variation. The problem of limited field of view is solved by using a virtual camera viewpoint behind the vehicle that displays the entire world around the vehicle. This thesis develops and compares a system using virtual reality aided teleoperation with direct control and vehicle mounted camera aided teleoperation.
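    The wagon tongue method is described only qualitatively. A minimal 2-D sketch of the correction idea, with an assumed proportional gain (the thesis does not publish its gain or state representation):

    ```python
    def wagon_tongue_correction(predicted, reported, gain=0.1):
        """Pull the simulated vehicle pose toward the (delayed) pose
        reported by the real vehicle, like a wagon tongue dragging the
        wagon behind the tractor. Applied every update, this bleeds off
        accumulated prediction error without a visible jump. The gain
        value is illustrative only.
        """
        px, py = predicted
        rx, ry = reported
        return (px + gain * (rx - px), py + gain * (ry - py))

    # Example: the simulation drifted 1 m ahead of the real vehicle.
    sim_pose = (10.0, 5.0)
    real_pose = (9.0, 5.0)
    for _ in range(3):
        sim_pose = wagon_tongue_correction(sim_pose, real_pose)
    print(sim_pose)  # converging toward the reported pose
    ```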