
    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.

    Velocity control of mini-UAV using a helmet system

    Using a helmet to command a mini-unmanned aerial vehicle (mini-UAV) creates a telepresence system that connects the operator to the vehicle. This paper proposes a system that remotely couples a pilot's head motion to the 3D movements of a mini-UAV. Two velocity control algorithms were tested for manipulating the system. Results demonstrate that these head movements can be used as reference inputs to the mini-UAV's controller.
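
The helmet-to-UAV coupling described above can be sketched as a simple angle-to-velocity map; the deadband, saturation angle, and linear shaping below are illustrative assumptions, not the paper's tested control laws:

```python
import math

def head_to_velocity(yaw_rad, pitch_rad, v_max=1.0, deadband_rad=0.05):
    """Map helmet yaw/pitch angles to (forward, lateral) velocity references.

    Hypothetical mapping: a small deadband rejects involuntary head tremor;
    beyond it, the reference grows linearly with head angle and saturates
    at v_max once the head is tilted 45 degrees or more.
    """
    def shape(angle):
        if abs(angle) < deadband_rad:
            return 0.0  # ignore tiny head motions
        sign = math.copysign(1.0, angle)
        mag = min((abs(angle) - deadband_rad) / (math.pi / 4), 1.0)
        return sign * v_max * mag

    # pitch drives forward velocity, yaw drives lateral velocity
    return shape(pitch_rad), shape(yaw_rad)
```

A controller on the vehicle would then track these references, closing the loop the abstract describes between head motion and UAV motion.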

    Development and human performance evaluation of a ground vehicle robotic force-feedback tele-operation system

    ABSTRACT: Development and Human Performance Evaluation of a Ground Vehicle Robotic Force-Feedback Tele-operation System, by Ankur Saraf, May 2011. Advisor: Dr. Abhilash K. Pandya. Major: Electrical Engineering. Degree: Master of Science. Modern tele-operation systems are trying to take haptic and audio information into account, in addition to visual data, as feedback to the tele-operator. This research emphasizes the development of a hardware and software architecture to enhance the tele-operation capabilities of the omni-directional inspection robot (ODIS). The system enhances these capabilities by introducing force feedback to the tele-operator. The conventional joystick is replaced with a Novint Falcon haptic joystick, which receives its feedback from a wireless accelerometer sensor module mounted on top of ODIS. The module uses XBee radios to send the acceleration data to a server, which in turn is connected to the joystick used to direct ODIS. An advantage of the wireless accelerometer system is that it can be used not only with ODIS but with any other unmanned vehicle as well. Though this research uses the ODIS robot as its platform, the ideas and concepts put forward are applicable to tele-operation of robots in general.
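
The acceleration-to-force pipeline described above can be sketched as follows; the gain, force limit, and gravity handling are assumptions for illustration, not the thesis's actual parameters:

```python
def accel_to_force(ax, ay, az, gain=0.5, f_max=8.0, g=9.81):
    """Turn an onboard 3-axis accelerometer reading (m/s^2) into a force
    command (N) for a desktop haptic joystick.

    Hypothetical scheme: subtract gravity from the vertical axis, scale by
    a fixed gain, and clamp each component to the device's force limit (a
    desktop haptic device renders only a few newtons).
    """
    def clamp(v):
        return max(-f_max, min(f_max, v))
    return clamp(gain * ax), clamp(gain * ay), clamp(gain * (az - g))
```

In the described architecture, the vehicle-side module would stream `(ax, ay, az)` over the radio link to the server, which would apply a mapping like this before commanding the joystick.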

    Haptic Feedback Effects on Human Control of a UAV in a Remote Teleoperation Flight Task

    The remote manual teleoperation of an unmanned aerial vehicle (UAV) by a human operator creates a human-in-the-loop system of great concern. In a remote teleoperation task, a human pilot must make control decisions based on sensory information provided by the governed system. Often, this information consists of limited visual feedback from onboard cameras that do not give the operator an accurate portrayal of the immediate surroundings, compromising the safety of the mobile robot. Due to this shortfall, haptic force feedback is often provided to the human in an effort to increase their perceptual awareness of the surrounding world. To investigate the effects of this additional sensory information, we consider two haptic force feedback strategies, designed to provide either an attractive force that influences control behavior towards a reference trajectory along a flight path, or a repulsive force that directs operators away from obstacles to prevent collision. Subject tests were conducted in which human operators manually flew a remote UAV through a corridor environment under the two strategies. For comparison, the conditions of no haptic feedback and the linear combination of both attractive and repulsive strategies were included in the study. Experimental results indicate that haptic force feedback in general (both attractive and repulsive) improves the average distance from surrounding obstacles by up to 21%. Further statistical comparison of the repulsive and attractive feedback modalities reveals that even though a repulsive strategy is based directly on obstacles, an attractive strategy towards a reference trajectory is more suitable across all performance metrics. To further examine the effects of haptic aids in a UAV teleoperation task, the behavior of the human as part of the control loop was also investigated.
Through a novel device placed on the end effector of the haptic device, human-haptic interaction forces were captured and analyzed. With this information, system identification techniques were carried out to determine the plausibility of deriving a human control model for the system. Defining lateral motion as a one-dimensional compensatory tracking task, the results show that general human control behavior can be identified, where lead compensation is invoked to counteract second-order UAV dynamics.
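
The two strategies can be illustrated with textbook potential-field-style force laws; the gains, the influence radius, and the exact force shaping are assumptions for illustration, not the paper's identified parameters:

```python
import math

def repulsive_force(obstacles, k=1.0, d0=2.0):
    """Sum of planar repulsive forces pushing away from each obstacle
    inside an influence radius d0; the magnitude grows as the distance
    shrinks (classic potential-field shaping).

    obstacles: list of (dx, dy) vectors from the UAV to each obstacle.
    """
    fx = fy = 0.0
    for dx, dy in obstacles:
        d = math.hypot(dx, dy)
        if 0.0 < d < d0:
            mag = k * (1.0 / d - 1.0 / d0)
            fx -= mag * dx / d  # push away from the obstacle direction
            fy -= mag * dy / d
    return fx, fy

def attractive_force(err_x, err_y, k=1.0):
    """Spring-like force pulling the operator's stick back toward the
    reference trajectory; (err_x, err_y) is the tracking error."""
    return -k * err_x, -k * err_y
```

The study's "linear combination" condition would then simply add the two vectors before rendering them on the haptic stick.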

    On the use of haptic tablets for UGV teleoperation in unstructured environments: system design and evaluation

    Teleoperation of Unmanned Ground Vehicles (UGVs), particularly for inspection of unstructured and unfamiliar environments, still raises important challenges from the point of view of the operator interface. One of these challenges stems from the fact that all information available to the operator is presented through a computer interface, providing only a partial view of the robot's situation. The majority of existing interfaces provide information using visual and, less frequently, sound channels. The lack of Situation Awareness (SA) caused by this partial view may lead to an incorrect and inefficient response to the current UGV state, often confusing and frustrating the human operator. For instance, the UGV may become stuck in debris while the operator struggles to move the robot, not understanding the cause of the UGV's lack of motion. We address this problem by studying the use of haptic feedback to improve operator SA; more precisely, SA with respect to the traction state of the UGV, using a haptic tablet both to command the robot and to convey the traction state to the user through haptic feedback. We report (1) a teleoperation interface integrating a haptic tablet with an existing UGV teleoperation interface, and (2) the experimental results of a user study designed to evaluate the advantage of this interface in the teleoperation of a UGV in a search and rescue scenario. Statistically significant results support the hypothesis that using the haptic tablet reduces the time the UGV spends in states without traction.
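
A minimal sketch of how a traction cue like this could be derived, assuming the slip estimate comes from comparing commanded wheel speed with measured body speed (the paper's actual traction estimator and tablet API are not described in the abstract, so every name and threshold here is illustrative):

```python
def slip_ratio(wheel_speed, body_speed, eps=1e-6):
    """Longitudinal slip estimate: 0 when the wheels and the body agree,
    approaching 1 when the wheels spin with no body motion."""
    denom = max(abs(wheel_speed), abs(body_speed), eps)
    return abs(wheel_speed - body_speed) / denom

def vibration_amplitude(wheel_speed, body_speed, threshold=0.3, a_max=1.0):
    """Hypothetical haptic cue: silent under normal traction, then an
    amplitude ramping linearly with slip above the threshold."""
    s = slip_ratio(wheel_speed, body_speed)
    if s <= threshold:
        return 0.0
    return a_max * (s - threshold) / (1.0 - threshold)
```

The tablet would render this amplitude continuously, so a stuck robot (spinning wheels, no motion) becomes immediately perceptible without the operator watching telemetry.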

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated the human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be an important task for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A 5-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera view designs were tested using a within-subjects design: an egocentric view (perspective from the manipulator), an exocentric view (perspective from the UAV), and a mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements.
Results from these studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preferences did not affect performance for participants who preferred the mixed condition. This result led to a second finding that participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely caused by cognitive tunneling, and the data suggest tradeoffs between performance improvements and attentional costs when adding information in the form of multiple views to the Mission Specialist interface. Additionally, based on this empirical evaluation of multiple camera views, the exocentric view was recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) conducting the first focused HRI study of aerial telemanipulation, ii) development of an evaluative model for telemanipulation performance, iii) creation of new recommendations for aerial telemanipulation interfacing, and iv) contribution of code, hardware designs, and system architectures to the open-source UAV community. The evaluative model provides a detailed framework, a complement to the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in the system, environment, operators, and interfacing factors on performance. The practical contributions of this work will expedite the use of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating UAV performance

    Towards Tactile Internet in Beyond 5G Era: Recent Advances, Current Issues and Future Directions

    Tactile Internet (TI) is envisioned to create a paradigm shift from content-oriented communications to steer/control-based communications by enabling real-time transmission of haptic information (i.e., touch, actuation, motion, vibration, surface texture) over the Internet, in addition to conventional audiovisual and data traffic. This emerging TI technology, also considered the next evolution phase of the Internet of Things (IoT), is expected to create numerous opportunities for technology markets in a wide variety of applications, ranging from teleoperation systems and Augmented/Virtual Reality (AR/VR) to automotive safety and eHealthcare, towards addressing the complex problems of human society. However, the realization of TI over wireless media in the upcoming Fifth Generation (5G) and beyond networks creates various non-conventional communication challenges and stringent requirements in terms of ultra-low latency, ultra-high reliability, high data-rate connectivity, resource allocation, multiple access, and the quality-latency-rate tradeoff. To this end, this paper aims to provide a holistic view of wireless TI along with a thorough review of the existing state of the art, to identify and analyze the involved technical issues, to highlight potential solutions, and to propose future research directions. First, starting with the vision of TI, recent advances, and a review of related survey/overview articles, we present a generalized framework for wireless TI in the Beyond 5G Era, including a TI architecture, the main technical requirements, the key application areas, and potential enabling technologies. Subsequently, we provide a comprehensive review of the existing TI works by broadly categorizing them into three main paradigms, namely, haptic communications, wireless AR/VR, and autonomous, intelligent and cooperative mobility systems.
Next, potential enabling technologies across physical/Medium Access Control (MAC) and network layers are identified and discussed in detail. Also, security and privacy issues of TI applications are discussed along with some promising enablers. Finally, we present some open research challenges and recommend promising future research directions

    Mixed-reality for unmanned aerial vehicle operations in near earth environments

    Future applications will bring unmanned aerial vehicles (UAVs) to near Earth environments such as urban areas, causing a change in the way UAVs are currently operated. Of concern is that UAV accidents still occur at a much higher rate than the accident rate for commercial airliners. A number of these accidents can be attributed to a UAV pilot's low situation awareness (SA) due to the limitations of UAV operating interfaces. The main limitation is the physical separation between the vehicle and the pilot, which eliminates any motion and exteroceptive sensory feedback to the pilot. These limitations, on top of a small field of view from the onboard camera, result in low SA, making near Earth operations difficult and dangerous. Autonomy has been proposed as a solution for near Earth tasks, but state-of-the-art artificial intelligence still requires very structured and well-defined goals to allow safe autonomous operations. Therefore, there is a need to better train pilots to operate UAVs in near Earth environments and to augment their performance for increased safety and minimization of accidents. In this work, simulation software, motion platform technology, and UAV sensor suites were integrated to produce mixed-reality systems that address current limitations of UAV piloting interfaces. The mixed-reality definition is extended in this work to encompass not only visual aspects but also a motion aspect. A training and evaluation system for UAV operations in near Earth environments was developed. Modifications were made to flight simulator software to recreate current UAV operating modalities (internal and external). The training and evaluation system has been combined with Drexel's Sensor Integrated Systems Test Rig (SISTR) to allow simulated missions while incorporating real-world environmental effects and UAV sensor hardware. To address the lack of motion feedback to a UAV pilot, a system was developed that integrates a motion simulator into UAV operations.
The system is designed such that, during flight, the angular rate of the UAV is captured by an onboard inertial measurement unit (IMU) and relayed to a pilot controlling the vehicle from inside the motion simulator. Efforts to further increase pilot SA led to the development of a mixed-reality chase view piloting interface. Chase view is similar to a view of being towed behind the aircraft; it combines real-world onboard camera images with a virtual representation of the vehicle and the surrounding operating environment. A series of UAV piloting experiments were performed using the training and evaluation systems described earlier. Subjects' behavioral performance while using the onboard camera view and the mixed-reality chase view interface during missions was analyzed. Subjects' cognitive workload was also assessed using subjective measures such as the NASA Task Load Index and non-subjective brain activity measurements using a functional near-infrared spectroscopy (fNIR) system. Behavioral analysis showed that the chase view interface improved pilot performance in near Earth flights and increased their situational awareness. fNIR analysis showed that a subject's cognitive workload was significantly lower while using the chase view interface. Real-world flight tests were conducted in a near Earth environment with buildings and obstacles to evaluate the chase view interface with real-world data. The interface performed very well with real-world, real-time data in close-range scenarios. The mixed-reality approaches presented follow studies on human factors performance and cognitive loading. The resulting designs serve as test beds for studying UAV pilot performance, creating training programs, and developing tools to augment UAV operations and minimize UAV accidents during operations in near Earth environments.
Ph.D., Mechanical Engineering -- Drexel University, 201
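
The chase view's "towed behind the aircraft" vantage point can be sketched as a simple camera-placement rule; the offsets and the flat, yaw-only formulation are illustrative assumptions, not the interface's actual rendering geometry:

```python
import math

def chase_camera_pose(x, y, z, yaw, back=3.0, up=1.5):
    """Place a virtual chase camera behind and above the UAV along its
    heading, looking toward the vehicle. The virtual scene rendered from
    this pose would then be composited with the real onboard camera
    image, as the mixed-reality chase view does.
    """
    cx = x - back * math.cos(yaw)  # step backwards along the heading
    cy = y - back * math.sin(yaw)
    cz = z + up                    # raise the camera above the vehicle
    return cx, cy, cz
```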

    A Shared-Control Teleoperation Architecture for Nonprehensile Object Transportation

    This article proposes a shared-control teleoperation architecture for robot manipulators transporting an object on a tray. Unlike many existing studies of remotely operated robots with firm grasping capabilities, we consider the case in which, in principle, the object can break its contact with the robot end-effector. The proposed shared-control approach automatically regulates the remote robot motion commanded by the user and the end-effector orientation to prevent the object from sliding over the tray. Furthermore, the human operator is provided with haptic cues informing about the discrepancy between the commanded and executed robot motion, which assist the operator throughout the task execution. We carried out trajectory tracking experiments employing an autonomous 7-degree-of-freedom (DoF) manipulator and compared the results obtained using the proposed approach with two different control schemes (i.e., constant tray orientation and no motion adjustment). We also carried out a human-subjects study involving 18 participants in which a 3-DoF haptic device was used to teleoperate the robot linear motion and display haptic cues to the operator. In all experiments, the results clearly show that our control approach outperforms the other solutions in terms of sliding prevention, robustness, command tracking, and user preference.
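
The no-sliding condition such a controller must maintain can be illustrated with a standard Coulomb friction-cone check for an object resting on a horizontal tray; the friction coefficient and the tray-frame formulation below are assumptions for illustration, not the article's control law:

```python
import math

def stays_on_tray(acc_tx, acc_ty, acc_n, mu=0.4, g=9.81):
    """Coulomb friction check for an object on a flat tray: the contact
    can supply at most mu times the normal force, so the commanded
    tangential acceleration must satisfy |a_t| <= mu * (g + a_n), with
    accelerations expressed in the tray frame (m/s^2).
    """
    normal = g + acc_n
    if normal <= 0.0:
        return False  # the object is unloaded: contact about to break
    return math.hypot(acc_tx, acc_ty) <= mu * normal
```

A shared-control layer along the lines of the article would slow or reorient the commanded motion whenever a check like this fails, rather than executing the operator's command verbatim.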

    Intuitive Robot Teleoperation Based on Haptic Feedback and 3D Visualization

    Robots are required in many jobs. Jobs involving tele-operation may be very challenging, often requiring a destination to be reached quickly and with minimum collisions. To succeed in these jobs, human operators are asked to tele-operate a robot manually through a user interface. The design of the user interface, and of the information provided in it, therefore becomes a critical element for the successful completion of robot tele-operation tasks. Effective and timely robot tele-navigation mainly relies on the intuitiveness of the interface and on the richness and presentation of the feedback given. This project investigated the use of both haptic and visual feedback in a user interface for robot tele-navigation. The aim was to overcome some of the limitations observed in state-of-the-art works, turning what is sometimes described as contrasting into an added value that improves tele-navigation performance. The key issue is to combine different human sensory modalities in a coherent way and to benefit from 3-D vision too. The proposed new approach was inspired by how visually impaired people use walking sticks to navigate. Haptic feedback may provide helpful input for a user to comprehend distances to surrounding obstacles and information about the obstacle distribution. This was achieved entirely by relying on on-board range sensors and by processing their input through a simple scheme that regulates the magnitude and direction of the environmental force feedback provided to the haptic device. A specific algorithm was also used to render the distribution of very close objects to provide appropriate touch sensations. Scene visualization was provided by the system and shown to the user coherently with the haptic sensation.
Different visualization configurations, from multi-viewpoint observation to 3-D visualization, were proposed and rigorously assessed through experimentation to understand the advantages of the proposed approach and the performance variations among different 3-D display technologies. Over twenty users participated in a usability study composed of two major experiments. The first experiment focused on a comparison between the proposed haptic-feedback strategy and a typical state-of-the-art approach; it included testing with multi-viewpoint visual observation. The second experiment investigated the performance of the proposed haptic-feedback strategy when combined with three different stereoscopic 3-D visualization technologies. The results from the experiments were encouraging, showing good performance with the proposed approach and an improvement over literature approaches to haptic feedback in robot tele-operation. It was also demonstrated that 3-D visualization can be beneficial for robot tele-navigation and does not conflict with haptic feedback if properly aligned with it. Performance may vary with different 3-D visualization technologies, which is also discussed in the presented work.
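
The walking-stick-style force feedback described above can be sketched as a simple aggregation of a planar range scan into one repulsive force vector; the proximity weighting and the clamping scheme are stand-ins for the thesis's magnitude/direction regulation, not its actual algorithm:

```python
import math

def scan_to_force(ranges, angles, d_max=2.0, f_max=5.0):
    """Aggregate a planar range scan into a single force-feedback vector:
    each beam shorter than d_max contributes a push away from its
    direction, weighted by proximity, and the resulting sum is clamped
    to the haptic device's force limit.

    ranges: beam distances in meters; angles: beam directions in radians.
    """
    fx = fy = 0.0
    for r, a in zip(ranges, angles):
        if 0.0 < r < d_max:
            w = (d_max - r) / d_max       # 0 at d_max, 1 at contact
            fx -= w * math.cos(a)         # push away from the beam direction
            fy -= w * math.sin(a)
    mag = math.hypot(fx, fy)
    if mag > f_max:                       # respect the device force limit
        fx *= f_max / mag
        fy *= f_max / mag
    return fx, fy
```

With a mapping like this, an obstacle-dense side of the environment produces a net force pushing the operator's hand away from it, which is the "walking stick" sensation the interface aims for.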