
    Connectivity-Preserving Swarm Teleoperation With A Tree Network

    During swarm teleoperation, the human operator may threaten the distance-dependent inter-robot communications and, with them, the connectivity of the slave swarm. To prevent the harmful component of the human command from disconnecting the swarm network, this paper develops a constructive strategy to dynamically modulate the interconnections of, and the locally injected damping at, all slave robots. By Lyapunov-based set invariance analysis, the explicit law for updating the control gains is rigorously proven to synchronize the slave swarm while preserving all interaction links of the tree network. By properly limiting the impact of the user command rather than rejecting it entirely, the proposed control law enables the human operator to guide the motion of the slave swarm to the extent that it does not endanger the connectivity of the swarm network. Experimental results demonstrate that the proposed strategy can maintain the connectivity of the tree network during swarm teleoperation.
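
    The gain-modulation idea lends itself to a compact illustration. The Python below is a minimal sketch, not the paper's explicit update law: all names, gains, and thresholds are assumptions. Coupling gains on the tree edges grow as a link nears an assumed communication radius, damping is injected locally at every robot, and the human force applied at a leader robot is attenuated accordingly.

```python
import numpy as np

# Assumed communication radius and the distance band, below R, where gains ramp up.
R, MARGIN = 10.0, 2.0

def edge_gain(d, k_min=1.0, k_max=50.0):
    """Coupling gain that grows as the edge length d approaches R."""
    if d <= R - MARGIN:
        return k_min
    s = min((d - (R - MARGIN)) / MARGIN, 0.99)   # 0 .. ~1 inside the band
    return min(k_max, k_min / (1.0 - s))

def swarm_forces(pos, vel, tree_edges, human_force, leader=0, b0=2.0):
    """Per-robot force: spring coupling on the tree edges, locally injected
    damping, and a scaled human command injected at the leader robot.
    pos and vel are dicts of robot index -> numpy position/velocity."""
    f = {i: -b0 * vel[i] for i in pos}           # locally injected damping
    scale = 1.0                                  # attenuation of the human input
    for (i, j) in tree_edges:
        d = np.linalg.norm(pos[i] - pos[j])
        k = edge_gain(d)
        f[i] += -k * (pos[i] - pos[j])
        f[j] += -k * (pos[j] - pos[i])
        scale = min(scale, edge_gain(0.0) / k)   # shrink toward 0 as any gain grows
    f[leader] = f[leader] + scale * np.asarray(human_force)
    return f
```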

    Application of Simultaneous Localization and Mapping Algorithms for Haptic Teleoperation of Aerial Vehicles

    In this thesis, a new type of haptic teleoperator system for remote control of Unmanned Aerial Vehicles (UAVs) has been developed, in which Simultaneous Localization and Mapping (SLAM) algorithms are used to generate the haptic feedback. Specifically, the haptic feedback is provided to the human operator through interaction with an artificial potential field built around the obstacles in a virtual environment located at the master site of the teleoperator system. The obstacles in the virtual environment replicate essential features of the actual remote environment where the UAV executes its tasks. The state of the virtual environment is generated and updated in real time using Extended Kalman Filter (EKF) SLAM algorithms based on measurements performed by the UAV in the actual remote environment. Two methods for building haptic feedback from SLAM algorithms have been developed. The basic SLAM-based haptic feedback algorithm uses a fixed-size potential field around each obstacle, while the robust SLAM-based haptic feedback algorithm changes the size of the potential field depending on the amount of uncertainty in the obstacle's location, as represented by the covariance estimate provided by the EKF. Simulation and experimental results are presented that evaluate the performance of the proposed teleoperator system.
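
    The two feedback variants can be illustrated with a short sketch. The Python below is an assumption-laden illustration, not the thesis implementation: each EKF-SLAM landmark contributes a repulsive potential-field force, with a fixed field radius in the basic variant and a radius inflated by a 3-sigma bound on the landmark's position covariance in the robust variant. The gains, radii, and function names are all hypothetical.

```python
import numpy as np

def repulsive_force(uav_pos, obs_mean, obs_cov=None,
                    base_radius=1.5, gain=4.0, robust=False):
    """Haptic force pushing away from an estimated obstacle.
    uav_pos and obs_mean are numpy arrays; obs_cov is the EKF covariance."""
    radius = base_radius
    if robust and obs_cov is not None:
        sigma = np.sqrt(np.max(np.linalg.eigvalsh(obs_cov)))
        radius += 3.0 * sigma                    # grow the field with uncertainty
    diff = uav_pos - obs_mean
    d = np.linalg.norm(diff)
    if d >= radius or d < 1e-9:
        return np.zeros_like(diff)
    # Classic repulsive-potential gradient: stronger as the UAV nears the obstacle.
    return gain * (1.0 / d - 1.0 / radius) / d**2 * (diff / d)

def haptic_feedback(uav_pos, landmarks, robust=True):
    """Total force rendered to the operator from all mapped (mean, covariance) pairs."""
    return sum((repulsive_force(uav_pos, m, P, robust=robust) for m, P in landmarks),
               start=np.zeros_like(uav_pos))
```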

    Dynamic virtual reality user interface for teleoperation of heterogeneous robot teams

    This research investigates the possibility of improving current teleoperation control for heterogeneous robot teams using modern Human-Computer Interaction (HCI) techniques such as Virtual Reality. It proposes a dynamic teleoperation Virtual Reality User Interface (VRUI) framework to improve the current approach to teleoperating heterogeneous robot teams.

    Minimum-time trajectory generation for quadrotors in constrained environments

    In this paper, we present a novel strategy to compute minimum-time trajectories for quadrotors in constrained environments. In particular, we consider motion in a given flying region with obstacles and take into account the physical limitations of the vehicle. Instead of approaching the optimization problem in its standard time-parameterized formulation, the proposed strategy is based on an appealing re-formulation. Transverse coordinates, expressing the distance from a frame path, are used to parameterise the vehicle position, and a spatial parameter is used as the independent variable. This re-formulation allows us to (i) obtain a fixed-horizon problem and (ii) easily formulate (fairly complex) position constraints. The effectiveness of the proposed strategy is shown by numerical computations on two different illustrative scenarios. Moreover, the optimal trajectory generated in the second scenario is experimentally executed with a real nano-quadrotor in order to show its feasibility. (arXiv admin note: text overlap with arXiv:1702.0427)
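
    A compact sketch of the change of variables may help; the example path, names, and bounds below are assumptions for illustration rather than the paper's actual formulation. The vehicle position is written as gamma(s) + w(s) n(s), where gamma is a frame path with unit normal n and s is the spatial parameter, so the horizon is fixed in s, corridor constraints become simple bounds on w, and travel time is the integral of ds over the speed profile.

```python
import numpy as np

def frame_path(s):
    """Example frame path (a straight segment along x): gamma(s) and its unit normal."""
    gamma = np.array([s, 0.0])
    normal = np.array([0.0, 1.0])
    return gamma, normal

def position(s, w):
    """Cartesian position from spatial parameter s and transverse offset w."""
    gamma, n = frame_path(s)
    return gamma + w * n

def travel_time(s_grid, v_profile):
    """Discretised time objective over the fixed spatial horizon: sum of ds / v."""
    ds = np.diff(s_grid)
    v_mid = 0.5 * (v_profile[:-1] + v_profile[1:])
    return np.sum(ds / np.maximum(v_mid, 1e-6))

def corridor_ok(w_profile, half_width=0.5):
    """Position constraint in the new coordinates: stay inside the corridor."""
    return np.all(np.abs(w_profile) <= half_width)
```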

    Development and evaluation of a collision avoidance system for supervisory control of a micro aerial vehicle

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Aeronautics and Astronautics, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 195-108). Recent technological advances have enabled Unmanned Aerial Vehicles (UAVs) and Micro Aerial Vehicles (MAVs) to become increasingly prevalent in a variety of domains. From military surveillance to disaster relief to search-and-rescue tasks, these systems have the capacity to assist in difficult or dangerous tasks and to potentially save lives. To enable operation by minimally trained personnel, the control interfaces require increased usability in order to maintain safety and mission effectiveness. In particular, as these systems are used in the real world, the operator must be able to navigate around obstacles in unknown and unstructured environments. In order to address this problem, the Collision and Obstacle Detection and Alerting (CODA) display was designed and integrated into a smartphone-based MAV control interface. The CODA display uses a combination of visual and haptic alerts to warn the operator of potential obstacles in the environment, to help the operator navigate more effectively and avoid collisions. To assess the usability of this system, a within-subjects experiment was conducted in which participants used the mobile interface to pilot a MAV both with and without the assistance of the CODA display. The task consisted of navigating through a simulated indoor environment and locating visual targets. Metrics for the two conditions examined performance, control strategies, and subjective feedback from each participant. Overall, the addition of the CODA display resulted in higher performance, lowering the crash rate and decreasing the amount of time required to complete the tasks. Despite increasing the complexity of the interface, adding the CODA display did not significantly impact usability, and participants preferred operating the MAV with the CODA display. These results demonstrate that the CODA display provides the basis for an effective alerting tool to assist with MAV operation for exploring unknown environments. Future work should explore expansion to three-dimensional sensing and alerting capabilities as well as validation in an outdoor environment. by Kimberly F. Jackson. S.M.
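
    As a rough illustration of the kind of proximity-based alerting the CODA display provides, here is a hypothetical sketch; the thresholds, bearing binning, and alert encoding are assumptions, not taken from the thesis. Range readings around the MAV select a visual alert level and a haptic (vibration) intensity per bearing.

```python
# Assumed alert thresholds (metres).
WARN_DIST = 2.0
DANGER_DIST = 0.75

def coda_alerts(ranges_by_bearing):
    """Map {bearing_deg: closest_obstacle_distance_m} to (visual level, vibration)."""
    alerts = {}
    for bearing, dist in ranges_by_bearing.items():
        if dist <= DANGER_DIST:
            alerts[bearing] = ("red", 1.0)            # imminent collision: full vibration
        elif dist <= WARN_DIST:
            # Vibration ramps up linearly as the obstacle closes in.
            level = (WARN_DIST - dist) / (WARN_DIST - DANGER_DIST)
            alerts[bearing] = ("yellow", round(level, 2))
        else:
            alerts[bearing] = ("clear", 0.0)
    return alerts

# Example: an obstacle 0.6 m away at 90 degrees triggers a red alert with full vibration.
print(coda_alerts({0: 5.0, 90: 0.6, 180: 1.5}))
```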

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated the human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be important tasks for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A 5-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera view designs were tested using a within-subjects design, which included an egocentric view (perspective from the manipulator), an exocentric view (perspective from the UAV), and a mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements. Results from these studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preferences did not affect performance for participants who preferred the mixed condition. This result led to a second finding that participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely caused by cognitive tunneling. The data suggest tradeoffs between performance improvements and attentional costs when adding information in the form of multiple views to the Mission Specialist interface. Additionally, based on this empirical evaluation of multiple camera views, the exocentric view was recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) conducting the first focused HRI study of aerial telemanipulation, ii) development of an evaluative model for telemanipulation performance, iii) creation of new recommendations for aerial telemanipulation interfacing, and iv) contribution of code, hardware designs, and system architectures to the open-source UAV community. The evaluative model provides a detailed framework, a complement to the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in the system, environment, operators, and interfacing factors on performance. The practical contributions of this work will expedite the use of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating UAV performance.

    Virtual reality aided vehicle teleoperation

    This thesis describes a novel approach to vehicle teleoperation. Vehicle teleoperation is the human-mediated control of a vehicle from a remote location. Typical methods for providing updates of the world around the vehicle use vehicle-mounted video cameras. This methodology suffers from two problems: lag and limited field of view. Lag is the amount of time it takes for a signal to travel from the operator's location to the vehicle. This lag causes the images from the camera and commands from the operator to be delayed. This behavior is a serious problem when the vehicle is approaching an obstacle. If the delay is long enough, the vehicle might crash into an obstacle before the operator knows that it is there. To complicate matters, most cameras provide only a small arc of visibility around the vehicle, which leaves a significant blind spot. Therefore, hazards close to the vehicle might not be visible to the operator, such as a rock behind and to the left of the vehicle. In that case, if the vehicle were maneuvered sharply to the left, it might impact the rock. Virtual reality has been used to attack these two problems. A simulation of the vehicle is used to predict its positional response to inputs. This response is then displayed in a virtual world that mimics the operational environment. A dynamics algorithm called the wagon tongue method is used by a computer at the remote site to correct for inaccuracies between the simulated vehicle position and the actual vehicle position. The wagon tongue method eliminates the effect of the average lag value. Synchronization code is used to ensure that the vehicle executes commands with the same amount of time between them as when the operator issued them. This system behavior eliminates the effects of lag variation. The problem of limited field of view is solved by using a virtual camera viewpoint behind the vehicle that displays the entire world around the vehicle. This thesis develops and compares a system using virtual-reality-aided teleoperation with direct control and vehicle-mounted-camera-aided teleoperation.
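
    The lag-handling scheme can be sketched in a few lines of Python. This is an interpretation under assumed names and gains, not the thesis code: the wagon-tongue correction pulls the simulated vehicle state a fraction of the way toward each delayed position report, removing the bias of the average lag, while a synchronization queue replays operator commands at the vehicle with the same inter-command intervals at which they were issued, removing the effect of lag variation.

```python
import time
from collections import deque

def wagon_tongue_correct(sim_pos, reported_pos, gain=0.1):
    """Nudge the simulated position a fraction of the way toward the delayed
    position report on each update, instead of snapping straight to it."""
    return [s + gain * (r - s) for s, r in zip(sim_pos, reported_pos)]

class CommandSynchronizer:
    """Replay commands with the inter-command intervals the operator used."""

    def __init__(self):
        self.queue = deque()        # (operator_timestamp, command)
        self.t0_operator = None     # timestamp of the first queued command
        self.t0_vehicle = None      # vehicle clock when execution started

    def push(self, operator_timestamp, command):
        if self.t0_operator is None:
            self.t0_operator = operator_timestamp
        self.queue.append((operator_timestamp, command))

    def pop_due(self, now=None):
        """Return commands whose operator-relative offset has elapsed on the
        vehicle clock, preserving the original command spacing."""
        now = time.monotonic() if now is None else now
        if self.t0_vehicle is None:
            self.t0_vehicle = now
        due = []
        while self.queue and (now - self.t0_vehicle) >= (self.queue[0][0] - self.t0_operator):
            due.append(self.queue.popleft()[1])
        return due
```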