
    Hand-worn Haptic Interface for Drone Teleoperation

    Drone teleoperation is usually accomplished using remote radio controllers, devices that can be hard to master for inexperienced users. Moreover, the limited amount of information fed back to the user about the robot's state, often restricted to vision, can represent a bottleneck for operation in several conditions. In this work, we present a wearable interface for drone teleoperation and its evaluation through a user study. The two main features of the proposed system are a data glove that allows the user to control the drone trajectory by hand motion and a haptic system used to augment the user's awareness of the environment surrounding the robot. This interface can be employed for the operation of robotic systems in line of sight (LoS) by inexperienced operators and allows them to safely perform tasks common in inspection and search-and-rescue missions, such as approaching walls and crossing narrow passages under limited visibility. In addition to the design and implementation of the wearable interface, we performed a systematic study to assess the effectiveness of the system through three user studies (n = 36), evaluating the users' learning path and their ability to perform tasks with limited visibility. We validated our ideas in both a simulated and a real-world environment. Our results demonstrate that the proposed system can improve teleoperation performance in different cases compared to standard remote controllers, making it a viable alternative to standard Human-Robot Interfaces. Comment: Accepted at the IEEE International Conference on Robotics and Automation (ICRA) 202
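
    The abstract gives no implementation details; the sketch below is a minimal Python illustration of the two core ideas (hand orientation mapped to velocity setpoints, obstacle proximity mapped to haptic intensity). All function names, axis conventions, thresholds, and gains are assumptions rather than the authors' design.

```python
import numpy as np

def glove_to_velocity(roll, pitch, yaw, max_speed=1.0):
    """Map hand orientation (radians) from a data glove/IMU to a drone velocity
    setpoint: pitch -> forward/back, roll -> left/right, yaw -> heading rate.
    The 30/45-degree saturation angles and gains are illustrative placeholders."""
    vx = max_speed * np.clip(pitch / np.radians(30), -1.0, 1.0)
    vy = max_speed * np.clip(roll / np.radians(30), -1.0, 1.0)
    yaw_rate = np.clip(yaw / np.radians(45), -1.0, 1.0)
    return np.array([vx, vy, 0.0]), yaw_rate

def obstacle_to_vibration(distance_m, d_min=0.5, d_max=3.0):
    """Map the distance to the nearest obstacle to a haptic vibration intensity
    in [0, 1]: full intensity at d_min or closer, silent beyond d_max."""
    return float(np.clip((d_max - distance_m) / (d_max - d_min), 0.0, 1.0))
```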

    Autonomous Execution of Cinematographic Shots with Multiple Drones

    This paper presents a system for the execution of autonomous cinematography missions with a team of drones. The system allows media directors to design missions involving different types of shots with one or multiple cameras, running sequentially or concurrently. We introduce the complete architecture, which includes components for mission design, planning, and execution. Then, we focus on the components related to autonomous mission execution. First, we propose a novel parametric description for shots, considering different types of camera motion and tracked targets, and use it to implement a set of canonical shots. Second, for multi-drone shot execution, we propose distributed schedulers that activate different shot controllers on board the drones. Moreover, an event-based mechanism is used to synchronize shot execution among the drones and to account for inaccuracies during shot planning. Finally, we showcase the system with field experiments filming sports activities, including a real regatta event. We report on system integration and lessons learnt during our experimental campaigns.
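
    The paper's actual shot parameterization is not reproduced in the abstract; the following is a hypothetical Python sketch of what a parametric shot description with canonical shot types, a tracked target, and an event-based start trigger might look like. All field names, shot types, and parameter values are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class CameraMotion(Enum):
    """Illustrative canonical shot types; the paper's taxonomy may differ."""
    STATIC = auto()
    ORBIT = auto()
    LATERAL = auto()   # track alongside the target
    FLYBY = auto()

@dataclass
class ShotDescription:
    """Hypothetical parametric shot: how the camera moves, which target it
    tracks, how long it lasts, and which event triggers it (for multi-drone
    synchronization)."""
    shot_id: str
    motion: CameraMotion
    target_id: Optional[str]          # tracked target, or None
    duration_s: float
    start_event: str                  # event-based trigger
    parameters: dict = field(default_factory=dict)  # e.g. radius, altitude, speed

# Two shots scheduled sequentially on one drone, synchronized by events:
mission = [
    ShotDescription("s1", CameraMotion.ORBIT, "boat_3", 20.0, "race_start",
                    {"radius_m": 15.0, "altitude_m": 10.0}),
    ShotDescription("s2", CameraMotion.LATERAL, "boat_3", 30.0, "s1_done",
                    {"offset_m": 8.0, "speed_mps": 4.0}),
]
```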

    Guided Autonomy for Quadcopter Photography

    Photographing small objects with a quadcopter is non-trivial to perform with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human-Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directed the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flew to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models to assure a higher precision in detecting human subjects than the currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; and 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
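
    As a rough illustration of how a detection-driven visual-servoing loop can steer a quadcopter toward a requested vantage point, here is a minimal proportional-control sketch in Python. The gains, output convention, and target size fraction are placeholders, not the dissertation's actual controller.

```python
def servo_command(bbox, frame_w, frame_h, target_area_frac=0.08,
                  k_yaw=1.0, k_alt=0.5, k_fwd=1.5):
    """Proportional visual-servoing step: given a detected subject's bounding
    box (x, y, w, h) in pixels, return (yaw_rate, climb_rate, forward_speed)
    commands that center the subject and approach until it fills a target
    fraction of the frame. Gains and sign conventions are illustrative."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0
    err_x = (cx - frame_w / 2.0) / (frame_w / 2.0)    # -1..1, + means subject is right of center
    err_y = (cy - frame_h / 2.0) / (frame_h / 2.0)    # -1..1, + means subject is low in the frame
    err_area = target_area_frac - (w * h) / (frame_w * frame_h)
    yaw_rate = k_yaw * err_x          # turn toward the subject
    climb_rate = -k_alt * err_y       # descend if the subject sits low in the frame
    forward_speed = k_fwd * err_area  # approach until the target size is reached
    return yaw_rate, climb_rate, forward_speed

# Example: subject detected right of center and too small -> turn right and move closer.
print(servo_command((800, 300, 120, 200), frame_w=1280, frame_h=720))
```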

    Proceedings of the International Micro Air Vehicles Conference and Flight Competition 2017 (IMAV 2017)

    The IMAV 2017 conference was held at ISAE-SUPAERO, Toulouse, France, from Sept. 18 to Sept. 21, 2017. More than 250 participants from 30 different countries presented their latest research activities in the field of drones. 38 papers were presented during the conference, covering topics such as Aerodynamics, Aeroacoustics, Propulsion, Autopilots, Sensors, Communication systems, Mission planning techniques, Artificial Intelligence, and Human-machine cooperation as applied to drones.

    Design and Modeling of Smartphone Controlled Vehicle

    Hybrid aerial vehicles, which integrate two or more operating configurations, have become increasingly widespread due to their expanded flight range and adaptability. Whenever two or more flight modes are present, the transition phases between them are critical. While much prior work has addressed the transition phases of the more popular hybrid configurations, in this paper we explore a novel multi-mode hybrid Unmanned Aerial Vehicle (UAV). To fully exploit the vehicle's propulsion equipment and aerodynamic surfaces in both a horizontal cruising configuration and a vertical hovering configuration, we combine a tailless fixed-wing with a four-wing monocopter. By increasing structural integrity over the whole operational range, the design lowers drag and wasted mass while the aircraft is in motion in both modes. The transformation between the two flight states can be carried out in midair using only the existing flight actuators and sensors. The vehicle can be operated from an Android device through a ground controller.
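
    The abstract does not describe the transition logic itself; purely as an illustrative sketch, a supervisory controller for the midair switch between the hovering (monocopter) and cruising (fixed-wing) modes could be sequenced as a small state machine like the one below. The state names, checks, and thresholds are assumptions, not the authors' design.

```python
from enum import Enum, auto

class FlightMode(Enum):
    HOVER = auto()                 # vertical, four-wing monocopter configuration
    TRANSITION_TO_CRUISE = auto()
    CRUISE = auto()                # horizontal, tailless fixed-wing configuration
    TRANSITION_TO_HOVER = auto()

def next_mode(mode, airspeed_mps, pitch_deg,
              cruise_speed=12.0, hover_pitch=80.0):
    """Advance the flight mode only once onboard sensing confirms the maneuver
    completed; the airspeed and pitch thresholds are illustrative placeholders."""
    if mode is FlightMode.TRANSITION_TO_CRUISE and airspeed_mps >= cruise_speed:
        return FlightMode.CRUISE
    if mode is FlightMode.TRANSITION_TO_HOVER and pitch_deg >= hover_pitch:
        return FlightMode.HOVER
    return mode
```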

    Human-robot interaction for telemanipulation by small unmanned aerial systems

    This dissertation investigated the human-robot interaction (HRI) for the Mission Specialist role in a telemanipulating unmanned aerial system (UAS). The emergence of commercial unmanned aerial vehicle (UAV) platforms transformed the civil and environmental engineering industries through applications such as surveying, remote infrastructure inspection, and construction monitoring, which normally use UAVs for visual inspection only. Recent developments, however, suggest that performing physical interactions in dynamic environments will be important tasks for future UAS, particularly in applications such as environmental sampling and infrastructure testing. In all domains, the availability of a Mission Specialist to monitor the interaction and intervene when necessary is essential for successful deployments. Additionally, manual operation is the default mode for safety reasons; therefore, understanding Mission Specialist HRI is important for all small telemanipulating UAS in civil engineering, regardless of system autonomy and application. A 5-subject exploratory study and a 36-subject experimental study were conducted to evaluate variations of a dedicated, mobile Mission Specialist interface for aerial telemanipulation from a small UAV. The Shared Roles Model was used to model the UAS human-robot team, and the Mission Specialist and Pilot roles were informed by the current state of practice for manipulating UAVs. Three interface camera view designs were tested using a within-subjects design: an egocentric view (perspective from the manipulator), an exocentric view (perspective from the UAV), and a mixed egocentric-exocentric view. The experimental trials required Mission Specialist participants to complete a series of tasks with physical, visual, and verbal requirements. Results from these studies found that subjects who preferred the exocentric condition performed tasks 50% faster when using their preferred interface; however, interface preferences did not affect performance for participants who preferred the mixed condition. This result led to a second finding: participants who preferred the exocentric condition were distracted by the egocentric view during the mixed condition, likely caused by cognitive tunneling, and the data suggest tradeoffs between performance improvements and attentional costs when adding information in the form of multiple views to the Mission Specialist interface. Additionally, based on this empirical evaluation of multiple camera views, the exocentric view was recommended for use in a dedicated Mission Specialist telemanipulation interface. Contributions of this thesis include: i) conducting the first focused HRI study of aerial telemanipulation, ii) development of an evaluative model for telemanipulation performance, iii) creation of new recommendations for aerial telemanipulation interfacing, and iv) contribution of code, hardware designs, and system architectures to the open-source UAV community. The evaluative model provides a detailed framework, a complement to the abstraction of the Shared Roles Model, that can be used to measure the effects of changes in the system, environment, operators, and interfacing factors on performance. The practical contributions of this work will expedite the use of manipulating UAV technologies by scientists, researchers, and stakeholders, particularly those in civil engineering, who will directly benefit from improved manipulating UAV performance.

    What do Collaborations with the Arts Have to Say About Human-Robot Interaction?

    This is a collection of papers presented at the workshop "What Do Collaborations with the Arts Have to Say About HRI?", held at the 2010 Human-Robot Interaction Conference in Osaka, Japan.

    Dynamic Coverage Control and Estimation in Collaborative Networks of Human-Aerial/Space Co-Robots

    In this dissertation, the author presents a set of control, estimation, and decision-making strategies to enable small unmanned aircraft systems and free-flying space robots to act as intelligent mobile wireless sensor networks. These agents are primarily tasked with gathering information from their environments in order to increase the situational awareness of both the network and human collaborators. This information is gathered through an abstract sensing model, a forward-facing anisotropic spherical sector, which can be generalized to various sensing models through adjustment of its tuning parameters. First, a hybrid control strategy is derived whereby a team of unmanned aerial vehicles can dynamically cover (i.e., sweep their sensing footprints through all points of a domain over time) a designated airspace. These vehicles are assumed to have finite power resources; therefore, an agent deployment and scheduling protocol is proposed that allows agents to return periodically to a charging station while covering the environment. Rules are also prescribed for energy-aware domain partitioning and agent waypoint selection so as to distribute the coverage load across the network, with increased priority on those agents whose remaining power supply is larger. This work is extended to consider the coverage of 2D manifolds embedded in 3D space that are subject to collision by stochastic intruders. Formal guarantees are provided with respect to collision avoidance, timely convergence upon charging stations, and timely interception of intruders by friendly agents. This chapter concludes with a case study in which a human acts as a dynamic coverage supervisor, using hand gestures to direct the selection of regions to be surveyed by the robot. Second, the concept of situational awareness is extended to networks consisting of humans working in close proximity with aerial or space robots. In this work, the robot acts as an assistant to a human attempting to complete a set of interdependent and spatially separated multitasking objectives. The human wears an augmented reality display, and the robot must learn the human's task locations online and broadcast camera views of these tasks to the human. The locations of tasks are learned using a parallel implementation of expectation maximization of Gaussian mixture models. The selection of tasks from this learned set is executed by a Markov Decision Process, which is trained by the human using Q-learning. This method for robot task selection is compared against a supervised method in IRB-approved (HUM00145810) experimental trials with 24 human subjects. This dissertation concludes by discussing an additional case study, by the author, in Bayesian-inferred path planning. In addition, open problems in dynamic coverage and human-robot interaction are discussed so as to present an avenue for future work. PhD dissertation, Aerospace Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/155147/1/wbentz_1.pd
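
    The sensing model is described only as a forward-facing anisotropic spherical sector with tuning parameters; as a minimal geometric sketch of such a footprint check in Python, with placeholder range and half-angle values standing in for those tuning parameters:

```python
import numpy as np

def in_sensing_footprint(agent_pos, agent_heading, point,
                         max_range=10.0, half_angle_deg=35.0):
    """Return True if `point` lies inside a forward-facing spherical-sector
    footprint: within `max_range` of the agent and within `half_angle_deg`
    of its heading direction. Parameter values are illustrative placeholders."""
    offset = np.asarray(point, dtype=float) - np.asarray(agent_pos, dtype=float)
    dist = np.linalg.norm(offset)
    if dist == 0.0 or dist > max_range:
        return dist == 0.0          # the agent's own position is trivially covered
    heading = np.asarray(agent_heading, dtype=float)
    heading = heading / np.linalg.norm(heading)
    cos_angle = float(offset @ heading) / dist
    return cos_angle >= np.cos(np.radians(half_angle_deg))

# Example: a point 5 m directly ahead is covered; one behind the agent is not.
print(in_sensing_footprint([0, 0, 2], [1, 0, 0], [5, 0, 2]))   # True
print(in_sensing_footprint([0, 0, 2], [1, 0, 0], [-5, 0, 2]))  # False
```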