6 research outputs found

    Autonomous Quadcopter Videographer

    Get PDF
In recent years, interest in quadcopters as a robotics platform for autonomous photography has increased, owing to their small size and mobility, which allow them to reach places that are difficult or even impossible for humans. This thesis focuses on the design of an autonomous quadcopter videographer, i.e. a quadcopter capable of capturing good footage of a specific subject. To obtain this footage, the system needs to choose appropriate vantage points and control the quadcopter. Skilled human videographers can easily spot good filming locations where the subject and its actions can be seen clearly in the resulting footage, but translating this knowledge to a robot is complex. We present an autonomous system, implemented on a commercially available quadcopter, that achieves this using only monocular information and an accelerometer. Our system has two vantage point selection strategies: 1) a reactive approach, which moves the robot to a fixed location with respect to the human, and 2) a combination of the reactive approach and a POMDP planner that considers the target's movement intentions. We compare the behavior of these two approaches under different target movement scenarios. The results show that the POMDP planner obtains more stable footage with less quadcopter motion.
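    The reactive strategy described above, which moves the robot to a fixed location with respect to the human, can be sketched as a simple geometric rule. This is an illustrative sketch only; the function name, standoff distance, and height are assumptions, not the thesis's actual implementation.

    ```python
    import math

    def reactive_vantage_point(subject_xy, subject_heading, distance=2.0, height=1.5):
        """Place the camera at a fixed offset in front of the subject.

        subject_xy: (x, y) subject position in metres
        subject_heading: subject heading in radians
        distance, height: assumed standoff parameters
        """
        sx, sy = subject_xy
        # Stand `distance` metres in front of the subject, at a fixed height.
        cam_x = sx + distance * math.cos(subject_heading)
        cam_y = sy + distance * math.sin(subject_heading)
        # Yaw the camera back toward the subject.
        cam_yaw = subject_heading + math.pi
        return (cam_x, cam_y, height, cam_yaw)
    ```

    The POMDP-based strategy would instead maintain a belief over the target's movement intentions and pick the vantage point that maximizes expected footage quality, which is why it yields more stable footage with less quadcopter motion.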

    Guided Autonomy for Quadcopter Photography

    Get PDF
    Photographing small objects with a quadcopter is non-trivial with many common user interfaces, especially when it requires maneuvering an Unmanned Aerial Vehicle (UAV) to difficult angles in order to shoot from high perspectives. The aim of this research is to employ machine learning to support better user interfaces for quadcopter photography. Human Robot Interaction (HRI) is supported by visual servoing, a specialized vision system for real-time object detection, and control policies acquired through reinforcement learning (RL). Two investigations of guided autonomy were conducted. In the first, the user directed the quadcopter with a sketch-based interface, and periods of user direction were interspersed with periods of autonomous flight. In the second, the user directed the quadcopter by taking a single photo with a handheld mobile device, and the quadcopter autonomously flew to the requested vantage point. This dissertation focuses on the following problems: 1) evaluating different user interface paradigms for dynamic photography in a GPS-denied environment; 2) learning better Convolutional Neural Network (CNN) object detection models to achieve higher precision in detecting human subjects than currently available state-of-the-art fast models; 3) transferring learning from the Gazebo simulation into the real world; 4) learning robust control policies using deep reinforcement learning to maneuver the quadcopter to multiple shooting positions with minimal human interaction.
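    The visual servoing mentioned above can be illustrated with a minimal proportional control step: given a CNN detection of the subject, steer until the bounding box is centred and covers a target fraction of the frame. The function name, gain, and target area are assumptions for illustration, not the dissertation's learned policy.

    ```python
    def servo_command(bbox, frame_w, frame_h, target_area_frac=0.1, k=0.5):
        """Proportional visual-servoing step toward a detected subject.

        bbox: (x, y, w, h) detection in pixels.
        Returns (yaw_rate, climb_rate, forward_speed) commands.
        """
        x, y, w, h = bbox
        cx, cy = x + w / 2, y + h / 2
        # Normalized centring errors in [-1, 1].
        err_x = (cx - frame_w / 2) / (frame_w / 2)
        err_y = (cy - frame_h / 2) / (frame_h / 2)
        # Size error: fly forward until the box covers the target area fraction.
        err_area = target_area_frac - (w * h) / (frame_w * frame_h)
        # Positive climb_rate moves the drone up, so image-down error is negated.
        return (k * err_x, -k * err_y, k * err_area)
    ```

    A deep RL policy replaces this hand-tuned proportional rule with a learned mapping from observations to the same command space, which is what allows transfer from Gazebo simulation to the real vehicle.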

    Autonomous Execution of Cinematographic Shots with Multiple Drones

    Full text link
    This paper presents a system for the execution of autonomous cinematography missions with a team of drones. The system allows media directors to design missions involving different types of shots with one or multiple cameras, running sequentially or concurrently. We introduce the complete architecture, which includes components for mission design, planning and execution. Then, we focus on the components related to autonomous mission execution. First, we propose a novel parametric description for shots, considering different types of camera motion and tracked targets, and we use it to implement a set of canonical shots. Second, for multi-drone shot execution, we propose distributed schedulers that activate different shot controllers on board the drones. Moreover, an event-based mechanism is used to synchronize shot execution among the drones and to account for inaccuracies during shot planning. Finally, we showcase the system with field experiments filming sport activities, including a real regatta event. We report on system integration and lessons learnt during our experimental campaigns.
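    The event-based synchronization mechanism described above can be sketched with a shared event that releases all on-board shot controllers at the same instant. This is a minimal single-process sketch using Python threads; the class and method names are assumptions, not the paper's distributed implementation.

    ```python
    import threading

    class ShotScheduler:
        """Minimal sketch of event-based multi-drone shot synchronization."""

        def __init__(self):
            self.start_event = threading.Event()
            self.log = []
            self.lock = threading.Lock()

        def drone(self, name, controller):
            """Register one drone's shot controller; it waits for the trigger."""
            def run():
                self.start_event.wait()     # block until the shot is triggered
                result = controller()       # execute the on-board shot controller
                with self.lock:
                    self.log.append((name, result))
            t = threading.Thread(target=run)
            t.start()
            return t

        def trigger(self):
            """Fire the shared event, releasing all waiting drones together."""
            self.start_event.set()

    sched = ShotScheduler()
    threads = [sched.drone(f"drone{i}", lambda i=i: f"orbit-shot-{i}")
               for i in range(3)]
    sched.trigger()
    for t in threads:
        t.join()
    ```

    In the real system the trigger would travel over a network link, and the same event channel can carry re-planning notifications when shot execution drifts from the plan.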

    Selecting Vantage Points For An Autonomous Quadcopter Videographer

    No full text
    A good human videographer is adept at selecting the best vantage points from which to film a subject. The aim of our research is to create an autonomous quadcopter that is similarly skilled at capturing good footage of a moving subject. Due to their small size and mobility, quadcopters are well-suited to act as videographers and are capable of shooting from locations that are unreachable for a human. This paper evaluates the performance of two vantage point selection strategies: 1) a reactive controller that tracks the subject's head pose and 2) a combination of the reactive system with a POMDP planner that considers the target's movement intentions. Incorporating a POMDP planner into the system results in more stable footage and less quadcopter motion.

    Mobilizing the Past for a Digital Future : The Potential of Digital Archaeology

    Get PDF
    Mobilizing the Past is a collection of 20 articles that explore the use and impact of mobile digital technology in archaeological field practice. The detailed case studies present in this volume range from drones in the Andes to iPads at Pompeii, digital workflows in the American Southwest, and examples of how bespoke, DIY, and commercial software provide solutions and craft novel challenges for field archaeologists. The range of projects and contexts ensures that Mobilizing the Past for a Digital Future is far more than a state-of-the-field manual or technical handbook. Instead, the contributors embrace the growing spirit of critique present in digital archaeology. This critical edge, backed by real projects, systems, and experiences, gives the book lasting value as both a glimpse into present practices as well as the anxieties and enthusiasm associated with the most recent generation of mobile digital tools. This book emerged from a workshop funded by the National Endowment for the Humanities held in 2015 at Wentworth Institute of Technology in Boston. The workshop brought together over 20 leading practitioners of digital archaeology in the U.S. for a weekend of conversation. The papers in this volume reflect the discussions at this workshop with significant additional content. Starting with an expansive introduction and concluding with a series of reflective papers, this volume illustrates how tablets, connectivity, sophisticated software, and powerful computers have transformed field practices and offer potential for a radically transformed discipline.