
    Generic Drone Control Platform for Autonomous Capture of Cinema Scenes

    The movie industry has been using Unmanned Aerial Vehicles (UAVs) as a new tool to produce increasingly complex and aesthetic camera shots. However, the shooting process currently relies on manual control of the drones, which makes it difficult and sometimes inconvenient to work with. In this paper we address the lack of autonomous systems for operating generic rotary-wing drones for shooting purposes. We propose a global control architecture based on a high-level generic API used by many UAVs. Our solution integrates a compound and coupled model of a generic rotary-wing drone and a Full State Feedback strategy. To address the specific task of capturing cinema scenes, we combine the control architecture with an automatic camera path planning approach that encompasses cinematographic techniques. The possibilities offered by our system are demonstrated through a series of experiments.
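
    A minimal sketch of the Full State Feedback idea described above, assuming a linearized double-integrator drone model with an LQR-derived gain. The state layout, weights, and helper names are illustrative assumptions, not the paper's actual model or API:

```python
# Full State Feedback sketch: u = -K (x - x_ref), with K from an LQR design.
# Model and weights below are placeholders, not the paper's compound model.
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation and return K."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.inv(R) @ B.T @ P

# Linearized hover model: x = [position (3), velocity (3)], u = accelerations (3).
A = np.block([[np.zeros((3, 3)), np.eye(3)],
              [np.zeros((3, 3)), np.zeros((3, 3))]])
B = np.vstack([np.zeros((3, 3)), np.eye(3)])
K = lqr_gain(A, B, Q=np.eye(6), R=0.1 * np.eye(3))

def control(x, x_ref):
    """Full State Feedback law: drive the full state toward the reference."""
    return -K @ (x - x_ref)
```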

    Autonomous Execution of Cinematographic Shots with Multiple Drones

    This paper presents a system for the execution of autonomous cinematography missions with a team of drones. The system allows media directors to design missions involving different types of shots with one or multiple cameras, running sequentially or concurrently. We introduce the complete architecture, which includes components for mission design, planning, and execution. Then, we focus on the components related to autonomous mission execution. First, we propose a novel parametric description for shots, considering different types of camera motion and tracked targets, and we use it to implement a set of canonical shots. Second, for multi-drone shot execution, we propose distributed schedulers that activate different shot controllers on board the drones. Moreover, an event-based mechanism is used to synchronize shot execution among the drones and to account for inaccuracies during shot planning. Finally, we showcase the system with field experiments filming sports activities, including a real regatta event. We report on system integration and lessons learnt during our experimental campaigns.
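
    A minimal sketch of the parametric shot description and the event-based, on-board scheduling idea from this abstract. Class names, fields, and event strings are hypothetical placeholders; the paper's actual shot taxonomy and message formats are not shown here:

```python
# Parametric shot description plus a per-drone scheduler that activates
# shot controllers when synchronization events arrive (sketch only).
from dataclasses import dataclass
from enum import Enum, auto

class CameraMotion(Enum):
    STATIC = auto()
    LATERAL = auto()
    ORBIT = auto()
    FLYBY = auto()

@dataclass
class ShotDescription:
    shot_id: str
    motion: CameraMotion
    target_id: str      # identifier of the tracked target
    duration_s: float   # planned shot length in seconds
    start_event: str    # event that triggers execution on board

class DroneScheduler:
    """Runs on board each drone; maps incoming events to shot controllers."""
    def __init__(self, shots):
        self.pending = {s.start_event: s for s in shots}

    def on_event(self, event: str):
        shot = self.pending.pop(event, None)
        if shot is not None:
            print(f"Activating controller for shot {shot.shot_id} "
                  f"({shot.motion.name} on target {shot.target_id})")

# Example: activate an orbit shot when the (hypothetical) race-start event fires.
sched = DroneScheduler([ShotDescription("s1", CameraMotion.ORBIT, "boat_3", 12.0, "race_start")])
sched.on_event("race_start")
```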

    Optimal Trajectory Planning for Cinematography with Multiple Unmanned Aerial Vehicles

    This paper presents a method for planning optimal trajectories with a team of Unmanned Aerial Vehicles (UAVs) performing autonomous cinematography. The method is able to plan trajectories online and in a distributed manner, providing coordination between the UAVs. We propose a novel non-linear formulation for the challenging problem of computing multi-UAV optimal trajectories for cinematography, integrating UAV dynamics and collision avoidance constraints together with cinematographic aspects such as smoothness, gimbal mechanical limits, and mutual camera visibility. We integrate our method within a hardware and software architecture for UAV cinematography that was previously developed within the framework of the MultiDrone project, and demonstrate its use with different types of shots filming a moving target outdoors. We provide extensive experimental results in both simulation and field experiments. We analyze the performance of the method and show that it is able to compute smooth trajectories online, reducing jerky movements and complying with cinematography constraints.
    Published as: Alfonso Alcantara, Jesus Capitan, Rita Cunha, and Anibal Ollero, "Optimal trajectory planning for cinematography with multiple Unmanned Aerial Vehicles," Robotics and Autonomous Systems, 103778 (2021). doi:10.1016/j.robot.2021.103778
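
    A minimal sketch of the kind of non-linear trajectory optimization this abstract describes: a discretized waypoint trajectory, a smoothness (jerk) cost, and a soft collision-avoidance penalty. The horizon, weights, and obstacle term are illustrative assumptions, not the MultiDrone formulation itself, which also handles gimbal limits and mutual camera visibility:

```python
# Waypoint-based trajectory optimization with jerk and collision costs (sketch).
import numpy as np
from scipy.optimize import minimize

N, DT = 20, 0.5                                   # horizon steps and time step
start = np.array([0.0, 0.0, 5.0])
goal = np.array([10.0, 0.0, 5.0])
obstacle, safe_dist = np.array([5.0, 0.2, 5.0]), 1.5

def cost(flat):
    p = flat.reshape(N, 3)
    jerk = np.diff(p, n=3, axis=0) / DT**3        # smoothness: penalize jerk
    dists = np.linalg.norm(p - obstacle, axis=1)  # distances to the obstacle
    collision = np.sum(np.maximum(0.0, safe_dist - dists) ** 2)
    endpoints = np.sum((p[0] - start) ** 2) + np.sum((p[-1] - goal) ** 2)
    return np.sum(jerk ** 2) + 100.0 * collision + 100.0 * endpoints

x0 = np.linspace(start, goal, N).ravel()          # straight-line initial guess
res = minimize(cost, x0, method="L-BFGS-B")
trajectory = res.x.reshape(N, 3)
```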

    Deep Reinforcement Learning with semi-expert distillation for autonomous UAV cinematography

    Unmanned Aerial Vehicles (UAVs, or drones) have revolutionized modern media production. Being rapidly deployable "flying cameras", they can easily capture aesthetically pleasing aerial footage of static or moving filming targets/subjects. Current approaches rely either on manual UAV/gimbal control by human experts or on a combination of complex computer vision algorithms and hardware configurations for automating the flight/filming process. This paper explores an efficient Deep Reinforcement Learning (DRL) alternative, which implicitly merges the target detection and path planning steps into a single algorithm. To achieve this, a baseline DRL approach is augmented with a novel policy distillation component, which transfers knowledge from a suitable, semi-expert Model Predictive Control (MPC) controller into the DRL agent. Thus, the latter is able to autonomously execute a specific UAV cinematography task with purely visual input. Unlike the MPC controller, the proposed DRL agent does not need to know the 3D world position of the filming target during inference. Experiments conducted in a photorealistic simulator showcase superior performance and training speed compared to the baseline agent, while surpassing the MPC controller in terms of visual occlusion avoidance.
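
    A minimal sketch of policy distillation from a semi-expert MPC controller into a DRL agent, as the abstract outlines. The network shape, feature dimensions, loss mix, and argument names are hypothetical placeholders, not the paper's architecture:

```python
# One distillation update: mix the baseline DRL loss with an imitation loss
# toward the actions the semi-expert MPC controller produced (sketch only).
import torch
import torch.nn as nn

policy = nn.Sequential(                 # student: visual features -> action
    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

def distillation_step(obs_feats, mpc_actions, rl_loss, alpha=0.5):
    """Single optimizer step combining the RL objective and MPC imitation.

    obs_feats:   (B, 128) visual features from the simulator (assumed shape)
    mpc_actions: (B, 4) actions the MPC semi-expert took on the same states
    rl_loss:     differentiable scalar tensor from the baseline DRL objective
    alpha:       weighting between the RL and distillation terms
    """
    pred = policy(obs_feats)
    distill_loss = nn.functional.mse_loss(pred, mpc_actions)
    loss = alpha * rl_loss + (1.0 - alpha) * distill_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```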