
    Generic Drone Control Platform for Autonomous Capture of Cinema Scenes

    The movie industry has been using Unmanned Aerial Vehicles (UAVs) as a new tool to produce increasingly complex and aesthetic camera shots. However, the shooting process currently relies on manual control of the drones, which makes it difficult and sometimes inconvenient to work with. In this paper we address the lack of autonomous systems for operating generic rotary-wing drones for shooting purposes. We propose a global control architecture based on a high-level generic API used by many UAVs. Our solution integrates a compound and coupled model of a generic rotary-wing drone with a Full State Feedback strategy. To address the specific task of capturing cinema scenes, we combine the control architecture with an automatic camera path planning approach that encompasses cinematographic techniques. The possibilities offered by our system are demonstrated through a series of experiments.
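A Full State Feedback strategy of the kind the abstract mentions can be sketched on a toy model: one axis of a rotary-wing drone approximated as a discrete double integrator (position, velocity), with the control command `u = -K (x - x_ref)`. The gains and time step below are illustrative assumptions, not values from the paper.

```python
# Toy Full State Feedback loop on one axis, modeled as a discrete
# double integrator. Gains K1, K2 are hand-tuned assumptions.
DT = 0.05          # control period [s]
K1, K2 = 4.0, 3.0  # state feedback gains (illustrative)

def step(pos, vel, acc_cmd):
    """Propagate the double-integrator model one control period."""
    pos += vel * DT + 0.5 * acc_cmd * DT * DT
    vel += acc_cmd * DT
    return pos, vel

def fly_to(pos_ref, pos=0.0, vel=0.0, steps=400):
    """Drive the state toward (pos_ref, 0) with u = -K (x - x_ref)."""
    for _ in range(steps):
        acc_cmd = -K1 * (pos - pos_ref) - K2 * vel
        pos, vel = step(pos, vel, acc_cmd)
    return pos, vel

pos, vel = fly_to(10.0)  # converges close to the 10 m reference
```

In a real system the same structure would run per axis on top of the high-level drone API, with gains chosen from the identified drone model rather than by hand.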

    Director Tools for Autonomous Media Production with a Team of Drones

    Featured Application: This work can be applied to media production with aerial cameras. The system supports the media crew in filming outdoor events with an autonomous fleet of drones. This paper proposes a set of director tools for autonomous media production with a team of drones. There is a clear trend toward using drones for media production, and the director is the person in charge of the whole system from a production perspective. Many applications, mainly outdoors, can benefit from the use of multiple drones to achieve multi-view or concurrent shots. However, there is a burden associated with managing all aspects of the system, such as ensuring safety, accounting for drone battery levels, navigating drones, etc. Even though methods exist for autonomous mission planning with teams of drones, a media director is not necessarily familiar with them and their language. We contribute to closing this gap between the media crew and autonomous multi-drone systems, allowing the director to focus on the artistic part. In particular, we propose a novel language for cinematography mission description and a procedure to translate those missions into plans that can be executed by autonomous drones. We also present our director's Dashboard, a graphical tool that allows the director to easily describe missions for media production. Our tools have been integrated into a real team of drones for media production, and we show results of example missions. Funding: European Union, grant no. 73166.
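The idea of a mission-description language translated into executable per-drone plans can be illustrated with a hypothetical miniature version; the field names (shot type, duration, drone assignment) are assumptions for the sketch, not the paper's actual language.

```python
# Hypothetical mini mission language: a director-level Mission is a list
# of Shots, which a translator splits into one ordered plan per drone.
from dataclasses import dataclass

@dataclass
class Shot:
    name: str         # e.g. "chase", "orbit", "establishing"
    duration_s: float
    drone_id: int     # drone assigned to execute the shot

@dataclass
class Mission:
    shots: list       # shots in the order the director scheduled them

def translate(mission):
    """Group the director's shot list into per-drone execution plans."""
    plans = {}
    for shot in mission.shots:
        plans.setdefault(shot.drone_id, []).append(shot)
    return plans

mission = Mission(shots=[
    Shot("establishing", 10.0, drone_id=1),
    Shot("chase", 15.0, drone_id=2),
    Shot("orbit", 8.0, drone_id=1),
])
plans = translate(mission)  # {1: [establishing, orbit], 2: [chase]}
```

A real translator would additionally check feasibility (battery, safety, timing), which is part of the burden the director tools are meant to hide.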

    Optimal Trajectory Planning for Cinematography with Multiple Unmanned Aerial Vehicles

    This paper presents a method for planning optimal trajectories with a team of Unmanned Aerial Vehicles (UAVs) performing autonomous cinematography. The method is able to plan trajectories online and in a distributed manner, providing coordination between the UAVs. We propose a novel non-linear formulation for the challenging problem of computing multi-UAV optimal trajectories for cinematography, integrating UAV dynamics and collision avoidance constraints together with cinematographic aspects such as smoothness, gimbal mechanical limits and mutual camera visibility. We integrate our method within a hardware and software architecture for UAV cinematography that was previously developed within the framework of the MultiDrone project, and demonstrate its use with different types of shots filming a moving target outdoors. We provide extensive experimental results both in simulation and in field experiments. We analyze the performance of the method and show that it is able to compute smooth trajectories online, reducing jerky movements and complying with cinematography constraints. Published as: Alfonso Alcantara, Jesus Capitan, Rita Cunha and Anibal Ollero, "Optimal trajectory planning for cinematography with multiple Unmanned Aerial Vehicles," Robotics and Autonomous Systems, 103778 (2021). DOI: 10.1016/j.robot.2021.10377
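The smoothness term of such an objective can be sketched in isolation: iteratively relax interior waypoints toward the average of their neighbors, which is a gradient step on the sum of squared discrete accelerations, with start and goal fixed. The paper's full non-linear program also handles collisions, gimbal limits and mutual visibility; this toy shows only the smoothing component.

```python
# Minimal illustration of trajectory smoothing: reduce the sum of
# squared second differences (discrete acceleration) of a 1-D waypoint
# path while keeping the endpoints fixed.

def accel_cost(path):
    """Sum of squared second differences along the path."""
    return sum((path[i - 1] - 2 * path[i] + path[i + 1]) ** 2
               for i in range(1, len(path) - 1))

def smooth(path, alpha=0.4, iters=200):
    """Relax each interior waypoint toward its neighbors' midpoint."""
    path = list(path)
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            target = 0.5 * (path[i - 1] + path[i + 1])
            path[i] += alpha * (target - path[i])
    return path

raw = [0.0, 3.0, 1.0, 4.0, 2.0, 5.0]   # jerky waypoint sequence
smoothed = smooth(raw)                  # much lower acceleration cost
```

In the actual method this term is traded off against tracking the cinematographic reference and the hard constraints, rather than minimized alone.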

    Enabling Viewpoint Learning through Dynamic Label Generation

    Optimal viewpoint prediction is an essential task in many computer graphics applications. Unfortunately, common viewpoint qualities suffer from two major drawbacks: dependency on clean surface meshes, which are not always available, and the lack of closed-form expressions, which requires a costly search involving rendering. To overcome these limitations we propose to separate viewpoint selection from rendering through an end-to-end learning approach, whereby we reduce the influence of the mesh quality by predicting viewpoints from unstructured point clouds instead of polygonal meshes. While this makes our approach insensitive to the mesh discretization during evaluation, it only becomes possible when resolving label ambiguities that arise in this context. Therefore, we additionally propose to incorporate the label generation into the training procedure, making the label decision adaptive to the current network predictions. We show how our proposed approach allows for learning viewpoint predictions for models from different object categories and for different viewpoint qualities. Additionally, we show that prediction times are reduced from several minutes to a fraction of a second, as compared to state-of-the-art (SOTA) viewpoint quality evaluation. We will further release the code and training data, which to our knowledge will be the biggest viewpoint quality dataset available.
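The costly search that the learned predictor replaces can be caricatured as follows: score a handful of candidate viewpoints on an unstructured point cloud with a cheap proxy quality (here, the area of the projected bounding box) and keep the best one. Both the proxy measure and the candidate set are assumptions for illustration; real viewpoint qualities involve rendering, which is exactly what makes the search expensive.

```python
# Brute-force viewpoint scoring on a point cloud with a toy quality:
# area of the orthographic projection's bounding box.
import math

def project(point, yaw):
    """Orthographic projection onto the image plane of a camera at `yaw`."""
    x, y, z = point
    u = x * math.cos(yaw) - y * math.sin(yaw)
    return u, z

def quality(cloud, yaw):
    us, vs = zip(*(project(p, yaw) for p in cloud))
    return (max(us) - min(us)) * (max(vs) - min(vs))

def best_viewpoint(cloud, candidates):
    """Exhaustive search: evaluate every candidate and keep the best."""
    return max(candidates, key=lambda yaw: quality(cloud, yaw))

# Elongated cloud along x: the side-on view (yaw = 0) shows the most extent.
cloud = [(x, 0.2 * x, 0.1 * x) for x in range(10)]
best = best_viewpoint(cloud, [0.0, math.pi / 4, math.pi / 2])  # -> 0.0
```

The paper's contribution is to amortize this search into a single network forward pass, which is where the minutes-to-fraction-of-a-second speedup comes from.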

    Autonomous Execution of Cinematographic Shots with Multiple Drones

    This paper presents a system for the execution of autonomous cinematography missions with a team of drones. The system allows media directors to design missions involving different types of shots with one or multiple cameras, running sequentially or concurrently. We introduce the complete architecture, which includes components for mission design, planning and execution. Then, we focus on the components related to autonomous mission execution. First, we propose a novel parametric description for shots, considering different types of camera motion and tracked targets, and we use it to implement a set of canonical shots. Second, for multi-drone shot execution, we propose distributed schedulers that activate different shot controllers on board the drones. Moreover, an event-based mechanism is used to synchronize shot execution among the drones and to account for inaccuracies during shot planning. Finally, we showcase the system with field experiments filming sports activities, including a real regatta event. We report on system integration and lessons learnt during our experimental campaigns.
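The event-based synchronization idea can be sketched with threads standing in for drones: each drone's scheduler arms its shot controller and blocks on a shared event, so concurrent shots start together when the trigger fires, even if planning timing was slightly off. The names here (`drone_scheduler`, `start_event`) are illustrative, not the system's actual API.

```python
# Event-based shot synchronization sketch: three "drones" wait on one
# shared event and all begin their shot when it is set.
import threading

start_event = threading.Event()
log = []
log_lock = threading.Lock()

def drone_scheduler(drone_id, shot):
    start_event.wait()        # block until the shot trigger fires
    with log_lock:            # record that this drone started its shot
        log.append((drone_id, shot))

threads = [threading.Thread(target=drone_scheduler, args=(i, "orbit"))
           for i in range(3)]
for t in threads:
    t.start()
start_event.set()             # mission layer fires the synchronizing event
for t in threads:
    t.join()                  # all three drones have logged the shot start
```

In the real system the trigger would travel over the network between distributed schedulers rather than through shared memory, but the wait-then-fire pattern is the same.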