
    Autonomous Execution of Cinematographic Shots with Multiple Drones

    This paper presents a system for the execution of autonomous cinematography missions with a team of drones. The system allows media directors to design missions involving different types of shots with one or multiple cameras, running sequentially or concurrently. We introduce the complete architecture, which includes components for mission design, planning and execution, and then focus on the components related to autonomous mission execution. First, we propose a novel parametric description for shots, considering different types of camera motion and tracked targets, and we use it to implement a set of canonical shots. Second, for multi-drone shot execution, we propose distributed schedulers that activate different shot controllers on board the drones. Moreover, an event-based mechanism is used to synchronize shot execution among the drones and to account for inaccuracies during shot planning. Finally, we showcase the system with field experiments filming sports activities, including a real regatta event. We report on system integration and lessons learnt during our experimental campaigns.
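    To make the idea of parametric shot descriptions and event-synchronized schedulers concrete, below is a minimal Python sketch. All class, field and event names here are hypothetical illustrations, not the authors' actual implementation.

    # Minimal sketch of a parametric shot description and an event-based
    # scheduler, loosely inspired by the paper's design. Names are invented.
    from dataclasses import dataclass, field
    from enum import Enum

    class CameraMotion(Enum):
        STATIC = "static"
        ORBIT = "orbit"
        LATERAL = "lateral"
        FLYBY = "flyby"

    @dataclass
    class ShotDescription:
        """Parametric description of a canonical shot."""
        shot_id: str
        motion: CameraMotion
        target_id: str          # identifier of the tracked target
        duration_s: float       # planned shot duration
        params: dict = field(default_factory=dict)  # e.g. {"radius": 12.0}

    class ShotScheduler:
        """Activates shot controllers on board a drone when a sync event fires."""

        def __init__(self, drone_id: str):
            self.drone_id = drone_id
            self.queue: list[ShotDescription] = []

        def enqueue(self, shot: ShotDescription):
            self.queue.append(shot)

        def on_event(self, event_name: str):
            # A broadcast event (e.g. "START_SHOT_3") keeps the drones in sync
            # and absorbs timing inaccuracies left over from planning.
            if self.queue and event_name == f"START_{self.queue[0].shot_id}":
                shot = self.queue.pop(0)
                print(f"[{self.drone_id}] activating {shot.motion.value} "
                      f"shot on target {shot.target_id} for {shot.duration_s}s")

    scheduler = ShotScheduler("drone_1")
    scheduler.enqueue(ShotDescription("SHOT_3", CameraMotion.ORBIT, "boat_7",
                                      20.0, {"radius": 12.0}))
    scheduler.on_event("START_SHOT_3")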

    Optimal Trajectory Planning for Cinematography with Multiple Unmanned Aerial Vehicles

    This paper presents a method for planning optimal trajectories with a team of Unmanned Aerial Vehicles (UAVs) performing autonomous cinematography. The method plans trajectories online and in a distributed manner, providing coordination between the UAVs. We propose a novel non-linear formulation for the challenging problem of computing optimal multi-UAV trajectories for cinematography, integrating UAV dynamics and collision avoidance constraints together with cinematographic aspects such as smoothness, gimbal mechanical limits and mutual camera visibility. We integrate our method within a hardware and software architecture for UAV cinematography, previously developed within the framework of the MultiDrone project, and demonstrate its use with different types of shots filming a moving target outdoors. We provide extensive experimental results both in simulation and field experiments. We analyze the performance of the method and show that it is able to compute smooth trajectories online, reducing jerky movements and complying with cinematography constraints.

    Published as: Alfonso Alcantara, Jesus Capitan, Rita Cunha and Anibal Ollero. Optimal trajectory planning for cinematography with multiple Unmanned Aerial Vehicles. Robotics and Autonomous Systems, 103778 (2021). doi:10.1016/j.robot.2021.103778
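    As a rough illustration of this kind of optimization-based planning, the toy Python sketch below minimizes jerk over a discretized 2D trajectory subject to a clearance constraint, using scipy. It is a simplified stand-in, not the paper's formulation, which additionally handles UAV dynamics, gimbal limits, multi-UAV coordination and mutual camera visibility; all numbers are arbitrary.

    # Toy optimization-based trajectory planner: minimize squared jerk while
    # keeping a safety distance from one obstacle, with pinned endpoints.
    import numpy as np
    from scipy.optimize import minimize

    N, DT = 20, 0.5                                # waypoints and time step
    start, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
    obstacle, d_min = np.array([5.0, 0.5]), 1.5    # keep 1.5 m clearance

    def jerk_cost(x):
        p = x.reshape(N, 2)
        jerk = np.diff(p, n=3, axis=0) / DT**3     # third finite difference
        return np.sum(jerk**2)

    def clearance(x):
        p = x.reshape(N, 2)
        return np.linalg.norm(p - obstacle, axis=1) - d_min  # must be >= 0

    # Straight-line initial guess; endpoints fixed via equality constraints.
    x0 = np.linspace(start, goal, N).ravel()
    cons = [
        {"type": "ineq", "fun": clearance},
        {"type": "eq", "fun": lambda x: x.reshape(N, 2)[0] - start},
        {"type": "eq", "fun": lambda x: x.reshape(N, 2)[-1] - goal},
    ]
    res = minimize(jerk_cost, x0, constraints=cons, method="SLSQP")
    traj = res.x.reshape(N, 2)
    print("success:", res.success, "min clearance:",
          np.linalg.norm(traj - obstacle, axis=1).min())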

    Optimal Multi-UAV Trajectory Planning for Filming Applications

    Teams of multiple Unmanned Aerial Vehicles (UAVs) can be used to record large-scale outdoor scenarios and complementary views of several action points, making them a promising system for cinematic video recording. Generating the trajectories of the UAVs plays a key role, as it must be ensured that they comply with requirements for system dynamics, smoothness, and safety. The rise of numerical methods for nonlinear optimization is finding a flourishing field in optimization-based approaches to multi-UAV trajectory planning. These methods are particularly promising for video recording applications, as they enable multiple constraints and objectives to be formulated, such as trajectory smoothness, compliance with UAV and camera dynamics, avoidance of obstacles and inter-UAV conflicts, and mutual UAV visibility. The main objective of this thesis is to plan online trajectories for multi-UAV teams in video applications, formulating novel optimization problems and solving them in real time. The thesis begins by presenting a framework for carrying out autonomous cinematography missions with a team of UAVs. This framework enables media directors to design missions involving different types of shots with one or multiple cameras, running sequentially or concurrently. Second, the thesis proposes a novel non-linear formulation for the challenging problem of computing optimal multi-UAV trajectories for cinematography, integrating UAV dynamics and collision avoidance constraints together with cinematographic aspects such as smoothness, gimbal mechanical limits, and mutual camera visibility. Lastly, the thesis describes a method for autonomous aerial recording with distributed lighting by a team of UAVs. The multi-UAV trajectory optimization problem is decoupled into two steps in order to tackle non-linear cinematographic aspects and obstacle avoidance at separate stages, as sketched below. This allows the trajectory planner to run in real time and react online to changes in dynamic environments. All the methods in the thesis have been validated by means of extensive simulations and field experiments, and all the software components have been developed as open source.
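    The two-step decoupling can be pictured with the following simplified Python sketch, where the cinematography-aware stage is stood in for by a smooth orbit around the target and the safety stage by a radial push away from an obstacle. Both functions are illustrative placeholders for the thesis' nonlinear optimizers, not its actual code.

    # Schematic two-step planner: step 1 ignores obstacles and produces a
    # cinematographically desirable reference; step 2 repairs it for safety.
    import numpy as np

    def plan_cinematographic_reference(target, radius=8.0, n=30):
        """Step 1: a smooth half-orbit around the target, standing in for
        the cinematography-aware stage (smoothness, gimbal, visibility)."""
        ang = np.linspace(0, np.pi, n)
        return target + radius * np.stack([np.cos(ang), np.sin(ang)], axis=1)

    def repair_for_safety(ref, obstacle, d_min=2.0):
        """Step 2: push violating waypoints radially away from the obstacle,
        standing in for the safety-aware stage (obstacles, conflicts)."""
        out = ref.copy()
        vec = out - obstacle
        dist = np.linalg.norm(vec, axis=1)
        bad = dist < d_min
        out[bad] = obstacle + vec[bad] / dist[bad, None] * d_min
        return out

    # Re-running both cheap steps each cycle is what makes online reaction
    # to a moving target and a changing environment feasible.
    target, obstacle = np.array([0.0, 0.0]), np.array([0.0, 8.5])
    traj = repair_for_safety(plan_cinematographic_reference(target), obstacle)
    print("min obstacle clearance:",
          np.linalg.norm(traj - obstacle, axis=1).min())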

    A Multiple-UAV Software Architecture for Autonomous Media Production

    The use of UAVs in media production has taken off during the past few years, with increasingly more functions becoming automated. However, current solutions leave much to be desired with regard to autonomy and drone fleet support. This paper presents a novel, complete software architecture suited to an intelligent, multiple-UAV platform for media production/cinematography applications, covering outdoor events (e.g., sports) typically distributed over large expanses. The architecture supports increased decisional autonomy for multiple drones, so as to minimize the production crew's load, as well as improved robustness and safety mechanisms (e.g., regarding communications, flight regulation compliance, crowd avoidance and emergency landing).
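    As a small illustration of the kind of on-board robustness logic such an architecture needs, here is a hypothetical safety supervisor in Python. The thresholds, states and names are assumptions made for the sketch, not taken from the paper.

    # Hypothetical on-board safety supervisor: battery and communications
    # checks that can override the current shot with a safe behavior.
    import time
    from dataclasses import dataclass

    @dataclass
    class DroneStatus:
        battery_pct: float
        last_heartbeat: float   # time of last message from the ground station

    class SafetySupervisor:
        BATTERY_MIN = 20.0      # percent; illustrative threshold
        COMMS_TIMEOUT = 5.0     # seconds without a ground-station heartbeat

        def check(self, status: DroneStatus) -> str:
            if status.battery_pct < self.BATTERY_MIN:
                return "EMERGENCY_LAND"   # too little battery to finish the shot
            if time.time() - status.last_heartbeat > self.COMMS_TIMEOUT:
                return "RETURN_HOME"      # comms lost: abort autonomously
            return "CONTINUE"

    sup = SafetySupervisor()
    print(sup.check(DroneStatus(battery_pct=15.0, last_heartbeat=time.time())))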

    Director Tools for Autonomous Media Production with a Team of Drones

    Featured Application: This work can be applied to media production with aerial cameras. The system supports media crews filming outdoor events with an autonomous fleet of drones. This paper proposes a set of director tools for autonomous media production with a team of drones. There is a clear trend toward using drones for media production, and the director is the person in charge of the whole system from a production perspective. Many applications, mainly outdoors, can benefit from the use of multiple drones to achieve multi-view or concurrent shots. However, there is a burden associated with managing all aspects of the system, such as ensuring safety, accounting for drone battery levels, navigating drones, etc. Even though methods exist for autonomous mission planning with teams of drones, a media director is not necessarily familiar with them and their language. We contribute to closing this gap between the media crew and autonomous multi-drone systems, allowing the director to focus on the artistic part. In particular, we propose a novel language for cinematography mission description and a procedure to translate those missions into plans that can be executed by autonomous drones. We also present our director's Dashboard, a graphical tool allowing the director to easily describe missions for media production. Our tools have been integrated into a real team of drones for media production, and we show results of example missions.
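    To suggest how a director-facing mission description might be translated into executable per-drone plans, here is a toy Python sketch. The mission format and the round-robin camera assignment are invented for illustration and are much simpler than the language and planning procedure the paper proposes.

    # Toy director mission "compiled" into per-drone shot queues.
    mission = {
        "name": "regatta_opening",
        "shots": [
            {"id": "S1", "type": "establish", "cameras": 1, "after": None},
            {"id": "S2", "type": "orbit",     "cameras": 2, "after": "S1"},
            {"id": "S3", "type": "chase",     "cameras": 1, "after": "S1"},
        ],
    }

    def compile_mission(mission, drone_ids):
        """Round-robin camera assignment: one queue of shot ids per drone."""
        plans = {d: [] for d in drone_ids}
        i = 0
        for shot in mission["shots"]:
            for _ in range(shot["cameras"]):
                plans[drone_ids[i % len(drone_ids)]].append(shot["id"])
                i += 1
        return plans

    print(compile_mission(mission, ["drone_1", "drone_2"]))
    # {'drone_1': ['S1', 'S2'], 'drone_2': ['S2', 'S3']}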

    CineTransfer: Controlling a Robot to Imitate Cinematographic Style from a Single Example

    This work presents CineTransfer, an algorithmic framework that drives a robot to record a video sequence mimicking the cinematographic style of an input video. We propose features that abstract the aesthetic style of the input video, so the robot can transfer this style to a scene whose visual details differ significantly from the input. The framework builds upon CineMPC, a tool that allows users to control cinematographic features, such as the subjects' position in the image and the depth of field, by manipulating the intrinsics and extrinsics of a cinematographic camera. However, CineMPC requires a human expert to specify the desired style of the shot (composition, camera motion, zoom, focus, etc.). CineTransfer bridges this gap, aiming at a fully autonomous cinematographic platform. The user chooses a single input video as a style guide. CineTransfer extracts and optimizes two important style features, the composition of the subject in the image and the scene depth of field, and provides instructions for CineMPC to control the robot to record an output sequence that matches these features as closely as possible. In contrast with other style transfer methods, our approach is a lightweight and portable framework that does not require deep network training or extensive datasets. Experiments with real and simulated videos demonstrate the system's ability to analyze and transfer style between recordings, and are available in the supplementary video.
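    The two style features can be approximated with simple image statistics, as in the following Python sketch. The feature extractors are simplified stand-ins (the composition feature assumes a subject bounding box is available, e.g., from a tracker, and the focus measure is a crude depth-of-field proxy), not CineTransfer's actual implementation; the error weighting is arbitrary.

    # Simplified stand-ins for the two style features: subject composition
    # and a sharpness-based depth-of-field proxy, compared between a style
    # frame and an output frame.
    import numpy as np

    def composition(bbox, frame_shape):
        """Normalized subject center and size from an (x, y, w, h) box."""
        x, y, w, h = bbox
        H, W = frame_shape[:2]
        return np.array([(x + w / 2) / W, (y + h / 2) / H, (w * h) / (W * H)])

    def sharpness(gray):
        """Mean gradient magnitude: a crude focus measure."""
        gy, gx = np.gradient(gray.astype(float))
        return float(np.mean(np.hypot(gx, gy)))

    def style_error(style_frame, style_bbox, out_frame, out_bbox):
        """Distance between style and output features; a controller such as
        CineMPC would steer the camera to drive this toward zero."""
        c = np.linalg.norm(composition(style_bbox, style_frame.shape) -
                           composition(out_bbox, out_frame.shape))
        dof = abs(sharpness(style_frame) - sharpness(out_frame))
        return c + 0.1 * dof   # weighting is an illustrative choice

    rng = np.random.default_rng(0)
    f1, f2 = rng.random((120, 160)), rng.random((120, 160))
    print(style_error(f1, (40, 30, 40, 60), f2, (60, 30, 40, 60)))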