Optimal Trajectory Planning for Cinematography with Multiple Unmanned Aerial Vehicles
This paper presents a method for planning optimal trajectories with a team of
Unmanned Aerial Vehicles (UAVs) performing autonomous cinematography. The
method is able to plan trajectories online and in a distributed manner,
providing coordination between the UAVs. We propose a novel non-linear
formulation for this challenging problem of computing multi-UAV optimal
trajectories for cinematography; integrating UAVs dynamics and collision
avoidance constraints, together with cinematographic aspects like smoothness,
gimbal mechanical limits and mutual camera visibility. We integrate our method
within a hardware and software architecture for UAV cinematography that was
previously developed within the framework of the MultiDrone project; and
demonstrate its use with different types of shots filming a moving target
outdoors. We provide extensive experimental results both in simulation and
field experiments. We analyze the performance of the method and prove that it
is able to compute online smooth trajectories, reducing jerky movements and
complying with cinematography constraints.
Comment: This paper has been published as: Alfonso Alcantara, Jesus Capitan, Rita Cunha, and Anibal Ollero, "Optimal trajectory planning for cinematography with multiple Unmanned Aerial Vehicles," Robotics and Autonomous Systems, 103778 (2021). doi: 10.1016/j.robot.2021.103778
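The optimization-based formulation described in this abstract can be sketched, in a heavily simplified single-UAV form, as a soft-constrained nonlinear program. All names, weights, and the solver choice below are illustrative assumptions, not the paper's actual formulation: a trajectory is pulled toward a desired shot path while penalizing finite-difference jerk (smoothness) and violations of a minimum distance to the filmed target.

```python
import numpy as np
from scipy.optimize import minimize

def plan_trajectory(desired, target, d_min=2.0, w_jerk=10.0, w_sep=100.0):
    """Toy cinematography planner: track a desired shot path, stay smooth,
    keep a soft minimum distance d_min from the filmed target."""
    N, dim = desired.shape

    def cost(x):
        traj = x.reshape(N, dim)
        track = np.sum((traj - desired) ** 2)            # stay near desired shot path
        jerk = np.sum(np.diff(traj, n=3, axis=0) ** 2)   # smoothness: finite-difference jerk
        dist = np.linalg.norm(traj - target, axis=1)
        sep = np.sum(np.maximum(0.0, d_min - dist) ** 2) # soft minimum-distance constraint
        return track + w_jerk * jerk + w_sep * sep

    res = minimize(cost, desired.ravel(), method="L-BFGS-B")
    return res.x.reshape(N, dim)

# desired path flies past the target too closely; the planner bows outward
desired = np.stack([np.linspace(0.0, 10.0, 20), np.full(20, 1.0)], axis=1)
target = np.array([5.0, 0.0])
traj = plan_trajectory(desired, target)
```

The real method handles multiple UAVs, full vehicle and gimbal dynamics, and hard collision constraints; this sketch only shows how competing cinematographic objectives combine into one cost.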
Optimal Multi-UAV Trajectory Planning for Filming Applications
Teams of multiple Unmanned Aerial Vehicles (UAVs) can be used to record large-scale
outdoor scenarios and complementary views of several action points as a promising
system for cinematic video recording. Generating the trajectories of the UAVs plays
a key role, as it should be ensured that they comply with requirements for system
dynamics, smoothness, and safety. The rise of numerical methods for nonlinear
optimization is finding a flourishing field in optimization-based approaches to
multi-UAV trajectory planning. In particular, these methods are rather promising for
video recording applications, as they enable multiple constraints and objectives to
be formulated, such as trajectory smoothness, compliance with UAV and camera
dynamics, avoidance of obstacles and inter-UAV conflicts, and mutual UAV visibility.
The main objective of this thesis is to plan online trajectories for multi-UAV teams in
video applications, formulating novel optimization problems and solving them in real
time.
The thesis begins by presenting a framework for carrying out autonomous cinematography
missions with a team of UAVs. This framework enables media directors
to design missions involving different types of shots with one or multiple cameras,
running sequentially or concurrently. Second, the thesis proposes a novel non-linear
formulation for the challenging problem of computing optimal multi-UAV trajectories
for cinematography, integrating UAV dynamics and collision avoidance constraints,
together with cinematographic aspects such as smoothness, gimbal mechanical limits,
and mutual camera visibility. Lastly, the thesis describes a method for autonomous
aerial recording with distributed lighting by a team of UAVs. The multi-UAV trajectory
optimization problem is decoupled into two steps in order to tackle non-linear cinematographic aspects and obstacle avoidance at separate stages. This allows the
trajectory planner to perform in real time and to react online to changes in dynamic
environments.
It is important to note that all the methods in the thesis have been validated
by means of extensive simulations and field experiments. Moreover, all the software
components have been developed as open source.
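The two-step decoupling described above can be illustrated with a toy planner. The structure here is an assumption for illustration only (the thesis' actual stages solve nonlinear programs): a smoothing pass first shapes the path for the cinematographic objective, and a separate correction pass then pushes waypoints out of an obstacle's safety radius.

```python
import numpy as np

def smooth(path, iterations=50, alpha=0.3):
    # stage 1: pull each interior waypoint toward its neighbors' midpoint
    p = path.copy()
    for _ in range(iterations):
        p[1:-1] += alpha * (0.5 * (p[:-2] + p[2:]) - p[1:-1])
    return p

def avoid(path, obstacle, radius):
    # stage 2: project any waypoint inside the safety radius onto its boundary
    p = path.copy()
    vec = p - obstacle
    dist = np.linalg.norm(vec, axis=1, keepdims=True)
    inside = dist < radius
    return np.where(inside, obstacle + vec / np.maximum(dist, 1e-9) * radius, p)

path = np.stack([np.linspace(0.0, 10.0, 21), np.zeros(21)], axis=1)
obstacle = np.array([5.0, 0.2])
planned = avoid(smooth(path), obstacle, radius=1.0)
```

Because each stage is cheap on its own, this kind of split is what makes online replanning in dynamic environments tractable.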
Deep Reinforcement Learning with semi-expert distillation for autonomous UAV cinematography
Unmanned Aerial Vehicles (UAVs, or drones) have revolutionized modern media production. Being rapidly deployable “flying cameras”, they can easily capture aesthetically pleasing aerial footage of static or moving filming targets/subjects. Current approaches rely either on manual UAV/gimbal control by human experts or on a combination of complex computer vision algorithms and hardware configurations for automating the flight process. This paper explores an efficient Deep Reinforcement Learning (DRL) alternative, which implicitly merges the target detection and path planning steps into a single algorithm. To achieve this, a baseline DRL approach is augmented with a novel policy distillation component, which transfers knowledge from a suitable, semi-expert Model Predictive Control (MPC) controller into the DRL agent. Thus, the latter is able to autonomously execute a specific UAV cinematography task with purely visual input. Unlike the MPC controller, the proposed DRL agent does not need to know the 3D world position of the filming target during inference. Experiments conducted in a photorealistic simulator showcase superior performance and training speed compared to the baseline agent, while surpassing the MPC controller in terms of visual occlusion avoidance.
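The distillation component admits a minimal numpy sketch. This is a deliberate simplification: the paper's agent is a deep network with visual input, while here a linear "student" policy is regressed onto actions produced by a stand-in "teacher" controller, which is the supervised core of distilling an expert into a learned agent.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(state):
    # stand-in for an MPC expert: steer proportionally back toward the origin
    return -0.5 * state

# collect (state, expert action) pairs, as a distillation dataset would
states = rng.normal(size=(200, 4))
actions = np.array([teacher(s) for s in states])

# distillation step = supervised regression of the student onto teacher actions
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

student_actions = states @ W   # the student now reproduces the teacher
```

In the actual method the student additionally learns from task reward, so it can eventually exceed the semi-expert teacher, e.g. in occlusion avoidance.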
Multi-Robot Systems: Challenges, Trends and Applications
This book is a printed edition of the Special Issue entitled “Multi-Robot Systems: Challenges, Trends, and Applications” that was published in Applied Sciences. This Special Issue collected seventeen high-quality papers that discuss the main challenges of multi-robot systems, present the trends to address these issues, and report various relevant applications. Some of the topics addressed by these papers are robot swarms, mission planning, robot teaming, machine learning, immersive technologies, search and rescue, and social robotics.
Control of constraint weights for an autonomous camera
Constraint satisfaction based techniques for camera control have the flexibility to add new constraints easily to increase the quality of a shot. We address the problem of deducing and adjusting constraint weights at run time to guide the movement of the camera in an informed and controlled way in response to the requirements of the shot. This enables the control of weights at the frame level. We analyze the mathematical representation of the cost structure of the domain of constraint search so that the constraint solver can search the domain efficiently. We start with a simple tracking shot of a single target. The cost structure of the domain of search suggests the use of a binary search, which searches along a curve in 2D and on a surface in 3D by utilizing the information about the cost structure. The problems of occlusion and collision avoidance have also been addressed.
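The run-time weight control described above can be sketched as follows. The specific constraints, weights, and candidate set are hypothetical, not the paper's solver: each constraint maps a candidate camera pose to a cost, and per-frame weights set their relative importance, so re-weighting at run time changes which placement the search prefers.

```python
import numpy as np

def shot_cost(cam, target, weights):
    # two toy constraints, each normalized to a cost around [0, 1]
    dist = abs(np.linalg.norm(cam - target) - 5.0) / 5.0   # preferred shooting distance 5 m
    height = abs(cam[2] - 2.0) / 2.0                       # preferred camera height 2 m
    return weights["distance"] * dist + weights["height"] * height

target = np.zeros(3)
candidates = [np.array([x, 0.0, z]) for x in (3.0, 5.0, 7.0) for z in (1.0, 2.0, 4.0)]

# per-frame weight control: emphasize distance vs. emphasize height
w_dist = {"distance": 1.0, "height": 0.1}
w_height = {"distance": 0.1, "height": 1.0}
best_dist = min(candidates, key=lambda c: shot_cost(c, target, w_dist))
best_height = min(candidates, key=lambda c: shot_cost(c, target, w_height))
```

Swapping the weight dictionaries between frames changes the selected camera placement without altering the constraints themselves, which is the point of frame-level weight control.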
On-board Obstacle Avoidance in the Teleoperation of Unmanned Aerial Vehicles
The teleoperation of unmanned aerial vehicles (UAVs), especially in cramped, GPS-restricted environments, poses many challenges. The presence of obstacles in an unfamiliar environment requires reliable state estimation and active algorithms to prevent collisions.
In this dissertation, we present a collision-free indoor navigation system for a teleoperated quadrotor UAV. The platform is equipped with an on-board miniature computer and a minimal set of sensors for this task and is self-sufficient with respect to external tracking systems and computation. The platform is capable of highly accurate state-estimation, tracking of the velocity commanded by the user and collision-free navigation. The robot estimates its state in a cascade architecture. The attitude of the platform is calculated with a complementary filter and its linear velocity through a Kalman filter integration of inertial and optical flow measurements.
An RGB-D camera serves the purpose of providing visual feedback to the operator and depth measurements to build a probabilistic, robot-centric obstacle state with a bin-occupancy filter. The algorithm tracks the obstacles when they leave the field of view of the sensor by updating their positions with the estimate of the robot's motion. The avoidance part of our navigation system is based on the Model Predictive Control approach. By predicting the possible future obstacles states, the UAV filters the operator commands by altering them to prevent collisions. Experiments in obstacle-rich indoor and outdoor environments validate the efficiency of the proposed setup.
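The command-filtering idea admits a compact sketch. This is an assumed simplification of the dissertation's MPC approach: candidate velocity commands are rolled forward over a short horizon, candidates whose predicted path enters an obstacle's safety radius are rejected, and the accepted candidate closest to the operator's command is sent to the robot.

```python
import numpy as np

def filter_command(pos, cmd, obstacle, radius=1.0, horizon=10, dt=0.1):
    """Return the collision-free velocity command closest to the operator's."""
    best, best_dev = np.zeros(2), np.inf
    for scale in np.linspace(0.0, 1.0, 11):
        for angle in np.linspace(-np.pi / 2, np.pi / 2, 9):
            c, s = np.cos(angle), np.sin(angle)
            cand = scale * np.array([[c, -s], [s, c]]) @ cmd
            # constant-velocity prediction of the robot's path under cand
            preds = pos + dt * np.arange(1, horizon + 1)[:, None] * cand
            if np.min(np.linalg.norm(preds - obstacle, axis=1)) < radius:
                continue  # predicted collision: reject this candidate
            dev = np.linalg.norm(cand - cmd)
            if dev < best_dev:
                best, best_dev = cand, dev
    return best

pos = np.array([0.0, 0.0])
cmd = np.array([2.0, 0.0])          # operator steers straight at the obstacle
obstacle = np.array([1.5, 0.0])
safe = filter_command(pos, cmd, obstacle)
```

The operator keeps authority (the filtered command deviates as little as possible), while the predictive check guarantees the executed command is collision-free over the horizon.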
Flying robots are highly prone to damage in cases of control errors, as these will most likely cause them to fall to the ground. Therefore, the development of algorithms for UAVs entails a considerable amount of time and resources. In this dissertation we present two simulation methods, i.e. software- and hardware-in-the-loop simulations, to facilitate this process. The software-in-the-loop testing was used for the development and tuning of the state estimator for our robot using both simulated sensors and pre-recorded datasets of sensor measurements, e.g., from real robotic experiments. With hardware-in-the-loop simulations, we are able to command the robot simulated in Gazebo, a popular open-source ROS-enabled physical simulator, using the computational units that are embedded on our quadrotor UAVs. Hence, we can test in simulation not only the correct execution of algorithms, but also their computational feasibility directly on the robot's hardware.
Lastly, we analyze the influence of the robot's motion on the visual feedback provided to the operator. While some UAVs have the capacity to carry mechanically stabilized camera equipment, weight limits or other constraints may make mechanical stabilization impractical. With a fixed camera, the video stream is often unsteady due to the multirotor's movement and can impair the operator's situation awareness. There has been significant research on how to stabilize videos using feature tracking to determine camera movement, which in turn is used to manipulate frames and stabilize the camera stream. However, we believe that this process can be greatly simplified by using data from a UAV's on-board inertial measurement unit to stabilize the camera feed. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power. We also propose a novel quadrotor design concept to decouple its orientation from its lateral motion. In our design, the tilt angles of the propellers with respect to the quadrotor body are simultaneously controlled with two additional actuators by employing the parallelogram principle. After deriving the dynamic model of this design, we propose a controller for this platform based on feedback linearization. Simulation results confirm our theoretical findings, highlighting the improved motion capabilities of this novel design with respect to standard quadrotors.
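The IMU-based stabilization idea can be illustrated with a toy 2D rotation-only model (an assumption for illustration; real stabilization also handles translation and lens effects): the roll angle reported by the IMU directly gives the counter-rotation to apply to image coordinates, with no feature tracking at all.

```python
import numpy as np

def counter_rotate(points, roll):
    # rotate image coordinates by -roll to undo the airframe's roll
    c, s = np.cos(-roll), np.sin(-roll)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T

roll = np.deg2rad(15.0)                        # roll angle reported by the IMU
Rb = np.array([[np.cos(roll), -np.sin(roll)],
               [np.sin(roll),  np.cos(roll)]])
feature = np.array([120.0, 80.0])              # pixel position in a level frame
observed = feature @ Rb.T                      # where the rolled camera sees it
stabilized = counter_rotate(observed, roll)    # back to its level position
```

Compared with feature-tracking pipelines, this uses one sensor reading and one matrix multiply per frame, which is the computational saving the abstract refers to.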
Multi-agent Collision Avoidance Using Interval Analysis and Symbolic Modelling with its Application to the Novel Polycopter
Coordination is a fundamental component of autonomy when a system is defined by multiple mobile agents. For unmanned aerial systems (UAS), challenges originate from their low-level systems, such as their flight dynamics, which are often complex. The thesis begins by examining these low-level dynamics in an analysis of several well-known UAS using a novel symbolic component-based framework. It is shown how this approach is used effectively to define key model and performance properties necessary for UAS trajectory control. This is demonstrated initially in the context of linear quadratic regulation (LQR) and model predictive control (MPC) of a quadcopter.
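The LQR setting mentioned above can be illustrated with a double-integrator model, a common textbook stand-in for one translational axis of a quadcopter; the matrices below are this illustration's assumptions, not the thesis' symbolic model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# state x = [position, velocity], input u = acceleration
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])   # penalize position error more than velocity
R = np.array([[1.0]])      # penalize control effort

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.inv(R) @ B.T @ P         # optimal state-feedback gain, u = -K x

eigs = np.linalg.eigvals(A - B @ K)    # closed-loop poles
```

The weighting matrices Q and R are exactly the kind of performance properties a component-based framework would expose per platform.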
The symbolic framework is later extended in the proposal of a novel UAS platform, referred to as the “Polycopter” for its morphing nature. This dual-tilt-axis system has unique authority over its thrust vector, in addition to an ability to actively augment its stability and aerodynamic characteristics. This presents several opportunities in exploitative control design.
With an approach to low-level UAS modelling and control proposed, the focus of the thesis shifts to the challenges associated with local trajectory generation for the purpose of multi-agent collision avoidance. This begins with a novel survey of state-of-the-art geometric approaches with respect to performance, scalability, and tolerance to uncertainty. From this survey, the interval avoidance (IA) method is proposed to incorporate trajectory uncertainty in the geometric derivation of escape trajectories. The method is shown to be more effective in ensuring safe separation in several of the presented conditions; however, performance is shown to deteriorate in denser conflicts.
Finally, it is shown how, by re-framing the IA problem, three-dimensional (3D) collision avoidance is achieved. The novel 3D IA method is shown to outperform the original method in three conflict cases by maintaining separation under the effects of uncertainty and in scenarios with multiple obstacles. The performance, scalability, and uncertainty tolerance of each presented method are then examined in a set of scenarios resembling typical coordinated UAS operations in an exhaustive Monte Carlo analysis.
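A minimal interval-analysis conflict check in the spirit of the IA method can be sketched as follows (the box construction and all parameters are assumptions for illustration): each agent's future position is bounded by an axis-aligned interval box grown by its velocity uncertainty, and a conflict is flagged when the two boxes, inflated by a safety margin, intersect at any sampled time.

```python
import numpy as np

def position_box(pos, vel, vel_err, t):
    # interval bound on position at time t under bounded velocity uncertainty
    return pos + t * (vel - vel_err), pos + t * (vel + vel_err)

def boxes_intersect(a, b, margin):
    (alo, ahi), (blo, bhi) = a, b
    return bool(np.all(ahi + margin >= blo) and np.all(bhi + margin >= alo))

def in_conflict(p1, v1, p2, v2, vel_err=0.2, margin=0.5, horizon=5.0, steps=20):
    for t in np.linspace(0.0, horizon, steps):
        if boxes_intersect(position_box(p1, v1, vel_err, t),
                           position_box(p2, v2, vel_err, t), margin):
            return True
    return False

head_on = in_conflict(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                      np.array([10.0, 0.0]), np.array([-1.0, 0.0]))
parallel = in_conflict(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                       np.array([0.0, 10.0]), np.array([1.0, 0.0]))
```

Because the boxes over-approximate all trajectories consistent with the uncertainty, a "no conflict" verdict is conservative, which is the appeal of interval methods for safe separation.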
System Architectures for Cooperative Teams of Unmanned Aerial Vehicles Interacting Physically with the Environment
Unmanned Aerial Vehicles (UAVs) have become quite a useful tool for a wide range of
applications, from inspection & maintenance to search & rescue, among others. The
capabilities of a single UAV can be extended or complemented by the deployment
of more UAVs, so multi-UAV cooperative teams are becoming a trend. In that case,
as different autopilots, heterogeneous platforms, and application-dependent software
components have to be integrated, multi-UAV system architectures that are flexible
and can adapt to the team's needs are required.
In this thesis, we develop system architectures for cooperative teams of UAVs,
paying special attention to applications that require physical interaction with the
environment, which is typically unstructured. First, we implement some layers to
abstract the high-level components from the hardware specifics. Then we propose
increasingly advanced architectures, from a single-UAV hierarchical navigation architecture
to an architecture for a cooperative team of heterogeneous UAVs. All
this work has been thoroughly tested in both simulation and field experiments in
different challenging scenarios through research projects and robotics competitions.
Most of the applications required physical interaction with the environment, mainly
in unstructured outdoors scenarios. All the know-how and lessons learned throughout
the process are shared in this thesis, and all relevant code is publicly available.