Decentralized MPC based Obstacle Avoidance for Multi-Robot Target Tracking Scenarios
In this work, we consider the problem of decentralized multi-robot target
tracking and obstacle avoidance in dynamic environments. Each robot executes a
local motion planning algorithm which is based on model predictive control
(MPC). The planner is designed as a quadratic program, subject to constraints
on robot dynamics and obstacle avoidance. Repulsive potential field functions
are employed to avoid obstacles. The novelty of our approach lies in embedding
these non-linear potential field functions as constraints within a convex
optimization framework. Our method convexifies the non-convex constraints and
dependencies by replacing them with pre-computed external input forces in the robot
dynamics. The proposed algorithm additionally incorporates different methods to
avoid the local-minima problems associated with using potential field
functions in planning. The motion planner does not enforce predefined
trajectories or any formation geometry on the robots and is a comprehensive
solution for cooperative obstacle avoidance in the context of multi-robot
target tracking. We perform simulation studies in different environmental
scenarios to showcase the convergence and efficacy of the proposed algorithm.
Video of simulation studies: \url{https://youtu.be/umkdm82Tt0M}
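The repulsive potential field forces used for obstacle avoidance can be sketched as follows; the gradient form, gain, and influence radius here are illustrative assumptions, not the paper's exact formulation:

```python
import math

def repulsive_force(robot_pos, obstacle_pos, influence_radius=2.0, gain=1.0):
    """Classic repulsive potential field force (hypothetical sketch).

    Gradient of U_rep(d) = 0.5 * gain * (1/d - 1/d0)^2, pushing the
    robot away from the obstacle; zero outside the influence radius d0.
    """
    dx = robot_pos[0] - obstacle_pos[0]
    dy = robot_pos[1] - obstacle_pos[1]
    d = math.hypot(dx, dy)
    if d >= influence_radius or d == 0.0:
        return (0.0, 0.0)
    mag = gain * (1.0 / d - 1.0 / influence_radius) / d ** 2
    # Force points along the robot-minus-obstacle direction (away).
    return (mag * dx / d, mag * dy / d)
```

In the convexification scheme the abstract describes, such forces would be evaluated at previously predicted states and fed into the robot dynamics as known external inputs, so the MPC problem remains a quadratic program.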
Optimal Trajectory Planning for Cinematography with Multiple Unmanned Aerial Vehicles
This paper presents a method for planning optimal trajectories with a team of
Unmanned Aerial Vehicles (UAVs) performing autonomous cinematography. The
method is able to plan trajectories online and in a distributed manner,
providing coordination between the UAVs. We propose a novel non-linear
formulation for this challenging problem of computing multi-UAV optimal
trajectories for cinematography, integrating UAV dynamics and collision
avoidance constraints, together with cinematographic aspects like smoothness,
gimbal mechanical limits and mutual camera visibility. We integrate our method
within a hardware and software architecture for UAV cinematography that was
previously developed within the framework of the MultiDrone project; and
demonstrate its use with different types of shots filming a moving target
outdoors. We provide extensive experimental results both in simulation and
field experiments. We analyze the performance of the method and show that it
is able to compute smooth trajectories online, reducing jerky movements and
complying with cinematography constraints.
Comment: This paper has been published as: Optimal trajectory planning for
cinematography with multiple Unmanned Aerial Vehicles. Alfonso Alcantara, Jesus
Capitan, Rita Cunha and Anibal Ollero. Robotics and Autonomous Systems, 103778
(2021). DOI: 10.1016/j.robot.2021.10377
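The smoothness objective mentioned above can be illustrated with a discrete jerk penalty on a sampled trajectory; the third-finite-difference form is an illustrative assumption, not the paper's exact cost:

```python
def jerk_cost(positions, dt):
    """Sum of squared third finite differences (discrete jerk) of a
    1D position sequence sampled at uniform timestep dt. Penalizing
    this term in a trajectory optimizer discourages jerky camera motion.
    """
    cost = 0.0
    for k in range(len(positions) - 3):
        jerk = (positions[k + 3] - 3.0 * positions[k + 2]
                + 3.0 * positions[k + 1] - positions[k]) / dt ** 3
        cost += jerk ** 2
    return cost
```

Note that any constant-acceleration path has zero cost under this term, so only abrupt acceleration changes are penalized.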
Autonomous Execution of Cinematographic Shots with Multiple Drones
This paper presents a system for the execution of autonomous cinematography
missions with a team of drones. The system allows media directors to design
missions involving different types of shots with one or multiple cameras,
running sequentially or concurrently. We introduce the complete architecture,
which includes components for mission design, planning and execution. Then, we
focus on the components related to autonomous mission execution. First, we
propose a novel parametric description for shots, considering different types
of camera motion and tracked targets; and we use it to implement a set of
canonical shots. Second, for multi-drone shot execution, we propose distributed
schedulers that activate different shot controllers on board the drones.
Moreover, an event-based mechanism is used to synchronize shot execution among
the drones and to account for inaccuracies during shot planning. Finally, we
showcase the system with field experiments filming sport activities, including
a real regatta event. We report on system integration and lessons learnt during
our experimental campaigns.
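The event-based synchronization mechanism can be sketched as a ready-barrier per shot; the class and method names below are hypothetical, not the system's actual interface:

```python
class ShotEventSync:
    """Minimal event barrier for multi-drone shot execution: the shot
    is triggered only once every assigned drone has reported ready,
    which also absorbs small timing inaccuracies from shot planning."""

    def __init__(self, shot_id, drone_ids):
        self.shot_id = shot_id
        self.pending = set(drone_ids)  # drones that have not reported yet

    def report_ready(self, drone_id):
        """Register a drone's ready event; returns True once the shot
        can start on all drones."""
        self.pending.discard(drone_id)
        return not self.pending
```

A distributed scheduler on each drone would publish its ready event and trigger its local shot controller when the barrier releases.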
Deep Neural Network-Based Cooperative Visual Tracking Through Multiple Micro Aerial Vehicles
Multi-camera tracking of humans and animals in outdoor environments is a relevant and challenging problem. Our approach involves a team of cooperating micro aerial vehicles (MAVs) with on-board cameras only. Deep neural networks (DNNs) often fail at detecting small-scale objects or those that are far away from the camera, which are typical characteristics of a scenario with aerial robots. Thus, the core problem addressed in this paper is how to achieve on-board, online, continuous and accurate vision-based detections using DNNs for visual person tracking through MAVs. Our solution leverages cooperation among multiple MAVs and active selection of the most informative image regions. We demonstrate the efficiency of our approach through simulations with up to 16 robots and real robot experiments involving two aerial robots tracking a person, while maintaining an active perception-driven formation. ROS-based source code is provided for the benefit of the community.
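The active selection of informative image regions can be illustrated by cropping a fixed-size window around the predicted target position; the function name and parameters are illustrative, not the paper's API:

```python
def select_roi(pred_x, pred_y, img_w, img_h, roi_size):
    """Return an (x0, y0, x1, y1) crop window centred on the predicted
    target pixel, clamped to the image bounds. Running the DNN on this
    region instead of the downscaled full frame keeps small or distant
    targets at a detectable pixel scale."""
    half = roi_size // 2
    x0 = min(max(pred_x - half, 0), img_w - roi_size)
    y0 = min(max(pred_y - half, 0), img_h - roi_size)
    return (x0, y0, x0 + roi_size, y0 + roi_size)
```

In a cooperative setting, each MAV's predicted target position (fused from teammates' detections) would drive its own region selection.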