223 research outputs found

    Safe Local Exploration for Replanning in Cluttered Unknown Environments for Micro-Aerial Vehicles

    In order to enable Micro-Aerial Vehicles (MAVs) to assist in complex, unknown, unstructured environments, they must be able to navigate with guaranteed safety, even when faced with a cluttered environment of which they have no prior knowledge. While trajectory optimization-based local planners have been shown to perform well in these cases, prior work either does not address how to deal with local minima in the optimization problem, or solves it by using an optimistic global planner. We present a conservative trajectory optimization-based local planner, coupled with a local exploration strategy that selects intermediate goals. We perform extensive simulations to show that this system performs better than the standard approach of using an optimistic global planner, and also outperforms doing a single exploration step when the local planner is stuck. The method is validated through experiments in a variety of highly cluttered environments including a dense forest. These experiments show the complete system running in real time fully onboard an MAV, mapping and replanning at 4 Hz. Comment: Accepted to ICRA 2018 and RA-L 2018
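
    As a rough, hedged illustration of the local-exploration idea described above (a sketch, not the paper's implementation), the following Python snippet picks an intermediate goal when the optimizer is stuck by sampling candidates in already-observed free space and keeping the one closest to the global goal. The map query is_known_free, the sampling radius, and the cost are assumptions made for this sketch.

    import numpy as np

    def select_intermediate_goal(current, goal, is_known_free,
                                 radius=3.0, samples=200, seed=0):
        """Sample candidate intermediate goals in known free space around the MAV
        and return the one that makes the most progress toward the global goal."""
        rng = np.random.default_rng(seed)
        best, best_cost = None, np.inf
        for _ in range(samples):
            candidate = current + rng.uniform(-radius, radius, size=3)
            if not is_known_free(candidate):         # conservative: observed free space only
                continue
            cost = np.linalg.norm(candidate - goal)  # remaining distance to the global goal
            if cost < best_cost:
                best, best_cost = candidate, cost
        return best                                  # None if no suitable candidate was found

    # Toy usage with a stand-in map query (a 10 m free sphere around the origin):
    free = lambda p: np.linalg.norm(p) < 10.0
    subgoal = select_intermediate_goal(np.zeros(3), np.array([8.0, 0.0, 1.0]), free)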

    EARL: Eye-on-Hand Reinforcement Learner for Dynamic Grasping with Active Pose Estimation

    In this paper, we explore the dynamic grasping of moving objects through active pose tracking and reinforcement learning for hand-eye coordination systems. Most existing vision-based robotic grasping methods implicitly assume that target objects are stationary or moving predictably. Grasping unpredictably moving objects presents a unique set of challenges. For example, a pre-computed robust grasp can become unreachable or unstable as the target object moves, and motion planning must also be adaptive. In this work, we present a new approach, Eye-on-hAnd Reinforcement Learner (EARL), for enabling coupled Eye-on-Hand (EoH) robotic manipulation systems to perform real-time active pose tracking and dynamic grasping of novel objects without explicit motion prediction. EARL readily addresses many thorny issues in automated hand-eye coordination, including fast tracking of the 6D object pose from vision, learning a control policy for a robotic arm to track a moving object while keeping it in the camera's field of view, and performing dynamic grasping. We demonstrate the effectiveness of our approach in extensive experiments on multiple commercial robotic arms, in both simulation and complex real-world tasks. Comment: Presented at IROS 2023. Corresponding author: Siddarth Jai
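
    To make the hand-eye loop concrete, below is a deliberately simplified, purely kinematic stand-in in Python: a proportional servoing step toward a moving object with a grasp trigger. EARL learns the tracking policy with reinforcement learning and estimates the object pose from the wrist camera; the gains, time step, and toy rollout here are assumptions for illustration only.

    import numpy as np

    def track_and_grasp_step(ee_pos, obj_pos, obj_vel,
                             gain=1.5, grasp_dist=0.03, dt=0.05):
        """One control step: servo the end effector toward the moving object and
        report whether a grasp can be triggered. Purely kinematic toy model."""
        cmd_vel = gain * (obj_pos - ee_pos) + obj_vel   # feed-forward the object motion
        new_ee = ee_pos + cmd_vel * dt
        grasp_now = np.linalg.norm(new_ee - obj_pos) < grasp_dist
        return new_ee, grasp_now

    # Toy rollout: object drifting along x, end effector starting 0.3 m away.
    ee, obj, vel = np.zeros(3), np.array([0.3, 0.0, 0.0]), np.array([0.05, 0.0, 0.0])
    for _ in range(100):
        obj = obj + vel * 0.05
        ee, done = track_and_grasp_step(ee, obj, vel)
        if done:
            break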

    Agile Reactive Navigation for A Non-Holonomic Mobile Robot Using A Pixel Processor Array

    This paper presents an agile reactive navigation strategy for driving a non-holonomic ground vehicle around a preset course of gates in a cluttered environment using a low-cost processor array sensor. This enables machine vision tasks to be performed directly on the sensor's image plane, rather than on a separate general-purpose computer. We demonstrate a small ground vehicle running through or avoiding multiple gates at high speed using minimal computational resources. To achieve this, target tracking algorithms are developed for the Pixel Processor Array, and captured images are processed directly on the vision sensor to acquire the target information used to control the ground vehicle. The algorithm can run at up to 2000 fps outdoors and 200 fps at indoor illumination levels. Conducting image processing at the sensor level avoids the image-transfer bottleneck encountered with conventional sensors. The real-time performance and robustness of the on-board image processing are validated through experiments. Experimental results demonstrate the algorithm's ability to enable a ground vehicle to navigate at an average speed of 2.20 m/s while passing through multiple gates, and 3.88 m/s for a 'slalom' task in an environment featuring significant visual clutter. Comment: 7 pages
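
    As an illustration of how a gate position reported on the image plane can steer a non-holonomic vehicle (a generic steering law under assumed gains and image width, not the controller from the paper), the Python sketch below maps the gate's horizontal pixel offset to a turn rate for a unicycle model, stepped at an assumed 200 fps.

    import math

    def steer_from_pixel_error(gate_px_x, img_width=256, v=2.0, k_turn=3.0):
        """Map the gate's horizontal pixel offset to (linear, angular) velocity."""
        err = (gate_px_x - img_width / 2) / (img_width / 2)  # normalised to [-1, 1]
        return v, -k_turn * err                              # turn toward the gate

    def unicycle_step(x, y, theta, v, omega, dt=1.0 / 200):  # 200 fps -> dt = 5 ms
        """Integrate the non-holonomic unicycle model one frame forward."""
        return (x + v * math.cos(theta) * dt,
                y + v * math.sin(theta) * dt,
                theta + omega * dt)

    # Toy usage: gate seen 40 px to the right of the image centre.
    v_cmd, omega_cmd = steer_from_pixel_error(168)
    pose = unicycle_step(0.0, 0.0, 0.0, v_cmd, omega_cmd)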

    Optimal Multi-UAV Trajectory Planning for Filming Applications

    Teams of multiple Unmanned Aerial Vehicles (UAVs) are a promising system for cinematic video recording, as they can cover large-scale outdoor scenarios and capture complementary views of several action points. Generating the trajectories of the UAVs plays a key role, as it must be ensured that they comply with requirements on system dynamics, smoothness, and safety. Optimization-based approaches to multi-UAV trajectory planning are benefiting from the rise of numerical methods for nonlinear optimization. In particular, these methods are rather promising for video recording applications, as they enable multiple constraints and objectives to be formulated, such as trajectory smoothness, compliance with UAV and camera dynamics, avoidance of obstacles and inter-UAV conflicts, and mutual UAV visibility. The main objective of this thesis is to plan online trajectories for multi-UAV teams in video applications, formulating novel optimization problems and solving them in real time. The thesis begins by presenting a framework for carrying out autonomous cinematography missions with a team of UAVs. This framework enables media directors to design missions involving different types of shots with one or multiple cameras, running sequentially or concurrently. Second, the thesis proposes a novel nonlinear formulation for the challenging problem of computing optimal multi-UAV trajectories for cinematography, integrating UAV dynamics and collision avoidance constraints together with cinematographic aspects such as smoothness, gimbal mechanical limits, and mutual camera visibility. Lastly, the thesis describes a method for autonomous aerial recording with distributed lighting by a team of UAVs. The multi-UAV trajectory optimization problem is decoupled into two steps in order to tackle nonlinear cinematographic aspects and obstacle avoidance at separate stages. This allows the trajectory planner to run in real time and to react online to changes in dynamic environments. It is important to note that all the methods in the thesis have been validated by means of extensive simulations and field experiments. Moreover, all the software components have been developed as open source.
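
    A heavily simplified illustration of the kind of optimization involved is sketched below in Python with SciPy: two UAVs, planar waypoints, a smoothness term, a soft inter-UAV separation penalty, and soft start/goal constraints. The thesis' actual formulations additionally handle UAV and gimbal dynamics, obstacle avoidance, and mutual camera visibility; the weights, dimensions, and two-drone setup here are arbitrary assumptions for the sketch.

    import numpy as np
    from scipy.optimize import minimize

    N, D = 8, 2                      # 8 waypoints per UAV, 2 UAVs, planar for brevity
    starts = np.array([[0.0, 0.0], [0.0, 4.0]])
    goals = np.array([[10.0, 4.0], [10.0, 0.0]])
    d_min = 2.0                      # required inter-UAV separation [m]

    def cost(flat):
        traj = flat.reshape(D, N, 2)
        smooth = sum(np.sum(np.diff(traj[i], 2, axis=0) ** 2) for i in range(D))
        sep = np.linalg.norm(traj[0] - traj[1], axis=1)
        collision = np.sum(np.maximum(0.0, d_min - sep) ** 2)   # hinge penalty
        ends = sum(np.sum((traj[i][0] - starts[i]) ** 2)
                   + np.sum((traj[i][-1] - goals[i]) ** 2) for i in range(D))
        return smooth + 10.0 * collision + 100.0 * ends

    # Initial guess: straight lines from start to goal for each UAV.
    x0 = np.stack([np.linspace(starts[i], goals[i], N) for i in range(D)]).ravel()
    res = minimize(cost, x0, method="L-BFGS-B")
    trajectories = res.x.reshape(D, N, 2)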