    Towards Adaptive, Self-Configuring Networked Unmanned Aerial Vehicles

    Networked drones have the potential to transform various application domains; yet their adoption, particularly in indoor and forest environments, has been stymied by the lack of accurate maps and autonomous navigation abilities in the absence of GPS, the lack of highly reliable, energy-efficient wireless communications, and the challenges of visually inferring and understanding an environment with resource-limited individual drones. We advocate a novel vision for the research community: the development of distributed, localized algorithms that enable networked drones to dynamically coordinate, performing adaptive beamforming to achieve high-capacity directional aerial communications and collaborative machine learning to simultaneously localize, map, and visually infer the challenging environment, even when individual drones are limited in computation and communication due to payload restrictions.

    Navigation and Guidance for Autonomous Quadcopter Drones Using Deep Learning on Indoor Corridors

    Autonomous drones require accurate navigation and localization algorithms to carry out their duties. Outdoors, drones can use GPS for navigation and localization; indoors, however, GPS is often unreliable or entirely unavailable. In this research, an autonomous indoor drone navigation model was therefore built using deep learning to navigate a drone automatically, especially in indoor corridor areas. Only the Caddx Ratel 2 FPV camera mounted on the drone was used as input to the deep learning models, which steer the drone forward without colliding with the corridor walls. The research produced two deep learning models: a rotation model that corrects the drone's orientation deviation, with a loss of 0.0010 and a mean squared error of 0.0009, and a translation model that corrects the drone's translation deviation, with a loss of 0.0140 and a mean squared error of 0.011. Deploying the two models on an autonomous drone achieves an NCR value of 0.2. The conclusion drawn from these results is that the difference in resolution and field of view between the images captured by the drone's FPV camera and the images used to train the models causes a discrepancy in output values during deployment, which produces the low NCR value.
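
    The abstract does not give the network architectures, so the following is a minimal, hypothetical sketch of the setup it describes: two small convolutional regressors, one for the rotation (orientation) deviation and one for the translation (lateral) deviation, each mapping a resized FPV frame to a single normalized correction and trained with the mean-squared-error loss reported above. The input size, layer choices, and [-1, 1] output range are illustrative assumptions.

    import torch
    import torch.nn as nn

    class DeviationRegressor(nn.Module):
        """Maps one camera frame to a single deviation estimate in [-1, 1]."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Sequential(nn.Flatten(), nn.Linear(64, 1), nn.Tanh())

        def forward(self, x):
            return self.head(self.features(x))

    rotation_model = DeviationRegressor()     # corrects orientation deviation
    translation_model = DeviationRegressor()  # corrects lateral deviation

    frame = torch.randn(1, 3, 120, 160)       # stand-in for a resized FPV frame
    yaw_correction = rotation_model(frame)
    lateral_correction = translation_model(frame)
    # Trained with MSE against ground-truth deviations, as in the abstract:
    loss = nn.MSELoss()(yaw_correction, torch.zeros_like(yaw_correction))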

    Learning to Fly by Crashing

    How do you learn to navigate an Unmanned Aerial Vehicle (UAV) and avoid obstacles? One approach is to use a small dataset collected by human experts; however, high-capacity learning algorithms tend to overfit when trained with little data. An alternative is to use simulation, but the gap between simulation and the real world remains large, especially for perception problems. The reason most research avoids using large-scale real data is the fear of crashes! In this paper, we propose to bite the bullet and collect a dataset of crashes itself! We build a drone whose sole purpose is to crash into objects: it samples naive trajectories and crashes into random objects. We crash our drone 11,500 times to create one of the biggest UAV crash datasets. This dataset captures the different ways in which a UAV can crash. We use all this negative flying data in conjunction with positive data sampled from the same trajectories to learn a simple yet powerful policy for UAV navigation. We show that this simple self-supervised model is quite effective in navigating the UAV even in extremely cluttered environments with dynamic obstacles, including humans. For supplementary video see: https://youtu.be/u151hJaGKU
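
    A minimal sketch of the self-supervised idea the abstract describes: because every trajectory ends in a crash, frames far from the crash point can be labeled "safe" and frames near it "unsafe" without any human annotation, and a binary classifier can be trained on those labels. The margin, network, and steering rule below are illustrative assumptions, not the paper's exact pipeline.

    import torch
    import torch.nn as nn

    def label_trajectory(num_frames: int, margin: int) -> list[int]:
        """Frames more than `margin` steps before the crash are safe (1), the rest unsafe (0)."""
        return [1 if t < num_frames - margin else 0 for t in range(num_frames)]

    classifier = nn.Sequential(
        nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1),                     # logit: "safe to keep flying toward this view"
    )

    frames = torch.randn(8, 3, 120, 160)      # one short trajectory ending in a crash
    labels = torch.tensor(label_trajectory(8, margin=3), dtype=torch.float32)
    loss = nn.BCEWithLogitsLoss()(classifier(frames).squeeze(1), labels)
    # At flight time, scoring left/center/right crops of the current view and
    # steering toward the highest-scoring crop yields a simple navigation policy.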

    An Explicit Method for Fast Monocular Depth Recovery in Corridor Environments

    Monocular cameras are extensively employed in indoor robotics, but their performance in visual odometry, depth estimation, and related applications is limited by the absence of scale information. Depth estimation is the process of recovering a dense depth map from a corresponding input image. Existing research mostly addresses this problem with deep learning-based approaches, yet their inference speed is slow, leading to poor real-time capability. To tackle this challenge, we propose an explicit method for rapid monocular depth recovery specifically designed for corridor environments, leveraging the principles of nonlinear optimization. We adopt a virtual camera assumption to make full use of the prior geometric features of the scene. The depth estimation problem is transformed into an optimization problem by minimizing a geometric residual. Furthermore, a novel depth plane construction technique is introduced to categorize spatial points by their possible depths, facilitating swift depth estimation in enclosed structural scenarios such as corridors. We also propose a new corridor dataset, named Corr_EH_z, which contains images of a variety of corridors captured by a UGV camera. An exhaustive set of experiments in different corridors demonstrates the efficacy of the proposed algorithm.
    Comment: 10 pages, 8 figures. arXiv admin note: text overlap with arXiv:2111.08600 by other authors
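
    A minimal sketch of the explicit, optimization-based idea: under a corridor (virtual camera) assumption, each pixel ray intersects one of a few structural planes (floor, walls), so depth is closed-form once the plane parameters are known, and the plane parameters themselves are recovered by minimizing a geometric residual. The floor-only model, residual, and stand-in measurements below are assumptions for illustration.

    import numpy as np
    from scipy.optimize import least_squares

    def ray_depth(ray, normal, offset):
        """Depth along unit `ray` to the plane normal . x = offset (camera frame)."""
        return offset / np.dot(normal, ray)

    floor_normal = np.array([0.0, 1.0, 0.0])       # y-down camera convention
    rays = np.array([[0.1, 0.4, 1.0], [0.0, 0.3, 1.0], [-0.2, 0.5, 1.0]])
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    observed = np.array([3.1, 4.2, 2.4])           # stand-in sparse depth measurements

    def residual(params):
        (height,) = params                         # unknown camera height above the floor
        pred = np.array([ray_depth(r, floor_normal, height) for r in rays])
        return pred - observed                     # geometric residual to minimize

    height = least_squares(residual, x0=[1.0]).x[0]
    # Any pixel classified onto the floor plane now has explicit, fast depth:
    depth = ray_depth(rays[0], floor_normal, height)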

    Enabling Multi-LiDAR Sensing in GNSS-Denied Environments: SLAM Dataset, Benchmark, and UAV Tracking with LiDAR-as-a-camera

    The rise of Light Detection and Ranging (LiDAR) sensors has profoundly impacted industries ranging from automotive to urban planning. As these sensors become increasingly affordable and compact, their applications are diversifying, driving precision and innovation. This thesis delves into LiDAR's advancements in autonomous robotic systems, with a focus on its role in simultaneous localization and mapping (SLAM) methodologies and in LiDAR-as-a-camera tracking of Unmanned Aerial Vehicles (UAVs). Our contributions span two primary domains: the Multi-Modal LiDAR SLAM Benchmark and LiDAR-as-a-camera UAV tracking. In the former, we have expanded our previous multi-modal LiDAR dataset by adding data sequences from more scenarios. In contrast to the previous dataset, we employ different ground-truth-generating approaches: we propose a new multi-modal, multi-LiDAR, SLAM-assisted, ICP-based sensor fusion method for generating ground truth maps. Additionally, we supplement our data with new open-road sequences with GNSS-RTK. This enriched dataset, supported by high-resolution LiDAR, provides detailed insights through an evaluation of ten configurations pairing diverse LiDAR sensors with state-of-the-art SLAM algorithms. In the latter contribution, we leverage a custom YOLOv5 model trained on panoramic low-resolution images from LiDAR reflectivity (LiDAR-as-a-camera) to detect UAVs, demonstrating the superiority of this approach over point-cloud or image-only methods. Additionally, we evaluate the real-time performance of our approach on the Nvidia Jetson Nano, a popular mobile computing platform. Overall, our research underscores the transformative potential of integrating advanced LiDAR sensors with autonomous robotics. By bridging the gaps between different technological approaches, we pave the way for more versatile and efficient applications in the future.
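
    A minimal sketch of the LiDAR-as-a-camera detection step: the panoramic reflectivity image from the LiDAR is treated as an ordinary grayscale image and passed to a custom-trained YOLOv5 model via torch.hub. The weights filename and image dimensions are assumptions; the 128 x 2048 shape mirrors a 128-beam spinning LiDAR.

    import numpy as np
    import torch

    # Custom YOLOv5 weights trained on panoramic reflectivity images (hypothetical path).
    model = torch.hub.load("ultralytics/yolov5", "custom", path="lidar_uav_best.pt")

    # Stand-in for one panoramic reflectivity frame, replicated to three
    # channels because the detector expects an RGB-like input.
    reflectivity = np.random.randint(0, 255, (128, 2048), dtype=np.uint8)
    panorama = np.stack([reflectivity] * 3, axis=-1)

    results = model(panorama)             # run detection on the LiDAR "image"
    detections = results.xyxy[0]          # rows of [x1, y1, x2, y2, conf, class]
    # Each box maps back to point-cloud columns, so the UAV's 3D position can
    # be recovered from the corresponding range measurements.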

    Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators

    Robust velocity and position estimation is crucial for autonomous robot navigation. Optical flow-based methods for autonomous navigation have been receiving increasing attention in tandem with the development of micro unmanned aerial vehicles. This paper proposes a kernel cross-correlator (KCC) based algorithm to determine optical flow using a monocular camera, named correlation flow (CF). Correlation flow is able to provide reliable and accurate velocity estimation and is robust to motion blur. In addition, it can also estimate altitude velocity and yaw rate, which are not available from traditional methods. Autonomous flight tests on a quadcopter show that correlation flow can provide robust trajectory estimation with very low processing power. The source code is released based on the ROS framework.
    Comment: 2018 International Conference on Robotics and Automation (ICRA 2018)
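
    A minimal sketch of the cross-correlator idea underlying correlation flow: the dominant translation between two consecutive frames appears as the peak of their normalized cross-correlation, computed cheaply in the Fourier domain. This shows plain phase correlation only; the paper's kernelized formulation and its altitude-velocity and yaw-rate estimates are not reproduced here.

    import numpy as np

    def correlation_shift(prev: np.ndarray, curr: np.ndarray):
        """Return the (dy, dx) shift of `curr` relative to `prev` via FFT correlation."""
        F = np.fft.fft2(prev)
        G = np.fft.fft2(curr)
        R = G * np.conj(F)
        R /= np.abs(R) + 1e-9                         # keep phase, discard magnitude
        corr = np.fft.ifft2(R).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        if dy > prev.shape[0] // 2:                   # wrap to signed displacements
            dy -= prev.shape[0]
        if dx > prev.shape[1] // 2:
            dx -= prev.shape[1]
        return dy, dx

    prev = np.zeros((64, 64)); prev[20:30, 20:30] = 1.0
    curr = np.roll(prev, shift=(3, -5), axis=(0, 1))  # simulated camera motion
    dy, dx = correlation_shift(prev, curr)            # -> (3, -5)
    # Scaled by altitude and camera intrinsics, (dx, dy) gives metric velocity.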

    Vision-Based Monocular SLAM in Micro Aerial Vehicle

    Micro Aerial Vehicles (MAVs) are popular for their efficiency, agility, and light weight. They can navigate dynamic environments that cannot be accessed by humans or traditional aircraft. However, these MAVs typically rely on GPS, which makes operation difficult in GPS-denied areas where signals are obstructed by buildings and other obstacles. Simultaneous Localization and Mapping (SLAM) in an unknown environment can solve the aforementioned problems faced by flying robots. A rotation- and scale-invariant visual solution, Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM), is one of the best solutions for localization and mapping using monocular vision. In this paper, ORB-SLAM3 is used to localize the Tello micro aerial vehicle and map an unknown environment. The effectiveness of ORB-SLAM3 was tested in a variety of indoor environments. An integrated adaptive controller was used for autonomous flight, drawing on the 3D map produced by ORB-SLAM3 and our proposed novel technique for robust initialization of the SLAM system during flight. The results show that ORB-SLAM3 can provide accurate localization and mapping for flying robots, even in challenging scenarios with fast motion, large camera movements, and dynamic environments. Furthermore, our results show that the proposed system is capable of navigating and mapping challenging indoor situations.
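
    The ORB-SLAM3 system itself is C++ and is not sketched here; below is a minimal, hypothetical sketch of the front end such a pipeline feeds on: ORB features (the descriptor family ORB-SLAM3 tracks) extracted from the Tello's video stream using djitellopy and OpenCV. The `slam.track` call in the comment is a hypothetical binding, not a real API.

    import cv2
    from djitellopy import Tello

    tello = Tello()
    tello.connect()
    tello.streamon()
    reader = tello.get_frame_read()

    orb = cv2.ORB_create(nfeatures=1000)   # rotation- and scale-invariant features

    for _ in range(100):                   # process a short burst of frames
        frame = reader.frame               # djitellopy delivers RGB numpy frames
        if frame is None:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        keypoints, descriptors = orb.detectAndCompute(gray, None)
        # In the full system these features feed ORB-SLAM3's tracking thread,
        # e.g. pose = slam.track(gray, timestamp)   # hypothetical binding

    tello.streamoff()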