
    Fast, Autonomous Flight in GPS-Denied and Cluttered Environments

    One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution, and show how the distinct components are integrated to enable smooth robot operation. We provide critical insight into hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. The experiments show that our proposed solution delivers fast and robust autonomous aerial navigation in cluttered, GPS-denied environments. Comment: Pre-peer-review version of the article accepted in the Journal of Field Robotics.

    Detection and estimation of moving obstacles for a UAV

    In recent years, research interest in Unmanned Aerial Vehicles (UAVs) has grown rapidly because of their potential use in a wide range of applications. In this paper, we propose vision-based detection and position/velocity estimation of moving obstacles for a UAV. Knowledge of a moving obstacle's state, i.e., its position and velocity, is essential for better performance of an intelligent UAV system, especially in autonomous navigation and landing tasks. The novelties are: (1) the design and implementation of a localization method using a sensor-fusion methodology that fuses Inertial Measurement Unit (IMU) signals with Pozyx (ultra-wideband) signals; (2) the development of a method for detecting and estimating moving obstacles based on an on-board vision system. Experimental results validate the effectiveness of the proposed approach. (C) 2019, IFAC (International Federation of Automatic Control) Hosting by Elsevier Ltd. All rights reserved.
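
    As an illustration of the IMU/Pozyx fusion described above, the sketch below shows a minimal 1-D linear Kalman filter that propagates the state with IMU accelerations and corrects it with Pozyx position fixes. It is not the authors' implementation; the state layout, sample rate and noise values are assumptions made for the example.

        # Minimal sketch, not the paper's code: fuse IMU acceleration
        # (prediction step) with Pozyx position fixes (update step) in 1-D.
        import numpy as np

        DT = 0.01                               # IMU period [s] (assumed)
        F = np.array([[1.0, DT], [0.0, 1.0]])   # state: [position, velocity]
        B = np.array([[0.5 * DT**2], [DT]])     # acceleration input model
        H = np.array([[1.0, 0.0]])              # Pozyx measures position only
        Q = 1e-3 * np.eye(2)                    # process noise (tuning guess)
        R = np.array([[0.05**2]])               # ~5 cm Pozyx noise (assumed)

        x = np.zeros((2, 1))                    # initial state
        P = np.eye(2)                           # initial covariance

        def predict(accel):
            """Propagate the state with one IMU acceleration sample."""
            global x, P
            x = F @ x + B * accel
            P = F @ P @ F.T + Q

        def update(pozyx_pos):
            """Correct the state with one Pozyx position measurement."""
            global x, P
            y = pozyx_pos - H @ x               # innovation
            S = H @ P @ H.T + R                 # innovation covariance
            K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P

    The same predict/update structure extends to 3-D position and velocity; the filter actually used in the paper may of course differ in detail.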

    State estimation of a cheetah spine and tail using an inertial sensor network

    The cheetah (Acinonyx jubatus) is by far the most manoeuvrable and agile terrestrial animal, yet little is known, in biomechanical terms, about how it achieves these feats of manoeuvrability. The cheetah's transient motions all involve rapid flicking of its tail and flexing of its spine. The aim of this research was to develop tools (hardware and software) for capturing the motion of the cheetah's tail and spine in order to understand them better; such insight may inspire and aid the design of bio-inspired robotic platforms. A long-standing assumption was that the tail is heavy and acts as a counterbalance or rudder, yet this had never been tested. Contrary to that assumption, necropsy results showed that the tail is in fact light, with relatively low inertia. Fur from the tail was used in wind-tunnel experiments to determine the drag coefficient of a cheetah tail. No researchers have previously tracked the motion of a cheetah's spine and tail during rapid manoeuvres by placing multiple sensors on the animal; doing so requires a 3D dynamic model of the spine and tail to study the motion accurately. A wireless sensor network was built, and three different filters and state-estimation algorithms were designed and validated with a mechanical rig, which simulated the tail and spine motion, and a camera system. The network consists of three sensors on the tail (base, middle and tip) as well as a hypothetical collar sensor (GPS and WiFi were not implemented).
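
    To make the per-sensor state estimation concrete, here is a minimal complementary filter for a single tail-segment pitch angle from one IMU. It is a generic textbook filter, not one of the three filters developed in this work; the blend factor and sample rate are assumed values.

        # Illustrative sketch only: estimate one segment's pitch by blending
        # integrated gyro rate (high-frequency path) with the accelerometer
        # gravity direction (low-frequency path).
        import numpy as np

        ALPHA = 0.98    # gyro/accel blend factor (assumed tuning)
        DT = 0.005      # 200 Hz IMU rate (assumed)

        def complementary_filter(theta_prev, gyro_rate, accel_x, accel_z):
            """Return the new pitch estimate [rad].

            theta_prev : previous pitch estimate [rad]
            gyro_rate  : angular rate about the pitch axis [rad/s]
            accel_x/z  : body-frame accelerations [m/s^2]
            """
            theta_gyro = theta_prev + gyro_rate * DT        # integrate gyro
            theta_accel = np.arctan2(accel_x, accel_z)      # gravity angle
            return ALPHA * theta_gyro + (1 - ALPHA) * theta_accel

    Chaining such per-segment estimates from the base, middle and tip sensors yields the tail's pose along its length, which is the role the sensor network plays here.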

    Keyframe-based visual–inertial odometry using nonlinear optimization

    Combining visual and inertial measurements has become popular in mobile robotics, since the two sensing modalities offer complementary characteristics that make them the ideal choice for accurate visual–inertial odometry or simultaneous localization and mapping (SLAM). While the problem has historically been addressed with filtering, advances in visual estimation suggest that nonlinear optimization offers superior accuracy while remaining tractable in complexity thanks to the sparsity of the underlying problem. Taking inspiration from these findings, we formulate a rigorously probabilistic cost function that combines reprojection errors of landmarks with inertial terms. The problem is kept tractable, and real-time operation ensured, by limiting the optimization to a bounded window of keyframes through marginalization. Keyframes may be spaced arbitrarily in time, yet remain related by linearized inertial terms. We present evaluation results on complementary datasets recorded with our custom-built stereo visual–inertial hardware, which accurately synchronizes accelerometer and gyroscope measurements with imagery. Both a stereo and a monocular version of our algorithm, with and without online extrinsics estimation, are compared against ground truth. Furthermore, we compare performance to an implementation of a state-of-the-art stochastic-cloning sliding-window filter, a competitive reference that performs tightly coupled filtering-based visual–inertial odometry. While our approach admittedly demands more computation, we show its superior accuracy.
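
    The combined objective described above can be written, in notation standard for this line of work (our gloss, not quoted from the paper), as

        J(\mathbf{x}) = \sum_{i}\sum_{k}\sum_{j \in \mathcal{J}(i,k)}
            \mathbf{e}_r^{i,j,k\top}\,\mathbf{W}_r^{i,j,k}\,\mathbf{e}_r^{i,j,k}
          + \sum_{k=1}^{K-1} \mathbf{e}_s^{k\top}\,\mathbf{W}_s^{k}\,\mathbf{e}_s^{k},

    where e_r^{i,j,k} is the reprojection error of landmark j observed in camera i at keyframe k, e_s^k is the linearized inertial error term linking consecutive keyframes, and the weights W are the corresponding inverse measurement covariances. Marginalizing the states that drop out of the keyframe window is what keeps the optimization bounded.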