
    Unified Robust Path Planning and Optimal Trajectory Generation for Efficient 3D Area Coverage of Quadrotor UAVs

    Area coverage is an important problem in robotics, widely applied in search and rescue, offshore industrial inspection, and smart agriculture. This paper presents a novel unified robust path planning, optimal trajectory generation, and control architecture for quadrotor coverage missions. To achieve safe navigation in uncertain working environments containing obstacles, the proposed algorithm applies a modified probabilistic roadmap to generate a connected search graph that accounts for the risk of collision with obstacles. Furthermore, a recursive node and link generation scheme yields a more efficient search graph without extra complexity, reducing the computational burden of the planning procedure. An optimal three-dimensional trajectory generation method is then proposed to connect the optimal discrete path produced by the planner, and a robust control policy is designed based on the cascade NLH∞ framework. The integrated framework compensates for the effects of uncertainties and disturbances while accomplishing the area coverage mission. The feasibility, robustness, and performance of the proposed framework are evaluated through Monte Carlo simulations, a PX4 Software-In-the-Loop test facility, and real-world experiments.
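
    The risk-aware roadmap construction can be illustrated with a minimal sketch: sampling graph nodes that keep an inflated clearance from obstacles, so that position uncertainty is absorbed by a safety margin. The spherical obstacle model and the `risk_radius` parameter below are illustrative assumptions, not the paper's exact collision-risk formulation.

```python
import numpy as np

def sample_risk_aware_nodes(bounds, obstacles, n_nodes, risk_radius, rng=None):
    """Rejection-sample 3D roadmap nodes with an uncertainty-inflated clearance.

    bounds: (min_xyz, max_xyz) corners of the workspace
    obstacles: list of (center, radius) spheres -- a simplifying assumption
    risk_radius: extra clearance absorbing position uncertainty
    """
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    nodes = []
    while len(nodes) < n_nodes:
        p = rng.uniform(lo, hi)
        # Accept the sample only if it clears every inflated obstacle.
        if all(np.linalg.norm(p - np.asarray(c)) > r + risk_radius
               for c, r in obstacles):
            nodes.append(p)
    return np.array(nodes)
```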

    A practical multirobot localization system

    We present a fast and precise vision-based software system for multi-robot localization. The core component of the software is a novel and efficient algorithm for black-and-white pattern detection. The method is robust to variable lighting conditions, achieves sub-pixel precision, and has a computational complexity independent of the processed image size. With off-the-shelf computational equipment and low-cost cameras, the core algorithm can process hundreds of images per second while tracking hundreds of objects with millimeter precision. In addition, we present the method's mathematical model, which makes it possible to estimate the expected localization precision, area of coverage, and processing speed from the camera's intrinsic parameters and the hardware's processing capacity. The correctness of the presented model and the performance of the algorithm in real-world conditions are verified in several experiments. Apart from the method description, we also make the source code publicly available at http://purl.org/robotics/whycon, so it can be used as an enabling technology for various mobile robotics problems.
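
    The abstract's precision model predicts localization accuracy from camera parameters; the basic pinhole relation behind such estimates can be sketched as follows. The function name and the default sub-pixel error are illustrative, not taken from the paper.

```python
def expected_precision_m(focal_px, distance_m, subpixel_err_px=0.1):
    """Rough pinhole estimate of planar localization error in metres.

    One pixel spans roughly distance_m / focal_px metres on a plane
    perpendicular to the optical axis, so a detector with sub-pixel
    accuracy of `subpixel_err_px` yields approximately this metric error.
    """
    return subpixel_err_px * distance_m / focal_px

# e.g. a 1000 px focal length camera observing a marker 5 m away with
# 0.1 px detector accuracy: expected_precision_m(1000, 5.0) -> 0.0005 m
```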

    FPGA-based module for SURF extraction

    We present a complete hardware and software solution: an FPGA-based embedded computer vision module capable of carrying out the SURF image feature extraction algorithm. Aside from image analysis, the module embeds a Linux distribution that makes it possible to run programs tailored to particular applications. The module is based on a Virtex-5 FXT FPGA, which features powerful configurable logic and an embedded PowerPC processor. We describe the module hardware as well as the custom FPGA image processing cores that implement the algorithm's most computationally expensive step, the interest point detection. The module's overall performance is evaluated and compared to CPU- and GPU-based solutions. Results show that the embedded module achieves distinctiveness comparable to the SURF software implementation running on a standard CPU while being faster and consuming significantly less power and space. It therefore enables the use of the SURF algorithm in applications with power and space constraints, such as the autonomous navigation of small mobile robots.
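
    The interest point detector that such FPGA cores accelerate builds on SURF's box-filter approximation of the Hessian, which reduces each filter response to a handful of constant-time lookups in an integral image. A minimal sketch of that core primitive (in Python, for illustration only; the module implements it in configurable logic):

```python
import numpy as np

def integral_image(img):
    # Double cumulative sum; zero-padding makes box_sum indexing uniform.
    ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in four lookups, independent of box size."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
```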

    Spatio-temporal exploration strategies for long-term autonomy of mobile robots

    We present a study of spatio-temporal environment representations and exploration strategies for the long-term deployment of mobile robots in real-world, dynamic environments. We propose a new concept for life-long mobile robot spatio-temporal exploration that aims at building, updating, and maintaining the environment model throughout the long-term deployment. The addition of the temporal dimension to the explored space makes the exploration task a never-ending data-gathering process, which we address by applying information-theoretic exploration techniques to world representations that model the uncertainty of environment states as probabilistic functions of time. We evaluate the performance of different exploration strategies and temporal models on real-world data gathered over the course of several months. The combination of dynamic environment representations with information-gain exploration principles makes it possible to create and maintain up-to-date models of continuously changing environments, enabling efficient and self-improving long-term operation of mobile robots.
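
    One way to realise information-theoretic exploration over such temporal models is to predict the probability of an environment state at a given time and prefer observations where that prediction is most uncertain. The sketch below mirrors the spirit of frequency-based temporal maps and entropy-driven selection; the function names and the spectral model are assumptions, not the authors' exact formulation.

```python
import numpy as np

def predict_state(t, p_mean, components):
    """P(state active at time t) from a small spectral model.

    components: list of (amplitude, angular_freq, phase) terms added to
    the long-term mean p_mean; the result is clipped to a probability.
    """
    p = p_mean + sum(a * np.cos(w * t + phi) for a, w, phi in components)
    return float(np.clip(p, 0.0, 1.0))

def binary_entropy(p):
    p = min(max(p, 1e-9), 1 - 1e-9)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# Exploration picks the location whose predicted state is least certain:
# best = max(models, key=lambda m: binary_entropy(predict_state(t, *m)))
```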

    Fault-tolerant formation driving mechanism designed for heterogeneous MAVs-UGVs groups

    A fault-tolerant method for the stabilization and navigation of 3D heterogeneous formations is proposed in this paper. The presented Model Predictive Control (MPC) based approach makes it possible to deploy compact formations of closely cooperating autonomous aerial and ground robots in surveillance scenarios without the need for precise external localization. Instead, the proposed method relies on top-view visual relative localization provided by the micro aerial vehicles flying above the ground robots, and on simple yet stable vision-based navigation using images from an onboard monocular camera. The MPC-based scheme, together with a fault detection and recovery mechanism, provides a robust solution applicable in complex environments with static and dynamic obstacles. The core of the proposed leader-follower formation driving method lies in representing the entire 3D formation as a convex hull projected along a desired path that the group has to follow. Such an approach provides a collision-free solution and respects the requirement of direct visibility between team members. Uninterrupted visibility is crucial for the employed top-view localization and therefore for the stabilization of the group. The proposed formation driving method and the fault recovery mechanisms are verified in the simulations and hardware experiments presented in the paper.
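
    A minimal kinematic ingredient of such leader-follower formation driving is mapping follower offsets, defined in the leader's path frame, into world coordinates along the desired path. The sketch below shows only this transformation; the MPC optimisation, convex-hull obstacle handling, and fault recovery described in the paper are not reproduced here, and all names are illustrative.

```python
import numpy as np

def formation_targets(leader_pos, heading, offsets):
    """World-frame targets for followers given (forward, left, up) offsets.

    leader_pos: (3,) leader position; heading: path heading in radians;
    offsets: (N, 3) follower displacements in the leader's path frame.
    """
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])  # rotation about the vertical axis
    return np.asarray(leader_pos) + np.asarray(offsets) @ R.T
```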

    Evolutionary optimization for risk-aware heterogeneous multi-agent path planning in uncertain environments

    Cooperative multi-agent systems make it possible to employ miniature robots in tasks ranging from data collection in wide open areas to physical interaction with test subjects in confined environments such as a hive. This paper proposes a new multi-agent path-planning approach that determines a set of trajectories along which the agents collide neither with each other nor with any obstacle. The proposed algorithm leverages a risk-aware probabilistic roadmap algorithm to generate a map, employs node classification to delineate exploration regions, and incorporates a customized genetic framework to address the combinatorial optimization, with the ultimate goal of computing safe trajectories for the team. Furthermore, the proposed planning algorithm makes the agents explore all subdomains of the workspace together, as a formation, allowing the team to perform different tasks or collect multiple datasets for reliable localization or hazard detection. The objective function to be minimized includes two major parts: the distance traveled by all agents over the entire mission, and the probability of collisions between agents or between agents and obstacles. A sampling method is used to evaluate the objective function, accounting for the agents' dynamic behavior under environmental disturbances and uncertainties. The algorithm's performance is evaluated for different group sizes in a simulation environment, and two benchmark scenarios are introduced to compare the exploration behavior. The proposed optimization method exhibits stable and convergent properties regardless of the group size.
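
    The two-part objective can be sketched directly: total travelled distance plus a collision probability estimated by sampling perturbed rollouts of each agent's trajectory. Variable names, the weighting, and the omission of inter-agent collision checks are simplifications for illustration, not the paper's exact formulation.

```python
import numpy as np

def mission_cost(trajectories, rollouts, obstacle_distance,
                 w_risk=10.0, safe_dist=0.5):
    """Objective = total path length + weighted sampled collision probability.

    trajectories: dict agent -> (T, 3) nominal waypoint array
    rollouts: dict agent -> (S, T, 3) perturbed trajectories sampled under
              environmental disturbance and model uncertainty
    obstacle_distance: callable point -> distance to the nearest obstacle
    (inter-agent collision checks are omitted here for brevity)
    """
    length = sum(np.linalg.norm(np.diff(t, axis=0), axis=1).sum()
                 for t in trajectories.values())
    hits, total = 0, 0
    for rolls in rollouts.values():
        for roll in rolls:
            total += 1
            if min(obstacle_distance(p) for p in roll) < safe_dist:
                hits += 1
    return length + w_risk * hits / total
```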

    Image features for visual teach-and-repeat navigation in changing environments

    We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that for long-term autonomous navigation, the viewpoint, scale, and rotation invariance of standard feature extractors is less important than their robustness to mid- and long-term changes in environment appearance. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally occurring seasonal changes. We combine the detection and description components of different feature extractors and evaluate their performance on five datasets collected by mobile vehicles in three different outdoor environments over the course of one year. Moreover, we propose a trainable feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the most promising results were achieved by the SpG/CNN and STAR/GRIEF features, the latter being slightly less robust but faster to compute.
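
    The GRIEF idea builds on BRIEF's core operation, a binary string of pairwise intensity comparisons inside an image patch, with the comparison locations tuned by an evolutionary algorithm rather than fixed. A minimal sketch of the descriptor itself (the evolution loop is omitted):

```python
import numpy as np

def binary_descriptor(patch, pairs):
    """BRIEF-style descriptor: one bit per intensity comparison.

    patch: 2D grayscale array around a keypoint
    pairs: (N, 4) int array of (r1, c1, r2, c2) comparison locations;
           in GRIEF these locations are evolved for seasonal robustness.
    """
    return np.array([patch[r1, c1] < patch[r2, c2]
                     for r1, c1, r2, c2 in pairs], dtype=np.uint8)

# Descriptors are matched by Hamming distance:
# dist = np.count_nonzero(desc_a != desc_b)
```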

    An efficient visual fiducial localisation system

    With use cases that range from external localisation of single robots or robotic swarms to self-localisation in marker-augmented environments and simplified perception through tagging objects in a robot's surroundings, fiducial markers have a wide field of application in robotics. We propose a new family of circular markers that allows for computationally efficient detection, tracking, and identification, as well as full 6D position estimation. At the core of the proposed approach lies the separation of the detection and identification steps, with the former using computationally efficient circular marker detection and the latter utilising an open-ended `necklace encoding', allowing scalability to a large number of individual markers. While the proposed algorithm achieves accuracy similar to other state-of-the-art methods, its experimental evaluation in realistic conditions demonstrates that it can detect markers from larger distances while being up to two orders of magnitude faster than other state-of-the-art fiducial marker detection methods. In addition, the entire system is available as an open-source package at https://github.com/LCAS/whycon.
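
    The `necklace encoding' mentioned above makes marker identification rotation-invariant: the code is read as a circular bit sequence, so every rotation of the same sequence must map to one canonical identifier. A minimal sketch of that canonicalisation (the marker detection and bit-sampling steps are not shown):

```python
def necklace_id(bits):
    """Rotation-invariant ID: the lexicographically smallest rotation.

    bits: list of 0/1 read around the marker's circular code band; any
    angular starting offset yields the same canonical identifier.
    """
    n = len(bits)
    return min(tuple(bits[i:] + bits[:i]) for i in range(n))

# The same physical marker read from two offsets decodes identically:
# necklace_id([1, 0, 1, 1]) == necklace_id([1, 1, 1, 0])  -> True
```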

    Visual teach and generalise (VTAG)—Exploiting perceptual aliasing for scalable autonomous robotic navigation in horticultural environments

    Nowadays, most agricultural robots rely on precise and expensive localisation, typically based on global navigation satellite systems (GNSS) and real-time kinematic (RTK) receivers. Unfortunately, the precision of GNSS localisation decreases significantly in environments where the signal paths between the receiver and the satellites are obstructed, which hampers the deployment of these robots in, e.g., polytunnels or forests. An attractive alternative to GNSS is vision-based localisation and navigation. However, perceptual aliasing and landmark deficiency, typical of agricultural environments, cause traditional image processing techniques, such as feature matching, to fail. We propose an affordable, purely vision-based navigation system which is not only robust to perceptual aliasing but actually exploits the repetitiveness of agricultural environments. Our system extends the classic concept of visual teach and repeat to visual teach and generalise (VTAG). Our teach-and-generalise method uses a deep learning-based image registration pipeline to register similar images through meaningful generalised representations obtained from different but similar areas. The proposed system uses only a low-cost uncalibrated monocular camera and the robot's wheel odometry to produce heading corrections that allow it to traverse crop rows in polytunnels safely. We evaluate this method at our test farm and at a commercial farm on three different robotic platforms, where an operator teaches only a single crop row. With all platforms, the method successfully navigates the majority of rows, with most interventions required at the ends of rows, where the camera no longer sees any repeating landmarks such as poles or crop row tables, or where rows have visual features different from those of the taught row. For one robot, which was taught a single 25 m row, our approach autonomously navigated a total distance of over 3.5 km, reaching a teach-generalisation gain of 140.
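
    The navigation loop reduces to steering corrections derived from how far the live image is shifted relative to the taught (generalised) representation. A small-angle, uncalibrated approximation of that step is sketched below; the deep learning-based registration that produces the shift, and all parameter names, are assumptions made here for illustration.

```python
def heading_correction(pixel_shift, image_width_px, hfov_rad, gain=1.0):
    """Convert a horizontal image registration shift into a steering command.

    pixel_shift: horizontal offset (px) between live and taught images
    hfov_rad: camera horizontal field of view in radians
    Small-angle approximation: bearing error ~ shift / width * hfov.
    """
    bearing_error = pixel_shift / image_width_px * hfov_rad
    return -gain * bearing_error  # steer so as to cancel the error
```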