
    Vision based estimation, localization, and mapping for autonomous vehicles

    In this dissertation, we focus on developing simultaneous localization and mapping (SLAM) algorithms with a robot-centric estimation framework, primarily using monocular vision sensors. A primary contribution of this work is to use a robot-centric mapping framework concurrently with a world-centric localization method. We exploit the differential equation of motion of the normalized pixel coordinates of each point feature in the robot body frame. Another contribution of our work is to exploit a multiple-view geometry formulation with initial- and current-view projections of point features. We extract features from objects surrounding the river and from their reflections, and use the feature correspondences along with the attitude and altitude information of the robot. We demonstrate that the observability of the estimation system is improved by applying our robot-centric mapping framework and multiple-view measurements. Using this framework and multiple-view measurements, including reflections of features, we present a vision-based localization and mapping algorithm that we developed for an unmanned aerial vehicle (UAV) flying in a riverine environment. Our algorithm estimates the 3D positions of point features along a river and the pose of the UAV. Our UAV is equipped with a lightweight monocular camera, an inertial measurement unit (IMU), a magnetometer, an altimeter, and an onboard computer. To our knowledge, we report the first result that exploits the reflections of features in a riverine environment for localization and mapping. We also present an omnidirectional-vision-based localization and mapping system for a lawn-mowing robot. Our algorithm can detect whether the robotic mower is contained in a permitted area. Our robotic mower is modified with an omnidirectional camera, an IMU, a magnetometer, and a vehicle speed sensor. Here, we also exploit the robot-centric mapping framework. The estimator in our system generates a 3D point-based map with landmarks. Concurrently, the estimator defines a boundary of the mowing area by using the estimated trajectory of the mower. The estimated boundary and the landmark map are provided for the estimation of the mowing location and for containment detection. First, we derive a nonlinear observer with contraction analysis and pseudo-measurements of the depth of each landmark to prevent the map estimator from diverging. Of particular interest for this work is ensuring that the estimator for localization and mapping does not fail due to the nonlinearity of the system model. For batch estimation, we design a hybrid extended Kalman smoother for our localization and robot-centric mapping model. Finally, we present a single-camera SLAM algorithm using a convex-optimization-based nonlinear estimator. We validate the effectiveness of our algorithms through numerical simulations and outdoor experiments.
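
    As a rough sketch of the robot-centric feature dynamics described above (under assumed notation, with the camera optical axis along the body z-axis; this is not necessarily the dissertation's exact formulation), let p = [x, y, z]^T be a feature's position in the robot body frame, and let v and omega be the body-frame linear and angular velocities. Rigid-body kinematics gives

        \dot{\mathbf{p}} = -\boldsymbol{\omega} \times \mathbf{p} - \mathbf{v},

    and, in normalized pixel coordinates u = x/z, w = y/z with inverse depth \rho = 1/z,

        \dot{u}    = \omega_1 u w - \omega_2 (1 + u^2) + \omega_3 w + \rho\,(u v_3 - v_1),
        \dot{w}    = \omega_1 (1 + w^2) - \omega_2 u w - \omega_3 u + \rho\,(w v_3 - v_2),
        \dot{\rho} = \rho\,(\omega_1 w - \omega_2 u) + \rho^2 v_3.

    Because depth enters only through \rho, pseudo-measurements of landmark depth (as used in the nonlinear observer mentioned above) attach naturally to this parameterization.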

    Dynamic Reconfiguration in Camera Networks: A Short Survey

    There is a clear trend in camera networks towards enhanced functionality and flexibility, and a fixed static deployment is typically not sufficient to fulfill these increased requirements. Dynamic network reconfiguration helps to optimize network performance on the currently required tasks while considering the available resources. Although several reconfiguration methods have been proposed recently, e.g., for maximizing global scene coverage or maximizing the image quality of specific targets, there is a lack of a general framework highlighting the key components shared by all these systems. In this paper, we propose a reference framework for network reconfiguration and present a short survey of some of the most relevant state-of-the-art works in this field, showing how they can be reformulated in our framework. Finally, we discuss the main open research challenges in camera network reconfiguration.
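
    To make the coverage-maximization task concrete, here is a minimal greedy sketch in Python under a simplified discrete model (camera settings as, e.g., pan-tilt-zoom presets, with a user-supplied visibility predicate); the names and the model are illustrative and not from the paper:

        def coverage(config, targets, covers):
            # Number of targets seen by at least one chosen camera setting;
            # covers(setting, target) -> bool is a user-supplied visibility model.
            return sum(any(covers(s, t) for s in config) for t in targets)

        def reconfigure(cameras, settings, targets, covers):
            # Greedily pick one setting per camera, maximizing global target
            # coverage at each step.
            config = []
            for cam in cameras:
                best = max(settings[cam],
                           key=lambda s: coverage(config + [s], targets, covers))
                config.append(best)
            return config

    Since coverage of this form is submodular, a greedy assignment like this is a common baseline for such reconfiguration objectives.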

    Skyline-based localisation for aggressively manoeuvring robots using UV sensors and spherical harmonics

    Place recognition is a key capability for navigating robots. While significant advances have been achieved on large, stable platforms such as robot cars, achieving robust performance on rapidly manoeuvring platforms in outdoor natural conditions remains a challenge, with few systems able to deal with both variable conditions and the large tilt variations caused by rough terrain. Taking inspiration from biology, we propose a novel combination of sensory modality and image processing to obtain a significant improvement in the robustness of sequence-based image matching for place recognition. We use a UV-sensitive fisheye-lens camera to segment sky from ground, providing illumination invariance, and encode the resulting binary images using spherical harmonics to enable rotation-invariant image matching. In combination, these methods also produce substantial pitch and roll invariance, as the spherical harmonics of the sky shape are minimally affected, provided the sky remains visible. We evaluate the performance of our method against a leading appearance-invariant technique (SeqSLAM) and a leading viewpoint-invariant technique (FAB-MAP 2.0) on three new outdoor datasets encompassing variable robot heading, tilt, and lighting conditions in both forested and urban environments. The system demonstrates improved condition- and tilt-invariance, enabling robust place recognition during aggressive zigzag manoeuvring along bumpy trails and at tilt angles of up to 60 degrees.
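
    A minimal sketch of the rotation-invariant encoding, assuming the binary sky mask has been projected onto a latitude-longitude grid over the viewing sphere (function and parameter names are illustrative, not the paper's implementation): the per-degree spherical-harmonic energy is exactly invariant to any 3D rotation of the spherical function, so tilt affects the descriptor only through changes in the visible sky segmentation itself.

        import numpy as np
        from scipy.special import sph_harm

        def sh_descriptor(sky_mask, l_max=10):
            # sky_mask: (n_phi, n_theta) binary array (1 = sky, 0 = ground), sampled
            # on midpoint grids: polar angle phi in (0, pi), azimuth theta in [0, 2*pi).
            n_phi, n_theta = sky_mask.shape
            phi = (np.arange(n_phi) + 0.5) * np.pi / n_phi
            theta = np.arange(n_theta) * 2.0 * np.pi / n_theta
            Theta, Phi = np.meshgrid(theta, phi)
            # quadrature weights for integration over the sphere: sin(phi) dphi dtheta
            w = np.sin(Phi) * (np.pi / n_phi) * (2.0 * np.pi / n_theta)
            energies = []
            for l in range(l_max + 1):
                e = 0.0
                for m in range(-l, l + 1):
                    Y = sph_harm(m, l, Theta, Phi)  # SciPy order: (m, l, azimuth, polar)
                    a_lm = np.sum(sky_mask * np.conj(Y) * w)
                    e += abs(a_lm) ** 2
                energies.append(e)
            return np.asarray(energies)  # rotation-invariant per-degree energies

    Frames can then be compared by the distance between their energy vectors, with the per-frame distances fed to a sequence matcher such as SeqSLAM.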

    Holistic methods for visual navigation of mobile robots in outdoor environments

    Differt D. Holistic methods for visual navigation of mobile robots in outdoor environments. Bielefeld: Universität Bielefeld; 2017

    Mechanisms of place recognition and path integration based on the insect visual system

    Animals are often able to solve complex navigational tasks in very challenging terrain, despite using low-resolution sensors and minimal computational power, providing inspiration for robots. In particular, many species of insect are known to solve complex navigation problems, often combining an array of different behaviours (Wehner et al., 1996; Collett, 1996). Their nervous system is also comparatively simple relative to that of mammals and other vertebrates. In the first part of this thesis, the visual input of a navigating desert ant, Cataglyphis velox, was mimicked by capturing images in ultraviolet (UV) at wavelengths similar to those sensed by the ant's compound eye. The natural segmentation of ground and sky led to the hypothesis that skyline contours could be used by ants as features for navigation. As proof of concept, sky-segmented binary images were used as input for an established localisation algorithm, SeqSLAM (Milford and Wyeth, 2012), validating the plausibility of this claim (Stone et al., 2014). A follow-up investigation sought to determine whether using the sky as a feature would help overcome image-matching problems that the ant often faced, such as variance in tilt and yaw rotation. A robotic localisation study showed that using spherical harmonics (SH), a representation in the frequency domain, combined with the extracted sky can greatly help robots localise on uneven terrain. Results showed improved performance over state-of-the-art point-feature localisation methods on fast, bumpy tracks (Stone et al., 2016a). In the second part, an approach to understanding how insects perform a navigational task called path integration was attempted by modelling part of the brain of the sweat bee Megalopta genalis. A recent discovery that two populations of cells act as a celestial compass and a visual odometer, respectively, led to the hypothesis that circuitry at their point of convergence in the central complex (CX) could give rise to path integration. A firing-rate-based model was developed, with connectivity derived from the overlap of the observed neural arborisations of individual cells, and successfully used to build up a home vector and steer an agent back to the nest (Stone et al., 2016b). This approach has the appeal that neural circuitry is highly conserved across insects, so findings here could have wide implications for insect navigation in general. The developed model is the first functioning path integrator based on individual cellular connections.
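
    As a toy illustration of the path-integration idea (a heavily simplified rate-based sketch, not the published circuit model; the column count, gains, and names are assumptions):

        import numpy as np

        N_COLS = 8  # compass columns, one per 45-degree preferred direction
        PREF = np.linspace(0.0, 2.0 * np.pi, N_COLS, endpoint=False)

        def compass(heading):
            # Cosine-tuned, rectified heading cells (a celestial-compass stand-in).
            return np.maximum(0.0, np.cos(heading - PREF))

        def integrate(memory, heading, speed, gain=0.01):
            # Memory cells accumulate odometer speed, gated by the compass,
            # building up a distributed home vector over the outbound path.
            return memory + gain * speed * compass(heading)

        def steering_error(memory, heading):
            # Read out the population vector of the memory and steer toward its
            # reverse (the nest direction); positive output means turn left.
            hv_x = np.sum(memory * np.cos(PREF))
            hv_y = np.sum(memory * np.sin(PREF))
            home = np.arctan2(-hv_y, -hv_x)
            return np.arctan2(np.sin(home - heading), np.cos(home - heading))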

    Safe navigation and motion coordination control strategies for unmanned aerial vehicles

    Unmanned aerial vehicles (UAVs) have become very popular for many military and civilian applications, including agriculture, construction, mining, and environmental monitoring. A desirable feature for UAVs is the ability to navigate and perform tasks autonomously with minimal human interaction. This is a very challenging problem due to several factors, such as the high complexity of UAV applications, operation in harsh environments, limited payload and onboard computing power, and highly nonlinear dynamics. Therefore, more research is still needed towards developing advanced, reliable control strategies for UAVs to enable safe navigation in unknown and dynamic environments. This problem is even more challenging for multi-UAV systems, where it is more efficient to utilize information shared among the networked vehicles. Accordingly, the work presented in this thesis contributes to the state of the art in UAV control for safe autonomous navigation and motion coordination of multi-UAV systems. The first part of this thesis deals with single-UAV systems. Initially, a hybrid navigation framework is developed for autonomous mobile robots using a general 2D nonholonomic unicycle model that can be applied to different types of UAVs, ground vehicles, and underwater vehicles when only lateral motion is considered. Then, the more complex problem of three-dimensional (3D) collision-free navigation in unknown/dynamic environments is addressed. To that end, advanced 3D reactive control strategies are developed, adopting the sense-and-avoid paradigm to produce quick reactions around obstacles. A special case of navigation in 3D unknown confined environments (i.e., tunnel-like environments) is also addressed. General 3D kinematic models are considered in the design, which makes these methods applicable to different UAV types in addition to underwater vehicles. Moreover, different implementation methods for these strategies with quadrotor-type UAVs are also investigated, considering UAV dynamics in the control design. Practical experiments and simulations were carried out to analyze the performance of the developed methods. The second part of this thesis addresses safe navigation for multi-UAV systems. Distributed motion coordination methods for flocking and 3D area coverage are developed. These methods offer low computational cost for large-scale systems. Simulations were performed to verify the performance of these methods on systems of different sizes.
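
    The 2D nonholonomic unicycle model referred to above is standard; in common notation,

        \dot{x} = v \cos\theta, \qquad \dot{y} = v \sin\theta, \qquad \dot{\theta} = \omega,

    with planar position (x, y), heading \theta, and bounded controls |v| \le v_{\max}, |\omega| \le \omega_{\max}; the nonholonomic constraint \dot{x}\sin\theta - \dot{y}\cos\theta = 0 forbids sideways motion. The general 3D kinematic models mentioned are typically of the analogous form \dot{\mathbf{p}} = v\,\mathbf{a}, where the unit heading vector \mathbf{a} is steered at a bounded angular rate.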

    Fourth Annual Workshop on Space Operations Applications and Research (SOAR 90)

    The proceedings of the SOAR workshop are presented. The technical areas covered are as follows: Automation and Robotics; Environmental Interactions; Human Factors; Intelligent Systems; and Life Sciences. NASA and Air Force programmatic overviews and panel sessions were also held in each technical area.

    Omnidirectional-vision-based estimation for containment detection of a robotic mower

    In this paper, we present an omnidirectional-vision-based localization and mapping system that can detect whether a robotic mower is contained in a permitted area. We adopt a robot-centric mapping framework that exploits a differential equation of motion of the landmarks, which are referenced with respect to the robot body frame. The estimator in our system generates a 3D point-based map with landmarks. Concurrently, the estimator defines a boundary of the mowing area from the estimated trajectory of the mower. The estimated boundary and the landmark map are provided for the estimation of the mowing location and for containment detection. We validate the effectiveness of our system through numerical simulations and present the results of the outdoor experiment that we conducted with our robotic mower.
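
    Containment detection against the estimated boundary can be illustrated with a standard ray-casting point-in-polygon test (a hedged sketch; the paper's actual decision rule is not detailed here):

        def inside_boundary(p, boundary):
            # p: (x, y) estimated mower position in the map frame.
            # boundary: list of (x, y) vertices of the estimated mowing-area
            # boundary, treated as a closed polygon.
            x, y = p
            inside = False
            n = len(boundary)
            for i in range(n):
                x1, y1 = boundary[i]
                x2, y2 = boundary[(i + 1) % n]
                if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray at height y
                    x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
                    if x < x_cross:
                        inside = not inside
            return inside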
