
    Unmanned Aircraft System Navigation in the Urban Environment: A Systems Analysis

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/140665/1/1.I010280.pd

    Featureless visual processing for SLAM in changing outdoor environments

    Get PDF
    Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors, such as rough terrain, high speeds and hardware limitations, can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under-/over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
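    To make the idea concrete, here is a minimal Python sketch of the kind of low-resolution, feature-free view comparison such an approach relies on; the 24x32 template size, the SAD threshold and the function names are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def to_template(gray_image, size=(24, 32)):
    """Downsample a grayscale frame to a tiny, patch-normalised template.

    Low resolution deliberately discards fine visual features: coarse
    intensity structure is more stable under blur, exposure changes
    and lighting variation than keypoint descriptors.
    """
    h, w = gray_image.shape
    th, tw = size
    # Block-average the image (crop so dimensions divide evenly).
    small = gray_image[:h - h % th, :w - w % tw].reshape(
        th, h // th, tw, w // tw).mean(axis=(1, 3))
    return (small - small.mean()) / (small.std() + 1e-6)

def best_match(template, stored_templates, threshold=0.6):
    """Compare against stored scenes with sum of absolute differences.

    Returns the index of the best-matching stored scene, or None if
    the current view is novel and should be learned as a new template.
    """
    if not stored_templates:
        return None
    sads = [np.abs(template - t).mean() for t in stored_templates]
    best = int(np.argmin(sads))
    return best if sads[best] < threshold else None
```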

    A contribution to vision-based autonomous helicopter flight in urban environments

    Get PDF
    A navigation strategy that exploits optic flow and inertial information to continuously avoid collisions with both lateral and frontal obstacles has been used to control a simulated helicopter flying autonomously in a textured urban environment. Experimental results demonstrate that the corresponding controller generates cautious behavior, whereby the helicopter tends to stay in the middle of narrow corridors while its forward velocity is automatically reduced when obstacle density increases. When confronted with a frontal obstacle, the controller is also able to generate a tight U-turn that ensures the UAV's survival. The paper provides comparisons with related work and discusses the applicability of the approach to real platforms.
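    A minimal Python sketch of the flow-balancing control law the abstract describes, assuming mean lateral flow magnitudes are already available; all gains, thresholds and names here are illustrative, not the paper's values.

```python
def lateral_flow_controller(flow_left, flow_right,
                            k_yaw=0.8, v_max=3.0, k_speed=0.5):
    """Optic-flow balancing controller (illustrative gains).

    flow_left / flow_right: mean translational optic-flow magnitudes
    in the left and right lateral fields of view (rad/s).
    """
    # Steer away from the side with faster image motion (nearer wall);
    # equal flows hold the vehicle in the middle of the corridor.
    yaw_rate = k_yaw * (flow_right - flow_left)

    # Forward speed shrinks as perceived obstacle density grows.
    forward_speed = v_max / (1.0 + k_speed * (flow_left + flow_right))
    return yaw_rate, forward_speed

def frontal_avoidance(flow_divergence, div_threshold=1.5):
    """Trigger a tight U-turn when frontal flow divergence signals an
    imminent head-on collision (threshold is an assumption)."""
    return flow_divergence > div_threshold
```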

    Investigation of Shadow Matching for GNSS Positioning in Urban Canyons

    Get PDF
    All travel behavior of people in urban areas relies on knowing their position. Obtaining a position has become increasingly easy thanks to the vast popularity of 'smart' mobile devices. The main and most accurate positioning technique used in these devices is global navigation satellite systems (GNSS). However, the poor performance of GNSS user equipment in urban canyons is a well-known problem, and it is particularly inaccurate in the cross-street direction. Accuracy in this direction greatly affects many applications, including vehicle lane identification and high-accuracy pedestrian navigation. Shadow matching is a new technique that helps solve this problem by integrating GNSS constellation geometry with information derived from 3D models of buildings. This study takes the shadow-matching principle from a simple mathematical model, through experimental proof of concept, system design and demonstration, algorithm redesign, comprehensive experimental tests, real-time demonstration and feasibility assessment, to a workable positioning solution. In this thesis, GNSS performance in urban canyons is numerically evaluated using 3D models. Then, a generic two-phase, six-step shadow-matching system is proposed, implemented and tested against both geodetic and smartphone-grade GNSS receivers. A Bayesian shadow-matching technique is proposed to account for NLOS and diffracted signal reception. A particle filter is designed to enable multi-epoch kinematic positioning. Finally, shadow matching is adapted and implemented as a mobile application (app), with a feasibility assessment conducted. Results from the investigation confirm that conventional ranging-based GNSS is not adequate for reliable urban positioning. The designed shadow-matching positioning system is demonstrated to be complementary to conventional GNSS in improving urban positioning accuracy. Each of the three generations of the shadow-matching algorithm is shown, through comprehensive experiments, to provide better positioning performance. In summary, shadow matching has been demonstrated to significantly improve urban positioning accuracy; it shows great potential to revolutionize urban positioning, from street level to lane level and possibly meter level.
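    The heart of shadow matching can be sketched as a grid search that scores candidate positions by the agreement between model-predicted and observed satellite visibility; the `predict_los` helper and the C/N0 cut-off below are assumptions for illustration, not the thesis's actual algorithm.

```python
def shadow_matching_score(candidates, satellites, measured_cn0,
                          predict_los, cn0_threshold=35.0):
    """Score candidate positions for shadow matching.

    candidates   : list of (east, north) grid positions to test
    satellites   : list of satellite IDs with known azimuth/elevation
    measured_cn0 : dict sat_id -> carrier-to-noise ratio (dB-Hz)
    predict_los  : function (position, sat_id) -> True if the 3D city
                   model predicts direct line of sight (assumed helper)

    A satellite received with strong C/N0 is treated as likely
    line-of-sight, a weak or absent signal as likely shadowed. Each
    candidate earns a point when prediction and observation agree, so
    the highest-scoring cells are those where the building shadows
    best explain the measurements.
    """
    scores = {}
    for pos in candidates:
        score = 0
        for sat in satellites:
            observed_los = measured_cn0.get(sat, 0.0) >= cn0_threshold
            if predict_los(pos, sat) == observed_los:
                score += 1
        scores[pos] = score
    best_pos = max(scores, key=scores.get)
    return best_pos, scores
```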

    Sky-GVINS: a Sky-segmentation Aided GNSS-Visual-Inertial System for Robust Navigation in Urban Canyons

    Full text link
    Integrating Global Navigation Satellite Systems (GNSS) into Simultaneous Localization and Mapping (SLAM) systems has drawn increasing attention as a route to global and continuous localization. Nonetheless, in dense urban environments, GNSS-based SLAM systems suffer from Non-Line-Of-Sight (NLOS) measurements, which can lead to a sharp deterioration in localization results. In this paper, we propose to detect the sky area from an up-looking camera to improve GNSS measurement reliability for more accurate position estimation. We present Sky-GVINS, a sky-aware GNSS-Visual-Inertial system built on the recent GVINS framework. Specifically, we adopt a global threshold method to segment the sky and non-sky regions in the fish-eye, sky-pointing image, and then project satellites onto the image using the geometric relationship between the satellites and the camera. After that, we reject satellites that fall in non-sky regions to eliminate NLOS signals. We investigated various segmentation algorithms for sky detection and found that the Otsu algorithm achieved the highest classification rate and computational efficiency, despite its simplicity and ease of implementation. To evaluate the effectiveness of Sky-GVINS, we built a ground robot and conducted extensive real-world experiments on campus. Experimental results show that our method improves localization accuracy in both open areas and dense urban environments compared to the baseline method. Finally, we conduct a detailed analysis and point out possible directions for future research. For detailed information, visit our project website at https://github.com/SJTU-ViSYS/Sky-GVINS
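    A minimal Python sketch of the NLOS-rejection step described above, assuming an equidistant fisheye model and OpenCV's Otsu thresholding; the calibration parameters and names are illustrative, not taken from Sky-GVINS.

```python
import cv2
import numpy as np

def nlos_satellite_mask(fisheye_gray, sats_az_el, cx, cy, focal):
    """Reject NLOS satellites via sky segmentation.

    fisheye_gray : up-looking fisheye image, single channel (uint8)
    sats_az_el   : dict sat_id -> (azimuth_rad, elevation_rad)
    cx, cy       : principal point of the fisheye image (pixels)
    focal        : equidistant projection scale (pixels per radian)
    """
    # Otsu's method picks a global threshold separating bright sky
    # pixels from darker buildings and foliage.
    _, sky_mask = cv2.threshold(fisheye_gray, 0, 255,
                                cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    usable = {}
    for sat, (az, el) in sats_az_el.items():
        # Equidistant model: radial distance grows with zenith angle
        # (north-up, east-right image convention assumed here).
        zenith = np.pi / 2 - el
        r = focal * zenith
        u = int(round(cx + r * np.sin(az)))
        v = int(round(cy - r * np.cos(az)))
        in_image = (0 <= v < sky_mask.shape[0]
                    and 0 <= u < sky_mask.shape[1])
        # Keep the satellite only if it projects onto a sky pixel,
        # i.e. its signal path is plausibly line-of-sight.
        usable[sat] = bool(in_image and sky_mask[v, u] > 0)
    return usable
```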

    Review and classification of vision-based localisation techniques in unknown environments

    Get PDF
    This study presents a review of the state of the art and a novel classification of current vision-based localisation techniques in unknown environments. Thanks to progress in computer vision, it is now possible to consider vision-based systems as promising navigation means that can complement traditional navigation sensors such as global navigation satellite systems (GNSS) and inertial navigation systems. This study aims to review techniques that employ a camera as a localisation sensor, to provide a classification of those techniques, and to introduce schemes that exploit video information within a multi-sensor system. A general model is needed to better compare existing techniques, in order to decide which approach is appropriate and where the axes of innovation lie. In addition, existing classifications only consider vision as a standalone tool and do not treat video as one sensor among others. The focus is on scenarios where no a priori knowledge of the environment is provided; these are the most challenging, since the system has to cope with objects as they appear in the scene without any prior information about their expected position.