Featureless visual processing for SLAM in changing outdoor environments
Vision-based SLAM is largely a solved problem, provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit on rough terrain can lead to image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the SLAM algorithm RatSLAM. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
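To make the featureless idea concrete, here is a minimal sketch of whole-image scene matching on heavily downsampled imagery, in the spirit of RatSLAM-style appearance comparison. The image size, normalisation scheme, and use of plain mean absolute difference are illustrative assumptions, not the paper's exact parameters.

```python
# Sketch: featureless scene matching on tiny normalised grayscale images.
import numpy as np

def preprocess(frame, size=(32, 24)):
    """Downsample a grayscale frame to a tiny image and normalise intensity.
    Strided sampling is a placeholder; a real system would use cv2.resize."""
    h, w = frame.shape[:2]
    ys = np.linspace(0, h - 1, size[1]).astype(int)
    xs = np.linspace(0, w - 1, size[0]).astype(int)
    small = frame[np.ix_(ys, xs)].astype(np.float32)
    return (small - small.mean()) / (small.std() + 1e-6)

def best_match(query, templates):
    """Return (index, score) of the stored scene most similar to `query`,
    scored by mean absolute intensity difference (lower is better)."""
    scores = [np.mean(np.abs(query - t)) for t in templates]
    i = int(np.argmin(scores))
    return i, scores[i]
```

Because matching operates on whole low-resolution images rather than extracted keypoints, it degrades gracefully under blur and exposure problems that would defeat feature detectors.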
Navigation without localisation: reliable teach and repeat based on the convergence theorem
We present a novel concept for teach-and-repeat visual navigation. The proposed concept is based on a mathematical model which indicates that, in teach-and-repeat navigation scenarios, mobile robots do not need to perform explicit localisation. Instead, a mobile robot that repeats a previously taught path can simply 'replay' the learned velocities, using its camera information only to correct its heading relative to the intended path. To support this claim, we establish a position error model of a robot that traverses a taught path by correcting only its heading. We then outline a mathematical proof showing that this position error does not diverge over time. Based on the insights from the model, we present a simple monocular teach-and-repeat navigation method. The method is computationally efficient, does not require camera calibration, and can learn and autonomously traverse arbitrarily shaped paths. In a series of experiments, we demonstrate that the method can reliably guide mobile robots in realistic indoor and outdoor conditions, and can cope with imperfect odometry, landmark deficiency, illumination variations and naturally occurring environment changes. Furthermore, we provide the navigation system and the gathered datasets at http://www.github.com/gestom/stroll_bearnav.
Comment: The paper will be presented at IROS 2018 in Madrid.
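The replay-and-correct idea reduces to a very small control law. Below is a minimal sketch of one control step, assuming features matched between the taught and current views; the function names and the steering gain are hypothetical, and the actual method (stroll_bearnav) additionally schedules the forward and angular velocities recorded during teaching.

```python
# Sketch: one step of teach-and-repeat control without explicit localisation.
def repeat_step(taught_cmd, taught_kpts_x, current_kpts_x, gain=0.002):
    """taught_cmd     -- (v, omega) recorded at this point of the taught path
    taught_kpts_x  -- horizontal feature positions in the taught view
    current_kpts_x -- horizontal positions of the same features now"""
    v, omega = taught_cmd
    # The mean horizontal shift of matched features indicates heading error;
    # steer so the current view realigns with the taught one.
    n = max(len(taught_kpts_x), 1)
    shift = sum(c - t for t, c in zip(taught_kpts_x, current_kpts_x)) / n
    return v, omega - gain * shift
```

Note that only the sign and rough magnitude of the image shift matter, which is why the method needs no camera calibration and tolerates landmark deficiency.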
Deep Learning Features at Scale for Visual Place Recognition
The success of deep learning techniques in the computer vision domain has triggered a range of initial investigations into their utility for visual place recognition, all using generic features from networks that were trained for other types of recognition tasks. In this paper, we train two CNN architectures at large scale for the specific task of place recognition and employ a multi-scale feature encoding method to generate condition- and viewpoint-invariant features. To enable this training, we have developed a massive Specific PlacEs Dataset (SPED) with hundreds of examples of place appearance change at thousands of different places, as opposed to the semantic place-type datasets currently available. This new dataset enables us to set up a training regime that interprets place recognition as a classification problem. We comprehensively evaluate our trained networks on several challenging benchmark place recognition datasets and demonstrate that they achieve an average 10% increase in performance over other place recognition algorithms and pre-trained CNNs. By analyzing the network responses and their differences from pre-trained networks, we provide insights into what a network learns when training for place recognition, and what these results signify for future research in this area.
Comment: 8 pages, 10 figures. Accepted by the International Conference on Robotics and Automation (ICRA) 2017. This is the submitted version; the final published version may be slightly different.
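The "place recognition as classification" training regime can be sketched as follows: each distinct place is one class label, and images of the same place under different conditions are positive examples. The backbone, class count, and hyperparameters here are illustrative assumptions, not the architectures or settings trained on SPED.

```python
# Sketch: training a CNN to classify images by place identity (PyTorch).
import torch
import torch.nn as nn
import torchvision.models as models

num_places = 2500  # hypothetical number of distinct places in the dataset
backbone = models.resnet18(weights=None)            # illustrative backbone
backbone.fc = nn.Linear(backbone.fc.in_features, num_places)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01, momentum=0.9)

def train_step(images, place_ids):
    """images: (B, 3, H, W) float tensor; place_ids: (B,) long tensor."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), place_ids)
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, the classification head is typically discarded and intermediate activations are used as condition-invariant descriptors for matching.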
Panoramic Annular Localizer: Tackling the Variation Challenges of Outdoor Localization Using Panoramic Annular Images and Active Deep Descriptors
Visual localization is an attractive problem in which the location of a query image is estimated with respect to a set of database images. It is a crucial task for various applications, such as autonomous vehicles, assistive navigation and augmented reality. The challenge of the task lies in the appearance variations between query and database images, including illumination variations, dynamic object variations and viewpoint variations. To tackle those challenges, this paper proposes the Panoramic Annular Localizer, which incorporates a panoramic annular lens and robust deep image descriptors. The panoramic annular images captured by the single camera are processed and fed into the NetVLAD network to form the active deep descriptor, and sequential matching is utilized to generate the localization result. Experiments carried out on public datasets and in the field demonstrate the validity of the proposed system.
Comment: Accepted by ITSC 2019.
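The sequential matching step can be sketched as below: rather than trusting a single frame, the similarity of the last few query descriptors is accumulated against consecutive database descriptors, and the best-aligned database index is returned. Descriptor extraction (e.g. via NetVLAD) is assumed to happen elsewhere, and the window length and scoring are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch: sequential matching of L2-normalised deep descriptors.
import numpy as np

def sequential_match(query_seq, database):
    """query_seq: (S, D) descriptors of the most recent S query frames.
    database:  (N, D) descriptors of the mapping run, in traversal order.
    Returns (index, score) of the database frame best aligned with the
    last query frame, scored by summed cosine similarity over the window."""
    seq_len = query_seq.shape[0]
    best_idx, best_score = -1, -np.inf
    for end in range(seq_len - 1, database.shape[0]):
        window = database[end - seq_len + 1:end + 1]   # (S, D) aligned slice
        score = float(np.sum(window * query_seq))      # sum of row-wise dots
        if score > best_score:
            best_idx, best_score = end, score
    return best_idx, best_score
```

Aggregating evidence over a short sequence suppresses single-frame mismatches caused by illumination changes or transient dynamic objects.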
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available