135 research outputs found
Featureless visual processing for SLAM in changing outdoor environments
Vision-based SLAM is mostly a solved problem provided clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can prevent these conditions from being met. High-speed transit over rough terrain can cause image blur and under- or over-exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and the lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system provides reliable mapping and recall over the course of the day and incrementally incorporates new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that it is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
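The core idea of comparing low-resolution imagery rather than extracted features can be sketched as follows. This is a hypothetical minimal illustration, not the authors' implementation: it downsamples two grayscale images by block averaging, normalizes each to zero mean and unit variance (which cancels global brightness and contrast changes), and reports their mean absolute difference as a place-match score. The function names `normalize`, `downsample` and `scene_difference` are assumptions for the sketch.

```python
import numpy as np

def normalize(img):
    """Zero-mean, unit-variance normalization to reduce sensitivity to
    global lighting changes (e.g. dawn vs. dusk)."""
    img = img.astype(np.float64)
    return (img - img.mean()) / (img.std() + 1e-8)

def downsample(img, factor):
    """Crude block-average downsampling to a low-resolution template,
    which suppresses blur and fine appearance change."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    return img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor).mean(axis=(1, 3))

def scene_difference(img_a, img_b, factor=8):
    """Mean absolute difference between two normalized low-resolution
    images; lower values indicate a more likely place match."""
    a = normalize(downsample(img_a, factor))
    b = normalize(downsample(img_b, factor))
    return np.mean(np.abs(a - b))
```

Because the normalization removes any affine brightness change, an image and a globally brightened copy of it score as a near-perfect match, while two unrelated scenes score poorly; such per-scene difference scores are the kind of input a pose-filtering back end like RatSLAM can consume.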
Underwater SLAM: Challenges, state of the art, algorithms and a new biologically-inspired approach
The unstructured scenario, the difficulty of extracting significant features, the imprecision of sensors and the impossibility of using GPS signals are some of the challenges encountered in underwater environments. Given this adverse context, Simultaneous Localization and Mapping (SLAM) techniques attempt to localize the robot efficiently in an unknown underwater environment while, at the same time, generating a representative model of that environment. In this paper, we focus on key topics related to SLAM applications in underwater environments. Moreover, we review the major studies in the literature and the solutions proposed for addressing the problem. Given the limitations of probabilistic approaches, a new alternative based on a bio-inspired model is highlighted.
2D Visual Place Recognition for Domestic Service Robots at Night
Domestic service robots such as lawn mowing and vacuum cleaning robots are the most numerous consumer robots in existence today. While early versions employed random exploration, recent systems fielded by most of the major manufacturers have utilized range-based and visual sensors and user-placed beacons to enable robots to map and localize. However, active range and visual sensing solutions have the disadvantages of being intrusive, expensive, or only providing a 1D scan of the environment, while the requirement for beacon placement imposes other practical limitations. In this paper we present a passive and potentially cheap vision-based solution to 2D localization at night that combines easily obtainable day-time maps with low resolution contrast-normalized image matching algorithms, image sequence-based matching in two dimensions, place match interpolation and recent advances in conventional low light camera technology. In a range of experiments over a domestic lawn and in a lounge room, we demonstrate that the proposed approach enables 2D localization at night, and analyse the effect on performance of varying odometry noise levels, place match interpolation and sequence matching length. Finally we benchmark the new low light camera technology and show how it can enable robust place recognition even in an environment lit only by a moonless sky, raising the tantalizing possibility of being able to apply all conventional vision algorithms, even in the darkest of nights.
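The sequence-based matching ingredient mentioned above can be illustrated with a small sketch. This is a simplified SeqSLAM-style idea under a constant-velocity assumption, not the paper's exact algorithm: given a precomputed matrix of per-image difference scores between a short query sequence and a day-time map, each candidate map position is scored by summing differences along the corresponding diagonal, and the lowest-scoring trajectory wins. The function name `best_sequence_match` is an assumption for the sketch.

```python
import numpy as np

def best_sequence_match(diff_matrix, seq_len):
    """diff_matrix[q, m] holds the difference between query image q and map
    image m. Score each candidate map start position by summing differences
    along a straight diagonal of length seq_len (constant-velocity
    assumption) and return the start index with the lowest sequence score."""
    n_query, n_map = diff_matrix.shape
    assert n_query >= seq_len, "query sequence shorter than seq_len"
    best_idx, best_score = None, np.inf
    for start in range(n_map - seq_len + 1):
        # query image i is matched against map image start + i
        score = sum(diff_matrix[i, start + i] for i in range(seq_len))
        if score < best_score:
            best_idx, best_score = start, score
    return best_idx, best_score
```

Matching over a sequence rather than a single frame is what makes the approach tolerant of individually ambiguous low-resolution night images: a single noisy frame may match many map locations, but a run of consecutive frames rarely matches a wrong trajectory by chance.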