Minimalistic vision-based cognitive SLAM
Interest in cognitive robotics continues to grow, a major goal being to create a system that can adapt
to dynamic environments and learn from its own experiences. We present a new cognitive SLAM
architecture, but one which is minimalistic in terms of sensors and memory. It employs only one camera with
pan and tilt control and three memories, without additional sensors or any odometry. Short-term memory is
an egocentric map which holds close-range information at the robot's current position. Long-term memory is
used for mapping the environment and registering encountered objects. Object memory holds features of
learned objects, which serve as navigation landmarks and task targets. Saliency maps are used to sequentially
focus on important areas for object and obstacle detection, and also for selecting movement directions.
Reinforcement learning is used to consolidate or weaken environmental information in long-term memory.
The system is able to achieve complex tasks by executing sequences of visuomotor actions, with decisions
taken by goal-detection and goal-completion processes. Experimental results show that the system is capable of
executing tasks such as localizing specific objects while building a map, after which it manages to return to the
start position even when new obstacles have appeared.
Biologically Inspired Vision for Indoor Robot Navigation
Ultrasonic, infrared, laser and other sensors are widely applied
in robotics. Although combinations of these have allowed robots to navigate,
each is suited only to specific scenarios because of its limitations.
Recent advances in computer vision are turning cameras into useful
low-cost sensors that can operate in most types of environments. Cameras
enable robots to detect obstacles, recognize objects, obtain visual
odometry, and detect and recognize people and gestures, among other possibilities.
In this paper we present a completely biologically inspired vision
system for robot navigation. It comprises stereo vision for obstacle detection,
and object recognition for landmark-based navigation. We employ
a novel keypoint descriptor which encodes the responses of cortical complex
cells. We also present a biologically inspired saliency component based
on disparity and colour.
Low-Resolution Vision for Autonomous Mobile Robots
The goal of this research is to develop algorithms using low-resolution images to perceive and understand a typical indoor environment and thereby enable a mobile robot to autonomously navigate such an environment. We present techniques for three problems: autonomous exploration, corridor classification, and minimalistic geometric representation of an indoor environment for navigation. First, we present a technique for mobile robot exploration in unknown indoor environments using only a single forward-facing camera. Rather than processing all the data, the method intermittently examines only small 32×24 downsampled grayscale images. We show that for the task of indoor exploration the visual information is highly redundant, allowing successful navigation even using only a small fraction (0.02%) of the available data. The method keeps the robot centered in the corridor by estimating two state parameters: the orientation within the corridor and the distance to the end of the corridor. The orientation is determined by combining the results of five complementary measures, while the estimated distance to the end combines the results of three complementary measures. These measures, which are predominantly information-theoretic, are analyzed independently, and the combined system is tested in several buildings with previously unseen corridors exhibiting a wide variety of appearances, showing the sufficiency of low-resolution visual information for mobile robot exploration. Because the algorithm discards such a large percentage (99.98%) of the information both spatially and temporally, processing occurs at an average of 1000 frames per second, or equivalently takes a small fraction of the CPU. Second, we present an algorithm using image entropy to detect and classify corridor junctions from low-resolution images. Because entropy can be used to perceive depth, it can be used to detect an open corridor in a set of images recorded while turning a robot 360 degrees at a junction.
Our algorithm involves detecting peaks from continuously measured entropy values and determining the angular distance between the detected peaks to determine the type of junction that was recorded (either middle, L-junction, T-junction, dead-end, or cross junction). We show that the same algorithm can be used to detect open corridors from both monocular and omnidirectional images. Third, we propose a minimalistic corridor representation consisting of the orientation line (center) and the wall-floor boundaries (lateral limits). The representation is extracted from low-resolution images using a novel combination of information-theoretic measures and gradient cues. Our study investigates the impact of image resolution upon the accuracy of extracting such a geometry, showing that the centerline and wall-floor boundaries can be estimated with reasonable accuracy even in texture-poor environments with low-resolution images. In a database of 7 unique corridor sequences for orientation measurements, less than 2% additional error was observed as the resolution of the image decreased by 99.9%.
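The entropy-peak idea described in this abstract can be sketched in a few lines of Python. This is only an illustrative sketch, not the authors' exact method: the histogram bin count and the peak criterion (local maxima above the mean entropy) are assumptions introduced here for demonstration.

```python
import numpy as np

def image_entropy(gray, bins=64):
    """Shannon entropy of a grayscale image's intensity histogram.
    In the cited work, higher entropy loosely correlates with visible
    depth, e.g., an open corridor ahead. Bin count is an assumption."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is well-defined
    return float(-np.sum(p * np.log2(p)))

def detect_open_corridors(entropies):
    """Find peaks in entropy values sampled during a 360-degree turn.
    `entropies` holds one entropy value per heading sample; returns the
    indices of local maxima above the mean (a hypothetical criterion).
    Angular distances between peaks would then classify the junction."""
    e = np.asarray(entropies, dtype=float)
    threshold = e.mean()
    return [i for i in range(1, len(e) - 1)
            if e[i] > e[i - 1] and e[i] >= e[i + 1] and e[i] > threshold]
```

For example, a uniform 32×24 image has zero entropy, while an entropy sequence with two clear maxima during a turn would yield two candidate open corridors whose angular separation suggests the junction type.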
Monocular Vision SLAM for Indoor Aerial Vehicles
This paper presents a novel indoor navigation and ranging strategy using a monocular camera. The proposed algorithms are integrated with simultaneous localization and mapping (SLAM), with a focus on indoor aerial vehicle applications. We experimentally validate the proposed algorithms using a fully self-contained micro aerial vehicle (MAV) with on-board image processing and SLAM capabilities. The range measurement strategy is inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals. The navigation strategy assumes an unknown, GPS-denied environment which is representable via corner-like feature points and straight architectural lines. Experimental results show that the system is only limited by the capabilities of the camera and the availability of good corners.
Navigation framework using visual landmarks and a GIS
In an unfamiliar environment we spot and explore all available information which might guide us to a desired location. This largely unconscious processing is done by our trained sensory and cognitive systems. These recognise and memorise sets of landmarks which allow us to create a mental map of the environment, and this map enables us to navigate by exploiting very few but the most important landmarks stored in our memory. In this paper we present a route planning, localisation and navigation system which works in real time. It integrates a geographic information system of a building with visual landmarks for localising the user and for validating the navigation route. Although designed for visually impaired persons, the system can also be employed to assist or transport persons with reduced mobility in wayfinding in a complex building. © 2013 The Authors. Published by Elsevier B.V.
Indoor localization and navigation for blind persons using visual landmarks and a GIS
In an unfamiliar environment we spot and explore all available information which might guide us to a desired location. This largely unconscious processing is done by our trained sensory and cognitive systems. These recognize and memorize sets of landmarks which allow us to create a mental map of the environment, and this map enables us to navigate by exploiting very few but the most important landmarks stored in our memory. We present a system which integrates a geographic information system of a building with visual landmarks for localizing the user in the building and for tracing and validating a route for the user's navigation. Hence, the developed system complements the white cane for improving the user's autonomy during indoor navigation. Although designed for visually impaired persons, the system can be used by any person for wayfinding in a complex building.