Modelling active bio-inspired object recognition in autonomous mobile agents
Object recognition is arguably one of the main tasks carried out by the visual cortex. It has been studied for decades and remains one of the central topics in computer vision. While vertebrates perform this task with exceptional reliability and speed, the visual processes involved are still not completely understood. Given the desirable properties of visual systems in nature, many models have been proposed not only to match their performance on object recognition tasks, but also to study and understand the object recognition processes in the brain. One important point most classical models fail to consider is that all visual systems in nature are active. Active object recognition opens perspectives absent from the classical isolated way of modelling neural processes, such as exploiting the body to aid perception. Biologically inspired models are a good way to study embodied object recognition, since animals are a working demonstration that object recognition can be performed efficiently in an active manner. In this thesis I study biologically inspired models for object recognition from an active perspective. I demonstrate that, by considering the problem from this perspective, the computational complexity of some classical models of object recognition can be reduced. In particular, chapter 3 compares a simple V1-like model (RBF model) with a complex hierarchical model (HMAX model) under conditions in which a simple attentional mechanism makes the RBF model perform as well as the HMAX model. Additionally, I compare the RBF and HMAX models with other visual systems on well-known object libraries.
This comparison demonstrates that the implementations of the RBF and HMAX models employed in this thesis perform comparably to other state-of-the-art visual systems. In chapter 4, I study the role of sensors in the neural dynamics of controllers and in the behaviour of simulated agents, and show how an Evolutionary Robotics approach can be used to study autonomous mobile agents performing visually guided tasks. In chapter 5, I investigate whether the variation in visual information determined by simple movements of an agent can affect the performance of the RBF and HMAX models. In chapter 6, I investigate the impact of several movement strategies on the recognition performance of the models; in particular, I study the effect of varying the visual information by using different movement strategies to collect training views, and show that temporal information can be exploited through movement strategies to improve recognition performance. In chapter 7, experiments on the exploitation of movement and temporal information are carried out in a real-world scenario using a robot; these experiments validate the simulation results of the previous chapters. Finally, in chapter 8 I show that, by exploiting the regularities that movement imposes on the visual input when selecting training views, the complexity of the RBF model can be reduced on a real robot. The approach of this work is to gradually increase the complexity of the processes involved in active object recognition, from studying the role of moving the focus of attention while comparing object recognition models on static tasks, to analysing the exploitation of an active approach in the selection of training views for an object recognition task on a real-world robot.
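The RBF model discussed above stores training views as prototypes and classifies a new view by its Gaussian similarity to each prototype. The thesis's actual model operates on V1-like visual features; the sketch below is a minimal, hypothetical illustration of the general radial-basis-function classification scheme on toy 2-D vectors, not the thesis's implementation.

```python
import numpy as np

def rbf_activations(x, centres, sigma=1.0):
    """Gaussian similarity of input x to each stored prototype (centre).

    sigma controls the width of each prototype's receptive field.
    """
    d2 = np.sum((centres - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def classify(x, centres, labels, sigma=1.0):
    """Label of the prototype with the strongest RBF activation."""
    acts = rbf_activations(x, centres, sigma)
    return labels[int(np.argmax(acts))]

# Toy example: two stored object 'views' as prototypes.
centres = np.array([[0.0, 0.0], [5.0, 5.0]])
labels = ["cup", "ball"]
print(classify(np.array([0.4, -0.2]), centres, labels))  # -> cup
```

In the thesis, reducing the number (or dimensionality) of such prototypes is what lowers the model's computational complexity.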
Mobile robot navigation using a vision-based approach
PhD Thesis
This study addresses the issue of vision-based mobile robot navigation in a partially
cluttered indoor environment using a mapless navigation strategy. The work focuses on
two key problems: vision-based obstacle avoidance and a vision-based reactive
navigation strategy.
The estimation of optical flow plays a key role in vision-based obstacle avoidance;
however, the current view is that this technique is too sensitive to noise and
distortion under real conditions, so practical applications in real-time robotics
remain scarce. This dissertation presents a novel methodology for vision-based obstacle
avoidance using a hybrid architecture, which integrates an appearance-based obstacle
detection method into an optical flow architecture built upon a behavioural control
strategy that includes a new arbitration module. This enhances the overall performance
of conventional optical-flow-based navigation systems, enabling a robot to move around
successfully without collisions.
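A common way to turn optical flow into an avoidance command, which the hybrid architecture above builds upon, is the balance strategy: obstacles nearer to one side of the camera produce larger flow on that side, so the robot steers towards the side with less flow. The sketch below is a simplified, hypothetical illustration of that idea on a flow-magnitude map; the thesis's architecture additionally combines this with appearance-based detection and an arbitration module.

```python
import numpy as np

def steering_from_flow(flow_mag, gain=1.0):
    """Balance-strategy steering from a 2-D optical-flow magnitude map.

    Larger flow on one side suggests nearer obstacles there, so the
    command turns the robot away from that side. Returns a signed
    value in roughly [-gain, gain]: positive = turn left.
    """
    h, w = flow_mag.shape
    left = flow_mag[:, : w // 2].sum()
    right = flow_mag[:, w // 2 :].sum()
    # Normalised left/right imbalance; epsilon avoids division by zero.
    return gain * (right - left) / (left + right + 1e-9)

flow = np.zeros((4, 8))
flow[:, 6:] = 2.0  # strong flow on the right half: obstacle on the right
print(steering_from_flow(flow))  # positive -> steer left, away from it
```

Noise sensitivity is visible even in this toy version: a few spurious high-magnitude flow vectors can flip the sign of the command, which is one motivation for the hybrid architecture's appearance-based cross-check.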
Behaviour-based approaches have become the dominant methodology for designing
control strategies for robot navigation. Two different behaviour-based navigation
architectures have been proposed for the second problem, using monocular vision as the
primary sensor, supplemented by a 2-D range finder. Both utilize an accelerated
version of the Scale Invariant Feature Transform (SIFT) algorithm. The first
architecture employs a qualitative control algorithm to steer the robot towards a
goal whilst avoiding obstacles, whereas the second employs an intelligent control
framework. This allows soft-computing components to be integrated into the
proposed SIFT-based navigation architecture while preserving the set of behaviours
and system structure of the previously defined architecture. The intelligent framework
incorporates a novel distance estimation technique using the scale parameters obtained
from the SIFT algorithm: the scale parameters and a corresponding zooming factor serve
as inputs to train a neural network that determines physical distance. Furthermore, a
fuzzy controller is designed and integrated into this framework to estimate linear
velocity, and a neural-network-based solution is adopted to estimate the steering
direction of the robot. As a result, this intelligent approach allows the robot to
complete its task smoothly and robustly without collisions.
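The geometric relation underlying the scale-based distance estimation described above is that a feature's apparent size, and hence its SIFT keypoint scale, falls roughly in inverse proportion to distance under a pinhole camera model. The thesis trains a neural network on scale and zoom inputs; the closed-form sketch below is only a hypothetical illustration of the relation such a network approximates, with made-up reference values.

```python
def distance_from_scale(observed_scale, ref_scale, ref_distance):
    """Pinhole-model distance estimate from a SIFT keypoint's scale.

    Apparent size is inversely proportional to distance, so a feature
    observed at half its reference scale lies roughly twice as far away
    as it did at the reference distance.
    """
    return ref_distance * ref_scale / observed_scale

# Hypothetical reference: keypoint scale 8.0 with the object 1.0 m away.
print(distance_from_scale(4.0, ref_scale=8.0, ref_distance=1.0))  # -> 2.0
```

A learned model is preferable in practice because real lenses, zoom settings, and SIFT's discrete octave structure all bend this ideal inverse law.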
MS Robotics Studio software was used to simulate the systems, and a modified Pioneer
3-DX mobile robot was used for real-time implementation. Several realistic scenarios
were developed and comprehensive experiments conducted to evaluate the performance
of the proposed navigation systems.
KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile
robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant
Feature Transforms, Intelligent framework