Mobile robot navigation using a vision-based approach
PhD Thesis
This study addresses the issue of vision-based mobile robot navigation in a partially
cluttered indoor environment using a mapless navigation strategy. The work focuses on
two key problems: vision-based obstacle avoidance and a vision-based reactive
navigation strategy.
The estimation of optical flow plays a key role in vision-based obstacle avoidance;
however, the current view is that this technique is too sensitive to noise and
distortion under real conditions, so practical applications in real-time robotics
remain scarce. This dissertation presents a novel methodology for vision-based obstacle
avoidance using a hybrid architecture, which integrates an appearance-based obstacle
detection method into an optical flow architecture built on a behavioural control
strategy with a new arbitration module. This enhances the overall performance
of conventional optical-flow-based navigation systems, enabling the robot to
move around without collisions.
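The optical-flow side of this approach can be illustrated with the classic balance strategy; the sketch below assumes averaged flow magnitudes for the left and right halves of the image (the thesis's hybrid architecture adds appearance-based detection and an arbitration module on top of such a core):

```python
def steer_from_flow(left_flow_mags, right_flow_mags, gain=1.0):
    """Balance strategy: larger optical flow on one side suggests
    closer obstacles there, so turn towards the side with less flow.

    left_flow_mags / right_flow_mags: flow magnitudes sampled from the
    left and right halves of the image. Positive output = turn left.
    """
    left = sum(left_flow_mags) / len(left_flow_mags)
    right = sum(right_flow_mags) / len(right_flow_mags)
    # Normalized difference; the small constant avoids division by zero.
    return gain * (right - left) / (right + left + 1e-9)

# Strong flow on the right (near obstacle) produces a left turn:
print(steer_from_flow([2.0, 2.0], [6.0, 6.0]) > 0)  # -> True
```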
Behaviour-based approaches have become the dominant methodology for designing
control strategies for robot navigation. For the second problem, two behaviour-based
navigation architectures are proposed, using monocular vision as the primary sensor
complemented by a 2-D range finder. Both utilize an accelerated version of the
Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs
a qualitative control algorithm to steer the robot towards a goal whilst avoiding
obstacles, whereas the second employs an intelligent control framework. The latter
allows soft computing components to be integrated into the proposed SIFT-based
navigation architecture while preserving the set of behaviours and system structure
of the first architecture. The intelligent framework incorporates a novel distance
estimation technique based on the scale parameters obtained from the SIFT algorithm:
the scale parameters and a corresponding zooming factor are used as inputs to train
a neural network that predicts physical distance. Furthermore, a fuzzy controller is
designed and integrated into this framework to estimate linear velocity, and a
neural-network-based solution is adopted to estimate the steering direction of the
robot. As a result, this intelligent approach allows the robot to complete its task
smoothly and robustly without collisions.
MS Robotics Studio software was used to simulate the systems, and a modified Pioneer
3-DX mobile robot was used for real-time implementation. Several realistic scenarios
were developed and comprehensive experiments conducted to evaluate the performance
of the proposed navigation systems.
KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile
robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant
Feature Transforms, Intelligent framework
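The scale-based distance estimation described above rests on the inverse relation between a feature's apparent scale and its distance under a pinhole model. The thesis trains a neural network on scale parameters and a zooming factor; the sketch below shows only the underlying geometric relation, with a hypothetical calibration reference:

```python
def estimate_distance(ref_scale, ref_distance, observed_scale):
    """Estimate distance to a landmark from the ratio of SIFT scales.

    ref_scale:      keypoint scale recorded at a known reference distance
    ref_distance:   that known distance (same units as the result)
    observed_scale: keypoint scale in the current frame
    """
    if observed_scale <= 0:
        raise ValueError("scale must be positive")
    # Under a pinhole model, apparent scale ~ 1 / distance, so:
    return ref_distance * ref_scale / observed_scale

# A landmark recorded at scale 4.0 from 2.0 m appears at scale 2.0
# when the robot is roughly twice as far away:
print(estimate_distance(4.0, 2.0, 2.0))  # -> 4.0
```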
Conceptual spatial representations for indoor mobile robots
We present an approach for creating conceptual representations of human-made indoor environments using mobile
robots. The concepts refer to spatial and functional properties of typical indoor environments. Following findings
in cognitive psychology, our model is composed of layers representing maps at different levels of abstraction. The
complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition.
The system also incorporates a linguistic framework that actively supports the map acquisition process, and which
is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
Monocular navigation for long-term autonomy
We present a reliable and robust monocular navigation system for an autonomous vehicle.
The proposed method is computationally efficient, requires only off-the-shelf equipment, and needs no additional infrastructure such as radio beacons or GPS.
Unlike traditional localization algorithms, which use advanced mathematical methods to determine the vehicle's position, our method takes a more practical approach.
In our case, an image-feature-based monocular vision technique determines only the heading of the vehicle, while the vehicle's odometry is used to estimate the distance traveled.
We present a mathematical proof and experimental evidence indicating that the localization error of a robot guided by this principle is bounded.
The experiments demonstrate that the method can cope with variable illumination, lighting deficiency and both short- and long-term environment changes.
This makes the method especially suitable for deployment in scenarios that require long-term autonomous operation.
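The bounded-error claim can be illustrated with a toy one-dimensional simulation: odometry drift injects noise into the lateral error at each step, while the vision-based heading correction removes a fraction of it. The gain and noise level below are illustrative assumptions, not parameters from the paper:

```python
import random

def simulate_lateral_error(steps=200, correction_gain=0.3,
                           odom_noise=0.05, seed=1):
    """Toy model of heading-corrected navigation: returns the peak
    lateral error observed over the run. Because each correction step
    removes a fixed fraction of the error while the drift added per
    step is bounded, the error stays bounded instead of diverging.
    """
    random.seed(seed)
    err = 0.0
    peak = 0.0
    for _ in range(steps):
        err += random.uniform(-odom_noise, odom_noise)  # odometry drift
        err *= (1.0 - correction_gain)                  # heading correction
        peak = max(peak, abs(err))
    return peak

# The peak error stays well below the analytic bound
# odom_noise * (1 - gain) / gain ~= 0.117:
print(simulate_lateral_error() < 0.2)  # -> True
```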
Robot control based on qualitative representation of human trajectories
A major challenge for future social robots is the high-level interpretation of human motion and the consequent generation of appropriate robot actions. This paper describes some fundamental steps towards the real-time implementation of a system that allows a mobile robot to transform quantitative information about human trajectories (i.e. coordinates and speed) into qualitative concepts, and from these to generate appropriate control commands. The problem is formulated using a simple version of the qualitative trajectory calculus, then solved using an inference engine based on fuzzy temporal logic and situation graph trees. Preliminary results are discussed and future directions of the current research are outlined.
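A minimal sketch of the qualitative step, assuming the basic QTC_B variant over 2-D positions (the paper's actual calculus and inference engine are richer):

```python
import math

def _dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def qtc_b(p1_prev, p1_now, p2_prev, p2_now, eps=1e-9):
    """Return the QTC_B relation (s1, s2) for two moving points.

    Each symbol is '-' (moving towards the other), '+' (moving away)
    or '0' (stable), based on whether a point's motion decreases or
    increases its distance to the other's previous position. eps
    absorbs numerical noise and is an illustrative choice.
    """
    def symbol(mover_prev, mover_now, other_prev):
        delta = _dist(mover_now, other_prev) - _dist(mover_prev, other_prev)
        if delta < -eps:
            return '-'
        if delta > eps:
            return '+'
        return '0'
    return symbol(p1_prev, p1_now, p2_prev), symbol(p2_prev, p2_now, p1_prev)

# A human walks towards a stationary robot:
print(qtc_b((0, 0), (1, 0), (5, 0), (5, 0)))  # -> ('-', '0')
```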
Long-term experiments with an adaptive spherical view representation for navigation in changing environments
Real-world environments such as houses and offices change over time, meaning that a mobile robot's map will become out of date. In this work, we introduce a method to update the reference views in a hybrid metric-topological map so that a mobile robot can continue to localize itself in a changing environment. The updating mechanism, based on the multi-store model of human memory, incorporates a spherical metric representation of the observed visual features for each node in the map, which enables the robot to estimate its heading and navigate using multi-view geometry, as well as representing the local 3D geometry of the environment. A series of experiments demonstrate the persistence performance of the proposed system in real changing environments, including analysis of the long-term stability.
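The multi-store updating mechanism can be sketched as a promote-and-forget rule over feature identifiers; the thresholds below are illustrative assumptions, not the paper's parameters:

```python
def update_memory(short_term, long_term, observed,
                  promote_after=3, forget_after=5):
    """One update step of a multi-store reference view.

    short_term / long_term: dicts mapping feature id -> counter
    observed: set of feature ids detected in the current frame
    Features seen promote_after times move to long-term memory;
    long-term features missed forget_after frames in a row are dropped,
    so the reference view tracks a changing environment.
    """
    for f in observed:
        if f in long_term:
            long_term[f] = 0                      # reset miss counter
        else:
            short_term[f] = short_term.get(f, 0) + 1
            if short_term[f] >= promote_after:    # rehearsal -> long-term
                long_term[f] = 0
                del short_term[f]
    for f in list(long_term):
        if f not in observed:
            long_term[f] += 1
            if long_term[f] >= forget_after:      # environment changed
                del long_term[f]
    return short_term, long_term

st, lt = {}, {}
for _ in range(3):
    update_memory(st, lt, {"door", "poster"})
print(sorted(lt))  # -> ['door', 'poster']
```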
Navigation without localisation: reliable teach and repeat based on the convergence theorem
We present a novel concept for teach-and-repeat visual navigation. The
proposed concept is based on a mathematical model, which indicates that in
teach-and-repeat navigation scenarios, mobile robots do not need to perform
explicit localisation. Instead, a mobile robot that repeats a
previously taught path can simply 'replay' the learned velocities, while using
its camera information only to correct its heading relative to the intended
path. To support our claim, we establish a position error model of a robot,
which traverses a taught path by only correcting its heading. Then, we outline
a mathematical proof which shows that this position error does not diverge over
time. Based on the insights from the model, we present a simple monocular
teach-and-repeat navigation method. The method is computationally efficient, it
does not require camera calibration, and it can learn and autonomously traverse
arbitrarily-shaped paths. In a series of experiments, we demonstrate that the
method can reliably guide mobile robots in realistic indoor and outdoor
conditions, and can cope with imperfect odometry, landmark deficiency,
illumination variations and naturally-occurring environment changes.
Furthermore, we provide the navigation system and the datasets gathered at
http://www.github.com/gestom/stroll_bearnav. The paper will be presented at IROS 2018 in Madrid.
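The heading-only correction can be sketched as follows, assuming matched horizontal feature positions between the taught map and the current frame; fov_deg and image_width are hypothetical camera parameters (the actual method is calibration-free and works with raw pixel offsets):

```python
from statistics import median

def heading_correction(taught_xs, current_xs, fov_deg=60.0, image_width=640):
    """Estimate a steering correction from matched image features.

    taught_xs / current_xs: horizontal pixel positions of the same
    landmarks in the taught map image and the current camera frame.
    Returns an approximate correction angle in degrees.
    """
    shifts = [c - t for t, c in zip(taught_xs, current_xs)]
    # A robust statistic (here the median) suppresses feature mismatches.
    pixel_shift = median(shifts)
    # Convert pixels to an approximate steering angle.
    return -pixel_shift * fov_deg / image_width

# Features appear 32 px right of their taught positions, so the robot
# steers left by about 3 degrees:
print(heading_correction([100, 200, 300], [132, 232, 332]))  # -> -3.0
```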
Social Attention: Modeling Attention in Human Crowds
Robots that navigate through human crowds need to be able to plan safe,
efficient, and human predictable trajectories. This is a particularly
challenging problem as it requires the robot to predict future human
trajectories within a crowd where everyone implicitly cooperates with each
other to avoid collisions. Previous approaches to human trajectory prediction
have modeled the interactions between humans as a function of proximity.
However, proximity is not always the right cue: people in our immediate
vicinity moving in the same direction may matter less than people further
away who are on a collision course with us. In this work, we
propose Social Attention, a novel trajectory prediction model that captures the
relative importance of each person when navigating in the crowd, irrespective
of their proximity. We demonstrate the performance of our method against a
state-of-the-art approach on two publicly available crowd datasets and analyze
the trained attention model to gain a better understanding of which surrounding
agents humans attend to when navigating in a crowd.
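The core idea, that attention weights over surrounding agents need not track proximity, can be sketched with a plain softmax over per-agent interaction scores; in the paper these scores come from a trained model, here they are supplied directly for illustration:

```python
import math

def attend(neighbor_states, scores):
    """Combine neighbour states using softmax attention weights.

    neighbor_states: list of feature vectors, one per surrounding agent
    scores: interaction scores (learned in the paper, hand-set here)
    Returns (weights, context vector).
    """
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(neighbor_states[0])
    context = [sum(w * s[i] for w, s in zip(weights, neighbor_states))
               for i in range(dim)]
    return weights, context

# A distant agent on a collision course (score 2.0) outweighs a nearby
# agent heading away (score 0.0), regardless of proximity:
w, ctx = attend([[1.0, 0.0], [0.0, 1.0]], [0.0, 2.0])
print(w[1] > w[0])  # -> True
```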
Artificial Intelligence and Systems Theory: Applied to Cooperative Robots
This paper describes an approach to the design of a population of cooperative
robots based on concepts borrowed from Systems Theory and Artificial
Intelligence. The research has been developed under the SocRob project, carried
out by the Intelligent Systems Laboratory at the Institute for Systems and
Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the
project stands both for "Society of Robots" and "Soccer Robots", the case study
where we are testing our population of robots. Designing soccer robots is a
very challenging problem, where the robots must act not only to shoot a ball
towards the goal, but also to detect and avoid static (walls, stopped robots)
and dynamic (moving robots) obstacles. Furthermore, they must cooperate to
defeat an opposing team. Our past and current research in soccer robotics
includes cooperative sensor fusion for world modeling, object recognition and
tracking, robot navigation, multi-robot distributed task planning and
coordination, including cooperative reinforcement learning in cooperative and
adversarial environments, and behavior-based architectures for real-time task
execution of cooperating robot teams.
- …