A Comprehensive Review on Autonomous Navigation
The field of autonomous mobile robots has undergone dramatic advancements
over the past decades. Despite achieving important milestones, several
challenges are yet to be addressed. Aggregating the achievements of the robotics
community in survey papers is vital for keeping track of the current state of
the art and the challenges to be tackled in the future. This paper provides a
comprehensive review of autonomous mobile robots, covering topics such as
sensor types, mobile robot platforms, simulation tools, path planning and
following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation
for presenting a survey paper is twofold. First, the field of autonomous
navigation evolves quickly, so survey papers must be written regularly to keep
the research community aware of the current status of the field. Second, deep
learning methods have revolutionized many fields, including autonomous
navigation; this paper therefore also gives an appropriate treatment of the
role of deep learning in autonomous navigation. Future work and research gaps
are also discussed.
GPU Computing for Cognitive Robotics
This thesis presents the first investigation of the impact of GPU
computing on cognitive robotics by providing a series of novel experiments in
the area of action and language acquisition in humanoid robots and computer
vision. Cognitive robotics is concerned with endowing robots with high-level
cognitive capabilities to enable the achievement of complex goals in complex
environments. Reaching the ultimate goal of developing cognitive robots will
require tremendous amounts of computational power, which was until
recently provided mostly by standard CPU processors. CPU cores are
optimised for serial code execution at the expense of parallel execution, which
renders them relatively inefficient when it comes to high-performance
computing applications. The ever-increasing market demand for
high-performance, real-time 3D graphics has evolved the GPU into a highly
parallel, multithreaded, many-core processor with extraordinary computational
power and very high memory bandwidth. These vast computational resources
of modern GPUs can now be used by most cognitive robotics models
as they tend to be inherently parallel. Various interesting and insightful
cognitive models were developed and addressed important scientific questions
concerning action-language acquisition and computer vision. While they have
provided us with important scientific insights, their complexity and range of
application have not improved much in recent years. The experimental
tasks as well as the scale of these models are often minimised to avoid
excessive training times that grow exponentially with the number of neurons
and the training data. This impedes further progress and development of
complex neurocontrollers that would be able to take the cognitive robotics
research a step closer to reaching the ultimate goal of creating intelligent
machines. This thesis presents several cases where the application of the GPU
computing on cognitive robotics algorithms resulted in the development of
large-scale neurocontrollers of previously unseen complexity enabling the
conduct of the novel experiments described herein.
European Commission Seventh Framework Programme
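The contrast drawn above between serial CPU execution and the data-parallel style that suits GPUs can be illustrated with a minimal sketch. NumPy stands in for a GPU kernel here; the layer sizes and tanh activation are illustrative assumptions, not details taken from the thesis:

```python
import numpy as np

# Illustrative only: a layer of neurons computes activations y = tanh(W x + b).
# On a CPU the per-neuron loop runs serially; on a GPU each neuron's dot
# product can be evaluated by a separate thread, which is why such neural
# models map well onto many-core hardware.

rng = np.random.default_rng(0)
n_neurons, n_inputs = 1024, 512
W = rng.standard_normal((n_neurons, n_inputs))
b = rng.standard_normal(n_neurons)
x = rng.standard_normal(n_inputs)

# Serial formulation: one neuron at a time.
y_serial = np.array([np.tanh(W[i] @ x + b[i]) for i in range(n_neurons)])

# Data-parallel formulation: one matrix-vector product, GPU-friendly.
y_parallel = np.tanh(W @ x + b)

assert np.allclose(y_serial, y_parallel)
```

Both formulations compute the same activations; only the second exposes the parallelism that a GPU can exploit.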
Proceedings of the 9th Conference on Autonomous Robot Systems and Competitions
Welcome to ROBOTICA 2009, the 9th edition of the Conference on Autonomous Robot Systems and Competitions, and the third with IEEE Robotics and Automation Society Technical Co-Sponsorship. Previous editions have been held since 2001 in Guimarães, Aveiro, Porto, Lisboa, Coimbra and Algarve. ROBOTICA 2009 is held on 7 May 2009 in Castelo Branco, Portugal.
ROBOTICA received 32 paper submissions from 10 countries in South America, Asia and Europe. Each submission was evaluated with three reviews by the international program committee. 23 papers were published in the proceedings and presented at the conference: 14 were selected for oral presentation and 9 for poster presentation, for a global acceptance ratio of 72%.
After the conference, eight papers will be published in the Portuguese journal Robótica, and the best student paper will be published in the IEEE Multidisciplinary Engineering Education Magazine.
Three prizes will be awarded at the conference: best conference paper, best student paper and best presentation. The last two are sponsored by the IEEE Education Society Student Activities Committee.
We would like to express our thanks to all participants: first of all to the authors, whose quality work is the essence of this conference; next, to all the members of the international program committee and reviewers, who helped us with their expertise and valuable time. We would also like to deeply thank the invited speaker, Jean-Paul Laumond, LAAS-CNRS, France, for his excellent contribution in the field of humanoid robots. Finally, a word of appreciation for the hard work of the secretariat and volunteers.
Our deep gratitude goes to the Scientific Organisations that kindly agreed to sponsor the Conference, and made it come true.
We look forward to seeing more results of R&D work on robotics at ROBOTICA 2010, somewhere in Portugal.
Computational intelligence approaches to robotics, automation, and control [Volume guest editors]
No abstract available
An Approach for Multi-Robot Opportunistic Coexistence in Shared Space
This thesis considers a situation in which multiple robots operate in the
same environment to accomplish different tasks. In this situation,
not only the tasks, but also the robots themselves
are likely to be heterogeneous, i.e., different from each other in their
morphology, dynamics, sensors, capabilities, etc. As an example, consider
a "smart hotel": small wheeled robots are likely to be devoted to
cleaning floors, whereas a humanoid robot may be devoted to social interaction,
e.g., welcoming guests and providing relevant information
upon request.
Under these conditions, robots are required not only to coexist, but also
to coordinate their activity if they are to exhibit coherent and
effective behavior: this may range from mutual avoidance to prevent collisions,
to more explicit coordinated behavior, e.g., task assignment or
cooperative localization.
The issues above have been deeply investigated in the literature. Among
the topics that may play a crucial role in designing a successful system, this
thesis focuses on the following:
(i) An integrated approach for path following and obstacle avoidance is
applied to unicycle-type robots, by extending an existing algorithm [1],
initially developed for the single-robot case, to the multi-robot domain.
The approach defines the path to be followed as a
curve f(x, y) in space, while obstacles are modeled as Gaussian functions
that modify the original function, generating a resulting safe path. The
attractiveness of this methodology, and what makes it very simple, is
that it neither requires computing a projection of the robot position
onto the path, nor does it need to consider a moving virtual target
to be tracked. The performance of the proposed approach is analyzed
through a series of experiments performed in dynamic environments
with unicycle-type robots, with robot positions determined by odometry
and by a motion-capture system.
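The path-deformation idea in (i) can be sketched as follows. This is a minimal illustration under assumed parameters: the straight-line nominal path and the Gaussian amplitude and width are hypothetical, not the values used by the algorithm in [1]:

```python
import numpy as np

def path_function(x, y):
    # Nominal path as the zero-level set of f(x, y).
    # Here, a straight line y = 0 (an assumed example path).
    return y

def gaussian_bump(x, y, ox, oy, amplitude=1.5, sigma=0.4):
    # An obstacle at (ox, oy) modeled as a Gaussian added to f,
    # which pushes the zero-level set away from the obstacle.
    return amplitude * np.exp(-((x - ox) ** 2 + (y - oy) ** 2) / (2 * sigma ** 2))

def modified_path(x, y, obstacles):
    # The safe path is the zero-level set of the nominal function
    # plus one Gaussian term per obstacle.
    f = path_function(x, y)
    for ox, oy in obstacles:
        f += gaussian_bump(x, y, ox, oy)
    return f

# The robot steers so as to keep modified_path(x, y, obstacles) near zero:
# far from obstacles it tracks the nominal path, near them it deviates.
obstacles = [(2.0, 0.1)]
print(modified_path(0.0, 0.0, obstacles))  # far from the obstacle: close to nominal
print(modified_path(2.0, 0.0, obstacles))  # near the obstacle: strongly perturbed
```

Note that no projection onto the path and no virtual target are needed: the controller only evaluates the scalar field at the robot's current position.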
(ii) We investigate the problem of multi-robot cooperative localization
in dynamic environments. Specifically, we propose an approach in which
wheeled robots are localized using the monocular camera embedded in
the head of a Pepper humanoid robot, with the aim of minimizing deviations
from their paths and avoiding each other during navigation tasks.
Position estimation requires a linear relationship between
points in the image and points in the world frame: to this end, an
Inverse Perspective Mapping (IPM) approach is adopted to transform
the acquired image into a bird's-eye view of the environment. The
scenario is made more complex by the fact that Pepper's head moves
dynamically while tracking the wheeled robots, which requires considering
a different IPM transformation matrix whenever the attitude (pitch
and yaw) of the camera changes. Finally, the IPM position estimate returned
by Pepper is merged with the estimate returned by the odometry
of the wheeled robots through an Extended Kalman Filter. Experiments
are shown with multiple robots moving along different paths in a shared
space, avoiding each other without onboard sensors, i.e., relying
only on mutual positioning information.
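The core of the IPM step in (ii), recomputing a ground-plane homography whenever the camera attitude changes, can be sketched roughly as follows. The camera conventions, the zero-roll assumption, and the function names are illustrative assumptions, not Pepper's actual calibration:

```python
import numpy as np

def ipm_homography(K, pitch, yaw, height):
    # Homography induced by the ground plane z = 0, for a camera with
    # intrinsics K, attitude (pitch, yaw), and given height above the ground.
    # Roll is assumed zero; axis conventions are illustrative.
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    R_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    R = R_yaw @ R_pitch
    t = np.array([0.0, 0.0, height])
    # For points on the plane, [u, v, 1]^T ~ K [r1 r2 t] [X, Y, 1]^T.
    return K @ np.column_stack((R[:, 0], R[:, 1], t))

def image_to_ground(H, u, v):
    # Back-project a pixel (u, v) onto the ground plane (bird's-eye view).
    p = np.linalg.solve(H, np.array([u, v, 1.0]))
    return p[:2] / p[2]

# Because Pepper's head moves, H must be recomputed whenever pitch or yaw
# changes, before converting tracked pixels into ground-plane positions.
H = ipm_homography(np.eye(3), pitch=0.0, yaw=0.0, height=1.0)
ground_xy = image_to_ground(H, 3.0, 4.0)
```

The resulting ground-plane estimate would then serve as the measurement in the Extended Kalman Filter that fuses it with each wheeled robot's odometry.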
Software implementing the theoretical models described above has
been developed in ROS and validated through real experiments
with two types of robots, namely: (i) a unicycle wheeled Roomba robot
(commercially available worldwide), and (ii) a Pepper humanoid robot
(commercially available in Japan and as a B2B model in Europe).
Subsumption architecture for enabling strategic coordination of robot swarms in a gaming scenario
The field of swarm robotics breaks away from traditional research by maximizing the performance of a group - swarm - of limited robots instead of optimizing the intelligence of a single robot. Similar to current-generation strategy video games, the player controls groups of units - squads - instead of the individual participants. These individuals are rather unintelligent robots, capable of little more than navigating and using their weapons. However, clever control of the squads of autonomous robots by the game players can make for intense, strategic matches.
The gaming framework presented in this article provides players with strategic coordination of robot squads. The developed swarm intelligence techniques break up complex squad commands into several commands for each robot, using robot formations and path finding while avoiding obstacles. These algorithms are validated through a 'Capture the Flag' gaming scenario where a complex squad command is split up into several robot commands in a matter of milliseconds.
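The squad-to-robot command splitting described above can be sketched as follows. The circular formation, the radius, and the function name are illustrative assumptions, since the article does not specify its formation shapes or path-finding step here:

```python
import math

def split_squad_command(target, heading_offset, robots):
    # Turn one squad-level command ("move the squad to `target`") into one
    # goal per robot, by placing the robots on an assumed circular formation
    # around the target. Real formations and obstacle-aware path finding
    # would refine each goal further.
    n = len(robots)
    radius = 1.0  # formation radius in metres (assumed)
    commands = {}
    for i, robot in enumerate(robots):
        angle = heading_offset + 2 * math.pi * i / n
        gx = target[0] + radius * math.cos(angle)
        gy = target[1] + radius * math.sin(angle)
        commands[robot] = (gx, gy)
    return commands

# One squad command becomes four per-robot goals around (5, 5).
cmds = split_squad_command((5.0, 5.0), 0.0, ["r1", "r2", "r3", "r4"])
```

The split is a constant-time computation per robot, consistent with the millisecond-scale command decomposition the article reports.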
Towards adaptive and autonomous humanoid robots: from vision to actions
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together, helping and coexisting with humans in daily life. In all of these, a clear need arises to deal with a more unstructured, changing environment. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, an integration of computer vision and machine learning techniques is employed. From a robot-vision point of view, combining domain knowledge from both image processing and machine learning can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable and robust, and requires only very small training sets (it was tested with 5 to 10 images per experiment). Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown: it allows the robot to avoid obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables us to use the robot in non-static environments, i.e., the reaching is adapted on the fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.