Artificial Intelligence and Systems Theory: Applied to Cooperative Robots
This paper describes an approach to the design of a population of cooperative
robots based on concepts borrowed from Systems Theory and Artificial
Intelligence. The research has been developed under the SocRob project, carried
out by the Intelligent Systems Laboratory at the Institute for Systems and
Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the
project stands both for "Society of Robots" and "Soccer Robots", the case study
where we are testing our population of robots. Designing soccer robots is a
very challenging problem, where the robots must act not only to shoot a ball
towards the goal, but also to detect and avoid static (walls, stopped robots)
and dynamic (moving robots) obstacles. Furthermore, they must cooperate to
defeat an opposing team. Our past and current research in soccer robotics
includes cooperative sensor fusion for world modeling, object recognition and
tracking, robot navigation, multi-robot distributed task planning and
coordination (including cooperative reinforcement learning in cooperative and
adversarial environments), and behavior-based architectures for real-time task
execution by cooperating robot teams.
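Cooperative sensor fusion for world modeling, as mentioned above, typically combines teammates' noisy observations of a shared object into one estimate. A minimal sketch, assuming inverse-variance weighting of 2-D ball position estimates (the function name and data layout are illustrative, not the SocRob project's actual method):

```python
import numpy as np

def fuse_ball_estimates(estimates):
    """Fuse teammates' 2-D ball position estimates by inverse-variance
    weighting, a common cooperative sensor-fusion scheme (illustrative;
    the SocRob papers describe their own fusion method).

    estimates: list of (position, variance) pairs, where position is a
    length-2 array and variance is a scalar confidence for that robot.
    """
    weights = np.array([1.0 / var for _, var in estimates])
    positions = np.array([pos for pos, _ in estimates])
    fused = (weights[:, None] * positions).sum(axis=0) / weights.sum()
    fused_var = 1.0 / weights.sum()   # fused estimate is more confident
    return fused, fused_var

# Example: three robots see the ball with different confidence.
obs = [(np.array([2.0, 1.0]), 0.5),
       (np.array([2.2, 0.9]), 1.0),
       (np.array([1.9, 1.1]), 2.0)]
pos, var = fuse_ball_estimates(obs)
```

Note that the fused variance is always smaller than the best individual variance, which is the point of pooling observations across the team.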
This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
i.e. non-physical borders, that allow a user to restrict the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase
flexibility by considering different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
concerning correctness, accuracy and teaching time. The experimental results
revealed a high accuracy and linear teaching time with respect to the border
length while correctly incorporating the borders into the robot's navigational
map. Finally, our user study showed that non-expert users can employ our
interaction method.

Comment: Accepted at 2019 Third IEEE International Conference on Robotic
Computing (IRC); supplementary video: https://youtu.be/lKsGp8xtyI
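Incorporating a border into the robot's navigational map amounts to making the cells the border passes through off-limits for the planner. A minimal sketch, assuming a plain 0/1 occupancy grid and a polygonal border (the grid representation and function are assumptions for illustration, not the paper's actual framework):

```python
def mark_virtual_border(grid, polygon, resolution=1.0):
    """Rasterize a polygonal virtual border into an occupancy grid by
    marking the cells its edges pass through as occupied, so a planner
    treats the border like a wall. Illustrative only: the paper
    integrates borders into the robot's navigational map via its own
    framework.

    grid: 2-D list of 0/1 cells; polygon: list of (x, y) vertices in
    metres; resolution: metres per cell.
    """
    rows, cols = len(grid), len(grid[0])
    pts = polygon + [polygon[0]]              # close the polygon
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        # Sample each edge densely enough to touch every crossed cell.
        steps = max(1, int(max(abs(x1 - x0), abs(y1 - y0)) / resolution) * 4)
        for i in range(steps + 1):
            t = i / steps
            x = x0 + t * (x1 - x0)
            y = y0 + t * (y1 - y0)
            r, c = int(y / resolution), int(x / resolution)
            if 0 <= r < rows and 0 <= c < cols:
                grid[r][c] = 1                # occupied: no-go cell

grid = [[0] * 10 for _ in range(10)]
mark_virtual_border(grid, [(2, 2), (7, 2), (7, 7), (2, 7)])
```

Only the border cells are marked; whether the enclosed interior is also forbidden is a policy choice left to the planner.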
A vision-guided parallel parking system for a mobile robot using approximate policy iteration
Reinforcement Learning (RL) methods enable autonomous robots to learn skills from scratch by interacting with the environment. However, RL can be very time-consuming. This paper focuses on accelerating the RL process on a mobile robot in an unknown environment. The presented algorithm is based on approximate policy iteration with a continuous state space and a fixed number of actions. The action-value function is represented by a weighted combination of basis functions.
Furthermore, a complexity analysis is provided to show that the implemented approach is guaranteed to converge to an optimal policy in less computational time.
A parallel parking task is selected for testing purposes. In the experiments, the efficiency of the proposed approach is demonstrated and analyzed through a set of simulated and real robot experiments, with comparisons against two well-known algorithms (Dyna-Q and Q-learning).
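Approximate policy iteration with a linear action-value function, as described above, alternates policy evaluation and greedy improvement over Q(s, a) ≈ w · φ(s, a). A minimal evaluation-step sketch in the spirit of LSTD-Q/LSPI (the features, sample format, and tiny example MDP are assumptions; the paper's parking-task features are not reproduced here):

```python
import numpy as np

def lstdq(samples, phi, policy, n_features, gamma=0.95):
    """One LSTD-Q policy-evaluation step for approximate policy
    iteration with a linear action-value function Q(s,a) = w . phi(s,a).

    samples: list of (s, a, r, s_next) transitions.
    phi(s, a): feature vector; policy(s): action of the policy being
    evaluated. Solves A w = b with
    A = sum phi (phi - gamma * phi')^T and b = sum r * phi.
    """
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    # Light regularization in case A is singular for small sample sets.
    return np.linalg.solve(A + 1e-6 * np.eye(n_features), b)

# Tiny illustration: 2 states x 2 actions, one-hot features.
def phi(s, a):
    f = np.zeros(4)
    f[s * 2 + a] = 1.0
    return f

samples = [(0, 0, 1.0, 1), (1, 0, 0.0, 1)]
w = lstdq(samples, phi, lambda s: 0, n_features=4)
```

With one-hot features the solved weights are the action values themselves: here Q(0, 0) ≈ 1 (the immediate reward, since state 1 is worthless under this policy) and Q(1, 0) ≈ 0.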
Q Learning Behavior on Autonomous Navigation of Physical Robot
Behavior-based architecture gives a robot fast and reliable action, but when a robot has many behaviors, behavior coordination is needed. Subsumption architecture is a behavior coordination method that gives quick and robust responses, and a learning mechanism improves the robot's performance in handling uncertainty. Q-learning is a popular reinforcement learning method that has been used in robot learning because it is simple, convergent, and off-policy. In this paper, Q-learning is used as the learning mechanism for the obstacle avoidance behavior in autonomous robot navigation. The learning rate of Q-learning affects the robot's performance in the learning phase. As a result, the Q-learning algorithm was successfully implemented in a physical robot in its imperfect environment.
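The Q-learning update the abstract refers to is the standard off-policy temporal-difference rule. A minimal sketch for an obstacle-avoidance behavior, where the state/action discretization ("obstacle_ahead", "turn_left", etc.) is an illustrative assumption rather than the paper's actual setup:

```python
import random

ACTIONS = ["forward", "turn_left", "turn_right"]   # assumed discretization

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Tabular Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    alpha is the learning rate the abstract identifies as critical."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def epsilon_greedy(Q, s, eps=0.1):
    """Exploration policy used while learning."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

# Toy experience: with an obstacle ahead, driving forward collides (-1),
# while turning reaches a clear state (+0.1).
Q = {}
for _ in range(50):
    q_update(Q, "obstacle_ahead", "forward", -1.0, "collided")
    q_update(Q, "obstacle_ahead", "turn_left", 0.1, "clear")

# After learning, the greedy policy prefers turning over driving forward.
best = max(ACTIONS, key=lambda a: Q.get(("obstacle_ahead", a), 0.0))
```

Because the update uses the max over next-state actions rather than the action actually taken, the rule is off-policy, which is exactly the property the abstract cites as a reason for its popularity.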
Towards an Architecture for Semiautonomous Robot Telecontrol Systems
The design and development of a computational system to support robot–operator collaboration is a challenging task, not only because of the overall system complexity, but also because of the involvement of different technical and scientific disciplines, namely Software Engineering, Psychology and Artificial Intelligence, among others. In our opinion, the approach generally used to face this type of project is based on system architectures inherited from the development of autonomous robots and therefore fails to incorporate the role of the operator explicitly, i.e. these architectures lack a view that helps the operator to see him/herself as an integral part of the system. The goal of this paper is to provide a human-centered paradigm that makes it possible to create this kind of view of the system architecture. This architectural description includes the definition of the operator's role and the robot's autonomous behaviour, identifies the shared knowledge, and helps the operator to see the robot as an intentional being like himself/herself.