An Open-Source Simulator for Cognitive Robotics Research: The Prototype of the iCub Humanoid Robot Simulator
This paper presents the prototype of a new computer simulator for the humanoid robot iCub. The iCub is a new open-source humanoid robot developed as a result of the "RobotCub" project, a collaborative European project aiming at developing a new open-source cognitive robotics platform. The iCub simulator has been developed as part of a joint effort with the European project "ITALK" on the integration and transfer of action and language knowledge in cognitive robots. The simulator is available open-source to all researchers interested in cognitive robotics experiments with the iCub humanoid platform.
AltURI: a thin middleware for simulated robot vision applications
Fast software performance is often the focus when developing real-time vision-based control applications for robot simulators. In this paper we present a thin, high-performance middleware for USARSim and other simulators designed for real-time vision-based control applications. It includes a fast image server providing images in OpenCV, Matlab or web formats, and a simple command/sensor processor. The interface has been tested in USARSim with an Unmanned Aerial Vehicle using two control applications: landing using a reinforcement learning algorithm, and altitude control using elementary motion detection. The middleware has been found to be fast enough to control the flying robot, as well as very easy to set up and use.
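The abstract does not specify AltURI's wire format, so as a purely illustrative sketch, here is one common way an image server like this can frame camera images for a client: a fixed-size header (payload length plus a camera id) followed by the encoded image bytes. All names here are hypothetical, not AltURI's actual API.

```python
import struct

def pack_frame(image_bytes, camera_id=0):
    """Length-prefixed framing for sending an encoded camera image over a
    socket: 4-byte big-endian payload length, 1-byte camera id, then the
    raw image bytes (e.g. a JPEG as produced by OpenCV's imencode)."""
    return struct.pack(">IB", len(image_bytes), camera_id) + image_bytes

def unpack_frame(buffer):
    """Inverse of pack_frame; returns (camera_id, image_bytes)."""
    length, camera_id = struct.unpack(">IB", buffer[:5])
    return camera_id, buffer[5:5 + length]

# Round-trip a (fake) JPEG payload for camera 1.
frame = pack_frame(b"\xff\xd8...jpeg data...", camera_id=1)
cam, payload = unpack_frame(frame)
```

Length-prefixed framing keeps the client side trivial (read 5 bytes, then read exactly `length` more), which matters when the goal is feeding images to a control loop with minimal overhead.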
Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning
Obstacle avoidance is a fundamental requirement for autonomous robots which operate in, and interact with, the real world. When perception is limited to monocular vision, avoiding collisions becomes significantly more challenging due to the lack of 3D information. Conventional path planners for obstacle avoidance require tuning a number of parameters and cannot directly benefit from large datasets and continuous use. In this paper, a dueling-architecture-based deep double-Q network (D3QN) is proposed for obstacle avoidance, using only monocular RGB vision. Based on the dueling and double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a simulator, even with very noisy depth information predicted from RGB images. Extensive experiments show that D3QN achieves a twofold acceleration in learning compared with a normal deep Q network, and that models trained solely in virtual environments can be directly transferred to real robots, generalizing well to various new environments with previously unseen dynamic objects.
Comment: Accepted by the RSS 2017 workshop New Frontiers for Deep Learning in Robotics
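The two mechanisms the abstract names are standard and can be sketched in a few lines: the dueling head combines a state value and per-action advantages into Q-values, and double-Q learning decouples action selection (online network) from action evaluation (target network). This is a minimal NumPy illustration of those two formulas, not the paper's implementation.

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).
    Subtracting the mean advantage keeps V and A identifiable."""
    return value + advantages - advantages.mean(axis=-1, keepdims=True)

def double_q_target(reward, gamma, q_online_next, q_target_next, done):
    """Double-Q target: the online network picks the next action,
    the target network evaluates it, reducing overestimation bias."""
    best_action = int(np.argmax(q_online_next))
    bootstrap = 0.0 if done else gamma * q_target_next[best_action]
    return reward + bootstrap

# Example with three discrete steering actions and made-up network outputs.
adv = np.array([1.0, -0.5, 0.2])          # advantage head A(s, .)
q = dueling_q(2.0, adv)                   # value head V(s) = 2.0
target = double_q_target(reward=1.0, gamma=0.99,
                         q_online_next=q,
                         q_target_next=np.array([1.5, 0.0, 0.3]),
                         done=False)
```

In the full method these heads sit on top of a convolutional encoder fed with monocular RGB (or depth predicted from RGB); only the aggregation and target computation are shown here.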
Torque-Controlled Stepping-Strategy Push Recovery: Design and Implementation on the iCub Humanoid Robot
One of the challenges for the robotics community is to deploy robots which can reliably operate in real-world scenarios together with humans. A crucial requirement for legged robots is the capability to balance properly on their feet, rejecting external disturbances. iCub is a state-of-the-art humanoid robot which has only recently started to balance on its feet. While the current balancing controller has proved successful in various scenarios, it still lacks the capability to react properly to strong pushes by taking steps. This paper addresses that limitation. It proposes and implements a control strategy based on the Capture Point concept [1]. Instead of relying on position control, like most Capture Point related approaches, the proposed strategy generates references for the momentum-based torque controller already implemented on the iCub, thus extending its capabilities to react to external disturbances while retaining the advantages of torque control when interacting with the environment. Experiments in the Gazebo simulator and on the iCub humanoid robot validate the proposed strategy.
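For readers unfamiliar with the Capture Point concept the abstract builds on: under the linear inverted pendulum model, the instantaneous capture point is the ground location where the robot must step so that its center of mass comes to rest over the new support foot. A minimal sketch of the textbook formula follows; the paper's controller generates torque references from it rather than using it directly.

```python
import math

def capture_point(com_pos, com_vel, com_height, g=9.81):
    """Instantaneous capture point for the linear inverted pendulum:
    x_cp = x_com + v_com * sqrt(z_com / g), applied per horizontal axis.
    sqrt(z/g) is the inverse of the pendulum's natural frequency omega."""
    omega_inv = math.sqrt(com_height / g)
    return tuple(p + v * omega_inv for p, v in zip(com_pos, com_vel))

# A forward push gives the CoM 0.4 m/s of velocity at 0.5 m height;
# the capture point tells the robot how far ahead to place its foot.
cp = capture_point(com_pos=(0.0, 0.0), com_vel=(0.4, 0.0), com_height=0.5)
```

The faster the center of mass is moving (or the taller the pendulum), the further ahead the step must land, which is why strong pushes force stepping rather than ankle-only balancing.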
Learning object relationships which determine the outcome of actions