Robots That Do Not Avoid Obstacles
The motion planning problem is a fundamental problem in robotics: every
autonomous robot should be able to deal with it. A number of solutions have
been proposed, and a probabilistic one seems quite reasonable. Here, however,
we propose a more adaptive solution that uses fuzzy set theory, and we present
it alongside a short survey of the recent theory of soft robots, for a future
qualitative comparison between the two.
Comment: To appear in the Handbook of Nonlinear Analysis, Ed. Th. Rassias,
Springer
Autonomous robots path planning: An adaptive roadmap approach
Developing algorithms that allow robots to independently navigate unknown environments is a widely researched area of robotics. The potential for autonomous mobile robot use in industrial and military applications is boundless. Path planning entails computing a collision-free path from a robot's current position to a desired target. The problem of path planning for these robots remains underdeveloped: computational complexity, path optimization and robustness are some of the issues that arise. Current algorithms do not generate general solutions for different situations and require user experience and optimization, and classical algorithms are computationally expensive, which reduces the possibility of their use in real-time applications. Additionally, classical algorithms do not allow for any control over the attributes of the generated path. A new roadmap path planning algorithm is proposed in this paper. This method generates waypoints through which the robot can avoid obstacles and reach its goal. At the heart of this algorithm is a method to control the distance of the waypoints from obstacles without increasing computational complexity. Several simulations were run to illustrate the robustness and adaptability of this approach compared to the most commonly used path planning methods.
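The key idea in the abstract, placing waypoints at a controllable distance from obstacles, can be illustrated with a minimal sketch. This is not the paper's algorithm; it assumes circular obstacles and a hypothetical `detour_waypoint` helper, and only shows how a single clearance parameter sets waypoint-to-obstacle distance:

```python
import math

def detour_waypoint(start, goal, obstacle, radius, clearance):
    """Place one waypoint that routes a straight start->goal segment
    around a circular obstacle.

    `clearance` directly controls how far the waypoint sits from the
    obstacle boundary; changing it costs no extra computation.
    """
    sx, sy = start
    gx, gy = goal
    ox, oy = obstacle
    # Unit vector along the start->goal segment.
    dx, dy = gx - sx, gy - sy
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm
    # A direction perpendicular to the segment.
    px, py = -uy, ux
    # Flip it so the detour moves away from the obstacle centre.
    if (ox - sx) * px + (oy - sy) * py > 0:
        px, py = -px, -py
    offset = radius + clearance
    return (ox + px * offset, oy + py * offset)

# Obstacle of radius 1 sits on the segment; detour with 0.5 clearance.
wp = detour_waypoint((0.0, 0.0), (10.0, 0.0), (5.0, 0.0), 1.0, 0.5)
```

Increasing `clearance` pushes the waypoint proportionally farther from the obstacle boundary, which is the attribute-control property the abstract emphasizes.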
Towards Monocular Vision based Obstacle Avoidance through Deep Reinforcement Learning
Obstacle avoidance is a fundamental requirement for autonomous robots which
operate in, and interact with, the real world. When perception is limited to
monocular vision, avoiding collisions becomes significantly more challenging due
to the lack of 3D information. Conventional path planners for obstacle
avoidance require tuning a number of parameters and do not have the ability to
directly benefit from large datasets and continuous use. In this paper, a
dueling architecture based deep double-Q network (D3QN) is proposed for
obstacle avoidance, using only monocular RGB vision. Based on the dueling and
double-Q mechanisms, D3QN can efficiently learn how to avoid obstacles in a
simulator, even with very noisy depth information predicted from RGB images.
Extensive experiments show that D3QN enables twofold acceleration on learning
compared with a normal deep Q network and the models trained solely in virtual
environments can be directly transferred to real robots, generalizing well to
various new environments with previously unseen dynamic objects.
Comment: Accepted by the RSS 2017 workshop New Frontiers for Deep Learning in
Robotics
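The two mechanisms the abstract names, the dueling architecture and double-Q learning, can be sketched in a few lines. This is a simplified NumPy illustration, not the authors' D3QN implementation; the convolutional network producing the value and advantage heads is omitted:

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling head: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).

    Subtracting the mean advantage makes V and A identifiable.
    """
    return value + advantages - advantages.mean()

def double_q_target(reward, gamma, online_next_q, target_next_q, done):
    """Double-Q target: the online network selects the next action,
    the target network evaluates it, reducing overestimation bias."""
    if done:
        return reward
    a = int(np.argmax(online_next_q))
    return reward + gamma * target_next_q[a]

q = dueling_q(1.0, np.array([1.0, 2.0, 3.0]))
y = double_q_target(1.0, 0.9, np.array([0.2, 0.8]),
                    np.array([0.5, 0.3]), done=False)
```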
A cloud-assisted design for autonomous driving
This paper presents Carcel, a cloud-assisted system for autonomous driving. Carcel enables the cloud to have access to sensor data from autonomous vehicles as well as the roadside infrastructure. The cloud assists autonomous vehicles that use this system to avoid obstacles such as pedestrians and other vehicles that may not be directly detected by sensors on the vehicle. Further, Carcel enables vehicles to plan efficient paths that account for unexpected events such as road-work or accidents.
We evaluate a preliminary prototype of Carcel on a state-of-the-art autonomous driving system in an outdoor testbed including an autonomous golf car and six iRobot Create robots. Results show that Carcel reduces the average time vehicles need to detect obstacles such as pedestrians by 4.6x compared to today's systems that do not have access to the cloud.
Cold Diffusion on the Replay Buffer: Learning to Plan from Known Good States
Learning from demonstrations (LfD) has successfully trained robots to exhibit
remarkable generalization capabilities. However, many powerful imitation
techniques do not prioritize the feasibility of the robot behaviors they
generate. In this work, we explore the feasibility of plans produced by LfD. As
in prior work, we employ a temporal diffusion model with fixed start and goal
states to facilitate imitation through in-painting. Unlike previous studies, we
apply cold diffusion to ensure the optimization process is directed through the
agent's replay buffer of previously visited states. This routing approach
increases the likelihood that the final trajectories will predominantly occupy
the feasible region of the robot's state space. We test this method in
simulated robotic environments with obstacles and observe a significant
improvement in the agent's ability to avoid these obstacles during planning.
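The routing step described above, directing the denoising process through the replay buffer of previously visited states, can be caricatured as a nearest-neighbour projection. This is a deliberately simplified sketch of the idea, not the paper's cold-diffusion procedure, which operates on whole trajectories during the diffusion process:

```python
import numpy as np

def project_to_buffer(x, replay_buffer):
    """Snap a candidate state to the nearest previously visited
    (known-good) state, keeping plans inside the feasible region."""
    distances = np.linalg.norm(replay_buffer - x, axis=1)
    return replay_buffer[int(np.argmin(distances))]

# Buffer of states the agent has actually visited.
buffer = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
snapped = project_to_buffer(np.array([0.9, 0.8]), buffer)
```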
Towards Optimally Decentralized Multi-Robot Collision Avoidance via Deep Reinforcement Learning
Developing a safe and efficient collision avoidance policy for multiple
robots is challenging in decentralized scenarios where each robot generates
its paths without observing other robots' states and intents. While other
distributed multi-robot collision avoidance systems exist, they often require
extracting agent-level features to plan a local collision-free action, which
can be computationally prohibitive and not robust. More importantly, in
practice the performance of these methods is much lower than that of their
centralized counterparts.
We present a decentralized sensor-level collision avoidance policy for
multi-robot systems, which directly maps raw sensor measurements to an agent's
steering commands in terms of movement velocity. As a first step toward
reducing the performance gap between decentralized and centralized methods, we
present a multi-scenario multi-stage training framework to find an optimal
policy which is trained over a large number of robots on rich, complex
environments simultaneously using a policy gradient based reinforcement
learning algorithm. We validate the learned sensor-level collision avoidance
policy in a variety of simulated scenarios with thorough performance
evaluations and show that the final learned policy is able to find time
efficient, collision-free paths for a large-scale robot system. We also
demonstrate that the learned policy can be well generalized to new scenarios
that do not appear in the entire training period, including navigating a
heterogeneous group of robots and a large-scale scenario with 100 robots.
Videos are available at https://sites.google.com/view/drlmac
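The sensor-level mapping described above, from raw measurements directly to velocity commands, can be sketched as follows. All shapes and bounds here are hypothetical (a 180-beam laser scan, a single linear layer with random weights); the paper's policy is a trained network optimized with a policy-gradient algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical policy parameters: 180-beam scan -> (linear v, angular w).
W = rng.normal(scale=0.01, size=(2, 180))
b = np.zeros(2)

def policy(scan, v_max=1.0, w_max=0.8):
    """Map a raw range scan to bounded steering commands.

    tanh squashes the raw outputs to [-1, 1] before scaling, so the
    commands always respect the robot's velocity limits.
    """
    v, w = np.tanh(W @ scan + b)
    return float(v * v_max), float(w * w_max)

v, w = policy(np.ones(180))
```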