Obstacle-aware Adaptive Informative Path Planning for UAV-based Target Search
Target search with unmanned aerial vehicles (UAVs) is a problem relevant to
many scenarios, e.g., search and rescue (SaR). However, a key challenge is
planning paths for maximal search efficiency given flight time constraints. To
address this, we propose the Obstacle-aware Adaptive Informative Path Planning
(OA-IPP) algorithm for target search in cluttered environments using UAVs. Our
approach leverages a layered planning strategy using a Gaussian Process
(GP)-based model of target occupancy to generate informative paths in
continuous 3D space. Within this framework, we introduce an adaptive replanning
scheme which allows us to trade off between information gain, field coverage,
sensor performance, and collision avoidance for efficient target detection.
Extensive simulations show that our OA-IPP method performs better than
state-of-the-art planners, and we demonstrate its application in a realistic
urban SaR scenario.
Comment: Paper accepted for the International Conference on Robotics and
Automation (ICRA-2019), to be held in Montreal, Canada.
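As a rough illustration of the ingredients this abstract names, the sketch below is a toy, not the authors' OA-IPP implementation: it pairs a minimal Gaussian Process posterior-variance model of target occupancy with a waypoint utility that trades information gain against obstacle proximity. The RBF kernel, length scale, and collision penalty weight are all hypothetical choices.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel between point sets a (n,d) and b (m,d).
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior_var(X_obs, X_cand, noise=1e-3, ls=1.0):
    # Posterior variance of a unit-prior GP occupancy model at candidates;
    # high variance marks regions the UAV has not yet informed.
    K = rbf(X_obs, X_obs, ls) + noise * np.eye(len(X_obs))
    Ks = rbf(X_cand, X_obs, ls)
    return 1.0 - np.einsum('ij,jk,ik->i', Ks, np.linalg.inv(K), Ks)

def next_waypoint(X_obs, X_cand, obstacles, w_coll=5.0):
    # Trade off information gain (posterior variance) against obstacle
    # proximity, in the spirit of an obstacle-aware informative planner.
    var = gp_posterior_var(X_obs, X_cand)
    d_obs = np.min(np.linalg.norm(
        X_cand[:, None, :] - obstacles[None, :, :], axis=-1), axis=1)
    utility = var - w_coll * np.exp(-d_obs)  # penalise waypoints near obstacles
    return X_cand[np.argmax(utility)]
```

A full planner would score candidate paths rather than single waypoints and replan adaptively; this only shows the information/collision trade-off at one step.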
Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks
Autonomous robots need to interact with unknown, unstructured and changing
environments, constantly facing novel challenges. Therefore, continuous online
adaptation for lifelong learning and sample-efficient mechanisms to
adapt to changes in the environment, the constraints, the tasks, or the robot
itself are crucial. In this work, we propose a novel framework for
probabilistic online motion planning with online adaptation based on a
bio-inspired stochastic recurrent neural network. By using learning signals
which mimic the intrinsic motivation signal of cognitive dissonance, together
with a mental replay strategy to intensify experiences, the stochastic
recurrent network can learn from few physical interactions and adapt to novel
environments in seconds. We evaluate our online planning and adaptation
framework on an anthropomorphic KUKA LWR arm. The rapid online adaptation is
shown by learning unknown workspace constraints sample-efficiently from few
physical interactions while following given waypoints.
Comment: accepted in Neural Networks
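The mental-replay idea can be sketched in miniature. The toy learner below is an assumption-laden stand-in (a linear forward model, not the paper's stochastic recurrent network): each real interaction is stored and then replayed many times to squeeze more learning out of few samples, and the prediction-error magnitude plays the role of the cognitive-dissonance signal.

```python
import numpy as np

class MentalReplayLearner:
    """Toy online learner: a linear forward model W updated from few real
    interactions, with each experience 'mentally replayed' several times.
    Illustrative sketch only; hyperparameters are arbitrary."""

    def __init__(self, dim, lr=0.1, replays=50):
        self.W = np.zeros((dim, dim))
        self.lr, self.replays = lr, replays
        self.buffer = []  # stored (input, outcome) experiences

    def observe(self, x, y):
        # Dissonance = how strongly the outcome contradicts the prediction.
        dissonance = np.linalg.norm(y - self.W @ x)
        self.buffer.append((x, y))
        for _ in range(self.replays):       # mental replay intensifies
            for xb, yb in self.buffer:      # each physical interaction
                err = yb - self.W @ xb
                self.W += self.lr * np.outer(err, xb)
        return dissonance
```

With replay, two physical samples suffice to fit this two-dimensional model; without it, the same accuracy would need many more real interactions.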
Combining Subgoal Graphs with Reinforcement Learning to Build a Rational Pathfinder
In this paper, we present a hierarchical path planning framework called SG-RL
(subgoal graphs-reinforcement learning), to plan rational paths for agents
maneuvering in continuous and uncertain environments. By "rational", we mean
(1) efficient path planning that eliminates first-move lags; (2) collision-free,
smooth paths that satisfy the agents' kinematic constraints. SG-RL works in a
two-level manner. At the first level, SG-RL uses a geometric path-planning
method, i.e., Simple Subgoal Graphs (SSG), to efficiently find optimal abstract
paths, also called subgoal sequences. At the second level, SG-RL uses an RL
method, i.e., Least-Squares Policy Iteration (LSPI), to learn near-optimal
motion-planning policies which can generate kinematically feasible and
collision-free trajectories between adjacent subgoals. The first advantage of
the proposed method is that SSG mitigates the sparse-reward and local-minima
problems that RL agents face; thus, LSPI can be used to generate paths in
complex environments. The second advantage is that, when the environment
changes slightly (e.g., when unexpected obstacles appear), SG-RL does not need to
reconstruct subgoal graphs and replan subgoal sequences using SSG, since LSPI
can deal with uncertainties by exploiting its generalization ability to handle
changes in environments. Simulation experiments in representative scenarios
demonstrate that, compared with existing methods, SG-RL can work well on
large-scale maps with relatively low action-switching frequencies and shorter
path lengths, and SG-RL can deal with small changes in environments. We further
demonstrate that the design of reward functions and the types of training
environments are important factors for learning feasible policies.
Comment: 20 pages
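The high level of such a two-level scheme can be sketched as follows. This is a generic shortest-path search over a hand-made subgoal graph, not the paper's SSG construction: Dijkstra yields the abstract subgoal sequence, and a learned low-level policy (LSPI in the paper) would then steer the agent between consecutive subgoals.

```python
import heapq

def subgoal_sequence(graph, start, goal):
    # Dijkstra over a subgoal graph given as {node: [(neighbor, cost), ...]}.
    # Returns the cheapest abstract path from start to goal, or None.
    pq, seen = [(0.0, start, [start])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, c in graph.get(node, []):
            if nbr not in seen:
                heapq.heappush(pq, (cost + c, nbr, path + [nbr]))
    return None
```

Because the low-level policy generalizes across local states, small map changes only perturb the segments between subgoals; the abstract sequence above need not be recomputed.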
Human Arm simulation for interactive constrained environment design
During the conceptual and prototype design stage of an industrial product, it
is crucial to consider assembly/disassembly and maintenance operations in advance.
A well-designed system should enable relatively easy access of operating
manipulators in the constrained environment and reduce musculoskeletal disorder
risks for those manual handling operations. Trajectory planning comes up as an
important issue for those assembly and maintenance operations under a
constrained environment, since it determines the accessibility and the other
ergonomics issues, such as muscle effort and its related fatigue. In this
paper, a customer-oriented interactive approach is proposed to partially solve
ergonomics-related issues encountered when designing a constrained system
for the operator's convenience. Based on a single objective
optimization method, trajectory planning for different operators could be
generated automatically. Meanwhile, a motion capture based method assists the
operator to guide the trajectory planning interactively when either a local
minimum is encountered within the single objective optimization or the operator
prefers guiding the virtual human manually. In addition, a physics engine is
integrated into this approach to provide physically realistic simulation in
real time, so that collision-free paths and related dynamic information
can be computed to determine muscle fatigue and the accessibility of a
product design.
Comment: International Journal on Interactive Design and Manufacturing
(IJIDeM) (2012) 1-12. arXiv admin note: substantial text overlap with
arXiv:1012.432
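A single-objective trajectory optimization of the kind described can be sketched minimally. The code below is an assumption, not the paper's method: it optimizes the interior joint configurations of a discretized trajectory to minimize summed squared joint velocity (a crude stand-in for muscle effort), with the endpoints fixed.

```python
import numpy as np

def plan_min_effort(q_start, q_goal, n=20, iters=2000, lr=0.05, q_init=None):
    # Gradient descent on E = sum_t ||q[t+1] - q[t]||^2 over the interior
    # points of an n-step joint trajectory; endpoints stay clamped.
    if q_init is None:
        q = np.linspace(q_start, q_goal, n)   # straight-line initial guess
    else:
        q = q_init.astype(float).copy()
    for _ in range(iters):
        # dE/dq[i] = 2 * (2 q[i] - q[i-1] - q[i+1]) for interior i
        grad = 2 * (2 * q[1:-1] - q[:-2] - q[2:])
        q[1:-1] -= lr * grad
    return q
```

A practical version would add terms for obstacle clearance and posture comfort; with only the effort term, the optimum is the straight line in joint space, which makes the sketch easy to verify.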
A Neural Model of Visually Guided Steering, Obstacle Avoidance, and Route Selection
A neural model is developed to explain how humans can approach a goal object on foot while steering around obstacles to avoid collisions in a cluttered environment. The model uses optic flow from a 3D virtual reality environment to determine the position of objects based on motion discontinuities, and computes heading direction, or the direction of self-motion, from global optic flow. The cortical representation of heading interacts with the representations of a goal and obstacles such that the goal acts as an attractor of heading, while obstacles act as repellers. In addition, the model maintains fixation on the goal object by generating smooth pursuit eye movements. Eye rotations can distort the optic flow field, complicating heading perception, and the model uses extraretinal signals to correct for this distortion and accurately represent heading. The model explains how motion processing mechanisms in cortical areas MT, MST, and VIP can be used to guide steering. The model quantitatively simulates human psychophysical data about visually-guided steering, obstacle avoidance, and route selection.
Air Force Office of Scientific Research (F4960-01-1-0397); National Geospatial-Intelligence Agency (NMA201-01-1-2016); National Science Foundation (NSF SBE-0354378); Office of Naval Research (N00014-01-1-0624)
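The attractor/repeller interaction described above can be illustrated with a scalar heading dynamic. This is a deliberately simplified sketch (closer in spirit to behavioral steering-dynamics models than to the full neural, optic-flow-based implementation): the goal direction attracts the heading angle, and each obstacle direction repels it with a strength that decays with angular distance. All gains are hypothetical.

```python
import numpy as np

def steer(phi, phi_goal, phi_obs, k_g=2.0, k_o=1.5, sigma=0.6,
          dt=0.05, steps=200):
    # Euler-integrate heading phi under goal attraction and obstacle
    # repulsion: the goal term pulls phi toward phi_goal; each obstacle
    # pushes phi away from its direction, weakly if it is far off-axis.
    for _ in range(steps):
        dphi = -k_g * (phi - phi_goal)
        for po in phi_obs:
            dphi += k_o * (phi - po) * np.exp(-abs(phi - po) / sigma)
        phi += dt * dphi
    return phi
```

With a goal straight ahead and an obstacle slightly to the right, the heading settles slightly left of the goal, reproducing the qualitative detour behavior the abstract describes.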
Deep Forward and Inverse Perceptual Models for Tracking and Prediction
We consider the problems of learning forward models that map state to
high-dimensional images and inverse models that map high-dimensional images to
state in robotics. Specifically, we present a perceptual model for generating
video frames from state with deep networks, and provide a framework for its use
in tracking and prediction tasks. We show that our proposed model greatly
outperforms standard deconvolutional methods and GANs for image generation,
producing clear, photo-realistic images. We also develop a convolutional neural
network model for state estimation and compare the result to an Extended Kalman
Filter to estimate robot trajectories. We validate all models on a real robotic
system.
Comment: 8 pages, International Conference on Robotics and Automation (ICRA)
201
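The Extended Kalman Filter baseline mentioned in the abstract is a standard recursive estimator, and one predict/update cycle is easy to write down. The generic step below is textbook EKF, not the paper's specific setup; `f`/`h` are the dynamics and measurement functions and `F`/`H` their Jacobians (for linear models, as in the test, it reduces to the ordinary Kalman filter).

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    # Predict: propagate state estimate and covariance through the dynamics.
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement z.
    y = z - h(x_pred)                       # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

The learned inverse model in the paper maps images directly to state; an EKF like this instead fuses a dynamics model with low-dimensional measurements, which is why the two make a natural comparison pair for trajectory estimation.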