Exploiting Vestibular Output during Learning Results in Naturally Curved Reaching Trajectories
Teaching a humanoid robot to reach for a
visual target is a complex problem in part because
of the high dimensionality of the control
space. In this paper, we demonstrate a biologically
plausible simplification of the reaching
process that replaces the degrees of freedom
in the neck of the robot with sensory readings
from a vestibular system. We show that
this simplification introduces errors that are
easily overcome by a standard learning algorithm.
Furthermore, the errors that are necessarily
introduced by this simplification result
in reaching trajectories that are curved in the
same way as human reaching trajectories.
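As a toy illustration of the idea (our own construction, not the paper's model), the sketch below replaces two "neck" degrees of freedom with noisy vestibular-style readings and shows that a standard learner, here plain linear least squares, can absorb most of the resulting error:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a toy target depends on two neck joint angles,
# which we replace with noisy vestibular readings of head orientation.
n = 500
neck_angles = rng.uniform(-0.5, 0.5, size=(n, 2))            # true neck DoF
vestibular = neck_angles + rng.normal(0, 0.05, size=(n, 2))  # noisy substitute

# Target position in a simplified 2-D workspace (toy forward model).
target = np.column_stack([np.cos(neck_angles[:, 0]), np.sin(neck_angles[:, 1])])

# Predicting from vestibular readings alone introduces systematic error...
naive = np.column_stack([np.cos(vestibular[:, 0]), np.sin(vestibular[:, 1])])

# ...which a standard learning algorithm (least squares on the vestibular
# features) largely corrects by fitting the residual.
X = np.column_stack([vestibular, np.ones(n)])
W, *_ = np.linalg.lstsq(X, target - naive, rcond=None)
corrected = naive + X @ W

mse_naive = np.mean((naive - target) ** 2)
mse_corrected = np.mean((corrected - target) ** 2)
```

Since the zero correction is always a feasible fit, the learned correction can only reduce the training error, which mirrors the abstract's claim that the simplification's errors are easily overcome.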
SOVEREIGN: An Autonomous Neural System for Incrementally Learning Planned Action Sequences to Navigate Towards a Rewarded Goal
How do reactive and planned behaviors interact in real time? How are sequences of such behaviors released at appropriate times during autonomous navigation to realize valued goals? Controllers for both animals and mobile robots, or animats, need reactive mechanisms for exploration, and learned plans to reach goal objects once an environment becomes familiar. The SOVEREIGN (Self-Organizing, Vision, Expectation, Recognition, Emotion, Intelligent, Goal-oriented Navigation) animat model embodies these capabilities, and is tested in a 3D virtual reality environment. SOVEREIGN includes several interacting subsystems which model complementary properties of cortical What and Where processing streams and which clarify similarities between mechanisms for navigation and arm movement control. As the animat explores an environment, visual inputs are processed by networks that are sensitive to visual form and motion in the What and Where streams, respectively. Position-invariant and size-invariant recognition categories are learned by real-time incremental learning in the What stream. Estimates of target position relative to the animat are computed in the Where stream, and can activate approach movements toward the target. Motion cues from animat locomotion can elicit head-orienting movements to bring a new target into view. Approach and orienting movements are alternately performed during animat navigation. Cumulative estimates of each movement are derived from interacting proprioceptive and visual cues. Movement sequences are stored within a motor working memory. Sequences of visual categories are stored in a sensory working memory. These working memories trigger learning of sensory and motor sequence categories, or plans, which together control planned movements. Predictively effective chunk combinations are selectively enhanced via reinforcement learning when the animat is rewarded.
Selected planning chunks effect a gradual transition from variable reactive exploratory movements to efficient goal-oriented planned movement sequences. Volitional signals gate interactions between model subsystems and the release of overt behaviors. The model can control different motor sequences under different motivational states and learns more efficient sequences to rewarded goals as exploration proceeds.
Riverside Research Institute; Defense Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (F49620-92-J-0225); National Science Foundation (IRI 90-24877, SBE-0345378); Office of Naval Research (N00014-92-J-1309, N00014-91-J-4100, N00014-01-1-0624); Pacific Sierra Research (PSR 91-6075-2)
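The reinforcement step described above, selectively strengthening predictively effective plan chunks when the animat is rewarded, can be caricatured in a few lines. The chunk names and the bounded update rule are our own illustration, not SOVEREIGN's equations:

```python
# Candidate "plan chunks" are short movement sequences; reward delivered
# after reaching the goal strengthens the chunks that were active, so
# planned sequences gradually win over reactive exploration.
chunks = {"turn_left-forward": 0.1, "forward-forward": 0.1, "turn_right-forward": 0.1}

def reinforce(active_chunks, reward, lr=0.5):
    """Strengthen active chunks toward 1.0 in proportion to reward."""
    for c in active_chunks:
        chunks[c] += lr * reward * (1.0 - chunks[c])  # weight stays in [0, 1]

# Episodes in which "forward-forward" reliably precedes reward:
for _ in range(5):
    reinforce(["forward-forward"], reward=1.0)

best_chunk = max(chunks, key=chunks.get)
```

After a handful of rewarded episodes, the predictive chunk dominates its competitors, which is the qualitative transition the abstract describes.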
Learning and Acting in Peripersonal Space: Moving, Reaching, and Grasping
The young infant explores its body, its sensorimotor system, and the
immediately accessible parts of its environment, over the course of a few
months creating a model of peripersonal space useful for reaching and grasping
objects around it. Drawing on constraints from the empirical literature on
infant behavior, we present a preliminary computational model of this learning
process, implemented and evaluated on a physical robot. The learning agent
explores the relationship between the configuration space of the arm, sensing
joint angles through proprioception, and its visual perceptions of the hand and
grippers. The resulting knowledge is represented as the peripersonal space
(PPS) graph, where nodes represent states of the arm, edges represent safe
movements, and paths represent safe trajectories from one pose to another. In
our model, the learning process is driven by intrinsic motivation. When
repeatedly performing an action, the agent learns the typical result, but also
detects unusual outcomes, and is motivated to learn how to make those unusual
results reliable. Arm motions typically leave the static background unchanged,
but occasionally bump an object, changing its static position. The reach action
is learned as a reliable way to bump and move an object in the environment.
Similarly, once a reliable reach action is learned, it typically makes a
quasi-static change in the environment, moving an object from one static
position to another. The unusual outcome is that the object is accidentally
grasped (thanks to the innate Palmar reflex), and thereafter moves dynamically
with the hand. Learning to make grasps reliable is more complex than for
reaches, but we demonstrate significant progress. Our current results are steps
toward autonomous sensorimotor learning of motion, reaching, and grasping in
peripersonal space, based on unguided exploration and intrinsic motivation.
Comment: 35 pages, 13 figures
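The PPS graph described above has a direct computational reading: nodes are arm configurations, edges are safe movements, and a path is a safe trajectory between poses. The sketch below is a minimal illustration with invented pose names, not the authors' implementation:

```python
from collections import deque

# Peripersonal-space (PPS) graph: nodes are arm configurations explored
# during motor babbling; an edge records a movement observed to be safe.
pps_graph = {
    "rest":      ["raised"],
    "raised":    ["rest", "extended"],
    "extended":  ["raised", "at_object"],
    "at_object": ["extended"],
}

def safe_trajectory(graph, start, goal):
    """Breadth-first search for a shortest safe path of arm poses."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no known safe trajectory
```

A reach is then a path query, e.g. `safe_trajectory(pps_graph, "rest", "at_object")`, and learning amounts to adding nodes and edges as new configurations and safe movements are discovered.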
Goal Set Inverse Optimal Control and Iterative Re-planning for Predicting Human Reaching Motions in Shared Workspaces
To enable safe and efficient human-robot collaboration in shared workspaces
it is important for the robot to predict how a human will move when performing
a task. While predicting human motion for tasks not known a priori is very
challenging, we argue that single-arm reaching motions for known tasks in
collaborative settings (which are especially relevant for manufacturing) are
indeed predictable. Two hypotheses underlie our approach for predicting such
motions: First, that the trajectory the human performs is optimal with respect
to an unknown cost function, and second, that human adaptation to their
partner's motion can be captured well through iterative re-planning with the
above cost function. The key to our approach is thus to learn a cost function
which "explains" the motion of the human. To do this, we gather example
trajectories from pairs of participants performing a collaborative assembly
task using motion capture. We then use Inverse Optimal Control to learn a cost
function from these trajectories. Finally, we predict reaching motions from the
human's current configuration to a task-space goal region by iteratively
re-planning a trajectory using the learned cost function. Our planning
algorithm is based on the trajectory optimizer STOMP; it plans for a 23-DoF
human kinematic model and accounts for the presence of a moving collaborator
and obstacles in the environment. Our results suggest that in most cases, our
method outperforms baseline methods when predicting motions. We also show that
our method outperforms baselines for predicting human motion when a human and a
robot share the workspace.
Comment: 12 pages, accepted for publication in IEEE Transactions on Robotics 201
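The iterative re-planning idea can be sketched in a toy 2-D setting. This is a greedy stand-in for STOMP with hand-picked weights in place of the IOC-learned cost, not the paper's 23-DoF setup: at each step the planner re-optimizes under a cost trading off goal progress against proximity to a moving collaborator, then executes only the first step.

```python
import numpy as np

# Stand-ins for weights that would be learned via Inverse Optimal Control.
w_goal, w_avoid = 1.0, 0.2

def plan_step(pos, goal, obstacle, step=0.1):
    """Greedily pick the next position minimizing the learned-style cost."""
    angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    best, best_cost = pos, np.inf
    for a in angles:
        nxt = pos + step * np.array([np.cos(a), np.sin(a)])
        cost = (w_goal * np.linalg.norm(nxt - goal)            # goal progress
                + w_avoid / (np.linalg.norm(nxt - obstacle) + 1e-6))  # avoidance
        if cost < best_cost:
            best, best_cost = nxt, cost
    return best

pos, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.05])          # collaborator near the direct path
for t in range(40):
    obstacle = obstacle + np.array([0.0, 0.02])  # collaborator drifts away
    pos = plan_step(pos, goal, obstacle)         # re-plan, execute one step
```

Re-planning from the current configuration at every step is what lets the prediction adapt as the partner moves, which is the paper's second hypothesis.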
Inside the brain of an elite athlete: The neural processes that support high achievement in sports
Events like the World Championships in athletics and the Olympic Games raise the public profile of competitive sports. They may also leave us wondering what sets the competitors in these events apart from those of us who simply watch. Here we attempt to link neural and cognitive processes that have been found to be important for elite performance with computational and physiological theories inspired by much simpler laboratory tasks. In this way we hope to inspire neuroscientists to consider how their basic research might help to explain sporting skill at the highest levels of performance.
Towards learning domain-independent planning heuristics
Automated planning remains one of the most general paradigms in Artificial
Intelligence, providing means of solving problems coming from a wide variety of
domains. One of the key factors restricting the applicability of planning is
its computational complexity resulting from exponentially large search spaces.
Heuristic approaches are necessary to solve all but the simplest problems. In
this work, we explore the possibility of obtaining domain-independent heuristic
functions using machine learning. This is a part of a wider research program
whose objective is to improve practical applicability of planning in systems
for which the planning domains evolve at run time. The challenge is therefore
the learning of (corrections of) domain-independent heuristics that can be
reused across different planning domains.
Comment: Accepted for the IJCAI-17 Workshop on Architectures for Generality and Autonomy
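One minimal reading of "learning a correction to a domain-independent heuristic" is a regression from state features to the gap between a cheap baseline heuristic and the true cost-to-go. The sketch below assumes a toy grid domain and a linear model; it is our illustration, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy domain: states are grid cells; the baseline heuristic h0 is Manhattan
# distance, and the "true" cost-to-go includes a per-step penalty that the
# baseline systematically underestimates.
states = rng.integers(0, 10, size=(200, 2))
goal = np.array([9, 9])

h0 = np.abs(states - goal).sum(axis=1)   # cheap baseline heuristic
h_true = 1.5 * h0                        # true cost-to-go (toy ground truth)

# Learn the correction (h_true - h0) as a linear function of [h0, 1].
X = np.column_stack([h0, np.ones(len(h0))])
w, *_ = np.linalg.lstsq(X, h_true - h0, rcond=None)

def learned_h(state):
    """Baseline heuristic plus the learned correction."""
    b = np.abs(state - goal).sum()
    return b + w[0] * b + w[1]
```

Because the features here are domain-independent (they use only the baseline heuristic value), the same learned correction could in principle be reused across planning domains, which is the program the abstract outlines.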