19,010 research outputs found
Recurrent Neural Network Based Early Prediction of Future Hand Movements
This work focuses on a system for hand prostheses that overcomes the delay introduced by classical approaches while remaining reliable. The proposed approach, based on a recurrent neural network, incorporates the sequential nature of the surface electromyogram data, and the resulting system can be used either for classification or for early prediction of hand movements. The latter in particular is key to latency-free steering of a prosthesis. Experiments conducted on the first three Ninapro databases reveal that prediction up to 200 ms into the future is possible without a significant drop in accuracy. Furthermore, for classification, our proposed approach outperforms state-of-the-art classifiers even though we used significantly shorter windows for feature extraction.
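The early-prediction setup described above can be sketched by pairing each sliding window of sEMG samples with the movement label that lies a fixed horizon ahead, so a recurrent classifier trained on such pairs learns to predict future movements. A minimal sketch in Python; the function name, window length, and the sample-rate mapping of the 200 ms horizon are illustrative assumptions, not the authors' code:

```python
def make_early_prediction_pairs(emg, labels, window=40, horizon=20):
    """Pair each sliding sEMG window with the label `horizon` steps ahead.

    emg     : list of per-sample feature vectors (surface EMG channels)
    labels  : list of movement-class labels, aligned with `emg`
    window  : number of past samples the recurrent model sees
    horizon : how far ahead (in samples) the target label lies;
              e.g. 20 samples at 100 Hz would correspond to 200 ms
    """
    pairs = []
    for t in range(window, len(emg) - horizon):
        x = emg[t - window:t]      # the most recent past
        y = labels[t + horizon]    # the future movement class
        pairs.append((x, y))
    return pairs
```

Setting `horizon=0` recovers ordinary classification, so the same pipeline covers both uses of the system.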
Predictive Coding for Dynamic Visual Processing: Development of Functional Hierarchy in a Multiple Spatio-Temporal Scales RNN Model
The current paper proposes a novel predictive coding type neural network
model, the predictive multiple spatio-temporal scales recurrent neural network
(P-MSTRNN). The P-MSTRNN learns to predict visually perceived human whole-body
cyclic movement patterns by exploiting multiscale spatio-temporal constraints
imposed on network dynamics by using differently sized receptive fields as well
as different time constant values for each layer. After learning, the network
becomes able to proactively imitate target movement patterns by inferring or
recognizing corresponding intentions by means of the regression of prediction
error. Results show that the network can develop a functional hierarchy by
developing a different type of dynamic structure at each layer. The paper
examines how model performance during pattern generation as well as predictive
imitation varies depending on the stage of learning. The number of limit cycle
attractors corresponding to target movement patterns increases as learning
proceeds. Moreover, transient dynamics that develop early in the learning
process already perform the pattern generation and predictive imitation tasks
successfully. The paper concludes that exploiting transient dynamics
facilitates successful task performance during early learning periods.

Comment: Accepted in Neural Computation (MIT Press).
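The differing per-layer time constants at the heart of a multiple-timescale RNN can be illustrated with the standard leaky-integrator update; this is a generic sketch of the mechanism (names and parameter values assumed), not the P-MSTRNN itself:

```python
def leaky_step(u, x, tau):
    """One leaky-integrator update: u' = (1 - 1/tau) * u + (1/tau) * x.

    A small tau gives a fast unit that tracks its input closely; a large
    tau gives a slow unit that integrates over long timescales.  Assigning
    different tau values to different layers is what lets a functional
    hierarchy of fast and slow dynamics develop.
    """
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * x

def run(tau, x=1.0, steps=10):
    """Drive a single unit with a constant input and return its final state."""
    u = 0.0
    for _ in range(steps):
        u = leaky_step(u, x, tau)
    return u
```

After ten steps of a unit step input, a fast unit (tau = 2) has nearly converged while a slow unit (tau = 100) has barely moved, mirroring the fast/slow division of labor across layers.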
Action-Conditional Video Prediction using Deep Networks in Atari Games
Motivated by vision-based reinforcement learning (RL) problems, in particular
Atari games from the recent benchmark Arcade Learning Environment (ALE), we
consider spatio-temporal prediction problems where future (image-)frames are
dependent on control variables or actions as well as previous frames. While not
composed of natural scenes, frames in Atari games are high-dimensional in size,
can involve tens of objects with one or more objects being controlled by the
actions directly and many other objects being influenced indirectly, can
involve entry and departure of objects, and can involve deep partial
observability. We propose and evaluate two deep neural network architectures
that consist of encoding, action-conditional transformation, and decoding
layers based on convolutional neural networks and recurrent neural networks.
Experimental results show that the proposed architectures are able to generate
visually-realistic frames that are also useful for control over approximately
100-step action-conditional futures in some games. To the best of our
knowledge, this paper is the first to make and evaluate long-term predictions
on high-dimensional video conditioned by control inputs.

Comment: Published at NIPS 2015 (Advances in Neural Information Processing Systems 28).
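The encoding, action-conditional transformation, and decoding pipeline can be caricatured with plain vectors: a multiplicative interaction between the frame encoding and an action embedding, rolled out by feeding each prediction back in. All names here are hypothetical, and the real models use convolutional and recurrent encoders rather than raw lists:

```python
def act_transform(h, a):
    """Action-conditional step: element-wise multiplicative interaction
    between a frame encoding h and an action embedding a."""
    return [hi * ai for hi, ai in zip(h, a)]

def rollout(h0, actions):
    """Predict a multi-step future by feeding each predicted encoding
    back in, conditioned on the action chosen at every step."""
    h, future = h0, []
    for a in actions:
        h = act_transform(h, a)
        future.append(h)
    return future
```

Feeding predictions back in this way is what makes the roughly 100-step action-conditional futures reported above possible, since no ground-truth frames are needed after the first.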
Toward a self-organizing pre-symbolic neural model representing sensorimotor primitives
Copyright ©2014 Zhong, Cangelosi and Wermter. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).

The acquisition of symbolic and linguistic representations of sensorimotor behavior is a cognitive process performed by an agent when it is executing and/or observing its own and others' actions. According to Piaget's theory of cognitive development, these representations develop during the sensorimotor stage and the pre-operational stage. We propose a model that relates the conceptualization of higher-level information from visual stimuli to the development of the ventral/dorsal visual streams. The model employs a neural network architecture incorporating a predictive sensory module based on an RNNPB (Recurrent Neural Network with Parametric Biases) and a horizontal product model. We exemplify this model through a robot passively observing an object to learn its features and movements. During the learning process of observing sensorimotor primitives, i.e., a set of trajectories of arm movements and the features of the objects they are oriented toward, the pre-symbolic representation is self-organized in the parametric units. These representational units act as bifurcation parameters, guiding the robot to recognize and predict various learned sensorimotor primitives. The pre-symbolic representation also accounts for the learning of sensorimotor primitives in a latent-learning context.
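Recognition by regression of prediction error, as in the RNNPB, can be reduced to a one-parameter toy: infer the parametric-bias value that generated an observed sequence by gradient descent on the prediction error. This is an illustrative reduction with assumed names and hyperparameters, not the paper's network:

```python
def infer_pb(seq, lr=0.05, epochs=200):
    """Recover the parametric bias pb generating seq[t+1] = pb * seq[t]
    by gradient descent on the squared prediction error.

    This mirrors how PB units are inferred at recognition time: the
    network weights stay fixed while the bias is regressed to make the
    model's predictions match the observed sequence.
    """
    pb = 0.0
    for _ in range(epochs):
        for t in range(len(seq) - 1):
            err = pb * seq[t] - seq[t + 1]   # prediction error at step t
            pb -= lr * err * seq[t]          # gradient step on the bias only
    return pb
```

In the full model the inferred bias then acts as a bifurcation parameter selecting which learned primitive the network generates or predicts.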
Deep representation learning for human motion prediction and classification
Generative models of 3D human motion are often restricted to a small number
of activities and can therefore not generalize well to novel movements or
applications. In this work we propose a deep learning framework for human
motion capture data that learns a generic representation from a large corpus of
motion capture data and generalizes well to new, unseen, motions. Using an
encoding-decoding network that learns to predict future 3D poses from the most
recent past, we extract a feature representation of human motion. Most work on
deep learning for sequence prediction focuses on video and speech. Since
skeletal data has a different structure, we present and evaluate different
network architectures that make different assumptions about time dependencies
and limb correlations. To quantify the learned features, we use the output of
different layers for action classification and visualize the receptive fields
of the network units. Our method outperforms the recent state of the art in
skeletal motion prediction, even though those methods use action-specific
training data.
Our results show that deep feedforward networks, trained from a generic mocap
database, can successfully be used for feature extraction from human motion
data and that this representation can be used as a foundation for
classification and prediction.

Comment: This paper is published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 201
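The feature-extraction idea above (train an encoder-decoder to predict future poses, then reuse the intermediate representation for action classification) can be sketched with a toy linear version, where every weight matrix and name is hypothetical:

```python
def encode_decode(pose_window, enc_w, dec_w):
    """Toy linear encoder-decoder over skeletal data.

    The encoder maps the most recent window of poses to a feature vector;
    the decoder maps that feature to a predicted future pose.  The feature
    vector is the representation reused for action classification.
    """
    # Flatten the window of joint coordinates into one input vector.
    flat = [v for pose in pose_window for v in pose]
    # Encoder: one dot product per feature dimension.
    feature = [sum(w * x for w, x in zip(row, flat)) for row in enc_w]
    # Decoder: reconstruct a future pose from the feature.
    pred = [sum(w * f for w, f in zip(row, feature)) for row in dec_w]
    return feature, pred
```

Training the whole pipeline only on the prediction target is what makes the learned feature generic: no action labels are needed until the classification stage.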
Prediction of Human Trajectory Following a Haptic Robotic Guide Using Recurrent Neural Networks
Social intelligence is an important requirement for enabling robots to
collaborate with people. In particular, human path prediction is an essential
capability for robots in that it prevents potential collision with a human and
allows the robot to safely make larger movements. In this paper, we present a
method for predicting the trajectory of a human who follows a haptic robotic
guide without using sight, which is valuable for assistive robots that aid the
visually impaired. We apply a deep learning method based on recurrent neural
networks using multimodal data: (1) human trajectory, (2) movement of the
robotic guide, (3) haptic input data measured from the physical interaction
between the human and the robot, (4) human depth data. We collected actual
human trajectory and multimodal response data through indoor experiments. Our
model outperformed the baseline even when using only the robot data together
with the observed human trajectory, and it achieves even better results when
the additional haptic and depth data are used.

Comment: 6 pages, Submitted to IEEE World Haptics Conference 201
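The multimodal input described above amounts to concatenating per-timestep features from each available stream before feeding a recurrent predictor; dropping the optional streams reproduces the robot-data-only ablation. A minimal sketch with assumed names:

```python
def fuse_modalities(trajectory, robot, haptic=None, depth=None):
    """Concatenate per-timestep features from the available modalities
    into a single input vector for a recurrent trajectory predictor.

    trajectory : observed human-trajectory features (always present)
    robot      : robotic-guide movement features (always present)
    haptic     : optional physical-interaction measurements
    depth      : optional human depth features
    """
    fused = list(trajectory) + list(robot)
    for extra in (haptic, depth):
        if extra is not None:     # omit a stream to run an ablation
            fused += list(extra)
    return fused
```

Because missing streams simply shorten the input vector, the same model skeleton covers both the baseline and the full multimodal configuration.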