Anticipating Daily Intention using On-Wrist Motion Triggered Sensing
Anticipating human intention by observing one's actions has many
applications. For instance, picking up a cellphone, then a charger (actions)
implies that one wants to charge the cellphone (intention). By anticipating the
intention, an intelligent system can guide the user to the closest power
outlet. We propose an on-wrist motion-triggered sensing system for anticipating
daily intentions, where the on-wrist sensors help us persistently observe
one's actions. The core of the system is a novel Recurrent Neural Network (RNN)
and Policy Network (PN), where the RNN encodes visual and motion observations to
anticipate the intention, and the PN parsimoniously triggers the visual
observation process to reduce the computation requirement. We jointly train the
whole network using a policy gradient and cross-entropy loss. For evaluation, we
collected the first daily "intention" dataset, consisting of 2379 videos with 34
intentions and 164 unique action sequences. Our method achieves 92.68%, 90.85%,
and 97.56% accuracy on three users while processing only 29% of the visual
observations on average.
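To make the anticipation loop concrete, below is a minimal PyTorch sketch of an RNN whose visual input is gated by a policy network, in the spirit of the architecture above. All module choices, sizes, and the gating rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class IntentionAnticipator(nn.Module):
    def __init__(self, motion_dim=6, visual_dim=512, hidden_dim=128, n_intentions=34):
        super().__init__()
        self.visual_dim = visual_dim
        self.hidden_dim = hidden_dim
        self.rnn = nn.GRUCell(motion_dim + visual_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, n_intentions)  # intention logits
        self.policy = nn.Linear(hidden_dim, 2)  # gate: reuse cached frame / trigger camera

    def forward(self, motion_seq, visual_seq):
        # motion_seq: (batch, T, motion_dim); visual_seq: (batch, T, visual_dim)
        batch, steps = motion_seq.shape[:2]
        h = motion_seq.new_zeros(batch, self.hidden_dim)
        frame = motion_seq.new_zeros(batch, self.visual_dim)  # last triggered frame
        logits, triggers = [], []
        for t in range(steps):
            # The policy decides from the current state whether to pay for a new frame.
            gate = torch.distributions.Categorical(logits=self.policy(h)).sample()
            frame = torch.where(gate.bool().unsqueeze(1), visual_seq[:, t], frame)
            h = self.rnn(torch.cat([motion_seq[:, t], frame], dim=1), h)
            logits.append(self.classifier(h))
            triggers.append(gate)
        return torch.stack(logits, dim=1), torch.stack(triggers, dim=1)
```

Training would combine a cross-entropy loss on the per-step intention logits with a REINFORCE-style policy gradient on the trigger decisions, rewarding correct anticipation while charging a small cost for each triggered visual observation.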
Intention-Aware Motion Planning
As robots venture into new application domains as autonomous vehicles on the road or as domestic helpers at home, they must recognize human intentions and behaviors in order to operate effectively. This paper investigates a new class of motion planning problems with uncertainty in human intention. We propose a method for constructing a practical model by assuming a finite set of unknown intentions. We first construct a motion model for each intention in the set and then combine these models into a single Mixed Observability Markov Decision Process (MOMDP), a structured variant of the more common Partially Observable Markov Decision Process (POMDP). By leveraging the latest advances in POMDP/MOMDP approximation algorithms, we can construct and solve moderately complex models for interesting robotic tasks. Experiments in simulation and with an autonomous vehicle show that the proposed method outperforms common alternatives because of its ability to recognize intentions and use the information effectively for decision making.
Funding: Singapore-MIT Alliance for Research and Technology (SMART) (grant R-252-000-447-592); Singapore-MIT GAMBIT Game Lab (grant R-252-000-398-490); Singapore Ministry of Education (AcRF grant 2010-T2-2-071).
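The core inference step behind such a model, maintaining a belief over a finite set of unknown intentions and updating it from observed motion (the role the MOMDP assigns to its partially observable state variable), can be sketched as a simple Bayesian filter. The motion models and all parameters below are illustrative stand-ins, not the paper's learned models.

```python
import numpy as np

def update_intention_belief(belief, motion_obs, motion_models):
    """One Bayes step: belief is P(intention), motion_models[k] gives
    the likelihood P(motion_obs | intention k)."""
    likelihood = np.array([model(motion_obs) for model in motion_models])
    posterior = belief * likelihood  # unnormalised Bayes rule
    return posterior / posterior.sum()

# Toy example: two pedestrian intentions ("cross" vs "wait"), each modelled
# as a Gaussian over observed walking speed (hypothetical parameters).
def gaussian(mean, std):
    return lambda x: np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

models = [gaussian(1.4, 0.3), gaussian(0.1, 0.2)]  # "cross" ~1.4 m/s, "wait" ~0 m/s
belief = np.array([0.5, 0.5])
for speed in (1.2, 1.3, 1.5):  # stream of observed speeds
    belief = update_intention_belief(belief, speed, models)
print(belief)  # belief mass concentrates on "cross"
```

In the full planner this update is not used in isolation: the MOMDP policy is computed over belief states, so the vehicle's actions already account for how future observations will refine the belief.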
What Will I Do Next? The Intention from Motion Experiment
In computer vision, video-based approaches have been widely explored for the
early classification and the prediction of actions or activities. However, it
remains unclear whether this modality (as compared to 3D kinematics) can still
be reliable for the prediction of human intentions, defined as the overarching
goal embedded in an action sequence. Since the same action can be performed
with different intentions, this problem is more challenging, yet feasible, as
demonstrated by quantitative cognitive studies that exploit the 3D kinematics
acquired through motion capture systems. In this paper, we bridge cognitive and
computer vision studies, by demonstrating the effectiveness of video-based
approaches for the prediction of human intentions. Specifically, we propose
Intention from Motion, a new paradigm in which, without using any contextual
information, we consider instantaneous grasping motor acts involving a bottle
in order to forecast why the bottle has been reached (to pass it, to place it
in a box, or to pour or drink the liquid inside). We process only the grasping
onsets, casting intention prediction as a classification problem.
Leveraging our multimodal acquisition (3D motion capture data and 2D optical
videos), we compare the most commonly used 3D descriptors from cognitive
studies with state-of-the-art video-based techniques. Since the two analyses
achieve equivalent performance, we demonstrate that computer vision tools are
effective in capturing the kinematics and addressing the cognitive problem of
human intention prediction.
Comment: 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshop
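As a rough illustration of this classification framing (not the paper's features or pipeline), the sketch below cross-validates a standard classifier on stand-in onset descriptors; `INTENTIONS`, the feature dimensionality, and the synthetic data are all assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

INTENTIONS = ["pass", "place", "pour", "drink"]  # the four intentions above

# Stand-in data: one 64-dim descriptor per grasping onset plus its label.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, len(INTENTIONS), size=200)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"chance = {1 / len(INTENTIONS):.2f}, cv accuracy = {scores.mean():.2f}")
```

With real kinematic or video descriptors in place of the random features, the same pipeline applies to either modality, which is what allows the two analyses to be compared head to head.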
Exploration of neural correlates of movement intention based on characterisation of temporal dependencies in electroencephalography
Brain-computer interfaces (BCIs) provide a direct communication channel using brain signals, enabling patients with motor impairments to interact with external devices. Motion intention detection is useful for intuitive movement-based BCIs, as movement is the fundamental mode of interaction with the environment. The aim of this paper is to investigate the temporal dynamics of brain processes using electroencephalography (EEG) to explore novel neural correlates of motion intention. We investigate changes in the temporal dependencies of the EEG by characterising the decay of its autocorrelation during asynchronous voluntary finger-tapping movement. The evolution of the autocorrelation function is characterised by its relaxation time, which is used as a robust marker for motion intention. We observed a reorganisation of temporal dependencies in the EEG during motion intention: the autocorrelation decayed more slowly during movement intention and faster during the resting state, indicating an increase in temporal dependence during movement intention. The relaxation time of the autocorrelation function showed significant (p < 0.05) discrimination between movement and resting state, with a mean sensitivity of 78.37 ± 8.83%. The relaxation time provides movement-related information that is complementary to the well-known event-related desynchronisation (ERD) by characterising broadband EEG dynamics, which are frequency independent in contrast to ERD. It can also detect motion intention on average 0.51 s before the actual movement onset. We have thoroughly compared autocorrelation relaxation time features with ERD in four frequency bands. The relaxation time may therefore complement the well-known features used in motion-based BCIs, leading to more robust and intuitive BCI solutions. The results suggest that changes in autocorrelation decay may involve a reorganisation of the temporal dependencies of brain activity over longer durations during motion intention. This opens up possibilities for further investigating the temporal dynamics of the fundamental neural processes underpinning motion intention.
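A minimal sketch of the relaxation-time marker, assuming one common convention (the first lag at which the normalised autocorrelation falls below 1/e); the paper's exact estimator may differ, and the AR(1) surrogates below merely stand in for movement-intention versus resting EEG.

```python
import numpy as np

def relaxation_time(x, fs):
    """First lag (in seconds) at which the normalised autocorrelation of x
    decays below 1/e."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0 .. n-1
    acf = acf / acf[0]                                   # acf[0] == 1
    below = np.nonzero(acf < 1.0 / np.e)[0]
    return below[0] / fs if below.size else len(x) / fs

# AR(1) surrogates: slower decay (longer memory) vs faster decay, standing in
# for movement-intention vs resting EEG. fs is a typical EEG sampling rate.
fs, n = 256, 4096
rng = np.random.default_rng(1)

def ar1(phi):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

print("long memory :", relaxation_time(ar1(0.95), fs), "s")  # larger value
print("short memory:", relaxation_time(ar1(0.50), fs), "s")  # smaller value
```

In an online BCI, such a statistic would be computed over a sliding window and compared against a subject-specific threshold to flag motion intention.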
Fast human motion prediction for human-robot collaboration with wearable interfaces
In this paper, we aim to improve human motion prediction during human-robot
collaboration in industrial facilities by exploiting contributions from both
physical and physiological signals. Improved human-machine collaboration could
prove useful in several areas, and it is crucial for interacting robots to
understand human movement as soon as possible to avoid accidents and injuries.
In this perspective, we propose a novel human-robot interface capable of
anticipating the user's intention during reaching movements on a workbench in
order to plan the action of a collaborative robot. The proposed
interface can find many applications in the Industry 4.0 framework, where
autonomous and collaborative robots will be an essential part of innovative
facilities. Motion intention prediction and motion direction prediction levels
have been developed to improve detection speed and accuracy. A Gaussian
Mixture Model (GMM) has been trained with IMU and EMG data following an
evidence accumulation approach to predict reaching direction. Novel dynamic
stopping criteria have been proposed to flexibly adjust the trade-off between
early anticipation and accuracy according to the application. The outputs of the
two predictors have been used as external inputs to a Finite State Machine (FSM)
that controls the behaviour of a physical robot according to the user's action
or inaction. Results show that our system outperforms previous methods,
achieving a real-time classification accuracy of … after … from movement onset.
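A minimal sketch of the evidence-accumulation and dynamic-stopping idea: per-direction likelihood models score each incoming IMU/EMG feature sample, log-likelihoods accumulate, and the classifier commits as soon as one direction's posterior crosses a confidence threshold. Single Gaussians stand in for the trained GMMs, and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

class DirectionAccumulator:
    """Accumulate per-direction log-likelihoods and stop once confident."""

    def __init__(self, class_means, cov, threshold=0.95):
        self.dists = [multivariate_normal(m, cov) for m in class_means]
        self.threshold = threshold
        self.loglik = np.zeros(len(class_means))

    def step(self, sample):
        # Score the new IMU/EMG feature sample under every direction model.
        self.loglik += [d.logpdf(sample) for d in self.dists]
        post = np.exp(self.loglik - self.loglik.max())
        post /= post.sum()
        if post.max() >= self.threshold:  # dynamic stopping rule
            return int(post.argmax()), float(post.max())
        return None  # keep accumulating evidence

# Toy stream drawn from direction 1; single Gaussians stand in for the GMMs.
rng = np.random.default_rng(2)
means = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([-1.0, 1.0])]
acc = DirectionAccumulator(means, cov=np.eye(2) * 0.5)
for t in range(100):
    decision = acc.step(rng.multivariate_normal(means[1], np.eye(2) * 0.5))
    if decision is not None:
        direction, confidence = decision
        print(f"direction {direction} after {t + 1} samples (p = {confidence:.2f})")
        break
```

Raising the threshold trades anticipation speed for accuracy, which is the knob the dynamic stopping criteria expose.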
