Graph Neural Networks with Generated Parameters for Relation Extraction
Recently, progress has been made towards improving relational reasoning in
machine learning. Among existing models, graph neural networks (GNNs) are
one of the most effective approaches for multi-hop relational reasoning. In
fact, multi-hop relational reasoning is indispensable in many natural language
processing tasks such as relation extraction. In this paper, we propose to
generate the parameters of graph neural networks (GP-GNNs) according to natural
language sentences, which enables GNNs to process relational reasoning on
unstructured text inputs. We verify GP-GNNs in relation extraction from text.
Experimental results on a human-annotated dataset and two distantly supervised
datasets show that our model achieves significant improvements compared to
baselines. We also perform a qualitative analysis to demonstrate that our model
can discover more accurate relations through multi-hop relational reasoning.
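The core idea of GP-GNNs, as summarized above, is to *generate* the GNN's propagation parameters from the text rather than learn a fixed set. The sketch below is a hypothetical, simplified illustration of that mechanism, not the authors' implementation: the function `generate_edge_params` stands in for the paper's sentence encoder, and here it is just a deterministic random projection keyed on the context vector.

```python
import numpy as np

def generate_edge_params(context_vec, d):
    # Hypothetical "parameter generator": maps an encoded sentence context
    # to a d x d transition matrix for one edge. In GP-GNN this role is
    # played by a learned encoder over word embeddings; here we use a
    # deterministic random projection as a stand-in.
    seed = abs(hash(context_vec.tobytes())) % (2**32)
    rng = np.random.default_rng(seed)
    return rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))

def gp_gnn_propagate(node_states, edge_contexts, n_layers=2):
    """One forward pass: edge matrices are generated from text contexts,
    then used for ordinary message passing over n_layers hops."""
    n, d = node_states.shape
    # Generated parameters: one matrix per entity-pair edge.
    A = {edge: generate_edge_params(ctx, d) for edge, ctx in edge_contexts.items()}
    h = node_states
    for _ in range(n_layers):
        new_h = np.zeros_like(h)
        for (i, j), W in A.items():
            new_h[i] += W @ h[j]  # message from node j to node i
        h = np.tanh(new_h)
    return h
```

Stacking `n_layers` of this propagation is what allows information to flow along multi-hop paths between entities, which is the capability the abstract credits for the improved relation extraction.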
Context-aware Human Motion Prediction
The problem of predicting human motion given a sequence of past observations
is at the core of many applications in robotics and computer vision. Current
state-of-the-art methods formulate this problem as a sequence-to-sequence task,
in which a history of 3D skeletons feeds a Recurrent Neural Network (RNN) that
predicts future movements, typically on the order of 1 to 2 seconds. However,
one aspect that has been overlooked so far is the fact that human motion is
inherently driven by interactions with objects and/or other humans in the
environment. In this paper, we explore this scenario using a novel
context-aware motion prediction architecture. We use a semantic-graph model
where the nodes parameterize the human and objects in the scene and the edges
their mutual interactions. These interactions are iteratively learned through a
graph attention layer, fed with the past observations, which now include both
object and human body motions. Once this semantic graph is learned, we inject
it into a standard RNN to predict future movements of the human(s) and object(s).
We consider two variants of our architecture, either freezing the contextual
interactions in the future or updating them. A thorough evaluation on the
"Whole-Body Human Motion Database" shows that in both cases, our context-aware
networks clearly outperform baselines in which the context information is not
considered.

Comment: Accepted at CVPR2
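The pipeline described above, a graph attention layer over human and object nodes whose output feeds an RNN, can be sketched as follows. This is a minimal illustration under assumed shapes, not the paper's architecture: `graph_attention` is a single-head attention layer over a fully connected semantic graph, and `rnn_step` is a vanilla RNN cell standing in for the standard RNN the abstract mentions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def graph_attention(nodes, W, a):
    """One graph-attention layer over a fully connected semantic graph.
    nodes: (n, d) past-motion features of the human and scene objects.
    W: (d, d_out) shared projection; a: (2*d_out,) attention vector."""
    n = nodes.shape[0]
    z = nodes @ W
    out = np.zeros_like(z)
    for i in range(n):
        # Attention scores model the mutual interactions on the edges.
        scores = np.array([a @ np.concatenate([z[i], z[j]]) for j in range(n)])
        alpha = softmax(scores)
        out[i] = np.tanh(sum(alpha[j] * z[j] for j in range(n)))
    return out

def rnn_step(h, x, Wh, Wx):
    # Vanilla RNN cell consuming the attended context features x at each step.
    return np.tanh(h @ Wh + x @ Wx)
```

The "freeze vs. update" variants mentioned in the abstract then correspond to computing `graph_attention` once from the observed past versus re-running it at every future prediction step.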