Graph-to-Sequence Learning using Gated Graph Neural Networks
Many NLP applications can be framed as a graph-to-sequence learning problem.
Previous work proposing neural architectures on this setting obtained promising
results compared to grammar-based approaches but still rely on linearisation
heuristics and/or standard recurrent networks to achieve the best performance.
In this work, we propose a new model that encodes the full structural
information contained in the graph. Our architecture couples the recently
proposed Gated Graph Neural Networks with an input transformation that allows
nodes and edges to have their own hidden representations, while tackling the
parameter explosion problem present in previous work. Experimental results show
that our model outperforms strong baselines in generation from AMR graphs and
syntax-based neural machine translation.
Comment: ACL 201
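As a rough illustration of the gated propagation this abstract describes, below is a minimal, dependency-free sketch of one GGNN message-passing step: each node sums linearly transformed states from its in-neighbours and updates its own state with a GRU-style gate. The weight matrices, graph, and dimensions are illustrative placeholders, not the paper's trained model:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def vadd(a, b):
    return [x + y for x, y in zip(a, b)]

def ggnn_step(H, edges, W, U):
    """One GGNN propagation step (minimal sketch): each node sums
    messages W["msg"] . h_u from its in-neighbours, then updates its
    state with a GRU-style gated unit."""
    n, d = len(H), len(H[0])
    # Message passing: A[v] = sum over directed edges (u -> v) of W_msg . h_u
    A = [[0.0] * d for _ in range(n)]
    for u, v in edges:
        A[v] = vadd(A[v], matvec(W["msg"], H[u]))
    H_new = []
    for v in range(n):
        a, h = A[v], H[v]
        # Update and reset gates, as in a GRU cell
        z = [sigmoid(x + y) for x, y in zip(matvec(W["z"], a), matvec(U["z"], h))]
        r = [sigmoid(x + y) for x, y in zip(matvec(W["r"], a), matvec(U["r"], h))]
        rh = [ri * hi for ri, hi in zip(r, h)]
        cand = [math.tanh(x + y) for x, y in zip(matvec(W["h"], a), matvec(U["h"], rh))]
        # Convex combination of old state and candidate state
        H_new.append([(1 - zi) * hi + zi * ci for zi, hi, ci in zip(z, h, cand)])
    return H_new

# Toy example: 3 nodes on a directed cycle, 2-dimensional states,
# identity matrices standing in for learned weights.
I2 = [[1.0, 0.0], [0.0, 1.0]]
W = {"msg": I2, "z": I2, "r": I2, "h": I2}
U = {"z": I2, "r": I2, "h": I2}
H = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
edges = [(0, 1), (1, 2), (2, 0)]
H1 = ggnn_step(H, edges, W, U)
```

Stacking several such steps and decoding the final node states with an attention-equipped RNN decoder is, roughly, the graph-to-sequence setup the abstract describes.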
On the Stability of Gated Graph Neural Networks
In this paper, we aim to find the conditions for input-to-state stability (ISS)
and incremental input-to-state stability (δISS) of Gated Graph Neural
Networks (GGNNs). We show that this recurrent version of Graph Neural Networks
(GNNs) can be expressed as a dynamical distributed system and, as a
consequence, can be analysed using model-based techniques to assess its
stability and robustness properties. Then, the stability criteria found can be
exploited as constraints during the training process to enforce the internal
stability of the neural network. Two distributed control examples, flocking and
multi-robot motion control, show that using these conditions increases the
performance and robustness of the gated GNNs.
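Using stability criteria as training-time constraints, as the abstract describes, can amount to keeping recurrent weights contractive. As a generic sketch of that idea (not the paper's actual δISS condition), one can bound the spectral norm of a weight matrix by rescaling it after each gradient step:

```python
import math

def spectral_norm(M, iters=50):
    """Largest singular value of M, via power iteration on M^T M."""
    n = len(M[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        Mv = [sum(row[j] * v[j] for j in range(n)) for row in M]
        # w = M^T (M v)
        w = [sum(M[i][j] * Mv[i] for i in range(len(M))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Mv = [sum(row[j] * v[j] for j in range(n)) for row in M]
    return math.sqrt(sum(x * x for x in Mv))

def project_to_bound(M, bound):
    """Rescale M so its spectral norm does not exceed `bound` --
    a generic stability projection applied after a parameter update."""
    s = spectral_norm(M)
    if s <= bound:
        return M
    scale = bound / s
    return [[x * scale for x in row] for row in M]

# Illustrative recurrent weight matrix with spectral norm 2.0;
# projecting to a bound below 1 makes the linear part contractive.
W_rec = [[2.0, 0.0], [0.0, 0.5]]
W_proj = project_to_bound(W_rec, 0.9)
```

Norm-based bounds of this kind are sufficient but conservative; the paper's model-based analysis aims at sharper conditions tailored to the GGNN dynamics.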
A comparison between Recurrent Neural Networks and classical machine learning approaches in laser-induced breakdown spectroscopy
Recurrent Neural Networks are classes of Artificial Neural Networks whose
connections between nodes form a directed or undirected graph, enabling
temporal dynamic analysis. In this research, the laser-induced breakdown
spectroscopy (LIBS) technique is used for quantitative analysis of aluminum
alloys with different Recurrent Neural Network (RNN) architectures. The
fundamental harmonic (1064 nm) of a nanosecond Nd:YAG laser pulse is employed
to generate the LIBS plasma for the prediction of constituent concentrations of
the aluminum standard samples. Here, Recurrent Neural Networks based on
different cells, such as Long Short-Term Memory (LSTM), Gated Recurrent Unit
(GRU), and the Simple Recurrent Neural Network (Simple RNN), as well as
Recurrent Convolutional Networks comprising Conv-SimpleRNN, Conv-LSTM, and
Conv-GRU, are utilized for concentration prediction. A comparison is then
performed against predictions by classical machine learning methods: Support
Vector Regression (SVR), the Multi-Layer Perceptron (MLP), the Decision Tree
algorithm, Gradient Boosting Regression (GBR), Random Forest Regression (RFR),
Linear Regression, and the k-Nearest Neighbor (kNN) algorithm. Results showed
that the machine learning tools based on Convolutional Recurrent Networks had
the best prediction performance for most of the elements among the
multivariate methods considered.
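One of the classical baselines named above, k-Nearest Neighbor regression, is simple enough to sketch in a few lines. The "spectral intensity" features and concentration targets below are made-up illustrative numbers, not LIBS measurements:

```python
def knn_predict(X_train, y_train, x, k=3):
    """k-Nearest Neighbor regression: average the targets of the k
    training points closest to x (squared Euclidean distance)."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(xi, x)), yi)
        for xi, yi in zip(X_train, y_train)
    )
    neighbours = [yi for _, yi in dists[:k]]
    return sum(neighbours) / len(neighbours)

# Made-up data: two "spectral line intensity" features per sample,
# one element concentration as the regression target.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]]
y = [0.0, 1.0, 1.0, 10.0]
pred = knn_predict(X, y, [0.1, 0.1], k=3)
```

In the study's setting, the RNN- and CNN-based models replace this fixed distance-based rule with learned temporal and spatial features over the spectra, which is where their advantage comes from.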
Session-based Recommendation with Graph Neural Networks
The problem of session-based recommendation aims to predict user actions
based on anonymous sessions. Previous methods model a session as a sequence and
estimate user representations besides item representations to make
recommendations. Though they achieved promising results, these methods are
insufficient to obtain accurate user vectors in sessions and neglect the
complex transitions among items. To obtain accurate item embeddings and take
complex transitions of items into account, we propose a novel method, i.e.,
Session-based Recommendation with
Graph Neural Networks, SR-GNN for brevity. In the proposed method, session
sequences are modeled as graph-structured data. Based on the session graph, GNN
can capture complex transitions of items, which are difficult to reveal with
conventional sequential methods. Each session is then represented as
the composition of the global preference and the current interest of that
session using an attention network. Extensive experiments conducted on two real
datasets show that SR-GNN consistently outperforms state-of-the-art
session-based recommendation methods.
Comment: 9 pages, 4 figures, accepted by the AAAI Conference on Artificial
Intelligence (AAAI-19).
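The first modelling step the abstract describes, turning an anonymous session sequence into a graph of item transitions, can be sketched as follows. The out-degree normalisation follows the common SR-GNN-style construction, but the item names and session are hypothetical:

```python
def session_graph(session):
    """Build the directed item-transition graph for one session:
    nodes are the session's unique items, an edge i -> j is added for
    each consecutive click pair, and outgoing weights are normalised
    by each node's out-degree (an SR-GNN-style construction)."""
    items = sorted(set(session))
    idx = {it: i for i, it in enumerate(items)}
    n = len(items)
    A = [[0.0] * n for _ in range(n)]
    for a, b in zip(session, session[1:]):
        A[idx[a]][idx[b]] += 1.0
    for row in A:
        out = sum(row)
        if out:
            for j in range(n):
                row[j] /= out
    return items, A

# Hypothetical anonymous session of item clicks; v2 is visited twice,
# so its two outgoing transitions each get weight 0.5.
items, A = session_graph(["v1", "v2", "v3", "v2", "v4"])
```

A GNN then propagates item embeddings over this adjacency, and the attention network composes the resulting node states into the session's global-preference and current-interest vectors used for ranking.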