Interpretable Structure-Evolving LSTM
This paper develops a general framework for learning interpretable data
representations via Long Short-Term Memory (LSTM) recurrent neural networks
over hierarchical graph structures. Instead of learning LSTM models over
pre-fixed structures, we propose to further learn the intermediate,
interpretable multi-level graph structures in a progressive and stochastic way
from data during the LSTM network optimization. We thus call this model the
structure-evolving LSTM. In particular, starting with an initial element-level
graph representation where each node is a small data element, the
structure-evolving LSTM gradually evolves the multi-level graph representations
by stochastically merging the graph nodes with high compatibilities along the
stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two
connected nodes from their corresponding LSTM gate outputs, which is used to
generate a merging probability. Candidate graph structures are then generated
by grouping the nodes into cliques according to their merging probabilities. We
then produce the new graph structure with a Metropolis-Hastings algorithm,
which alleviates the risk of getting stuck in local optima through stochastic
sampling with an acceptance probability. Once a
graph structure is accepted, a higher-level graph is then constructed by taking
the partitioned cliques as its nodes. During the evolving process, the
representation becomes more abstract at higher levels, where redundant
information is filtered out, allowing more efficient propagation of long-range
data dependencies. We evaluate the effectiveness of the structure-evolving LSTM in
the application of semantic object parsing and demonstrate its advantage over
state-of-the-art LSTM models on standard benchmarks.
Comment: To appear in CVPR 2017 as a spotlight paper.
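The stochastic merging step described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: the per-edge compatibility scores (which the paper derives from LSTM gate outputs) are assumed to be given directly, and the function names are hypothetical.

```python
import random


def propose_merged_graph(edges, compat, rng):
    """Stochastically merge connected nodes into cliques.

    `edges` is a list of (u, v) pairs and `compat[(u, v)]` is a merging
    probability in [0, 1] (stand-in for the gate-derived compatibility).
    Returns a mapping from each node to its clique representative,
    computed with a small union-find structure.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for (u, v) in edges:
        find(u)
        find(v)  # register both endpoints even if the edge is not merged
        if rng.random() < compat[(u, v)]:  # merge this edge stochastically
            parent[find(u)] = find(v)

    return {node: find(node) for node in parent}


def metropolis_accept(new_score, old_score, rng):
    """Accept the proposed structure with probability min(1, new/old)."""
    ratio = new_score / old_score
    return ratio >= 1.0 or rng.random() < ratio
```

Each accepted clique partition then becomes the node set of the next, higher-level graph, which is what lets the stacked LSTM layers operate on progressively coarser structures.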
Tree Memory Networks for Modelling Long-term Temporal Dependencies
In the domain of sequence modelling, Recurrent Neural Networks (RNN) have
been capable of achieving impressive results in a variety of application areas
including visual question answering, part-of-speech tagging and machine
translation. However, this success in modelling short-term dependencies has not
transitioned to application areas such as trajectory prediction, which require
capturing both short-term and long-term relationships. In this paper, we
propose a Tree Memory Network (TMN) for modelling long-term and short-term
relationships in sequence-to-sequence mapping problems. The proposed
network architecture is composed of an input module, a controller, and a memory
module. In contrast to related literature, which models the memory as a
sequence of historical states, we model the memory as a recursive tree
structure. This tree more effectively captures temporal dependencies across
both short-term and long-term sequences through its hierarchy. We demonstrate
the effectiveness and flexibility of the proposed TMN
in two practical problems, aircraft trajectory modelling and pedestrian
trajectory modelling in a surveillance setting, and in both cases we outperform
the current state-of-the-art. Furthermore, we perform an in-depth analysis of
the evolution of the memory module content over time and provide visual
evidence of how the proposed TMN maps both long-term and short-term
relationships efficiently via its hierarchical structure.
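The recursive tree memory can be sketched as bottom-up pairwise pooling of historical states. This is a minimal sketch: a generic `combine` function stands in for the paper's learned composition of child memories, and the level-list representation is an assumption for illustration, not the paper's interface.

```python
def build_tree_memory(states, combine):
    """Recursively pool a sequence of memory states into a tree.

    Adjacent states are combined pairwise, level by level, so the root
    summarises the whole history while the leaves keep recent detail.
    Returns the list of levels, leaves first and the root last.
    """
    levels = [list(states)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = [combine(prev[i], prev[i + 1])
               for i in range(0, len(prev) - 1, 2)]
        if len(prev) % 2:        # carry an unpaired node up unchanged
            nxt.append(prev[-1])
        levels.append(nxt)
    return levels
```

Because the tree has logarithmic depth in the sequence length, a query that attends over the levels can reach information from the distant past in far fewer hops than scanning a flat sequence of historical states.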
Exploiting Cognitive Structure for Adaptive Learning
Adaptive learning, also known as adaptive teaching, relies on learning path
recommendation, which sequentially recommends personalized learning items
(e.g., lectures, exercises) to satisfy the unique needs of each learner.
Although it is well known that modeling the cognitive structure including
knowledge level of learners and knowledge structure (e.g., the prerequisite
relations) of learning items is important for learning path recommendation,
existing methods for adaptive learning often separately focus on either
knowledge levels of learners or knowledge structure of learning items. To fully
exploit the multifaceted cognitive structure for learning path recommendation,
we propose a Cognitive Structure Enhanced framework for Adaptive Learning,
named CSEAL. By viewing path recommendation as a Markov Decision Process and
applying an actor-critic algorithm, CSEAL can sequentially identify the right
learning items for different learners. Specifically, we first utilize a
recurrent neural network to trace the evolving knowledge levels of learners at
each learning step. Then, we design a navigation algorithm on the knowledge
structure to ensure the logicality of learning paths, which reduces the search
space in the decision process. Finally, an actor-critic algorithm, whose
parameters are dynamically updated along the learning path, is used to
determine what to learn next. Extensive experiments on real-world data demonstrate the
effectiveness and robustness of CSEAL.
Comment: Accepted by KDD 2019 Research Track. In Proceedings of the 25th ACM
SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD'19).
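The navigation step, which restricts the recommendable items to those whose prerequisites are already mastered, can be sketched as follows. This is a hypothetical illustration of the idea only; the dictionary-based knowledge structure and the function name are assumptions, not CSEAL's actual interface.

```python
def admissible_items(prereqs, mastered, all_items):
    """Filter the action space down to logically valid next items.

    `prereqs` maps each learning item to the list of items that must be
    mastered first (the prerequisite relations of the knowledge
    structure); `mastered` is the set of items the learner has already
    mastered. Only items that are unmastered and whose prerequisites are
    all satisfied remain, shrinking the search space the actor-critic
    policy must consider.
    """
    return [item for item in all_items
            if item not in mastered
            and all(p in mastered for p in prereqs.get(item, []))]
```

The actor-critic policy then only has to score this reduced candidate set at each step, which is what keeps the recommended learning paths logically ordered.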