Towards Interpretable Deep Learning Models for Knowledge Tracing
As an important technique for modeling the knowledge states of learners, traditional knowledge tracing (KT) models have been widely used to support intelligent tutoring systems and MOOC platforms. Driven by the fast advancement of deep learning techniques, deep neural networks have recently been adopted to design new KT models that achieve better prediction performance. However, the lack of interpretability of these models has severely impeded their practical application, as their outputs and working mechanisms are obscured by an opaque decision process and complex inner structures. We thus propose to adopt a post-hoc method to tackle the interpretability issue of deep learning based knowledge tracing (DLKT) models. Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model by backpropagating relevance from the model's output layer to its input layer. The experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions, and partially validate the computed relevance scores at both the question level and the concept level. We believe this is a solid step towards fully interpreting DLKT models and promoting their practical application in the education domain.
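To make the LRP step concrete, here is a minimal sketch of the epsilon-rule for a single linear layer (a toy illustration with assumed shapes and epsilon value, not the authors' implementation, which propagates relevance through a full RNN): the relevance arriving at the layer's output is redistributed onto its inputs in proportion to each input's contribution to the pre-activation.

import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6):
    """Epsilon-rule LRP for a linear layer y = W @ x + b.

    x: (d_in,) input; W: (d_out, d_in); R_out: (d_out,) relevance at the output.
    Returns the relevance redistributed onto the input, shape (d_in,).
    """
    z = W @ x + b                           # forward pre-activations
    z = np.where(z >= 0, z + eps, z - eps)  # stabilizer: avoid division by zero
    s = R_out / z                           # relevance per unit of pre-activation
    return x * (W.T @ s)                    # each input takes its share

# Toy usage: push the relevance of one output unit back onto three inputs.
rng = np.random.default_rng(0)
x, W, b = np.array([0.5, -1.0, 2.0]), rng.normal(size=(2, 3)), np.zeros(2)
R_in = lrp_epsilon(x, W, b, R_out=np.array([1.0, 0.0]))
print(R_in, R_in.sum())  # total relevance is approximately conserved

Applying this rule layer by layer from the output back to the input is what yields the per-question and per-concept relevance scores discussed above.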
End to End Deep Neural Network Frequency Demodulation of Speech Signals
Frequency modulation (FM) is a form of radio broadcasting that is widely used today and has been for almost a century. We suggest a software-defined-radio (SDR) receiver for FM demodulation that adopts an end-to-end learning based approach and utilizes prior information about the transmitted speech message in the demodulation process. The receiver detects and enhances speech from the in-phase and quadrature components of its baseband version. The new system yields high detection performance under both acoustical disturbances and communication channel noise, and is foreseen to outperform established methods under low signal-to-noise ratio (SNR) conditions in both mean square error and perceptual evaluation of speech quality (PESQ) score.
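For intuition, a minimal sketch of such an end-to-end receiver (the architecture, channel counts, and sampling rate are assumptions made for the sketch, not the paper's design): a 1-D convolutional network maps the two baseband I/Q channels directly to a speech estimate and is trained against the clean transmitted message.

import torch
import torch.nn as nn

class IQDemodNet(nn.Module):
    """Maps in-phase/quadrature baseband samples to a speech estimate."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, hidden, kernel_size=65, padding=32),  # 2 channels: I and Q
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=65, padding=32),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),               # 1 channel: speech
        )

    def forward(self, iq):       # iq: (batch, 2, n_samples)
        return self.net(iq)      # (batch, 1, n_samples)

# One training step against the clean transmitted speech (MSE surrogate for the sketch).
model = IQDemodNet()
iq = torch.randn(8, 2, 16000)     # one second of noisy baseband at 16 kHz
clean = torch.randn(8, 1, 16000)  # placeholder for the transmitted message
loss = nn.functional.mse_loss(model(iq), clean)
loss.backward()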
Evolutionary multi-stage financial scenario tree generation
Multi-stage financial decision optimization under uncertainty depends on a
careful numerical approximation of the underlying stochastic process, which
describes the future returns of the selected assets or asset categories.
Various approaches towards an optimal generation of discrete-time,
discrete-state approximations (represented as scenario trees) have been
suggested in the literature. In this paper, a new evolutionary algorithm for creating scenario trees for multi-stage financial optimization models is presented. Numerical results and implementation details conclude the paper.
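As a toy illustration of the evolutionary idea (the moment-matching fitness and Gaussian mutation below are simplified assumptions, not the paper's operators), a (1+1)-style loop that perturbs the node values of a single-stage scenario fan and keeps the candidate whose first two moments best match a target return distribution:

import numpy as np

rng = np.random.default_rng(0)
target_mean, target_std = 0.05, 0.20  # assumed target return moments

def fitness(tree):
    """Distance between the tree's moments and the target moments."""
    values, probs = tree
    m = np.sum(probs * values)
    s = np.sqrt(np.sum(probs * (values - m) ** 2))
    return (m - target_mean) ** 2 + (s - target_std) ** 2

def mutate(tree, scale=0.02):
    values, probs = tree
    return values + rng.normal(0.0, scale, size=values.shape), probs

# (1+1) evolution on a 5-node scenario fan with fixed probabilities.
probs = np.full(5, 0.2)
best = (rng.normal(0.05, 0.2, size=5), probs)
for _ in range(500):
    child = mutate(best)
    if fitness(child) < fitness(best):
        best = child
print(best[0], fitness(best))

A multi-stage version would apply the same mutate-and-select loop to every node of the tree and score candidates against stage-wise distributional distances.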
Zero-shot keyword spotting for visual speech recognition in-the-wild
Visual keyword spotting (KWS) is the problem of estimating whether a text
query occurs in a given recording using only video information. This paper
focuses on visual KWS for words unseen during training, a real-world, practical setting which has so far received no attention from the community. To this end,
we devise an end-to-end architecture comprising (a) a state-of-the-art visual
feature extractor based on spatiotemporal Residual Networks, (b) a
grapheme-to-phoneme model based on sequence-to-sequence neural networks, and
(c) a stack of recurrent neural networks which learn how to correlate visual
features with the keyword representation. Unlike prior works on KWS,
which try to learn word representations merely from sequences of graphemes
(i.e. letters), we propose the use of a grapheme-to-phoneme encoder-decoder
model which learns how to map words to their pronunciation. We demonstrate that
our system obtains very promising visual-only KWS results on the challenging
LRS2 database, for keywords unseen during training. We also show that our
system outperforms a baseline which addresses KWS via automatic speech
recognition (ASR), while it drastically improves over other recently proposed
ASR-free KWS methods.
Comment: Accepted at ECCV-2018
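A minimal sketch of how such a system can score a clip for an unseen keyword (the dimensions, phoneme inventory size, and max-pooling over frames are assumptions; the real model uses the three stages listed above): the keyword's phoneme sequence is encoded into a vector, a recurrent network runs over the visual features, and the detection score is read off their correlation.

import torch
import torch.nn as nn

class KeywordScorer(nn.Module):
    def __init__(self, n_phonemes=40, vis_dim=512, hid=256):
        super().__init__()
        self.phon_emb = nn.Embedding(n_phonemes, hid)
        self.keyword_enc = nn.GRU(hid, hid, batch_first=True)     # pronunciation encoder
        self.visual_enc = nn.GRU(vis_dim, hid, batch_first=True)  # runs over video features

    def forward(self, phonemes, visual):
        # phonemes: (batch, n_phon) int ids; visual: (batch, n_frames, vis_dim)
        _, kw = self.keyword_enc(self.phon_emb(phonemes))  # (1, batch, hid) keyword vector
        frames, _ = self.visual_enc(visual)                # (batch, n_frames, hid)
        scores = torch.einsum('btd,bd->bt', frames, kw.squeeze(0))
        return scores.max(dim=1).values                    # best frame-level match per clip

model = KeywordScorer()
score = model(torch.randint(0, 40, (4, 7)), torch.randn(4, 100, 512))
print(score.shape)  # (4,) one detection score per clip

Because the keyword enters only through its pronunciation embedding, nothing in the scorer is tied to a closed vocabulary, which is what makes zero-shot spotting possible.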
Deep Tree Transductions - A Short Survey
The paper surveys recent extensions of Long Short-Term Memory networks to handle tree structures, from the perspective of learning non-trivial forms of isomorphic structured transductions. It provides a discussion of modern TreeLSTM models, showing the effect of the bias induced by the direction of tree processing. An empirical analysis is performed on real-world benchmarks, highlighting how there is no single model adequate to effectively approach all transduction problems.
Comment: To appear in the Proceedings of the 2019 INNS Big Data and Deep Learning (INNSBDDL 2019). arXiv admin note: text overlap with arXiv:1809.0909
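For reference, a compact Child-Sum TreeLSTM cell of the kind these models build on (a sketch; dimensions are assumptions): children's hidden states are summed before the gates are computed, while each child keeps its own forget gate.

import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    def __init__(self, in_dim, mem_dim):
        super().__init__()
        self.iou = nn.Linear(in_dim + mem_dim, 3 * mem_dim)  # input/output/update gates
        self.fx = nn.Linear(in_dim, mem_dim)                 # forget gate, input part
        self.fh = nn.Linear(mem_dim, mem_dim)                # forget gate, per-child part

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (n_children, mem_dim)
        h_sum = child_h.sum(dim=0)
        i, o, u = torch.chunk(self.iou(torch.cat([x, h_sum])), 3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        f = torch.sigmoid(self.fx(x) + self.fh(child_h))  # one forget gate per child
        c = i * u + (f * child_c).sum(dim=0)
        return torch.tanh(c) * o, c                       # the node's (h, c)

cell = ChildSumTreeLSTMCell(in_dim=50, mem_dim=100)
h, c = cell(torch.randn(50), torch.randn(3, 100), torch.randn(3, 100))

Running such a cell bottom-up over a tree gives one direction of processing; the survey's point is that the choice of direction biases which transductions the model can learn.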
Recurrent Fully Convolutional Neural Networks for Multi-slice MRI Cardiac Segmentation
In cardiac magnetic resonance imaging, fully-automatic segmentation of the
heart enables precise structural and functional measurements to be taken, e.g.
from short-axis MR images of the left-ventricle. In this work we propose a
recurrent fully-convolutional network (RFCN) that learns image representations
from the full stack of 2D slices and has the ability to leverage inter-slice
spatial dependencies through internal memory units. RFCN combines anatomical detection and segmentation into a single architecture that is trained end-to-end, thus significantly reducing computational time, simplifying the segmentation pipeline, and potentially enabling real-time applications. We
report on an investigation of RFCN using two datasets, including the publicly
available MICCAI 2009 Challenge dataset. Comparisons have been carried out
between fully convolutional networks and deep restricted Boltzmann machines,
including a recurrent version that leverages inter-slice spatial correlation.
Our studies suggest that RFCN produces state-of-the-art results and can
substantially improve the delineation of contours near the apex of the heart.
Comment: MICCAI Workshop RAMBO 2016
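A minimal sketch of the recurrent fully-convolutional idea (layer sizes and the exact placement of the recurrence are assumptions): each 2D slice is encoded convolutionally, a GRU carries context along the slice axis at every spatial location of the bottleneck, and a decoder emits a per-slice mask.

import torch
import torch.nn as nn

class RFCNSketch(nn.Module):
    """Per-slice conv encoder/decoder with a GRU carrying inter-slice context."""
    def __init__(self, feat=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU())
        self.gru = nn.GRU(feat, feat, batch_first=True)  # runs along the slice axis
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, 1, 4, stride=2, padding=1))

    def forward(self, x):                    # x: (batch, n_slices, 1, H, W)
        b, s = x.shape[:2]
        f = self.enc(x.flatten(0, 1))        # (b*s, feat, H/4, W/4)
        c, fh, fw = f.shape[1:]
        seq = (f.view(b, s, c, fh, fw)       # one GRU sequence per spatial location
                .permute(0, 3, 4, 1, 2)
                .reshape(b * fh * fw, s, c))
        out, _ = self.gru(seq)
        f = (out.view(b, fh, fw, s, c)
                .permute(0, 3, 4, 1, 2)
                .reshape(b * s, c, fh, fw))
        return self.dec(f).view(b, s, 1, x.shape[3], x.shape[4])

masks = RFCNSketch()(torch.randn(2, 10, 1, 64, 64))  # a 10-slice short-axis stack
print(masks.shape)                                   # (2, 10, 1, 64, 64)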
Deep Autoencoder for Combined Human Pose Estimation and Body Model Upscaling
We present a method for simultaneously estimating 3D human pose and body
shape from a sparse set of wide-baseline camera views. We train a symmetric
convolutional autoencoder with a dual loss that enforces learning of a latent
representation that encodes skeletal joint positions, and at the same time
learns a deep representation of volumetric body shape. We harness the latter to
up-scale input volumetric data by a factor of 4, whilst recovering a
3D estimate of joint positions with equal or greater accuracy than the state of
the art. Inference runs in real-time (25 fps) and has the potential for passive
human behaviour monitoring where there is a requirement for high-fidelity estimation of human body shape and pose.
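A sketch of the dual-loss idea only (the tiny volumetric encoder/decoder and all sizes are assumptions; the actual model is far larger): one head regresses skeletal joint positions from the latent code while the decoder reconstructs the body volume at 4x the input resolution, and the two losses are summed.

import torch
import torch.nn as nn

class PoseShapeAutoencoder(nn.Module):
    def __init__(self, latent=128, n_joints=17):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv3d(1, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * 8 ** 3, latent))
        self.joints = nn.Linear(latent, n_joints * 3)  # skeletal branch of the dual loss
        self.dec = nn.Sequential(
            nn.Linear(latent, 8 * 8 ** 3), nn.ReLU(), nn.Unflatten(1, (8, 8, 8, 8)),
            nn.ConvTranspose3d(8, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose3d(8, 1, 4, stride=2, padding=1))  # 4x up-scaled volume

    def forward(self, vol):                  # vol: (batch, 1, 16, 16, 16)
        z = self.enc(vol)
        return self.joints(z), self.dec(z)   # joint positions and a 64^3 volume

model = PoseShapeAutoencoder()
pred_joints, pred_vol = model(torch.randn(2, 1, 16, 16, 16))
gt_joints, gt_vol = torch.randn(2, 17 * 3), torch.randn(2, 1, 64, 64, 64)
loss = (nn.functional.mse_loss(pred_joints, gt_joints)
        + nn.functional.mse_loss(pred_vol, gt_vol))  # the dual loss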
Learning Visual Question Answering by Bootstrapping Hard Attention
Attention mechanisms in biological perception are thought to select subsets
of perceptual information for more sophisticated processing which would be
prohibitive to perform on all sensory inputs. In computer vision, however,
there has been relatively little exploration of hard attention, where some
information is selectively ignored, in spite of the success of soft attention,
where information is re-weighted and aggregated, but never filtered out. Here,
we introduce a new approach for hard attention and find it achieves very competitive performance on a recently released visual question answering dataset, equalling and in some cases surpassing similar soft attention
architectures while entirely ignoring some features. Even though the hard
attention mechanism is thought to be non-differentiable, we found that the
feature magnitudes correlate with semantic relevance, and provide a useful
signal for our mechanism's attentional selection criterion. Because hard
attention selects important features of the input information, it can also be
more efficient than analogous soft attention mechanisms. This is especially
important for recent approaches that use non-local pairwise operations, whereby
computational and memory costs are quadratic in the size of the set of
features.
Comment: ECCV 2018
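The magnitude-based selection at the heart of the approach fits in a few lines (k and the feature layout are assumptions for the sketch): keep only the k spatial feature vectors with the largest L2 norm and discard the rest before any further processing.

import torch

def hard_attention(features, k):
    """Select the k feature vectors with the largest L2 norm.

    features: (batch, n_cells, dim) spatial feature vectors from a CNN.
    Returns:  (batch, k, dim); all other cells are discarded entirely.
    """
    norms = features.norm(dim=-1)        # (batch, n_cells) feature magnitudes
    idx = norms.topk(k, dim=-1).indices  # positions of the top-k cells
    return torch.gather(
        features, 1, idx.unsqueeze(-1).expand(-1, -1, features.shape[-1]))

feats = torch.randn(2, 196, 512)         # e.g. a 14x14 grid of 512-d features
selected = hard_attention(feats, k=16)   # only 16 cells survive
print(selected.shape)                    # (2, 16, 512)

This is also where the efficiency claim comes from: a non-local pairwise operation over 196 cells touches 196^2 pairs, while over the 16 selected cells it touches only 16^2, roughly a 150x reduction.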
Label-Dependencies Aware Recurrent Neural Networks
In the last few years, Recurrent Neural Networks (RNNs) have proved effective
on several NLP tasks. Despite such great success, their ability to model sequence labeling is still limited. This led research toward solutions where RNNs are combined with models that have already proved effective in this domain, such as CRFs. In this work we propose a far simpler but very effective solution: an evolution of the simple Jordan RNN, where labels are re-injected as input into the network and converted into embeddings, in the same way as words. We compare this RNN variant to the other RNN models, Elman and Jordan RNNs, LSTM and GRU, on two well-known tasks of Spoken Language Understanding (SLU). Thanks to label embeddings and their combination at the hidden layer, the proposed variant, which uses more parameters than Elman and Jordan RNNs but far fewer than LSTM and GRU, is not only more effective than the other RNNs but also outperforms sophisticated CRF models.
Comment: 22 pages, 3 figures. Accepted at the CICLing 2017 conference. Best Verifiability, Reproducibility, and Working Description award.
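A compact sketch of the variant (sizes and the exact recurrence are assumptions): the previously predicted label is embedded exactly like a word and concatenated to the input, so label dependencies flow through the same embedding machinery.

import torch
import torch.nn as nn

class LabelFeedbackRNN(nn.Module):
    """Jordan-style tagger: the previous label re-enters as an embedding."""
    def __init__(self, n_words, n_labels, emb=64, hid=128):
        super().__init__()
        self.word_emb = nn.Embedding(n_words, emb)
        self.label_emb = nn.Embedding(n_labels, emb)  # labels embedded like words
        self.cell = nn.GRUCell(2 * emb, hid)
        self.out = nn.Linear(hid, n_labels)

    def forward(self, words):                 # words: (batch, seq_len) word ids
        b = words.shape[0]
        h = words.new_zeros(b, self.cell.hidden_size, dtype=torch.float)
        prev = words.new_zeros(b)             # label id 0 used as a "start" label
        logits = []
        for t in range(words.shape[1]):
            x = torch.cat([self.word_emb(words[:, t]), self.label_emb(prev)], dim=-1)
            h = self.cell(x, h)
            step = self.out(h)
            logits.append(step)
            prev = step.argmax(dim=-1)        # feed the predicted label back in
        return torch.stack(logits, dim=1)     # (batch, seq_len, n_labels)

out = LabelFeedbackRNN(n_words=1000, n_labels=20)(torch.randint(0, 1000, (4, 12)))

During training one would typically feed the gold previous label (teacher forcing); greedy feedback is shown here only to keep the sketch short.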
Scene Coordinate Regression with Angle-Based Reprojection Loss for Camera Relocalization
Image-based camera relocalization is an important problem in computer vision
and robotics. Recent works utilize convolutional neural networks (CNNs) to regress, for each pixel in a query image, its corresponding 3D world coordinates in the scene. The final pose is then solved via a RANSAC-based optimization scheme
using the predicted coordinates. Usually, the CNN is trained with ground truth
scene coordinates, but it has also been shown that the network can discover 3D
scene geometry automatically by minimizing single-view reprojection loss.
However, due to the deficiencies of the reprojection loss, the network needs to
be carefully initialized. In this paper, we present a new angle-based
reprojection loss, which resolves the issues of the original reprojection loss.
With this new loss function, the network can be trained without careful
initialization, and the system achieves more accurate results. The new loss
also enables us to utilize available multi-view constraints, which further
improve performance.
Comment: ECCV 2018 Workshop (Geometry Meets Deep Learning)
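To illustrate what an angle-based reprojection loss can look like (an illustrative formulation under assumed pinhole conventions, not necessarily the paper's exact loss): instead of penalizing the pixel distance of the reprojected point, penalize the angle between the camera ray through the observed pixel and the ray towards the predicted 3D coordinate.

import torch

def angle_reprojection_loss(pred_world, pixels, R, t, K):
    """Angle between the ray to the predicted 3D point and the observed pixel ray.

    pred_world: (n, 3) predicted scene coordinates
    pixels:     (n, 2) pixel locations of those predictions
    R, t, K:    camera rotation (3, 3), translation (3,), intrinsics (3, 3)
    """
    cam = pred_world @ R.T + t                                    # points in camera frame
    ones = torch.ones(pixels.shape[0], 1)
    rays = torch.cat([pixels, ones], dim=1) @ torch.inverse(K).T  # observed pixel rays
    cos = torch.nn.functional.cosine_similarity(cam, rays, dim=1)
    return (1.0 - cos).mean()                                     # zero when rays align

n = 100
K = torch.tensor([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
loss = angle_reprojection_loss(torch.randn(n, 3) + torch.tensor([0., 0., 5.]),
                               torch.rand(n, 2) * 640,
                               torch.eye(3), torch.zeros(3), K)

Unlike a pixel-distance reprojection loss, the angular term stays bounded even for predictions that project far outside the image, which is one intuition for why such a loss tolerates naive initialization.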