Hierarchical RNN with Static Sentence-Level Attention for Text-Based Speaker Change Detection
Speaker change detection (SCD) is an important task in dialog modeling. Our
paper addresses the problem of text-based SCD, which differs from existing
audio-based studies and is useful in various scenarios, for example, processing
dialog transcripts where speaker identities are missing (e.g., OpenSubtitle),
and enhancing audio SCD with textual information. We formulate text-based SCD
as a matching problem of utterances before and after a certain decision point;
we propose a hierarchical recurrent neural network (RNN) with static
sentence-level attention. Experimental results show that neural networks
consistently achieve better performance than feature-based approaches, and that
our attention-based model significantly outperforms non-attention neural
networks.
Comment: In Proceedings of the ACM on Conference on Information and Knowledge Management (CIKM), 2017
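To make the matching formulation concrete, here is a minimal PyTorch sketch of the general idea; it is an illustrative toy under assumed dimensions, not the authors' exact architecture. A word-level GRU encodes each utterance, a sentence-level GRU runs over the utterance vectors on each side of the candidate decision point, a static (input-independent, learned-query) attention pools each side, and the two pooled vectors are matched and classified as change vs. no change.

# Minimal sketch (illustrative, not the paper's exact model): hierarchical
# RNN with static sentence-level attention for text-based SCD.
import torch
import torch.nn as nn

class TextSCD(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.word_rnn = nn.GRU(emb_dim, hid, batch_first=True)
        self.sent_rnn = nn.GRU(hid, hid, batch_first=True)
        self.query = nn.Parameter(torch.randn(hid))  # static attention query
        self.clf = nn.Linear(4 * hid, 2)  # [left; right; |l-r|; l*r]

    def encode_side(self, utts):
        # utts: (batch, n_utts, n_words) token ids on one side of the point
        b, n, w = utts.shape
        emb = self.emb(utts.view(b * n, w))
        _, h = self.word_rnn(emb)            # h: (1, b*n, hid)
        sents = h.squeeze(0).view(b, n, -1)  # one vector per utterance
        states, _ = self.sent_rnn(sents)     # (b, n, hid)
        scores = torch.softmax(states @ self.query, dim=1)   # static attention
        return (scores.unsqueeze(-1) * states).sum(dim=1)    # (b, hid)

    def forward(self, left_utts, right_utts):
        l = self.encode_side(left_utts)
        r = self.encode_side(right_utts)
        feats = torch.cat([l, r, (l - r).abs(), l * r], dim=-1)
        return self.clf(feats)  # logits over {no change, change}

model = TextSCD(vocab_size=5000)
left = torch.randint(0, 5000, (8, 3, 20))   # 8 examples, 3 utts, 20 words
right = torch.randint(0, 5000, (8, 3, 20))
print(model(left, right).shape)  # torch.Size([8, 2])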
Convolutional Neural Networks over Tree Structures for Programming Language Processing
Programming language processing (similar to natural language processing) is a
hot research topic in the field of software engineering; it has also aroused
growing interest in the artificial intelligence community. However, different
from a natural language sentence, a program contains rich, explicit, and
complicated structural information. Hence, traditional NLP models may be
inappropriate for programs. In this paper, we propose a novel tree-based
convolutional neural network (TBCNN) for programming language processing, in
which a convolution kernel is designed over programs' abstract syntax trees to
capture structural information. TBCNN is a generic architecture for programming
language processing; our experiments show its effectiveness in two different
program analysis tasks: classifying programs according to functionality, and
detecting code snippets of certain patterns. TBCNN outperforms baseline
methods, including several neural models for NLP.
Comment: Accepted at AAAI-16
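The tree-based convolution can be sketched compactly; the following PyTorch toy is an assumption-heavy simplification (a single shared child weight matrix instead of the paper's continuous-binary-tree kernel, and made-up node-type ids). Each convolution window covers an AST node plus its direct children, the per-window features are max-pooled over the whole tree, and a linear layer classifies the program.

# Minimal sketch of tree-based convolution over an AST (illustrative only).
import torch
import torch.nn as nn

class TBConv(nn.Module):
    def __init__(self, n_node_types, dim=64, feat=128, n_classes=10):
        super().__init__()
        self.emb = nn.Embedding(n_node_types, dim)
        self.w_node = nn.Linear(dim, feat)   # weight for the window's root node
        self.w_child = nn.Linear(dim, feat)  # shared weight for its children
        self.clf = nn.Linear(feat, n_classes)

    def conv_windows(self, node):
        # node is (type_id, [children]); one window per node in the subtree
        type_id, children = node
        out = self.w_node(self.emb(torch.tensor(type_id)))
        feats = []
        for child in children:
            out = out + self.w_child(self.emb(torch.tensor(child[0])))
            feats.extend(self.conv_windows(child))
        feats.append(torch.tanh(out))
        return feats

    def forward(self, tree):
        feats = torch.stack(self.conv_windows(tree))  # (n_nodes, feat)
        pooled, _ = feats.max(dim=0)                  # pool over all windows
        return self.clf(pooled)

ast = (0, [(1, [(2, [])]), (3, [])])  # toy AST: (node_type_id, children)
model = TBConv(n_node_types=50)
print(model(ast).shape)  # torch.Size([10])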
Easy over Hard: A Case Study on Deep Learning
While deep learning is an exciting new technique, the benefits of this method
need to be assessed with respect to its computational cost. This is
particularly important for deep learning since these learners need hours (to
weeks) to train the model. Such long training time limits the ability of (a) a
researcher to test the stability of their conclusion via repeated runs with
different random seeds; and (b) other researchers to repeat, improve, or even
refute that original work.
For example, recently, deep learning was used to find which questions in the
Stack Overflow programmer discussion forum can be linked together. That deep
learning system took 14 hours to execute. We show here that applying a very
simple optimizer called differential evolution (DE) to fine-tune an SVM
achieves similar (and sometimes better) results. The DE approach terminated in
10 minutes, i.e., 84 times faster than the deep learning method.
We offer these results as a cautionary tale to the software analytics
community and suggest that not every new innovation should be applied without
critical analysis. If researchers deploy some new and expensive process, that
work should be baselined against some simpler and faster alternatives.
Comment: 12 pages, 6 figures, accepted at FSE 2017
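For readers who want to try the "easy" baseline, here is a minimal sketch built from off-the-shelf tools. Note the assumptions: the paper implements its own small DE and tunes an SVM on a Stack Overflow text-mining task, whereas this toy uses SciPy's differential_evolution and a stock digits dataset purely to illustrate the recipe (tune C and gamma by maximizing cross-validated accuracy).

# Minimal sketch: differential evolution tuning an SVM's hyperparameters.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def objective(params):
    # params = (log10(C), log10(gamma)); minimize negative CV accuracy
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    return -cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

result = differential_evolution(objective, bounds=[(-2, 3), (-5, 0)],
                                maxiter=10, popsize=8, seed=1)
print("best C=%.3g gamma=%.3g acc=%.3f"
      % (10 ** result.x[0], 10 ** result.x[1], -result.fun))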