Neurons and symbols: a manifesto
We discuss the purpose of neural-symbolic integration, including its principles, mechanisms, and applications. We outline a cognitive computational model for neural-symbolic integration, position the model in the broader context of multi-agent systems, machine learning, and automated reasoning, and list some of the challenges the area of neural-symbolic computation must address to achieve the promise of effectively integrating robust learning with expressive reasoning under uncertainty.
Connectionist Theory Refinement: Genetically Searching the Space of Network Topologies
An algorithm that learns from a set of examples should ideally be able to
exploit the available resources of (a) abundant computing power and (b)
domain-specific knowledge to improve its ability to generalize. Connectionist
theory-refinement systems, which use background knowledge to select a neural
network's topology and initial weights, have proven to be effective at
exploiting domain-specific knowledge; however, most do not exploit available
computing power. This weakness occurs because they lack the ability to refine
the topology of the neural networks they produce, thereby limiting
generalization, especially when given impoverished domain theories. We present
the REGENT algorithm which uses (a) domain-specific knowledge to help create an
initial population of knowledge-based neural networks and (b) genetic operators
of crossover and mutation (specifically designed for knowledge-based networks)
to continually search for better network topologies. Experiments on three
real-world domains indicate that our new algorithm is able to significantly
increase generalization compared to a standard connectionist theory-refinement
system, as well as our previous algorithm for growing knowledge-based networks.

Comment: See http://www.jair.org/ for any accompanying file
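The abstract describes a genetic search over network topologies seeded by a domain theory. As a rough illustration of that loop, here is a minimal sketch; it is not REGENT's actual algorithm (whose operators act on knowledge-based networks derived from a domain theory), and every name and the toy fitness function below are hypothetical:

```python
# Minimal sketch of genetic topology search in the spirit of REGENT.
# Illustrative only: topologies are plain lists of hidden-layer widths,
# and evaluate() is a toy stand-in for "train the network and measure
# validation accuracy".
import random

def evaluate(topology):
    # Hypothetical fitness; replace with training + validation accuracy.
    return -sum(abs(w - 16) for w in topology)

def crossover(a, b):
    # One-point crossover over hidden-layer width lists.
    cut = random.randint(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def mutate(topology, p=0.3):
    t = list(topology)
    if random.random() < p:                    # perturb a layer width
        i = random.randrange(len(t))
        t[i] = max(1, t[i] + random.choice([-4, -2, 2, 4]))
    if random.random() < p and len(t) < 5:     # grow a layer
        t.append(random.randint(4, 32))
    if random.random() < p and len(t) > 1:     # prune a layer
        t.pop(random.randrange(len(t)))
    return t

def topology_search(seeds, generations=50, pop_size=20):
    # Seed the population (from knowledge-based networks), then evolve.
    pop = list(seeds)
    while len(pop) < pop_size:
        pop.append(mutate(random.choice(seeds)))
    for _ in range(generations):
        pop.sort(key=evaluate, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [mutate(crossover(*random.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=evaluate)

print(topology_search([[8, 8], [12]]))
```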
Learning Hard Alignments with Variational Inference
There has recently been significant interest in hard attention models for
tasks such as object recognition, visual captioning and speech recognition.
Hard attention can offer benefits over soft attention such as decreased
computational cost, but training hard attention models can be difficult because
of the discrete latent variables they introduce. Previous work used REINFORCE
and Q-learning to approach these issues, but those methods can provide
high-variance gradient estimates and be slow to train. In this paper, we tackle
the problem of learning hard attention for a sequential task using variational
inference methods, specifically the recently introduced VIMCO and NVIL.
Furthermore, we propose a novel baseline that adapts VIMCO to this setting. We
demonstrate our method on a phoneme recognition task in clean and noisy
environments and show that our method outperforms REINFORCE, with the
difference being greater for the more complicated task.
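For readers unfamiliar with VIMCO (Mnih & Rezende, 2016), its variance reduction comes from a leave-one-out control variate built from the multi-sample bound. A minimal NumPy sketch of those per-sample learning signals, assuming the K per-sample log importance weights have already been computed (the function name is hypothetical):

```python
import numpy as np
from scipy.special import logsumexp

def vimco_learning_signals(log_w):
    """Per-sample learning signals for VIMCO, given an array of K
    log importance weights log_w[k] = log p(x, z_k) - log q(z_k | x),
    with K >= 2."""
    K = log_w.shape[0]
    L = logsumexp(log_w) - np.log(K)        # multi-sample lower bound
    # Leave-one-out control variate: replace log_w[j] with the mean of
    # the other samples' log weights, then recompute the bound.
    loo_mean = (log_w.sum() - log_w) / (K - 1)
    signals = np.empty(K)
    for j in range(K):
        replaced = log_w.copy()
        replaced[j] = loo_mean[j]
        signals[j] = L - (logsumexp(replaced) - np.log(K))
    return L, signals

bound, signals = vimco_learning_signals(np.array([-1.2, -0.7, -2.1, -0.9]))
```

Each signal multiplies the corresponding grad-log q(z_j | x) term in the score-function gradient, replacing the single high-variance REINFORCE signal.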
Self-Attention Networks for Connectionist Temporal Classification in Speech Recognition
The success of self-attention in NLP has led to recent applications in
end-to-end encoder-decoder architectures for speech recognition. Separately,
connectionist temporal classification (CTC) has matured as an alignment-free,
non-autoregressive approach to sequence transduction, either by itself or in
various multitask and decoding frameworks. We propose SAN-CTC, a deep, fully
self-attentional network for CTC, and show it is tractable and competitive for
end-to-end speech recognition. SAN-CTC trains quickly and outperforms existing
CTC models and most encoder-decoder models, with character error rates (CERs)
of 4.7% in 1 day on WSJ eval92 and 2.8% in 1 week on LibriSpeech test-clean,
with a fixed architecture and one GPU. Similar improvements hold for WERs after
LM decoding. We motivate the architecture for speech, evaluate position and
downsampling approaches, and explore how label alphabets (character, phoneme,
subword) affect attention heads and performance.

Comment: Accepted to ICASSP 2019
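The overall recipe, a self-attention encoder trained directly with the CTC loss, can be sketched in a few lines of PyTorch. This is an illustrative toy under assumed hyperparameters, not the paper's SAN-CTC architecture; positional encoding and downsampling, which the paper evaluates, are omitted:

```python
import torch
import torch.nn as nn

N_LABELS = 32  # e.g., characters; the CTC blank gets index N_LABELS

class TinySANCTC(nn.Module):
    def __init__(self, n_mels=80, d_model=256, n_heads=4, n_layers=6):
        super().__init__()
        self.proj = nn.Linear(n_mels, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.out = nn.Linear(d_model, N_LABELS + 1)   # +1 for CTC blank

    def forward(self, feats):                  # feats: (B, T, n_mels)
        # Positional encoding and frame downsampling omitted for brevity.
        h = self.encoder(self.proj(feats))     # (B, T, d_model)
        return self.out(h).log_softmax(dim=-1)

model = TinySANCTC()
ctc = nn.CTCLoss(blank=N_LABELS)

feats = torch.randn(2, 100, 80)                # batch of spectrogram frames
targets = torch.randint(0, N_LABELS, (2, 20))  # label indices, no blanks
log_probs = model(feats).transpose(0, 1)       # CTCLoss expects (T, B, C)
loss = ctc(log_probs, targets,
           torch.full((2,), 100, dtype=torch.long),
           torch.full((2,), 20, dtype=torch.long))
loss.backward()
```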
Order-Preserving Abstractive Summarization for Spoken Content Based on Connectionist Temporal Classification
Connectionist temporal classification (CTC) is a powerful approach for
sequence-to-sequence learning and has been widely used in speech recognition. A central idea of CTC is the addition of a "blank" label during training. With this mechanism, CTC eliminates the need for segment alignment and has therefore been applied to a variety of sequence-to-sequence learning problems.
In this work, we applied CTC to abstractive summarization of spoken content. Here, the "blank" label indicates that the corresponding input data are less important or noisy and can therefore be ignored. This approach was shown to outperform existing methods in terms of ROUGE scores on the Chinese Gigaword and MATBN corpora. It also has the nice property that the ordering of words or characters in the input documents is better preserved in the generated summaries.

Comment: Accepted by Interspeech 2017
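The order-preserving behavior follows from CTC's standard decoding rule: merge repeated labels, then delete blanks, so the surviving labels keep their input order. A minimal sketch of that collapse step (greedy decoding; the example labels are illustrative):

```python
BLANK = "_"  # the CTC "blank" label

def ctc_collapse(frame_labels):
    """Greedy CTC decoding: merge consecutive repeats, then drop blanks.
    Surviving labels keep their input order, which is why CTC-based
    summaries tend to preserve the source ordering."""
    out, prev = [], None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return out

# Frames judged unimportant or noisy map to blank and simply vanish:
print(ctc_collapse(["新", "_", "聞", "聞", "_", "_", "摘", "要"]))
# -> ['新', '聞', '摘', '要']
```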