
    Finding Answers from the Word of God: Domain Adaptation for Neural Networks in Biblical Question Answering

    Question answering (QA) has benefited significantly from deep learning techniques in recent years. However, domain-specific QA remains a challenge because of the large amount of data required to train a neural network. This paper studies the answer sentence selection task in the Bible domain, answering questions by selecting relevant verses from the Bible. For this purpose, we create a new dataset, BibleQA, based on Bible trivia questions, and propose three neural network models for our task. We pre-train our models on a large-scale QA dataset, SQuAD, and investigate the effect of transferring weights on model accuracy. Furthermore, we measure model accuracy with different answer context lengths and different Bible translations. We confirm that transfer learning yields a noticeable improvement in model accuracy. We achieve relatively good results with shorter context lengths, whereas longer context lengths decrease model accuracy. We also find that using a more modern Bible translation in the dataset has a positive effect on the task.
    Comment: The paper has been accepted at IJCNN 201
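
    Since the abstract does not specify the three models, the following is only a minimal sketch of the weight-transfer idea it describes: pre-train a question/verse matching network on a large source dataset, then initialise a target-domain model from those weights and fine-tune. The QAMatcher architecture, the dimensions, and the random stand-in data are all hypothetical, not the paper's actual models.

```python
# Sketch of transfer learning for answer sentence selection: pre-train on a
# large source-domain dataset, then reuse the weights for the small target
# domain. Architecture and data are illustrative assumptions.
import torch
import torch.nn as nn

class QAMatcher(nn.Module):
    """Scores a (question, candidate answer sentence) pair."""
    def __init__(self, vocab_size=10000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.scorer = nn.Linear(2 * hidden_dim, 1)  # concat(q, a) -> relevance

    def encode(self, tokens):
        _, h = self.encoder(self.embed(tokens))     # h: (1, batch, hidden_dim)
        return h.squeeze(0)

    def forward(self, q_tokens, a_tokens):
        q, a = self.encode(q_tokens), self.encode(a_tokens)
        return self.scorer(torch.cat([q, a], dim=-1)).squeeze(-1)

def train_step(model, opt, q, a, labels):
    loss = nn.functional.binary_cross_entropy_with_logits(model(q, a), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# 1) Pre-train on the large source-domain dataset (random stand-in data here).
source = QAMatcher()
opt = torch.optim.Adam(source.parameters(), lr=1e-3)
q = torch.randint(0, 10000, (32, 20)); a = torch.randint(0, 10000, (32, 40))
y = torch.randint(0, 2, (32,)).float()
train_step(source, opt, q, a, y)

# 2) Transfer: initialise the target model from the pre-trained weights,
#    then fine-tune on the (much smaller) domain-specific dataset.
target = QAMatcher()
target.load_state_dict(source.state_dict())
opt = torch.optim.Adam(target.parameters(), lr=1e-4)  # smaller LR when fine-tuning
train_step(target, opt, q, a, y)
```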

    Learning Visual Question Answering by Bootstrapping Hard Attention

    Attention mechanisms in biological perception are thought to select subsets of perceptual information for more sophisticated processing which would be prohibitive to perform on all sensory inputs. In computer vision, however, there has been relatively little exploration of hard attention, where some information is selectively ignored, despite the success of soft attention, where information is re-weighted and aggregated but never filtered out. Here, we introduce a new approach for hard attention and find it achieves very competitive performance on a recently released visual question answering dataset, equalling and in some cases surpassing similar soft attention architectures while entirely ignoring some features. Even though the hard attention mechanism is non-differentiable, we found that the feature magnitudes correlate with semantic relevance and provide a useful signal for our mechanism's attentional selection criterion. Because hard attention selects important features of the input information, it can also be more efficient than analogous soft attention mechanisms. This is especially important for recent approaches that use non-local pairwise operations, whose computational and memory costs are quadratic in the size of the set of features.
    Comment: ECCV 201
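
    The selection criterion described above, feature magnitude as a signal for hard attention, can be sketched as follows: keep only the k spatial features with the largest L2 norm, so that subsequent pairwise non-local operations scale with k rather than with the full feature-map size. The tensor shapes and the value of k are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of magnitude-based hard attention: retain the k positions
# with the largest feature norms and discard the rest entirely.
import torch

def hard_attend(features, k):
    """features: (batch, num_positions, channels) -> (batch, k, channels)."""
    norms = features.norm(dim=-1)                    # (batch, num_positions)
    idx = norms.topk(k, dim=-1).indices              # indices of the k largest
    idx = idx.unsqueeze(-1).expand(-1, -1, features.size(-1))
    return features.gather(1, idx)                   # keep selected features only

feats = torch.randn(2, 14 * 14, 256)   # e.g. a 14x14 CNN feature map, flattened
selected = hard_attend(feats, k=16)    # pairwise ops now cost O(16^2), not O(196^2)
print(selected.shape)                  # torch.Size([2, 16, 256])
```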

    The Laminar Organization of Visual Cortex: A Unified View of Development, Learning, and Grouping

    Why is all sensory and cognitive neocortex organized into layered circuits? How do these layers organize circuits that form functional columns in cortical maps? How do bottom-up, top-down, and horizontal interactions within the cortical layers generate adaptive behaviors? This chapter summarizes an evolving neural model which suggests how these interactions help the visual cortex to realize: (1) the binding process whereby cortex groups distributed data into coherent object representations; (2) the attentional process whereby cortex selectively processes important events; and (3) the developmental and learning processes whereby cortex shapes its circuits to match environmental constraints. It is suggested that the mechanisms which achieve property (3) imply properties (1) and (2). New computational ideas about feedback systems suggest how neocortex develops and learns in a stable way, and why top-down attention requires converging bottom-up inputs to fully activate cortical cells, whereas perceptual groupings do not.
    Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333); Office of Naval Research (N00014-95-1-0657)
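
    The closing claim, that top-down attention can only modulate and needs converging bottom-up input to fully activate cortical cells, can be illustrated with a toy multiplicative-gain rule. This rule is our assumption for illustration; the chapter's actual model equations are not reproduced here.

```python
# Toy illustration of modulatory top-down attention: top-down signals can
# amplify bottom-up activity but cannot create activity on their own.
import torch

def cell_activation(bottom_up, top_down, gain=1.0):
    # Top-down enters multiplicatively, so it primes but cannot fire cells alone.
    return torch.relu(bottom_up) * (1.0 + gain * top_down)

x = torch.tensor([1.0, 0.0, 1.0])   # bottom-up input per cell
t = torch.tensor([0.0, 1.0, 1.0])   # top-down attentional signal per cell
print(cell_activation(x, t))        # tensor([1., 0., 2.]): top-down alone -> 0
```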