
    Fixed Size Ordinally-Forgetting Encoding and its Applications

    In this thesis, we propose the new Fixed-size Ordinally-Forgetting Encoding (FOFE) method, which can almost uniquely encode any variable-length sequence of words into a fixed-size representation. FOFE models the word order in a sequence using a simple ordinally-forgetting mechanism based on the positions of words. We address two fundamental problems in natural language processing, namely, Language Modeling (LM) and Named Entity Recognition (NER). We have applied FOFE to Feedforward Neural Network Language Models (FFNN-LMs). Experimental results have shown that, without using any recurrent feedback, FOFE-FFNN-LMs significantly outperform not only the standard fixed-input FFNN-LMs but also some popular Recurrent Neural Network Language Models (RNN-LMs). Instead of treating NER as a sequence labeling problem, we propose a new local detection approach, which relies on FOFE to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. This local detection approach has shown many advantages over traditional sequence labeling methods. Our method has yielded strong performance on all tasks we have examined.
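
    The core mechanism described above can be sketched in a few lines. The snippet below is a minimal illustration of the FOFE recurrence z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of the t-th word and alpha in (0, 1) is the forgetting factor; it is not the thesis implementation, and the vocabulary size, alpha value, and function name are illustrative assumptions.

        import numpy as np

        def fofe_encode(word_ids, vocab_size, alpha=0.7):
            """Encode a variable-length word-id sequence into a fixed-size vector.

            Implements z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot
            vector of the t-th word and alpha is the forgetting factor.
            The alpha value here is an illustrative choice, not the thesis setting.
            """
            z = np.zeros(vocab_size)
            for w in word_ids:
                z = alpha * z      # earlier words decay with their distance
                z[w] += 1.0        # add the one-hot vector of the current word
            return z

        # Example with a 5-word vocabulary: "0 3 1" and "1 3 0" yield different
        # encodings, illustrating how FOFE preserves word order in a fixed-size vector.
        print(fofe_encode([0, 3, 1], vocab_size=5))
        print(fofe_encode([1, 3, 0], vocab_size=5))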

    Dual Fixed-Size Ordinally Forgetting Encoding (FOFE) For Natural Language Processing

    In this thesis, we propose a new approach to employing fixed-size ordinally-forgetting encoding (FOFE) on Natural Language Processing (NLP) tasks, called dual-FOFE. The main idea behind dual-FOFE is that it allows the encoding to be done with two different forgetting factors; this resolves the original FOFE's dilemma of choosing between the benefits offered by either small or large values of its single forgetting factor. For this research, we have conducted experiments on two prominent NLP tasks, namely, language modelling and machine reading comprehension. Our experimental results show that dual-FOFE provides a definite improvement over the original FOFE of approximately 11% in perplexity (PPL) for the language modelling task and 8% in Exact Match (EM) score for the machine reading comprehension task.
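
    To make the two-factor idea concrete, the sketch below (an assumption about the mechanism, not the authors' code) computes two FOFE encodings of the same sequence with a small and a large forgetting factor and concatenates them; the specific alpha values and names are illustrative.

        import numpy as np

        def dual_fofe_encode(word_ids, vocab_size, alpha_small=0.5, alpha_large=0.9):
            """Concatenate two FOFE encodings computed with different forgetting factors.

            A small alpha emphasizes nearby words; a large alpha retains more of the
            distant context. The alpha values here are illustrative assumptions.
            """
            def fofe(alpha):
                z = np.zeros(vocab_size)
                for w in word_ids:
                    z = alpha * z
                    z[w] += 1.0
                return z
            return np.concatenate([fofe(alpha_small), fofe(alpha_large)])

        # The dual encoding is twice the vocabulary size.
        print(dual_fofe_encode([0, 3, 1], vocab_size=5).shape)  # (10,)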

    A General FOFE-net Framework for Simple and Effective Question Answering over Knowledge Bases

    Question answering over knowledge bases (KB-QA) has recently become a popular research topic in NLP. One popular way to solve the KB-QA problem is to use a pipeline of several NLP modules, including entity discovery and linking (EDL) and relation detection. Recent successes on the KB-QA task usually involve complex network structures with sophisticated heuristics. Inspired by a previous work that builds a strong KB-QA baseline, we propose a simple but general neural model composed of fixed-size ordinally forgetting encoding (FOFE) and deep neural networks, called FOFE-net, to solve the KB-QA problem at different stages. For evaluation, we use two popular KB-QA datasets, SimpleQuestions and WebQSP, and our newly created dataset, FreebaseQA. The experimental results show that FOFE-net performs well on the KB-QA subtasks, entity discovery and linking (EDL) and relation detection, in turn pushing the overall KB-QA system to achieve strong results on all the datasets.
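
    As a rough picture of how FOFE plugs into such a pipeline, the sketch below feeds a fixed-size FOFE input through a small feedforward network that scores candidate labels (e.g. entity candidates or relations). The layer sizes, activations, and random weights are illustrative assumptions, not the FOFE-net architecture reported in the paper.

        import numpy as np

        def relu(x):
            return np.maximum(0.0, x)

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def fofe_net_score(fofe_vector, w1, b1, w2, b2):
            """Score candidate labels from a fixed-size FOFE input with a small MLP."""
            h = relu(fofe_vector @ w1 + b1)
            return softmax(h @ w2 + b2)

        # Random weights stand in for trained parameters (illustration only).
        rng = np.random.default_rng(0)
        x = rng.random(15)                        # e.g. concatenated context/fragment FOFEs
        w1, b1 = rng.normal(size=(15, 8)), np.zeros(8)
        w2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
        print(fofe_net_score(x, w1, b1, w2, b2))  # probabilities over 4 candidate labels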