    Dual Fixed-Size Ordinally Forgetting Encoding (FOFE) For Natural Language Processing

    In this thesis, we propose a new approach to employing fixed-size ordinally-forgetting encoding (FOFE) for Natural Language Processing (NLP) tasks, called dual-FOFE. The main idea behind dual-FOFE is that it allows the encoding to be done with two different forgetting factors; this resolves the original FOFE's dilemma of choosing between the benefits offered by small and large values of its single forgetting factor. For this research, we conducted experiments on two prominent NLP tasks, namely language modelling and machine reading comprehension. Our experimental results show that dual-FOFE provides a definite improvement over the original FOFE: approximately 11% in perplexity (PPL) on the language modelling task and 8% in Exact Match (EM) score on the machine reading comprehension task.
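    For context, FOFE encodes a variable-length token sequence into a single fixed-size vector via the recursion z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of the t-th token, z_0 is the zero vector, and alpha is the forgetting factor. The sketch below illustrates how a dual-FOFE code might be formed by concatenating two FOFE codes computed with different forgetting factors, as the abstract describes; the function names and the specific alpha values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def fofe_encode(token_ids, vocab_size, alpha):
    """FOFE: z_t = alpha * z_{t-1} + e_t, with z_0 = 0."""
    z = np.zeros(vocab_size)
    for t in token_ids:
        z = alpha * z   # decay earlier context by the forgetting factor
        z[t] += 1.0     # add the one-hot vector of the current token
    return z

def dual_fofe_encode(token_ids, vocab_size, alpha_small, alpha_large):
    """Dual-FOFE sketch: concatenate two FOFE codes with different
    forgetting factors, so short- and long-range context are both kept."""
    return np.concatenate([
        fofe_encode(token_ids, vocab_size, alpha_small),
        fofe_encode(token_ids, vocab_size, alpha_large),
    ])

# Toy usage with a 5-word vocabulary; the alpha values 0.5 and 0.9
# are illustrative assumptions, not values from the thesis.
code = dual_fofe_encode([0, 2, 4, 2], vocab_size=5,
                        alpha_small=0.5, alpha_large=0.9)
print(code.shape)  # (10,) -- two concatenated vocab-size vectors
```

    With a small alpha the code emphasizes recent tokens; with a large alpha it retains more distant history, which is why combining both factors sidesteps the single-factor trade-off described above.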