5 research outputs found

    Character-level neural translation for multilingual media monitoring in the SUMMA project

    The paper steps outside the comfort zone of traditional NLP tasks such as automatic speech recognition (ASR) and machine translation (MT) to address two novel problems arising in automated multilingual news monitoring: segmentation of TV and radio programme ASR transcripts into individual stories, and clustering of the individual stories coming from various sources and languages into storylines. Storyline clustering of stories covering the same events is an essential task for inquisitorial media monitoring. We address these two problems jointly by exploiting the low-dimensional semantic representation capabilities of sequence-to-sequence neural translation models. To enable joint multi-task learning for multilingual neural translation of morphologically rich languages, we replace the attention mechanism with a sliding-window mechanism and operate the sequence-to-sequence neural translation model at the character level rather than the word level. The story segmentation and storyline clustering problems are tackled by examining the low-dimensional vectors produced as a by-product of the neural translation process. The results of this paper describe a novel approach to the automatic story segmentation and storyline clustering problem. Comment: LREC-2016 submission
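    The abstract does not specify the clustering algorithm applied to the low-dimensional story vectors, so the following is only a minimal illustrative sketch: a greedy centroid-based grouping of story vectors into storylines by cosine similarity. The vectors, the 0.8 threshold, and the greedy assignment strategy are all assumptions for illustration, not details from the paper.

    ```python
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def cluster_storylines(vectors, threshold=0.8):
        """Assign each story vector to the storyline whose centroid is most
        similar (above the threshold); otherwise start a new storyline."""
        storylines = []  # each entry: list of member vectors
        labels = []
        for v in vectors:
            best, best_sim = None, threshold
            for i, members in enumerate(storylines):
                centroid = np.mean(members, axis=0)
                sim = cosine(v, centroid)
                if sim >= best_sim:
                    best, best_sim = i, sim
            if best is None:
                storylines.append([v])
                labels.append(len(storylines) - 1)
            else:
                storylines[best].append(v)
                labels.append(best)
        return labels

    # Two near-duplicate story vectors and one unrelated vector (toy data)
    vecs = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
    print(cluster_storylines(vecs))  # → [0, 0, 1]
    ```

    A production system would likely use a proper clustering method (e.g. agglomerative clustering) over the encoder states, but the greedy variant shows the core idea: stories about the same event sit close together in the learned vector space.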

    Compositional representations of language structures in multilingual joint-vector space

    After the recent developments in Artificial Neural Networks and deep learning techniques, representation learning has become the focus of many research interests. In the field of Natural Language Processing, representation learning techniques have seen many implementation advances and have improved various tasks beyond what other methods achieved. One of the primary research topics in this area is constructing compositional representations of discrete language structures in a multilingual joint-vector space. In this thesis, several techniques from deep learning and NLP are combined to investigate their potential impact on NLP tasks. For this purpose, four different composition vector models (CVMs) using tokens and morphemes as basic language structures are studied. To construct tokens and morphemes, a parallel corpus is first segmented into discrete objects via tokenization and morphological analysis. Several hierarchical composition methods are then employed within a bilingual framework to construct the embeddings of these structures. The bilingual models are trained on sentence-aligned corpora for four languages, and learn to apply the compositional vector models to construct embeddings of sentence constituents as well. Two test scenarios are used to evaluate the different CVMs. The first is a paraphrase test: the bilingual models using CVMs are trained on each L1-L2 parallel corpus (English, Turkish, German and French), then tested on how well they identify the corresponding pairs among 100 randomly selected sentences from each L1-L2 pair. The second test scenario is cross-lingual document classification.
    Here, the trained models are used by a document classifier to evaluate their performance: the classifier is trained on L1 documents and then tested on L2 documents.
    Contents: 1 Background (1.1 Introduction; 1.2 Related Works); 2 Representation Learning (2.1 Distributed Representation; 2.2 Compositional Distributed Semantics; 2.3 Vector Space Models); 3 Natural Language Processing (3.1 Terminology; 3.2 Token and Tokenization; 3.3 Morpheme and Morphological Analysis); 4 Problem Definition; 5 Methodology (5.1 Tokenization and Morphological Analysis; 5.1.1 Out-of-Vocabulary (OOV) Issue; 5.2 Compositional Vector Models; 5.2.1 Additive; 5.2.2 Bi-Tanh; 5.2.3 LSTM; 5.2.4 BiLSTM); 6 Experiments and Tests (6.1 Corpora; 6.2 Models Setup; 6.3 Hardware and Software Used for Tests; 6.4 Representation Learning; 6.5 Paraphrase Tests; 6.5.1 Paraphrase Test Results Discussion; 6.6 Cross-Lingual Document Classification (CLDC) Tests; 6.6.1 CLDC Test Results Discussion); 7 Summary and Conclusion; Bibliography
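    Of the four CVMs named in the thesis outline, the additive model is the simplest: a sentence embedding is the sum of its constituent token (or morpheme) embeddings. The sketch below illustrates that composition; the toy vocabulary, random embeddings, and dimensionality are assumptions for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dim = 4  # toy embedding size; real models use hundreds of dimensions
    vocab = {w: rng.standard_normal(dim) for w in ["the", "cat", "sat"]}

    def additive_cvm(tokens, embeddings):
        """Compose a sentence vector by summing its token embeddings."""
        return np.sum([embeddings[t] for t in tokens], axis=0)

    sent = additive_cvm(["the", "cat", "sat"], vocab)
    assert sent.shape == (dim,)
    ```

    Because addition is commutative, the additive CVM ignores word order entirely; the Bi-Tanh, LSTM, and BiLSTM models listed in the outline are progressively more order-sensitive compositions.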
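    The cross-lingual document classification setup described above can be sketched as follows: because both languages share the joint vector space, a classifier fitted on L1 document vectors can be applied directly to L2 document vectors. The nearest-centroid classifier and the toy vectors here are illustrative assumptions; the thesis does not specify this particular classifier.

    ```python
    import numpy as np

    def train_centroids(X, y):
        """Nearest-centroid classifier: one mean vector per class."""
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def predict(centroids, X):
        classes = sorted(centroids)
        dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
        return np.array(classes)[np.argmin(dists, axis=0)]

    # Toy joint-space document vectors: train on L1, test on L2
    X_l1 = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
    y_l1 = np.array([0, 0, 1, 1])
    X_l2 = np.array([[0.95, 0.05], [0.05, 0.95]])  # same topics, other language

    model = train_centroids(X_l1, y_l1)
    print(predict(model, X_l2))  # → [0 1]
    ```

    The key point the test measures is transfer: no L2 labels are seen during training, so accuracy on L2 reflects how well the CVM aligns the two languages in the joint space.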

    Unsupervised Transfer Learning for Human Activity Classification (人の行動分類のための教師なし転移学習)

    筑波大学 (University of Tsukuba)201