1,719 research outputs found

    A Factoid Question Answering System for Vietnamese

    In this paper, we describe the development of an end-to-end factoid question answering system for the Vietnamese language. This system combines both statistical models and ontology-based methods in a chain of processing modules to provide high-quality mappings from natural language text to entities. We present the challenges in the development of such an intelligent user interface for an isolating language like Vietnamese and show that techniques developed for inflectional languages cannot be applied "as is". Our question answering system can answer a wide range of general knowledge questions with promising accuracy on a test set.
    Comment: In the proceedings of the HQA'18 workshop, The Web Conference Companion, Lyon, France.

    A Model of Vietnamese Person Named Entity


    A Robust Transformation-Based Learning Approach Using Ripple Down Rules for Part-of-Speech Tagging

    In this paper, we propose a new approach to construct a system of transformation rules for the Part-of-Speech (POS) tagging task. Our approach is based on an incremental knowledge acquisition method where rules are stored in an exception structure and new rules are only added to correct the errors of existing rules, thus allowing systematic control of the interaction between the rules. Experimental results on 13 languages show that our approach is fast in terms of training time and tagging speed. Furthermore, our approach obtains very competitive accuracy in comparison to state-of-the-art POS and morphological taggers.
    Comment: Version 1: 13 pages. Version 2: Submitted to AI Communications - the European Journal on Artificial Intelligence. Version 3: Resubmitted after major revisions. Version 4: Resubmitted after minor revisions. Version 5: to appear in AI Communications (accepted for publication on 3/12/2015).
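The exception structure described above can be sketched in a few lines: a default rule concludes a tag, and new rules are attached only beneath the rule whose error they correct. This is a minimal illustrative toy, not the paper's actual tagger (RDRPOSTagger); the conditions and tags here are invented for illustration.

```python
# Minimal sketch of a Ripple Down Rules exception structure for POS tagging.
# The rules and tags below are toy examples, not the paper's learned rules.

class RDRNode:
    """A rule with exception rules that fire only when this rule's
    condition matched but its conclusion would be wrong."""
    def __init__(self, condition, tag):
        self.condition = condition   # predicate over the word
        self.tag = tag               # conclusion if the condition holds
        self.exceptions = []         # child rules, tried in order

    def apply(self, word):
        if not self.condition(word):
            return None
        for exc in self.exceptions:
            tag = exc.apply(word)
            if tag is not None:
                return tag           # a more specific exception overrides us
        return self.tag

    def add_exception(self, condition, tag):
        # A new rule is added only beneath the rule whose error it corrects,
        # so interactions between rules stay local and controllable.
        self.exceptions.append(RDRNode(condition, tag))

# Default rule: tag everything as a noun; exceptions fix its errors.
root = RDRNode(lambda w: True, "N")
root.add_exception(lambda w: w.endswith("ly"), "ADV")

print(root.apply("quickly"))  # ADV
print(root.apply("house"))    # N
```

Because an exception only applies in the context where its parent fired, adding a rule can never change the behaviour of unrelated parts of the rule tree.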

    TEXT CLASSIFICATION BASED ON SUPPORT VECTOR MACHINE

    The development of the Internet has rapidly increased the amount of information stored online every day. Finding the exact information we are interested in takes a lot of time, so techniques for organizing and processing text data are needed. These techniques are called text classification or text categorization. There are many methods of text classification; in this paper we study and apply the Support Vector Machine (SVM) method and compare its effectiveness with the Naïve Bayes probabilistic method. In addition, before performing classification, we apply preprocessing steps to the training set by extracting characteristic keywords with dimensionality reduction techniques, in order to reduce the time needed in the classification process.

    Ripple Down Rules for Question Answering

    Recent years have witnessed a new trend of building ontology-based question answering systems. These systems use semantic web information to produce more precise answers to users' queries. However, these systems are mostly designed for English. In this paper, we introduce an ontology-based question answering system named KbQAS which, to the best of our knowledge, is the first one made for Vietnamese. KbQAS employs our question analysis approach that systematically constructs a knowledge base of grammar rules to convert each input question into an intermediate representation element. KbQAS then takes the intermediate representation element with respect to a target ontology and applies concept-matching techniques to return an answer. On a wide range of Vietnamese questions, experimental results show that the performance of KbQAS is promising, with accuracies of 84.1% and 82.4% for analyzing input questions and retrieving output answers, respectively. Furthermore, our question analysis approach can easily be applied to new domains and new languages, thus saving time and human effort.
    Comment: V1: 21 pages, 7 figures, 10 tables. V2: 8 figures, 10 tables; shorten section 2; change sections 4.3 and 5.1.2. V3: Accepted for publication in the Semantic Web journal. V4 (Author's manuscript): camera ready version, available from the Semantic Web journal at http://www.semantic-web-journal.ne

    Apply deep learning to improve the question analysis model in the Vietnamese question answering system

    Question answering (QA) systems are nowadays quite popular for automated answering purposes; the semantic analysis of the question plays an important role and directly affects the accuracy of the system. In this article, we propose an improvement to question-answering models by adding more specific question analysis steps, including contextual characteristic analysis, POS-tag analysis, and question-type analysis, built on deep learning network architectures. Weights of words extracted through the question analysis steps are combined with the best matching 25 (BM25) algorithm to find the most relevant paragraph of text and incorporated into the QA model to find the best and least noisy answer. The dataset for the question analysis step consists of 19,339 labeled questions covering a variety of topics. Results of the question analysis model are combined to train the question-answering model on a dataset related to the learning regulations of the Industrial University of Ho Chi Minh City. It includes 17,405 question-answer pairs for the training set and 1,600 pairs for the test set, on which the robustly optimized BERT pre-training approach (RoBERTa) model achieves an F1-score of 74%. The model has improved significantly: for long and complex questions, the model has extracted weights and correctly provided answers based on the question's contents.
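The BM25 retrieval step described above, scoring each paragraph against the question terms and keeping the best match, can be sketched directly from the standard Okapi BM25 formula. The parameters k1 and b are common defaults and the toy corpus is invented, not the paper's tuned setup or its university-regulations data.

```python
# A minimal BM25 sketch for the retrieval step: score each paragraph
# against the query terms and return the index of the best paragraph.
import math
from collections import Counter

def bm25_rank(query_tokens, paragraphs, k1=1.5, b=0.75):
    """paragraphs: list of token lists. Returns index of the best paragraph."""
    N = len(paragraphs)
    avgdl = sum(len(p) for p in paragraphs) / N
    df = Counter()                      # document frequency per term
    for p in paragraphs:
        df.update(set(p))

    def score(p):
        tf = Counter(p)                 # term frequency in this paragraph
        s = 0.0
        for q in query_tokens:
            if q not in tf:
                continue
            idf = math.log((N - df[q] + 0.5) / (df[q] + 0.5) + 1)
            s += idf * tf[q] * (k1 + 1) / (
                tf[q] + k1 * (1 - b + b * len(p) / avgdl))
        return s

    scores = [score(p) for p in paragraphs]
    return max(range(N), key=scores.__getitem__)

paragraphs = [
    "students must register courses before the semester starts".split(),
    "tuition fees are paid at the beginning of each term".split(),
    "graduation requires a minimum grade point average".split(),
]
best = bm25_rank("how do students register courses".split(), paragraphs)
print(best)  # 0
```

In the paper's pipeline, the question-analysis weights would additionally reweight the query terms before this scoring, so that content-bearing words dominate the ranking.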