
    Learning Commonsense Knowledge Through Interactive Dialogue

    One of the most difficult problems in Artificial Intelligence is acquiring commonsense knowledge: building a collection of facts and information that an ordinary person is expected to know. In this work, we present a system that, starting from limited background knowledge, learns to form simple concepts through interactive dialogue with a user. We approach the problem using a syntactic parser, along with a mechanism to check for synonymy, to translate sentences into logical formulas represented in the Event Calculus using Answer Set Programming (ASP). Reasoning and learning tasks are then automatically generated for the translated text, with learning initiated through question answering. The system is capable of learning with no contextual knowledge prior to the dialogue. The system has been evaluated on stories inspired by Facebook's bAbI question-answering tasks and, through appropriate question answering, is able to respond accurately in these dialogues.
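
    To make the translation step concrete, here is a minimal sketch (not the authors' code) of what the encoding might look like: a normalized (subject, verb, object) triple, as a syntactic parser plus synonym checking might produce, is turned into an Event Calculus event fact in ASP syntax, alongside standard inertia axioms. The predicate names follow the usual Discrete Event Calculus; the pickup example and the triple input format are illustrative assumptions.

```python
# Sketch: encode a parsed sentence as Event Calculus facts in ASP syntax.
# Predicate names (happens, holdsAt, initiates) follow the standard
# Discrete Event Calculus; this is not the paper's actual pipeline.

EC_AXIOMS = """\
% Standard Event Calculus inertia axioms.
holdsAt(F, T+1) :- holdsAt(F, T), not clipped(F, T), time(T).
holdsAt(F, T+1) :- happens(E, T), initiates(E, F, T), time(T).
clipped(F, T)   :- happens(E, T), terminates(E, F, T).
"""

def sentence_to_asp(subj: str, verb: str, obj: str, t: int) -> str:
    """Encode a normalized (subject, verb, object) triple as an event fact."""
    return f"happens({verb}({subj.lower()}, {obj.lower()}), {t})."

# "Mary picked up the ball" at time 1, after parsing and normalization:
program = EC_AXIOMS + "\n" + sentence_to_asp("Mary", "pickup", "ball", 1)
print(program)  # this text can be handed to an ASP solver such as clingo
```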

    Using dependency parsing and machine learning for factoid question answering on spoken documents

    This paper presents our experiments in question answering for speech corpora. These experiments focus on improving the answer extraction step of the QA process. We present two approaches to answer extraction in question answering for speech corpora that apply machine learning to improve the coverage and precision of the extraction. The first is a reranker that uses only lexical information; the second uses dependency parsing to compute a robust similarity score between syntactic structures. Our experimental results show that the proposed learning models improve on our previous results, which used only hand-made ranking rules with limited syntactic information. Moreover, these results also show that a dependency parser can be useful for speech transcripts even when it was trained on written text from a news collection. We evaluate the system on manual transcripts of speech from the EPPS English corpus and a set of questions transcribed from spontaneous oral questions. These data belong to the CLEF 2009 track on QA on speech transcripts (QAst).
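
    As a rough illustration of dependency-based similarity scoring (a sketch under stated assumptions, not the authors' system): spaCy stands in for the dependency parser, and candidates are reranked by the Jaccard overlap of their (head, relation, dependent) triples with those of the question.

```python
# Sketch: rerank candidate answers by dependency-triple overlap with the
# question. Jaccard overlap of (head lemma, relation, dependent lemma)
# triples is one simple robust similarity, not the paper's exact scoring.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def dep_triples(text: str) -> set:
    """Collect (head lemma, dependency label, dependent lemma) triples."""
    return {(tok.head.lemma_, tok.dep_, tok.lemma_)
            for tok in nlp(text) if tok.dep_ != "punct"}

def syntactic_score(question: str, candidate: str) -> float:
    """Jaccard overlap between the two dependency-triple sets."""
    q, c = dep_triples(question), dep_triples(candidate)
    return len(q & c) / len(q | c) if q | c else 0.0

question = "Who signed the treaty in Rome?"
candidates = ["The treaty was signed by the six ministers in Rome.",
              "Rome hosted a meeting about the treaty."]
print(max(candidates, key=lambda c: syntactic_score(question, c)))
```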

    Rapport: a fact-based question answering system for Portuguese

    Question answering is one of the longest-standing problems in natural language processing. Although natural language interfaces for computer systems have become more common, the same cannot yet be said of access to specific textual information. Any full-text search engine can easily retrieve documents containing user-specified or closely related terms, but it is typically unable to answer user questions with small passages or short answers. The problem with question answering is that text is hard to process, due to its syntactic structure and, to an even higher degree, its semantic content. At the sentence level, although the syntactic aspects of natural language follow well-known rules, the size and complexity of a sentence may make it difficult to analyze its structure. Furthermore, semantic aspects remain arduous to address, with text ambiguity being one of the hardest problems to handle. There is also the need to correctly process the question in order to determine its target, and then to select and process the answers found in a text. Additionally, the selected text that may yield the answer to a given question must be further processed in order to present just a passage instead of the full text. These issues also take longer to address in languages other than English, such as Portuguese, which have far fewer people working on them. This work focuses on question answering for Portuguese. In other words, our field of interest is the presentation of short answers, passages, and possibly full sentences, but not whole documents, in response to questions formulated in natural language. For that purpose, we have developed a system, RAPPORT, built upon open information extraction techniques for extracting triples, so-called facts, that characterize information in text files, and then storing and using them to answer user queries posed in natural language. These facts, in the form of subject, predicate and object, alongside other metadata, constitute the basis of the answers presented by the system. Facts work both by storing short and direct information found in a text, typically entity-related information, and by containing in themselves the answers to questions, already in the form of small passages. As for the results, although there is margin for improvement, they are tangible proof of the adequacy of our approach and its different modules for storing information and retrieving answers in question answering systems. In the process, in addition to contributing a new approach to question answering for Portuguese and validating the application of open information extraction to question answering, we have developed a set of tools that has been used in other natural language processing work, such as the lemmatizer LEMPORT, which was built from scratch and achieves high accuracy. Many of these tools result from improving those found in the Apache OpenNLP toolkit, by pre-processing their input, post-processing their output, or both, and by training models for use in those tools or others, such as MaltParser. Other tools include interfaces to resources containing, for example, synonyms, hypernyms and hyponyms, and rule-built lists of, for instance, relations between verbs and agents.
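
    A toy sketch of the fact-based lookup described above, under the assumption that question analysis has already reduced a question to a predicate plus one known argument whose counterpart slot is the answer; the substring matching is a deliberate simplification of RAPPORT's actual pipeline.

```python
# Sketch: store open-IE (subject, predicate, object) facts and answer a
# question by matching the predicate and one known argument, returning the
# other argument. Matching is naive substring overlap, far simpler than the
# real system's modules.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    predicate: str
    object: str

facts = [
    Fact("Lisboa", "ser capital de", "Portugal"),
    Fact("Camões", "escrever", "Os Lusíadas"),
]

def answer(predicate: str, known: str) -> str | None:
    """Return the other argument of the first fact matching predicate + slot."""
    for f in facts:
        if predicate.lower() in f.predicate.lower():
            if known.lower() in f.subject.lower():
                return f.object
            if known.lower() in f.object.lower():
                return f.subject
    return None

# "Qual é a capital de Portugal?" reduces to predicate "capital", known "Portugal":
print(answer("capital", "Portugal"))  # -> "Lisboa"
```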

    Classifier combination approach for question classification for Bengali question answering system

    Question classification (QC) is a prime constituent of an automated question answering system. The work presented here demonstrates that a combination of multiple models achieves better classification performance than existing individual models for the QC task in Bengali. We have exploited state-of-the-art model combination techniques, i.e., ensemble, stacking and voting, to increase QC accuracy. Lexical, syntactic and semantic features of Bengali questions are used for four well-known classifiers, namely Naive Bayes, kernel Naive Bayes, Rule Induction and Decision Tree, which serve as our base learners. A single-layer question-class taxonomy with 8 coarse-grained classes is extended to a two-layer taxonomy by adding 69 fine-grained classes. We carried out experiments on both the single-layer and two-layer taxonomies. Experimental results confirmed that classifier combination approaches outperform single-classifier approaches by 4.02% for coarse-grained question classes. Overall, the stacking approach produces the best results for fine-grained classification, achieving 87.79% accuracy. The approach presented here could be used in other Indo-Aryan or Indic languages to develop a question answering system.
    Somnath Banerjee and Sudip Kumar Naskar are supported by Digital India Corporation (formerly Media Lab Asia), MeitY, Government of India, under the Visvesvaraya Ph.D. Scheme for Electronics and IT. The work of Paolo Rosso was partially funded by the Spanish MICINN under research project PGC2018-096212-B-C31.
    Banerjee, S.; Kumar Naskar, S.; Rosso, P.; Bandyopadhyay, S. (2019). Classifier combination approach for question classification for Bengali question answering system. Sadhana 44(12): 1-14. https://doi.org/10.1007/s12046-019-1224-8
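
    An illustrative sketch of the stacking combination, with scikit-learn as a stand-in for the authors' toolchain: LogisticRegression replaces classifiers sklearn lacks (kernel Naive Bayes, Rule Induction), and character n-gram TF-IDF stands in for the paper's lexical, syntactic and semantic features.

```python
# Sketch: stacking over two base learners for Bengali question classification.
# The features, classifiers and toy data are illustrative assumptions.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

base_learners = [("nb", MultinomialNB()),
                 ("dt", DecisionTreeClassifier(max_depth=10))]
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # script-agnostic
    StackingClassifier(estimators=base_learners,
                       final_estimator=LogisticRegression(max_iter=1000),
                       cv=2),
)

# Toy Bengali questions labelled with coarse-grained classes:
X = ["রবীন্দ্রনাথ ঠাকুর কে ছিলেন?", "বাংলাদেশের রাজধানী কোথায়?",
     "নেতাজি সুভাষচন্দ্র বসু কে?", "কলকাতা কোথায় অবস্থিত?"]
y = ["PERSON", "LOCATION", "PERSON", "LOCATION"]
model.fit(X, y)
print(model.predict(["ঢাকা কোথায়?"]))
```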

    A syntactic candidate ranking method for answering non-copulative questions

    Question answering (QA) is the act of retrieving answers to questions posed in natural language. It is regarded as requiring more complex natural language processing (NLP) techniques than other types of information retrieval such as document retrieval, and is sometimes regarded as the next step beyond search engines, one that ranks the retrieved candidates. Given a set of candidate sentences that share keywords with the question, deciding which one actually answers the question is a central challenge in question answering. In this thesis we propose a linguistic method for measuring the syntactic similarity of each candidate sentence to the question. This candidate scoring method uses the question head as an anchor to narrow the search down to a subtree in the parse tree of a candidate sentence (the target subtree). The semantic similarity of the action in the target subtree to the action asked about in the question is then measured by applying WordNet::Similarity to their main verbs. To verify the syntactic similarity of this subtree to the question parse tree, syntactic restrictions as well as lexical measures compute the unifiability of their critical syntactic participants. Finally, when answering a factoid open-domain question, the noun phrase of the expected answer type in the target subtree is extracted from the best candidate sentence and returned. In this thesis, we address both closed- and open-domain question answering. Initially, we propose our syntactic scoring method as a solution for questions in the telecommunications domain. For our experiments in a closed domain, we build a set of customer-service question/answer pairs from Bell Canada's Web pages. We show that the performance of this ranking method depends on the syntactic and lexical similarities within a question/answer pair. We observed that these closed-domain questions ask for specific properties, procedures, or conditions of a technical topic, and are sometimes open-ended as well. As a result, detailed understanding of the question and the corpus text is required to answer them. Open-domain questions, in contrast, have no restriction on the topics they can ask about. The standard test bed for open-domain question answering is the question/answer sets provided each year by NIST through the TREC QA conferences. These are factoid questions that ask about a person, date, time, location, etc. Since our method relies on the semantic similarity of the main verbs as well as the syntactic overlap between counterpart subtrees of the question and the target subtrees, it performs well on questions with a main content verb and a conventional subject-verb-object syntactic structure. The distribution of this type of question versus questions with a 'to be' main verb differs significantly between the closed and open domains: around 70% of closed-domain questions have a main content verb, while more than 67% of open-domain questions have a 'to be' main verb. This verb is very flexible in connecting sentence entities; therefore, recognizing equivalent syntactic structures between two copula parse trees is very hard. As a result, to better analyze the accuracy of our method, we create a new question categorization based on the question's main verb type: copulative questions ask about a state using a 'to be' verb, while non-copulative questions contain a main non-copula verb indicating an action or event.
    Our candidate answer ranking method achieves a precision of 47.0% in our closed domain and 48% in answering the TREC 2003 to 2006 non-copulative questions. For answering open-domain factoid questions, we feed the output of Aranea, a competitive question answering system at TREC 2002, into our linguistic method in order to provide it with Web redundancy statistics. This level of performance confirms our hypothesis of the potential usefulness of syntactic mapping for answering questions with a main content verb.
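
    A small sketch of the verb-similarity component described above, with NLTK's WordNet interface standing in for the WordNet::Similarity package named in the thesis; Wu-Palmer is one of the measures that package provides, and the example verbs are invented.

```python
# Sketch: compare the question's main verb to each candidate's main verb
# using the best Wu-Palmer similarity over all verb-sense pairs.
from nltk.corpus import wordnet as wn  # requires: nltk.download("wordnet")

def verb_similarity(v1: str, v2: str) -> float:
    """Best Wu-Palmer similarity over all verb-sense pairs of v1 and v2."""
    best = 0.0
    for s1 in wn.synsets(v1, pos=wn.VERB):
        for s2 in wn.synsets(v2, pos=wn.VERB):
            best = max(best, s1.wup_similarity(s2) or 0.0)
    return best

# Rank candidates by how close their main verb is to the question's verb:
question_verb = "purchase"
candidates = {"He bought the company in 1998.": "buy",
              "He toured the company in 1998.": "tour"}
ranked = sorted(candidates,
                key=lambda c: verb_similarity(question_verb, candidates[c]),
                reverse=True)
print(ranked[0])  # the "buy" candidate should score highest
```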

    Transition-based Semantic Role Labeling with Pointer Networks

    Semantic role labeling (SRL) focuses on recognizing the predicate-argument structure of a sentence and plays a critical role in many natural language processing tasks such as machine translation and question answering. Practically all available methods do not perform full SRL, since they rely on pre-identified predicates, and most of them follow a pipeline strategy, using specific models to undertake one or several SRL subtasks. In addition, previous approaches have a strong dependence on syntactic information to achieve state-of-the-art performance, even though syntactic trees are themselves hard to produce accurately. These simplifications and requirements make the majority of SRL systems impractical for real-world applications. In this article, we propose the first transition-based SRL approach that is capable of completely processing an input sentence in a single left-to-right pass, neither leveraging syntactic information nor resorting to additional modules. Thanks to our implementation based on Pointer Networks, full SRL can be accurately and efficiently performed in O(n^2) time, achieving the best performance to date on the majority of languages from the CoNLL-2009 shared task.
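
    A minimal numpy sketch of the pointing operation behind the O(n^2) single pass: for each word processed in left-to-right order, an attention layer scores every position and the argmax "points" to a candidate argument head. Dimensions and the additive scoring form are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch: pointer-network attention as a pointing step. Each of the n
# decoding steps scores all n encoder positions, giving O(n^2) overall.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                      # sentence length, hidden size
H = rng.normal(size=(n, d))      # encoder states, one per word
W1 = rng.normal(size=(d, d))
W2 = rng.normal(size=(d, d))
v = rng.normal(size=d)

def point(decoder_state: np.ndarray) -> int:
    """Additive (Bahdanau-style) attention over all n positions; argmax points."""
    scores = np.tanh(H @ W1 + decoder_state @ W2) @ v  # shape (n,)
    return int(np.argmax(scores))

# One left-to-right pass: each of the n steps costs O(n), hence O(n^2) total.
for i in range(n):
    print(f"word {i} points to position {point(H[i])}")
```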