
    Using ontology in query answering systems: Scenarios, requirements and challenges

    Equipped with the ultimate query answering system, computers would finally be in a position to address all our information needs in a natural way. In this paper, we describe how Language and Computing nv (L&C), a developer of ontology-based natural language understanding systems for the healthcare domain, is working towards the ultimate Question Answering (QA) system for healthcare workers. L&C’s company strategy in this area is to design, in a step-by-step fashion, the essential components of such a system, each component designed to solve one part of the total problem while reflecting well-defined needs on the part of our customers. We compare our strategy with the research roadmap proposed by the Question Answering Committee of the National Institute of Standards and Technology (NIST), paying special attention to the role of ontology.

    Human Mobility Question Answering (Vision Paper)

    Question answering (QA) systems have attracted much attention from the artificial intelligence community as they can learn to answer questions based on a given knowledge source (e.g., images in visual question answering). However, question answering over human mobility data remains unexplored. Mining human mobility data is crucial for applications such as smart city planning, pandemic management, and personalised recommendation systems. In this paper, we aim to tackle this gap and introduce a novel task: human mobility question answering (MobQA). The aim of the task is to let an intelligent system learn from mobility data and answer related questions. This task represents a paradigm shift in mobility prediction research and further facilitates research on human mobility recommendation systems. To better support this novel research topic, this vision paper also proposes an initial design of the dataset and a potential deep learning model framework for the introduced MobQA task. We hope that this paper will provide novel insights and open new directions in both human mobility research and question answering research.
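    Since the paper only proposes an initial dataset design, the following is a minimal, hypothetical sketch of what a MobQA record might look like; the field names and structure are illustrative assumptions, not the authors' specification.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MobQARecord:
    """Hypothetical MobQA example: a trajectory paired with a question/answer.

    Field names are illustrative assumptions, not the dataset spec from the paper.
    """
    user_id: str
    # Trajectory as (timestamp, latitude, longitude) triples.
    trajectory: List[Tuple[str, float, float]]
    question: str   # e.g. "Where was the user at a given time?"
    answer: str     # free-text or categorical answer

# Illustrative instance.
example = MobQARecord(
    user_id="u042",
    trajectory=[("2023-05-01T08:30", 41.387, 2.170),
                ("2023-05-01T12:05", 41.403, 2.174)],
    question="Where was the user at noon on 1 May 2023?",
    answer="Near 41.403, 2.174",
)
```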

    An ontology for clinical questions about the contents of patient notes

    Objective: Many studies have been completed on question classification in the open domain; however, only limited work focuses on the medical domain. Moreover, to the best of our knowledge, most of these medical question classifications were designed for literature-based question answering systems. This paper focuses on a new direction: designing a novel question processing and classification model for answering clinical questions applied to electronic patient notes. Methods: There are four main steps in the work. First, a relatively large set of clinical questions was collected from staff in an Intensive Care Unit. Then, a clinical question taxonomy was designed for question answering purposes. Subsequently, an annotation guideline was created and used to annotate the question set. Finally, a multilayer classification model was built to classify the clinical questions. Results: Through the initial classification experiments, we realized that general features cannot yield high performance from a minimal classifier (a small data set with multiple classes). Thus, an automatic knowledge discovery and knowledge reuse process was designed to boost performance by extracting and expanding the specific features of the questions. In the evaluation, the results show that around 90% accuracy can be achieved in answerable-subclass classification and generic question-template classification. On the other hand, the machine learning method does not perform well at identifying the category of unanswerable questions, due to the asymmetric distribution. Conclusions: This paper presents a comprehensive study of clinical questions. A major outcome of this work is the multilayer classification model, which serves as a major component of a patient-records-based clinical question answering system as our studies continue. In addition, the question collections can be reused by the research community to improve the efficiency of their own question answering systems.
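    As a rough illustration of the kind of multilayer (two-stage) question classification described above, the sketch below first separates answerable from unanswerable questions and then assigns answerable ones to a generic question template. It is a minimal sketch using scikit-learn and plain TF-IDF features as assumptions; the paper's actual features, classes, and data are not reproduced here.

```python
# Minimal two-stage ("multilayer") question classification sketch; illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stage 1: answerable vs. unanswerable (from patient notes).
answerability_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
# Stage 2: map answerable questions to a generic question template.
template_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

def train(questions, answerable_labels, template_labels):
    answerability_clf.fit(questions, answerable_labels)
    answerable = [q for q, a in zip(questions, answerable_labels) if a == "answerable"]
    templates  = [t for t, a in zip(template_labels, answerable_labels) if a == "answerable"]
    template_clf.fit(answerable, templates)

def classify(question):
    if answerability_clf.predict([question])[0] != "answerable":
        return ("unanswerable", None)
    return ("answerable", template_clf.predict([question])[0])

# Toy usage with made-up questions, labels, and template names.
train(
    ["What was the patient's last blood pressure?",
     "Which antibiotics is the patient currently receiving?",
     "Is the night-shift nurse available?"],
    ["answerable", "answerable", "unanswerable"],
    ["VITALS_VALUE", "MEDICATION_LIST", None],
)
print(classify("What was the most recent heart rate?"))
```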

    Classifier combination approach for question classification for Bengali question answering system

    [EN] Question classification (QC) is a prime constituent of an automated question answering system. The work presented here demonstrates that a combination of multiple models achieves better classification performance than existing individual models for the QC task in Bengali. We have exploited state-of-the-art multiple model combination techniques, i.e., ensemble, stacking and voting, to increase QC accuracy. Lexical, syntactic and semantic features of Bengali questions are used for four well-known classifiers, namely Naive Bayes, kernel Naive Bayes, Rule Induction and Decision Tree, which serve as our base learners. The single-layer question-class taxonomy with 8 coarse-grained classes is extended to a two-layer taxonomy by adding 69 fine-grained classes. We carried out experiments on both the single-layer and two-layer taxonomies. The experimental results confirm that classifier combination approaches outperform single-classifier approaches by 4.02% for coarse-grained question classes. Overall, the stacking approach produces the best results for fine-grained classification and achieves an accuracy of 87.79%. The approach presented here could be used in other Indo-Aryan or Indic languages to develop a question answering system.
    Somnath Banerjee and Sudip Kumar Naskar are supported by Digital India Corporation (formerly Media Lab Asia), MeitY, Government of India, under the Visvesvaraya Ph.D. Scheme for Electronics and IT. The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project PGC2018-096212-B-C31. Banerjee, S.; Kumar Naskar, S.; Rosso, P.; Bandyopadhyay, S. (2019). Classifier combination approach for question classification for Bengali question answering system. Sadhana 44(12): 1–14. https://doi.org/10.1007/s12046-019-1224-8
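    As an illustration of the classifier-combination idea (ensemble, stacking, voting) described in this abstract, here is a minimal scikit-learn sketch that combines simple base learners over TF-IDF features. The base learners, features, and toy data are assumptions for illustration only; the authors used Naive Bayes, kernel Naive Bayes, Rule Induction and Decision Tree with lexical, syntactic and semantic features of Bengali questions.

```python
# Minimal voting/stacking sketch for question classification; not the authors' exact setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier, StackingClassifier
from sklearn.pipeline import make_pipeline

base_learners = [
    ("nb", MultinomialNB()),
    ("dt", DecisionTreeClassifier(max_depth=10)),
]

# Hard-voting combination of the base learners over TF-IDF features.
voting = make_pipeline(TfidfVectorizer(), VotingClassifier(base_learners, voting="hard"))

# Stacking: a meta-learner combines the base learners' predictions.
# (StackingClassifier performs internal cross-validation, so it needs several
#  training examples per class; the toy data below is too small for it.)
stacking = make_pipeline(
    TfidfVectorizer(),
    StackingClassifier(base_learners, final_estimator=LogisticRegression(max_iter=1000)),
)

# Toy usage with coarse-grained classes such as PERSON / LOCATION / NUMERIC.
questions = ["Who wrote Gitanjali?",
             "Where is the Sundarbans located?",
             "How many districts are there in West Bengal?"]
labels = ["PERSON", "LOCATION", "NUMERIC"]
voting.fit(questions, labels)
print(voting.predict(["Who composed the national anthem?"]))
```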

    Conversational Exploratory Search via Interactive Storytelling

    Conversational interfaces are likely to become a more efficient, intuitive and engaging way for human-computer interaction than today's text- or touch-based interfaces. Current research efforts concerning conversational interfaces focus primarily on question answering functionality, thereby neglecting support for search activities beyond targeted information lookup. Users engage in exploratory search when they are unfamiliar with the domain of their goal, unsure about the ways to achieve their goals, or unsure about their goals in the first place. Exploratory search is often supported by approaches from information visualization. However, such approaches cannot be directly translated to the setting of conversational search. In this paper we investigate the affordances of interactive storytelling as a tool to enable exploratory search within the framework of a conversational interface. Interactive storytelling provides a way to navigate a document collection at the pace and in the order a user prefers. In our vision, interactive storytelling is to be coupled with a dialogue-based system that provides verbal explanations and responsive design. We discuss challenges and sketch the research agenda required to bring this vision to life. Comment: Accepted at the ICTIR'17 Workshop on Search-Oriented Conversational AI (SCAI 2017).

    Comprehension and retrieval of failure cases in airborne observatories

    This paper describes research dealing with the computational problem of analyzing and repairing failures of electronic and mechanical systems of telescopes in NASA's airborne observatories, such as KAO (Kuiper Airborne Observatory) and SOFIA (Stratospheric Observatory for Infrared Astronomy). The research has resulted in the development of an experimental system that acquires knowledge of failure analysis from input text and answers questions regarding failure detection and correction. The system's design builds upon previous work on text comprehension and question answering, including: knowledge representation for conceptual analysis of failure descriptions, strategies for mapping natural language into conceptual representations, case-based reasoning strategies for memory organization and indexing, and strategies for memory search and retrieval. These techniques have been combined into a model that accounts for: (a) how to build a knowledge base of system failures and repair procedures from descriptions that appear in telescope-operators' logbooks and FMEA (failure modes and effects analysis) manuals; and (b) how to use that knowledge base to search and retrieve answers to questions about causes and effects of failures, as well as diagnosis and repair procedures. This model has been implemented in FANSYS (Failure ANalysis SYStem), a prototype text comprehension and question answering program for failure analysis.
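    As a rough, hypothetical sketch of the case-based retrieval idea described above (not FANSYS itself), the snippet below indexes failure cases by symptom keywords and retrieves the best-matching case for a query; all names, the indexing scheme, and the overlap-based similarity measure are assumptions for illustration.

```python
# Hypothetical sketch of case-based retrieval over failure cases (not FANSYS itself).
from collections import defaultdict

class FailureCaseBase:
    def __init__(self):
        self.cases = []                # list of {"symptoms": set, "diagnosis": str, "repair": str}
        self.index = defaultdict(set)  # symptom keyword -> set of case ids

    def add_case(self, symptoms, diagnosis, repair):
        case_id = len(self.cases)
        self.cases.append({"symptoms": set(symptoms), "diagnosis": diagnosis, "repair": repair})
        for s in symptoms:
            self.index[s].add(case_id)
        return case_id

    def retrieve(self, query_symptoms):
        # Gather candidate cases sharing at least one symptom, then rank by overlap.
        candidates = set()
        for s in query_symptoms:
            candidates |= self.index.get(s, set())
        if not candidates:
            return None
        query = set(query_symptoms)
        return max(candidates, key=lambda cid: len(self.cases[cid]["symptoms"] & query))

# Usage: index a case from a (made-up) logbook entry and answer a repair question.
kb = FailureCaseBase()
kb.add_case({"chopper", "no motion"},
            diagnosis="stuck secondary-mirror chopper",
            repair="reset chopper controller")
best = kb.retrieve({"chopper", "no motion"})
print(kb.cases[best]["repair"] if best is not None else "no matching case")
```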

    Is Summary Useful or Not? An Extrinsic Human Evaluation of Text Summaries on Downstream Tasks

    Research on automated text summarization relies heavily on human and automatic evaluation. While recent work on human evaluation has mainly adopted intrinsic evaluation methods, judging generic qualities of text summaries such as informativeness and coherence, our work focuses on evaluating the usefulness of text summaries with extrinsic methods. We carefully design three different downstream tasks for extrinsic human evaluation of summaries: question answering, text classification and text similarity assessment. We carry out experiments using system rankings and user behavior data to evaluate the performance of different summarization models. We find summaries are particularly useful in tasks that rely on an overall judgment of the text, while being less effective for question answering tasks. The results show that summaries generated by fine-tuned models lead to higher consistency in usefulness across all three tasks, as rankings of fine-tuned summarization systems are close across downstream tasks according to the proposed extrinsic metrics. Summaries generated by models in the zero-shot setting, however, are found to be biased towards the text classification and similarity assessment tasks, due to their more generic and less detailed summary style. We further evaluate the correlation of 14 intrinsic automatic metrics with the human criteria and show that intrinsic automatic metrics perform well in evaluating the usefulness of summaries in the question-answering task, but are less effective in the other two tasks. This highlights the limitations of relying solely on intrinsic automatic metrics when evaluating the performance and usefulness of summaries.
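    To make the metric-vs-human correlation analysis mentioned above concrete, here is a minimal sketch that correlates per-system automatic metric scores with human task-based usefulness scores using Spearman's rank correlation. The numbers and variable names are illustrative assumptions, not the paper's data or exact protocol.

```python
# Correlating an intrinsic automatic metric with human task-based usefulness, per system.
# Illustrative data only; assumes scipy is available.
from scipy.stats import spearmanr

# One score per summarization system (hypothetical numbers).
automatic_metric = [0.41, 0.38, 0.35, 0.30, 0.28]  # e.g. an intrinsic overlap-based metric
human_usefulness = [0.82, 0.79, 0.71, 0.74, 0.60]  # e.g. QA accuracy of users reading each system's summaries

rho, p_value = spearmanr(automatic_metric, human_usefulness)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```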

    Where was COVID-19 first discovered? Designing a question-answering system for pandemic situations

    The COVID-19 pandemic is accompanied by a massive “infodemic” that makes it hard to identify concise and credible information for COVID-19-related questions, such as incubation time, infection rates, or the effectiveness of vaccines. As a novel solution, our paper is concerned with designing a question-answering system based on modern natural language processing technologies to overcome information overload and misinformation in pandemic situations. To carry out our research, we followed a design science research approach and applied Ingwersen’s cognitive model of information retrieval interaction to inform our design process from a socio-technical lens. On this basis, we derived prescriptive design knowledge in terms of design requirements and design principles, which we translated into the construction of a prototypical instantiation. Our implementation is based on the comprehensive CORD-19 dataset, and we demonstrate our artifact’s usefulness by evaluating its answer quality on a sample of COVID-19 questions labeled by biomedical experts.
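    The following is a minimal retrieve-then-read sketch of the kind of pipeline such a QA system might use over CORD-19 passages: TF-IDF retrieval of candidate passages, after which an extractive reader (omitted here) would pick the answer span. It uses scikit-learn and a toy corpus as assumptions and is not the artifact described in the paper.

```python
# Minimal retrieval-stage sketch for a retrieve-then-read QA pipeline; not the paper's artifact.
# Assumes scikit-learn; the corpus here is a toy stand-in for CORD-19 passages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "The incubation period of COVID-19 is commonly reported as 2 to 14 days.",
    "Vaccine effectiveness against severe disease was high in the studied cohort.",
    "Reproduction numbers varied substantially across regions and interventions.",
]

vectorizer = TfidfVectorizer()
passage_vectors = vectorizer.fit_transform(passages)

def retrieve(question, k=2):
    """Return the k passages most similar to the question (TF-IDF cosine similarity)."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, passage_vectors)[0]
    ranked = scores.argsort()[::-1][:k]
    return [(passages[i], float(scores[i])) for i in ranked]

# A transformer-based extractive reader would then select the answer span from the
# retrieved passages; that step is omitted to keep the sketch self-contained.
print(retrieve("How long is the incubation time of COVID-19?"))
```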

    How will smart city production systems transform supply chain design: a product-level investigation

    © 2016 Informa UK Limited, trading as Taylor & Francis Group. This paper is a first step towards understanding the role that a smart city with a distributed production system could play in changing the nature and form of supply chain design. Since the end of the Second World War, most supply chain systems for manufactured products have been based on ‘scale economies’ and ‘bigness’; in our paper we challenge this traditional view. Our fundamental research question is: how could a smart city production system change supply chain design? In answering this question, we develop an integrative framework for understanding the interplay between smart city technological initiatives (big data analytics, the industrial Internet of Things) and distributed manufacturing on supply chain design. This framework illustrates synergies between manufacturing and integrative technologies within the smart city context and links them with supply chain design. Considering that smart cities are based on collaboration between firms, end-users and local stakeholders, we advance present knowledge on production systems through case-study findings at the product level. In the conclusion, we stress the need for future research to empirically develop our work further and to measure (beyond the product level) the extent to which new production technologies such as distributed manufacturing are indeed democratising supply chain design and transforming manufacturing from ‘global production’ to a future ‘city-oriented’ social materiality.
