27 research outputs found

    Sub-word indexing and blind relevance feedback for English, Bengali, Hindi, and Marathi IR

    The Forum for Information Retrieval Evaluation (FIRE) provides document collections, topics, and relevance assessments for information retrieval (IR) experiments on Indian languages. Several research questions are explored in this paper: 1. how to create a simple, language-independent corpus-based stemmer, 2. how to identify sub-words and which types of sub-words are suitable as indexing units, and 3. how to apply blind relevance feedback on sub-words and how feedback term selection is affected by the type of the indexing unit. More than 140 IR experiments are conducted using the BM25 retrieval model on the topic titles and descriptions (TD) for the FIRE 2008 English, Bengali, Hindi, and Marathi document collections. The major findings are: The corpus-based stemming approach is effective as a knowledge-light term conflation step and useful when few language-specific resources are available. For English, the corpus-based stemmer performs nearly as well as the Porter stemmer and significantly better than the baseline of indexing words when combined with query expansion. In combination with blind relevance feedback, it also performs significantly better than the baseline for Bengali and Marathi IR. Sub-words such as consonant-vowel sequences and word prefixes can yield similar or better performance in comparison to word indexing. There is no best-performing method for all languages: for English, indexing using the Porter stemmer performs best; for Bengali and Marathi, overlapping 3-grams obtain the best result; and for Hindi, 4-prefixes yield the highest MAP. However, in combination with blind relevance feedback using 10 documents and 20 terms, 6-prefixes for English and 4-prefixes for Bengali, Hindi, and Marathi IR yield the highest MAP. Sub-word identification is a general case of decompounding. It results in one or more index terms for a single word form and increases the number of index terms but decreases their average length. The corresponding retrieval experiments show that relevance feedback on sub-words benefits from selecting a larger number of index terms in comparison with retrieval on word forms. Similarly, selecting the number of relevance feedback terms depending on the ratio of word vocabulary size to sub-word vocabulary size almost always slightly increases information retrieval effectiveness compared to using a fixed number of terms for different languages.
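
    The sub-word indexing units compared above, word prefixes and overlapping character n-grams, are simple string operations; the sketch below illustrates them under that assumption. The paper's consonant-vowel segmentation and corpus-based stemmer are more involved and are not reproduced here, and the example words are placeholders.

```python
# Minimal sketch of two of the sub-word indexing units discussed above:
# fixed-length word prefixes and overlapping character n-grams.
# The paper's consonant-vowel segmentation and corpus-based stemmer are
# more involved and are not reproduced here; the words are placeholders.

def prefix_term(word: str, k: int = 4) -> str:
    """Return the k-prefix of a word (the whole word if it is shorter)."""
    return word[:k]

def overlapping_ngrams(word: str, n: int = 3) -> list[str]:
    """Return all overlapping character n-grams of a word."""
    if len(word) <= n:
        return [word]
    return [word[i:i + n] for i in range(len(word) - n + 1)]

if __name__ == "__main__":
    for word in ["retrieval", "feedback"]:
        print(word, "->", prefix_term(word), overlapping_ngrams(word))
```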

    DCU@FIRE2010: term conflation, blind relevance feedback, and cross-language IR with manual and automatic query translation

    For the first participation of Dublin City University (DCU) in the FIRE 2010 evaluation campaign, information retrieval (IR) experiments on English, Bengali, Hindi, and Marathi documents were performed to investigate term conflation (different stemming approaches and indexing word prefixes), blind relevance feedback, and manual and automatic query translation. The experiments are based on BM25 and on language modeling (LM) for IR. Results show that term conflation always improves mean average precision (MAP) compared to indexing unprocessed word forms, but different approaches seem to work best for different languages. For example, in monolingual Marathi experiments indexing 5-prefixes outperforms our corpus-based stemmer; in Hindi, the corpus-based stemmer achieves a higher MAP. For Bengali, the LM retrieval model achieves a much higher MAP than BM25 (0.4944 vs. 0.4526). In all experiments using BM25, blind relevance feedback yields considerably higher MAP in comparison to experiments without it. Bilingual IR experiments (English→Bengali and English→Hindi) are based on query translations obtained from native speakers and the Google Translate web service. For the automatically translated queries, MAP is slightly (but not significantly) lower compared to experiments with manual query translations. The bilingual English→Bengali (English→Hindi) experiments achieve 81.7%-83.3% (78.0%-80.6%) of the best corresponding monolingual experiments.
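
    All of the comparisons above are reported as mean average precision (MAP). As a reminder of what that figure measures, here is a minimal sketch of MAP over binary relevance judgments; it is not the authors' evaluation code (FIRE/TREC-style campaigns typically rely on trec_eval), and the topic and document identifiers are invented.

```python
# Minimal sketch of mean average precision (MAP) over binary relevance
# judgments; FIRE/TREC-style campaigns normally compute this with
# trec_eval. Topic and document identifiers below are invented.

def average_precision(ranking: list[str], relevant: set[str]) -> float:
    """Average precision of one ranked list of document ids."""
    if not relevant:
        return 0.0
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranking, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant)

def mean_average_precision(runs: dict[str, list[str]],
                           qrels: dict[str, set[str]]) -> float:
    """MAP: the mean of per-topic average precision."""
    return sum(average_precision(ranking, qrels.get(topic, set()))
               for topic, ranking in runs.items()) / len(runs)

if __name__ == "__main__":
    runs = {"t1": ["d3", "d1", "d7"], "t2": ["d2", "d9"]}
    qrels = {"t1": {"d1", "d7"}, "t2": {"d4", "d9"}}
    print(round(mean_average_precision(runs, qrels), 4))  # 0.4167
```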

    Hindi language text search: a literature review

    The literature review focuses on the major problems of Hindi text searching over the web. The review reveals the availability of a number of techniques and search engines that have been developed to facilitate Hindi text searching. Among the many problems, a dominant one arises when text formed by combinatorial characters or words is searched.
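
    One concrete form of the combinatorial-character problem is that visually identical Devanagari text can be encoded in more than one way, for example a precomposed nukta letter versus a base letter plus a combining nukta. The sketch below shows how Unicode normalization makes the two forms comparable; it is offered as a common remedy for this class of mismatch, not as a technique taken from the review itself.

```python
# Minimal sketch: the same Devanagari letter can be encoded two ways,
# e.g. precomposed U+0958 (क़) versus U+0915 + U+093C (क + nukta).
# Without normalization, a naive exact-match search misses one of them.
import unicodedata

precomposed = "\u0958"        # DEVANAGARI LETTER QA
decomposed = "\u0915\u093C"   # DEVANAGARI LETTER KA + SIGN NUKTA

print(precomposed == decomposed)                    # False: raw strings differ
print(unicodedata.normalize("NFC", precomposed) ==
      unicodedata.normalize("NFC", decomposed))     # True: normalization unifies them
```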

    Development of Multilingual Resource Management Mechanisms for Libraries

    Multilingualism is an important concern for any library. This study is built on global recommendations and the local requirements of individual libraries: it selects multilingual components for setting up a multilingual cluster across different libraries and develops a multilingual environment in which both users and library professionals can access and retrieve library resources. The methodology for integrating Google Indic Transliteration into libraries follows six steps: (i) selection of transliteration tools, (ii) comparison of the tools, (iii) integration methods in Koha, (iv) development of Google Indic Transliteration in Koha for users, (v) testing, and (vi) results. Developing a multilingual framework is also an important task in an integrated library system, and this part of the study follows several steps: (i) Bengali language installation in Koha, (ii) setting multilingual system preferences in Koha, (iii) translating the modules, and (iv) providing a Bengali interface in Koha. The study also demonstrates the Bengali data entry process in Koha, both through IBus Avro phonetics and through a virtual keyboard. Multilingual digital resource management is developed using DSpace and Greenstone, and multilingual management is addressed in related areas such as federated searching (the VuFind multilingual discovery tool, multilingual retrieval via OAI-PMH, and multilingual data import through a Z39.50 server). Multilingual bibliographic data are edited with MarcEdit for better management of the integrated library management system, and content is created and edited with a content management system for efficient and effective retrieval of multilingual digital content resources.
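
    As a rough illustration of what phonetic transliteration for data entry does, the sketch below maps a few roman letters to Bengali with a hypothetical, greatly simplified table; the real Google Indic Transliteration and Avro Phonetic schemes are context-sensitive and dictionary-backed, and this toy ignores vowel signs (matras), so its output is illustrative only.

```python
# Hypothetical, greatly simplified roman-to-Bengali mapping to illustrate
# phonetic data entry. Real schemes (Google Indic Transliteration, Avro
# Phonetic) are context-sensitive and dictionary-backed; this toy ignores
# vowel signs (matras), so its output is illustrative only.
ROMAN_TO_BENGALI = {
    "k": "ক", "kh": "খ", "g": "গ", "b": "ব", "a": "আ", "i": "ই",
}

def transliterate(text: str) -> str:
    """Greedy longest-match transliteration over the toy mapping."""
    out, i = [], 0
    while i < len(text):
        for length in (2, 1):                  # try two-letter keys first
            chunk = text[i:i + length]
            if chunk in ROMAN_TO_BENGALI:
                out.append(ROMAN_TO_BENGALI[chunk])
                i += length
                break
        else:
            out.append(text[i])                # pass unknown characters through
            i += 1
    return "".join(out)

print(transliterate("khagi"))  # toy output assembled from the mapping above
```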

    Cross-language Information Retrieval

    Two key assumptions shape the usual view of ranked retrieval: (1) that the searcher can choose words for their query that might appear in the documents that they wish to see, and (2) that ranking retrieved documents will suffice because the searcher will be able to recognize those which they wished to find. When the documents to be searched are in a language not known by the searcher, neither assumption is true. In such cases, Cross-Language Information Retrieval (CLIR) is needed. This chapter reviews the state of the art for CLIR and outlines some open research questions.

    Studying the Effect and Treatment of Misspelled Queries in Cross-Language Information Retrieval

    The performance of Information Retrieval systems is limited by the linguistic variation present in natural language texts. Word-level Natural Language Processing techniques have been shown to be useful in reducing this variation. In this article, we summarize our work on the extension of these techniques for dealing with phrase-level variation in European languages, taking Spanish as a case in point. We propose the use of syntactic dependencies as complex index terms in an attempt to solve the problems deriving from both syntactic and morpho-syntactic variation and, in this way, to obtain more precise index terms. Such dependencies are obtained through a shallow parser based on cascades of finite-state transducers in order to reduce as far as possible the overhead due to this parsing process. The use of different sources of syntactic information, queries or documents, has also been studied, as has the restriction of the dependencies to those obtained from noun phrases. Our approaches have been tested using the CLEF corpus, obtaining consistent improvements with regard to classical word-level non-linguistic techniques. Results show, on the one hand, that syntactic information extracted from documents is more useful than that from queries. On the other hand, it has been demonstrated that by restricting dependencies to those corresponding to noun phrases, important reductions of storage and management costs can be achieved, albeit at the expense of a slight reduction in performance. Funding: Ministerio de Economía y Competitividad (FFI2014-51978-C2-1-R, FFI2014-51978-C2-2-, BES-2015-073768); Rede Galega de Procesamento da Linguaxe e Recuperación de Información (CN2014/034).
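
    The idea of indexing head-modifier dependency pairs taken from noun phrases can be sketched as below; spaCy's stock Spanish pipeline stands in for the authors' cascade of finite-state transducers, so the parsing machinery (and the model name es_core_news_sm) is an assumption for illustration, not a resource from the article.

```python
# Sketch: extract head-modifier pairs inside noun phrases as complex index
# terms, alongside ordinary word terms. spaCy stands in here for the
# authors' cascade of finite-state transducers.
# Requires: pip install spacy && python -m spacy download es_core_news_sm
import spacy

nlp = spacy.load("es_core_news_sm")

def complex_index_terms(text: str) -> list[tuple[str, str]]:
    """Return (head lemma, modifier lemma) pairs found inside noun phrases."""
    doc = nlp(text)
    pairs = []
    for chunk in doc.noun_chunks:
        head = chunk.root
        for token in chunk:
            if token is not head and token.dep_ in ("amod", "nmod", "compound"):
                pairs.append((head.lemma_.lower(), token.lemma_.lower()))
    return pairs

print(complex_index_terms("la variación lingüística de los textos en lenguaje natural"))
```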

    Classifier combination approach for question classification for Bengali question answering system

    Question classification (QC) is a prime constituent of an automated question answering system. The work presented here demonstrates that a combination of multiple models achieves better classification performance than that obtained with existing individual models for the QC task in Bengali. We have exploited state-of-the-art model combination techniques, i.e., ensemble, stacking and voting, to increase QC accuracy. Lexical, syntactic and semantic features of Bengali questions are used with four well-known classifiers, namely Naive Bayes, kernel Naive Bayes, Rule Induction and Decision Tree, which serve as our base learners. A single-layer question-class taxonomy with 8 coarse-grained classes is extended to a two-layer taxonomy by adding 69 fine-grained classes, and experiments are carried out on both taxonomies. Experimental results confirm that classifier combination approaches outperform single-classifier approaches by 4.02% for coarse-grained question classes. Overall, the stacking approach produces the best results for fine-grained classification and achieves 87.79% accuracy. The approach presented here could be used in other Indo-Aryan or Indic languages to develop a question answering system. Somnath Banerjee and Sudip Kumar Naskar are supported by Digital India Corporation (formerly Media Lab Asia), MeitY, Government of India, under the Visvesvaraya Ph.D. Scheme for Electronics and IT. The work of Paolo Rosso was partially funded by the Spanish MICINN under the research project PGC2018-096212-B-C31. Banerjee, S.; Kumar Naskar, S.; Rosso, P.; Bandyopadhyay, S. (2019). Classifier combination approach for question classification for Bengali question answering system. Sadhana 44(12): 1-14. https://doi.org/10.1007/s12046-019-1224-8
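
    The stacking strategy can be sketched with scikit-learn as below; the TF-IDF features and base learners only loosely mirror the paper's setup of Naive Bayes, kernel Naive Bayes, Rule Induction and Decision Tree over lexical, syntactic and semantic features, and every question and class label in the example is an invented placeholder.

```python
# Sketch of stacking for question classification with scikit-learn.
# TF-IDF features and the base learners only loosely mirror the paper's
# setup; the questions and class labels below are invented placeholders.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

questions = [
    "Who wrote Gitanjali?", "Who founded Visva-Bharati?",
    "Where is Kolkata?", "Where is the Howrah Bridge?",
    "When was Tagore born?", "When did the conference start?",
]
labels = ["PERSON", "PERSON", "LOCATION", "LOCATION", "DATE", "DATE"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    StackingClassifier(
        estimators=[("nb", MultinomialNB()),
                    ("dt", DecisionTreeClassifier(max_depth=5))],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=2,                       # tiny toy data set, so a small CV split
    ),
)
model.fit(questions, labels)
print(model.predict(["Who translated the poem?"]))
```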