23 research outputs found

    Natural language processing

    Beginning with the basic issues of NLP, this chapter aims to chart the major research activities in this area since the last ARIST chapter in 1996 (Haas, 1996), including: (i) natural language text processing systems (text summarization, information extraction, information retrieval, etc.), including domain-specific applications; (ii) natural language interfaces; (iii) NLP in the context of the World Wide Web and digital libraries; and (iv) evaluation of NLP systems.

    Using neural networks as transfer components for retrieval in heterogeneous document collections (Einsatz neuronaler Netze als Transferkomponenten beim Retrieval in heterogenen Dokumentbeständen)

    "Increasing worldwide networking and the growth of digital libraries open up new possibilities for searching across multiple data collections. This raises the problem of semantic heterogeneity, since terms can, for example, carry different meanings in different contexts. The transfer components required to bridge these differences pose a new challenge, for which neural networks are well suited." (author's abstract)
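The transfer idea can be sketched as a small feed-forward network that learns to map a document's term-weight vector from one indexing vocabulary into another, trained on documents indexed under both schemes. Everything below (vocabulary sizes, data, learning rate) is an invented toy illustration, not the system described in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
SRC, TGT, HID = 6, 5, 4                     # toy vocabulary/hidden sizes

# Paired training documents: term weights under the source scheme (X)
# and under the target scheme (Y).
X = rng.random((20, SRC))
Y = rng.random((20, TGT))

W1 = rng.normal(scale=0.1, size=(SRC, HID))
W2 = rng.normal(scale=0.1, size=(HID, TGT))

def forward(x):
    h = np.tanh(x @ W1)                     # hidden layer
    return h, h @ W2                        # predicted target-scheme weights

def mse():
    return float(np.mean((forward(X)[1] - Y) ** 2))

initial_mse = mse()
for _ in range(300):                        # plain batch gradient descent
    h, pred = forward(X)
    err = (pred - Y) / len(X)
    gW2 = h.T @ err
    gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2))
    W2 -= 0.2 * gW2
    W1 -= 0.2 * gW1
final_mse = mse()
print(initial_mse, final_mse)               # error drops as the mapping is learned
```

Once trained, `forward` would let a query posed in one indexing vocabulary be re-expressed in the other before retrieval, which is the transfer role the abstract assigns to the network.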

    Automatic Segmentation and Part-Of-Speech Tagging For Tibetan: A First Step Towards Machine Translation

    This paper presents what we believe to be the first reported work on Tibetan machine translation (MT). Of the three conceptually distinct components of an MT system — analysis, transfer, and generation — the first phase, consisting of POS tagging, has been successfully completed. The combined POS tagger / word segmenter was constructed manually as a rule-based multi-tagger relying on the Wilson formulation of Tibetan grammar. Partial parsing was also performed in combination with POS-tag sequence disambiguation. The component was evaluated at the task of document indexing for information retrieval (IR). Preliminary analysis indicated slightly better (though statistically comparable) performance than n-gram based approaches at a known-item IR task. Although segmentation is application-specific, error analysis placed segmentation accuracy at 99%; the accuracy of the POS tagger is also estimated at 99%, based on IR error analysis and random sampling.
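A lexicon-driven multi-tagger of the general kind described can be sketched as follows. The lexicon entries, tag set, and the single disambiguation rule are invented placeholders (with romanised stand-ins for Tibetan syllables); they bear no relation to the Wilson-grammar rules the paper actually uses:

```python
# Toy lexicon: each word maps to its set of possible tags (a multi-tag).
LEXICON = {
    "bod": {"NOUN"},
    "skad": {"NOUN"},
    "yig": {"NOUN", "VERB"},    # ambiguous entry
    "la": {"PART"},             # case particle
    "bod-skad": {"NOUN"},       # compound entry
}

def segment_and_tag(syllables):
    """Greedy longest-match segmentation over a syllable list,
    returning (word, candidate-tag-set) pairs."""
    out, i = [], 0
    while i < len(syllables):
        for j in range(len(syllables), i, -1):      # longest match first
            word = "-".join(syllables[i:j])
            if word in LEXICON:
                out.append((word, LEXICON[word]))
                i = j
                break
        else:                                       # unknown syllable
            out.append((syllables[i], {"UNK"}))
            i += 1
    return out

def disambiguate(tagged):
    """Toy rule: a NOUN/VERB ambiguity directly before a case
    particle resolves to NOUN."""
    result = []
    for k, (word, tags) in enumerate(tagged):
        if len(tags) > 1:
            nxt = tagged[k + 1][1] if k + 1 < len(tagged) else set()
            tags = {"NOUN"} if "PART" in nxt else tags
        result.append((word, tags))
    return result

print(disambiguate(segment_and_tag(["bod", "skad", "yig", "la"])))
```

The two-stage shape — emit all candidate tags from the lexicon, then prune them with POS-tag sequence rules — mirrors the multi-tagger-plus-disambiguation pipeline the abstract outlines.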

    Applying digital content management to support localisation

    The retrieval and presentation of digital content such as that on the World Wide Web (WWW) is a substantial area of research. While recent years have seen a huge expansion in the size of web-based archives that can be searched efficiently by commercial search engines, the presentation of potentially relevant content is still limited to ranked document lists represented by simple text snippets or image keyframe surrogates. There is expanding interest in techniques to personalise the presentation of content to improve the richness and effectiveness of the user experience. One of the most significant challenges to achieving this is the increasingly multilingual nature of this data, and the need to provide suitably localised responses to users based on this content. The Digital Content Management (DCM) track of the Centre for Next Generation Localisation (CNGL) seeks to develop technologies to support advanced personalised access and presentation of information by combining elements from the existing research areas of Adaptive Hypermedia and Information Retrieval. The combination of these technologies is intended to produce significant improvements in the way users access information. We review key features of these technologies and introduce early ideas for how they can support localisation and localised content, before concluding with some impressions of future directions in DCM.

    On-line learning for adaptive text filtering.

    Yu Kwok Leung. Thesis (M.Phil.), Chinese University of Hong Kong, 1999. Includes bibliographical references (leaves 91-96). Abstracts in English and Chinese. Contents:
    - Chapter 1, Introduction: The Problem; Information Filtering; Contributions; Organization of the Thesis
    - Chapter 2, Related Work
    - Chapter 3, Adaptive Text Filtering: Representation (Textual Document; Filtering Profile); On-line Learning Algorithms for Adaptive Text Filtering (The Sleeping Experts Algorithm; The EG-based Algorithms)
    - Chapter 4, The REPGER Algorithm: A New Approach; Relevance Prediction by RElevant Feature Pool; Retrieving Good Training Examples; Learning Dissemination Threshold Dynamically
    - Chapter 5, The Threshold Learning Algorithm: Learning Dissemination Threshold Dynamically; Existing Threshold Learning Techniques; A New Threshold Learning Algorithm
    - Chapter 6, Empirical Evaluations: Experimental Methodology; Experimental Settings; Experimental Results
    - Chapter 7, Integrating with Feature Clustering: Distributional Clustering Algorithm; Integrating with Our REPGER Algorithm; Empirical Evaluation
    - Chapter 8, Conclusions: Summary; Future Work
    - Bibliography
    - Appendices A-C: Experimental Results on the AP, FBIS, and WSJ Corpora (the EG, EG-C, and REPGER algorithms on each)
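The EG-based algorithms in the table of contents refer to the exponentiated-gradient family of on-line learners, in which a linear relevance predictor's feature weights are updated multiplicatively and renormalised after each document arrives. A minimal sketch of that core update, with toy document vectors and an arbitrary learning rate rather than the thesis's actual settings:

```python
import math

def eg_update(w, x, y, lr=0.1):
    """One exponentiated-gradient step for a linear predictor whose
    weights sum to 1: multiplicative update, then renormalisation."""
    y_hat = sum(wi * xi for wi, xi in zip(w, x))
    factors = [math.exp(-2 * lr * (y_hat - y) * xi) for xi in x]
    w = [wi * f for wi, f in zip(w, factors)]
    z = sum(w)
    return [wi / z for wi in w], y_hat

# Toy document stream: normalised term vectors, label 1 = relevant.
stream = [([1.0, 0.0, 0.0], 1.0),
          ([0.0, 1.0, 0.0], 0.0),
          ([1.0, 0.0, 0.0], 1.0)] * 20

w = [1 / 3, 1 / 3, 1 / 3]                 # uniform prior over features
for x, y in stream:
    w, _ = eg_update(w, x, y)

print([round(wi, 2) for wi in w])         # weight concentrates on feature 0
```

The multiplicative form concentrates weight on consistently relevant features quickly when the target profile is sparse, which is one reason EG-style learners suit high-dimensional text filtering.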

    Improved cross-language information retrieval via disambiguation and vocabulary discovery

    Cross-lingual information retrieval (CLIR) allows people to find documents irrespective of the language used in the query or document. This thesis is concerned with the development of techniques to improve the effectiveness of Chinese-English CLIR. In Chinese-English CLIR, the accuracy of dictionary-based query translation is limited by two major factors: translation ambiguity and the presence of out-of-vocabulary (OOV) terms. We explore alternative methods for translation disambiguation, and demonstrate new techniques based on a Markov model and the use of web documents as a corpus to provide context for disambiguation. This simple disambiguation technique has proved to be extremely robust and successful. Queries that seek topical information typically contain OOV terms that may not be found in a translation dictionary, leading to inappropriate translations and consequently poor retrieval performance. Our novel OOV term translation method is based on the Chinese authorial practice of including unfamiliar English terms in both languages. It automatically extracts correct translations from the web and can be applied to both Chinese-English and English-Chinese CLIR. Our OOV translation technique does not rely on prior segmentation and is thus free from segmentation error. It leads to a significant improvement in CLIR effectiveness and can also be used to improve Chinese segmentation accuracy. Good-quality translation resources, especially bilingual dictionaries, are valuable for effective CLIR. We developed a system to facilitate construction of a large-scale translation lexicon of Chinese-English OOV terms using the web. Experimental results show that this method is reliable and of practical use in query translation. In addition, parallel corpora provide a rich source of translation information.
    We have also developed a system that uses multiple features to identify parallel texts via a k-nearest-neighbour classifier, to automatically collect high-quality parallel Chinese-English corpora from the web. These two automatic web-mining systems are highly reliable and easy to deploy. In this research, we provided new ways to acquire linguistic resources using multilingual content on the web. These linguistic resources not only improve the efficiency and effectiveness of Chinese-English cross-language web retrieval, but also have applications beyond CLIR.
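The k-nearest-neighbour step can be illustrated with two stand-in features; the real system combines multiple features not reproduced here, and both the feature definitions and the labelled training pairs below are invented:

```python
def features(doc_a, doc_b):
    """Two toy signals for a candidate document pair: length ratio, and
    overlap of shared tokens (numbers, names, and URLs often survive
    translation unchanged)."""
    ratio = min(len(doc_a), len(doc_b)) / max(len(doc_a), len(doc_b))
    a, b = set(doc_a.split()), set(doc_b.split())
    overlap = len(a & b) / max(1, min(len(a), len(b)))
    return (ratio, overlap)

def knn_is_parallel(pair_feats, training, k=3):
    """Majority vote among the k nearest labelled pairs (squared
    Euclidean distance in feature space)."""
    dist = lambda p, q: sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    nearest = sorted(training, key=lambda t: dist(t[0], pair_feats))[:k]
    votes = sum(label for _, label in nearest)
    return votes > k // 2

# Labelled training pairs: feature vector -> 1 (parallel) / 0 (not).
training = [((0.95, 0.8), 1), ((0.90, 0.7), 1), ((0.98, 0.9), 1),
            ((0.40, 0.1), 0), ((0.60, 0.0), 0), ((0.30, 0.2), 0)]

candidate = features("report 2004 trade figures GDP 7.2",
                     "2004 GDP 7.2 贸易 数据 报告")
print(knn_is_parallel(candidate, training))
```

Classifying candidate pairs this way lets a crawler keep only page pairs whose feature profile sits near known parallel examples, which is the filtering role the abstract gives the classifier.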

    Combining Multiple Strategies for Effective Monolingual and Cross-Language Retrieval
