
    Extended Parallel Corpus for Amharic-English Machine Translation

    This paper describes the acquisition, preprocessing, segmentation, and alignment of an Amharic-English parallel corpus, a resource for machine translation of Amharic, an under-resourced language. The corpus is larger than previously compiled corpora and is released for research purposes. We trained neural machine translation and phrase-based statistical machine translation models on the corpus; in automatic evaluation, the neural models outperform the phrase-based statistical models.
    Comment: Accepted to the 2nd AfricanNLP workshop at EACL 2021
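
    As an illustration of the preprocessing stage, the sketch below shows one common parallel-corpus cleaning step: filtering sentence pairs by length and length ratio. The file names and thresholds are assumptions for illustration, not details taken from the paper.

    ```python
    # Minimal sketch of length-ratio filtering for a parallel corpus.
    # "corpus.am" / "corpus.en" are hypothetical file names for the
    # aligned Amharic and English sides; thresholds are illustrative.
    def filter_parallel(src_path, tgt_path, max_ratio=3.0, max_len=100):
        """Yield (src, tgt) pairs whose token lengths look plausible."""
        with open(src_path, encoding="utf-8") as fs, \
             open(tgt_path, encoding="utf-8") as ft:
            for src, tgt in zip(fs, ft):
                s, t = src.split(), tgt.split()
                if not s or not t or len(s) > max_len or len(t) > max_len:
                    continue  # drop empty or overlong pairs
                if max(len(s), len(t)) / min(len(s), len(t)) <= max_ratio:
                    yield src.strip(), tgt.strip()

    if __name__ == "__main__":
        for am, en in filter_parallel("corpus.am", "corpus.en"):
            print(am, "|||", en)
    ```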

    An efficient approach for Interactive Sequential Pattern Recognition

    Interactive Pattern Recognition (IPR) is an emerging framework in which the user actively participates in the recognition process by giving the system feedback whenever an error is detected. Although this framework is expected to reduce the number of errors to correct, it may increase the time required to complete the task, since the machine must recompute its proposal after each interaction. Fast computation is therefore required to make the interactive system profitable and user-friendly. This work presents an efficient approach to IPR tasks in which the data is sequential in nature. Our approach requires some computation at the very beginning of the task but then achieves linear complexity after user corrections. We also show how these tasks can be carried out effectively when the solution space is defined by a regular language; this proves to be the most relevant factor in improving the efficiency of the approach. In several experiments comparing our proposal against a classical search, results show a reduction in time in every case, efficiently solving some complex IPR tasks.

    This work was partially supported by the Spanish Ministerio de Educación, Cultura y Deporte through an FPU fellowship (AP2012-0939) and the Spanish Ministerio de Economía y Competitividad through project TIMuL (No. TIN2013-48152-C2-1-R, supported by EU FEDER funds).
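
    The key efficiency idea is that once the solution space is a regular language, the system can recompute its proposal after a correction by replaying the validated prefix through the automaton and extending it. The sketch below is a toy illustration of that idea, not the paper's algorithm: the DFA, accepting states, and per-symbol scores are all invented for illustration.

    ```python
    # Toy sketch (not the paper's algorithm): completing a user-validated
    # prefix inside a solution space defined by a regular language.
    # The DFA below accepts a+b; transitions and scores are hypothetical.
    DFA = {0: {"a": 1}, 1: {"a": 1, "b": 2}, 2: {}}
    ACCEPT = {2}
    SCORE = {"a": 0.4, "b": 0.6}   # hypothetical per-symbol model scores

    def complete(prefix, max_len=10):
        """Greedily extend a validated prefix to an accepted string."""
        state = 0
        for sym in prefix:               # replay the corrected prefix: O(len)
            state = DFA[state][sym]
        out = list(prefix)
        while state not in ACCEPT and len(out) < max_len:
            sym = max(DFA[state], key=SCORE.get)  # best symbol the language allows
            out.append(sym)
            state = DFA[state][sym]
        return "".join(out)

    print(complete("aa"))  # new proposal after the user validated "aa" -> "aab"
    ```

    Replaying the prefix costs time linear in its length, which matches the post-correction behaviour described above.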

    Detail and contrast enhancement in images using dithering and fusion

    This thesis focuses on two applications of wavelet transforms for image enhancement: image fusion and image dithering. First, to improve the quality of a fused image, a transform-domain image fusion technique is proposed as part of this research. The proposed fusion technique is also extended to reduce the temporal redundancy associated with processing. Experimental results show that the proposed methods outperform existing ones in enhancing image contrast, capturing more image detail, and processing time. Second, of all present image dithering methods, error diffusion-based dithering is the most widely used and explored. Despite its great success, error diffusion has been lacking in image enhancement because of the softening effects it causes. Wavelet-based dithering was introduced to compensate for these softening effects. Although it removes them well, being based on the discrete wavelet transform it suffers from poor directionality and a lack of shift invariance, properties needed to make the resulting images look sharp and crisp. Hence, a new method, complex wavelet-based dithering, is introduced as part of this research to compensate for the softening effects. Images processed by the proposed method emphasise detail more and exhibit better contrast characteristics than those of existing methods.
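
    For context, the sketch below shows a classic single-level wavelet fusion rule: average the approximation coefficients and keep the stronger detail coefficients. It illustrates the general transform-domain idea only; it is not the thesis's method, and the pywt/numpy usage and the db2 wavelet are assumptions.

    ```python
    # Classic wavelet-domain fusion sketch (not the thesis's exact method).
    import numpy as np
    import pywt

    def fuse(img_a, img_b, wavelet="db2"):
        """Fuse two registered, same-sized grayscale images in the wavelet domain."""
        cA1, (cH1, cV1, cD1) = pywt.dwt2(img_a, wavelet)
        cA2, (cH2, cV2, cD2) = pywt.dwt2(img_b, wavelet)

        def keep_stronger(x, y):
            # keep the detail coefficient with the larger magnitude
            return np.where(np.abs(x) > np.abs(y), x, y)

        fused = ((cA1 + cA2) / 2.0,                 # smooth content: average
                 (keep_stronger(cH1, cH2),
                  keep_stronger(cV1, cV2),
                  keep_stronger(cD1, cD2)))
        return pywt.idwt2(fused, wavelet)

    a = np.random.rand(64, 64)   # stand-ins for two registered source images
    b = np.random.rand(64, 64)
    print(fuse(a, b).shape)
    ```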

    A large vocabulary online handwriting recognition system for Turkish

    Handwriting recognition in general, and online handwriting recognition in particular, has been an active research area for several decades. Most of the research has focused on English and, more recently, on other scripts such as Arabic and Chinese. Research on recognition of Turkish text has been lacking, and this work fills that gap with a first state-of-the-art recognizer for Turkish. It contains the design and implementation details of a complete system for recognizing Turkish isolated words. Based on Hidden Markov Models, the system comprises pre-processing, feature extraction, optical modeling, and language modeling modules. It first considers the recognition of unconstrained handwriting with a limited vocabulary and then evolves into a large-vocabulary system. Turkish script has many similarities with other Latin scripts, such as English, which makes it possible to adapt strategies that work for them; however, some issues particular to Turkish must be considered separately. Two challenging issues in recognizing Turkish text are delayed strokes, which introduce an extra source of variation in the sequence order of the handwritten input, and the high out-of-vocabulary (OOV) rate of Turkish when words are used as vocabulary units in decoding. This work examines these problems and alternative solutions in depth and proposes solutions suited to Turkish script. For delayed-stroke handling, a clear definition of delayed strokes is first developed, and several handling methods based on that definition are evaluated extensively on the UNIPEN and Turkish datasets. The best results are obtained by removing all delayed strokes, with recognition accuracy increases of up to 2.13 and 2.03 percentage points over the respective English and Turkish baselines. Overall system performance is 86.1% with a 1,000-word lexicon and 83.0% with a 3,500-word lexicon on the UNIPEN dataset, and 91.7% on the Turkish dataset. To address the high OOV rate, alternative decoding vocabularies are designed from grammatical sub-lexical units, and statistical bi-gram and tri-gram language models are applied during decoding. The best performance, 67.9%, is obtained with a large stem-ending vocabulary expanded with a bi-gram model on the Turkish dataset; this is superior to the accuracy of a word-based vocabulary (63.8%) at the same 95% coverage on the BOUN Web Corpus.
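
    As a worked illustration of the language-modelling component, the sketch below scores a sequence of stem-ending units with an add-one-smoothed bi-gram model. The toy morpheme sequences ("ev" + "de", etc.) and the smoothing scheme are invented for illustration and are not taken from the thesis.

    ```python
    # Toy bi-gram language model over stem-ending units (illustrative only).
    from collections import Counter
    from math import log

    # Hypothetical training data: sentences as sequences of stem-ending units.
    corpus = [["<s>", "ev", "de", "</s>"],
              ["<s>", "ev", "e", "</s>"],
              ["<s>", "ev", "de", "</s>"]]

    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        unigrams.update(sent[:-1])           # context counts
        bigrams.update(zip(sent, sent[1:]))  # bi-gram counts
    V = len(set(tok for sent in corpus for tok in sent))

    def logp(seq):
        """Add-one-smoothed bi-gram log-probability of a unit sequence."""
        return sum(log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
                   for a, b in zip(seq, seq[1:]))

    print(logp(["<s>", "ev", "de", "</s>"]))
    ```

    In decoding, such scores rank candidate stem-ending sequences, which is how a sub-lexical vocabulary can cover words the word-based lexicon misses.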