838 research outputs found

    Extraction of Dynamic Trajectory on Multi-Stroke Static Handwriting Images Using Loop Analysis and Skeletal Graph Model

    Get PDF
    The recovery of handwriting's dynamic stroke is an effective method to help improve the accuracy of any handwriting authentication or verification system. The recovered trajectory can be considered a dynamic feature of any static handwritten image. Capitalising on this temporal information can significantly increase the accuracy of the verification phase. Extraction of dynamic features from static handwriting remains a challenge due to the lack of temporal information compared with online methods. Traditionally, there have been two typical approaches to recovering the handwriting's stroke. The first approach is based on the script's skeleton. The skeletonisation method is highly computationally efficient, but it often produces noisy artifacts and mismatches in the resulting skeleton. The second approach deals with the handwriting's contour, crossing areas and overlaps using parametric representations of lines and stroke thickness. This method can avoid the artifacts, but it requires complicated mathematical models and may lead to computational explosion. Our paper is based on the script's extracted skeleton and provides an approach to processing static handwriting objects, including edges, vertices and loops, as important aspects of any handwritten image. It also analyses and classifies loop types and natural human writing behaviour to improve the global reconstruction of stroke order. A detailed tracing algorithm for global stroke reconstruction is then presented. The experimental results reveal the superiority of our method compared with existing ones.
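
    As a hedged illustration of the skeleton-based idea summarised above (not the authors' algorithm), the sketch below builds a skeletal graph from a binarised handwriting image and walks it from an endpoint; the helper names skeletal_graph and trace_stroke, and the use of scikit-image and NetworkX, are assumptions made for the example.

```python
# Illustrative sketch only: skeletonise a binary handwriting image, build a
# pixel-adjacency graph, and produce one plausible visiting order of the stroke.
import networkx as nx
import numpy as np
from skimage.morphology import skeletonize

def skeletal_graph(binary_img: np.ndarray) -> nx.Graph:
    """Nodes are skeleton pixels; edges connect 8-neighbouring pixels."""
    skel = skeletonize(binary_img > 0)
    ys, xs = np.nonzero(skel)
    pixels = set(zip(ys.tolist(), xs.tolist()))
    graph = nx.Graph()
    for y, x in pixels:
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                if (dy or dx) and (y + dy, x + dx) in pixels:
                    graph.add_edge((y, x), (y + dy, x + dx))
    return graph

def trace_stroke(graph: nx.Graph) -> list:
    """Greedy depth-first visiting order, started at a degree-1 endpoint if one
    exists; loop regions show up as vertices with degree greater than two."""
    endpoints = [n for n in graph if graph.degree(n) == 1]
    start = endpoints[0] if endpoints else next(iter(graph))
    return list(nx.dfs_preorder_nodes(graph, source=start))
```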

    Advances in Image Processing, Analysis and Recognition Technology

    Get PDF
    For many decades, researchers have been trying to make computers' analysis of images as effective as human vision. For this purpose, many algorithms and systems have previously been created. The whole process covers various stages, including image processing, representation and recognition. The results of this work can be applied to many computer-assisted areas of everyday life. They improve particular activities and provide handy tools, which are sometimes only for entertainment, but quite often significantly increase our safety. In fact, the range of practical applications of image processing algorithms is particularly wide. Moreover, the rapid growth in computing power and efficiency has allowed for the development of more sophisticated and effective algorithms and tools. Although significant progress has been made so far, many issues still remain, resulting in the need for the development of novel approaches.

    Advanced document data extraction techniques to improve supply chain performance

    Get PDF
    In this thesis, a novel machine learning technique to extract text-based information from scanned images has been developed. This information extraction is performed in the context of scanned invoices and bills used in financial transactions. These financial transactions contain a considerable amount of data that must be extracted, refined, and stored digitally before it can be used for analysis. Converting this data into a digital format is often a time-consuming process. Automation and data optimisation show promise as methods for reducing the time required and the cost of Supply Chain Management (SCM) processes, especially Supplier Invoice Management (SIM), Financial Supply Chain Management (FSCM) and Supply Chain procurement processes. This thesis uses a cross-disciplinary approach involving Computer Science and Operational Management to explore the benefit of automated invoice data extraction in business and its impact on SCM. The study adopts a multimethod approach based on empirical research, surveys, and interviews performed on selected companies. The expert system developed in this thesis focuses on two distinct areas of research: Text/Object Detection and Text Extraction. For Text/Object Detection, the Faster R-CNN model was analysed. While this model yields outstanding results in terms of object detection, it is limited by poor performance when image quality is low. The Generative Adversarial Network (GAN) model is proposed in response to this limitation. The GAN model is a generator network that is implemented with the help of the Faster R-CNN model and a discriminator that relies on PatchGAN. The output of the GAN model is text data with bounding boxes. For text extraction from the bounding boxes, a novel data extraction framework was designed, consisting of various processes including XML processing in the case of an existing OCR engine, bounding box pre-processing, text clean-up, OCR error correction, spell checking, type checking, pattern-based matching, and finally, a learning mechanism for automating future data extraction. Fields that the system extracts successfully are provided in key-value format. The efficiency of the proposed system was validated using existing datasets such as SROIE and VATI. Real-time data was validated using invoices that were collected by two companies that provide invoice automation services in various countries. Currently, these scanned invoices are sent to an OCR system such as OmniPage, Tesseract, or ABBYY FRE to extract text blocks and later, a rule-based engine is used to extract relevant data. While the system's methodology is robust, the companies surveyed were not satisfied with its accuracy. Thus, they sought out new, optimized solutions. To confirm the results, the engines were used to return XML-based files with text and metadata identified. The output XML data was then fed into this new system for information extraction. This system uses the existing OCR engine and a novel, self-adaptive, learning-based OCR engine. This new engine is based on the GAN model for better text identification. Experiments were conducted on various invoice formats to further test and refine its extraction capabilities. For cost optimisation and the analysis of spend classification, additional data were provided by another company in London with expertise in reducing their clients' procurement costs. This data was fed into our system to get a deeper level of spend classification and categorisation.
This helped the company to reduce its reliance on human effort and allowed for greater efficiency in comparison with performing similar tasks manually using Excel sheets and Business Intelligence (BI) tools. The intention behind the development of this novel methodology was twofold: first, to test and develop a novel solution that does not depend on any specific OCR technology; second, to increase the information extraction accuracy over that of existing methodologies. Finally, the thesis evaluates the real-world need for the system and the impact it would have on SCM. This newly developed method is generic and can extract text from any given invoice, making it a valuable tool for optimizing SCM. In addition, the system uses a template-matching approach to ensure the quality of the extracted information.
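
As a rough sketch of the detection-then-extraction pipeline described above (not the thesis implementation), the example below crops candidate regions with a torchvision Faster R-CNN, OCRs each crop with pytesseract, and applies pattern-based matching to produce key-value fields; the field patterns, the function name extract_fields, and the use of stock COCO weights in place of a model fine-tuned on invoices are all assumptions.

```python
# Hedged example: region detection + OCR + pattern-based key-value extraction.
# A detector fine-tuned on invoice text blocks is assumed; stock weights are a stand-in.
import re
import pytesseract
import torch
import torchvision
from PIL import Image
from torchvision.transforms.functional import to_tensor

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

FIELD_PATTERNS = {  # hypothetical invoice fields
    "invoice_number": re.compile(r"invoice\s*(?:no|number)[:#]?\s*(\S+)", re.I),
    "total": re.compile(r"total\s*:?\s*([\d.,]+)", re.I),
}

def extract_fields(image_path: str) -> dict:
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        boxes = detector([to_tensor(image)])[0]["boxes"]
    fields = {}
    for x0, y0, x1, y1 in boxes.tolist():
        # OCR the detected region, then try each field pattern on the raw text.
        crop = image.crop((int(x0), int(y0), int(x1), int(y1)))
        text = pytesseract.image_to_string(crop)
        for name, pattern in FIELD_PATTERNS.items():
            match = pattern.search(text)
            if match and name not in fields:
                fields[name] = match.group(1)  # key-value output
    return fields
```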

    A think-aloud protocols investigation of lexico-semantic problems and problem-solving strategies among trainee English-Arabic translators.

    Get PDF
    SIGLE. Available from British Library Document Supply Centre, DSC:DXN039294 / BLDSC - British Library Document Supply Centre. GB, United Kingdom

    Natural Language Processing in Electronic Health Records in Relation to Healthcare Decision-making: A Systematic Review

    Full text link
    Background: Natural Language Processing (NLP) is widely used to extract clinical insights from Electronic Health Records (EHRs). However, the lack of annotated data, automated tools, and other challenges hinder the full utilisation of NLP for EHRs. Various Machine Learning (ML), Deep Learning (DL) and NLP techniques are studied and compared to understand the limitations and opportunities in this space comprehensively. Methodology: After screening 261 articles from 11 databases, we included 127 papers for full-text review, covering seven categories of articles: 1) medical note classification, 2) clinical entity recognition, 3) text summarisation, 4) deep learning (DL) and transfer learning architecture, 5) information extraction, 6) medical language translation and 7) other NLP applications. This study follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results and Discussion: EHRs were the most commonly used data type among the selected articles, and the datasets were primarily unstructured. Various ML and DL methods were used, with prediction or classification being the most common application of ML or DL. The most common use cases were International Classification of Diseases, Ninth Revision (ICD-9) classification, clinical note analysis, and named entity recognition (NER) for clinical descriptions and research on psychiatric disorders. Conclusion: We find that the adopted ML models were not adequately assessed. In addition, the data imbalance problem is important, and techniques to address this underlying problem still need to be found. Future studies should address the key limitations of existing work, primarily in identifying lupus nephritis, suicide attempts, perinatal self-harm, and ICD-9 classification.
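
    To make one of the reviewed use cases concrete, a minimal clinical NER sketch using the Hugging Face transformers pipeline is shown below; the model identifier is a placeholder (not a checkpoint evaluated in this review) and the example note is invented.

```python
# Hedged sketch of named entity recognition over a free-text clinical note.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-org/clinical-ner-model",  # placeholder; use a validated clinical checkpoint
    aggregation_strategy="simple",
)

note = "Patient denies chest pain; started lisinopril 10 mg daily for hypertension."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```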

    A translation robot for each translator? : a comparative study of manual translation and post-editing of machine translations: process, quality and translator attitude

    Get PDF
    To keep up with the growing need for translation in today's globalised society, post-editing of machine translation is increasingly being used as an alternative to regular human translation. While presumably faster than human translation, it is still unclear whether the quality of a post-edited text is comparable to the quality of a human translation, especially for general text types. In addition, there is a lack of understanding of the post-editing process, the effort involved, and the attitude of translators towards it. This dissertation contains a comparative analysis of post-editing and human translation by students and professional translators for general text types from English into Dutch. We study process, product, and translators' attitude in detail. We first conducted two pretests with student translators to try out possible experimental setups and to develop a translation quality assessment approach suitable for a fine-grained comparative analysis of machine-translated texts, post-edited texts, and human translations. For the main experiment, we examined students and professional translators, using a combination of keystroke logging tools, eye tracking, and surveys. We used both qualitative analyses and advanced statistical analyses (mixed effects models), allowing for a multifaceted analysis. For the process analysis, we looked at translation speed, cognitive processing by means of eye fixations, the usage of external resources and its impact on overall time. For the product analysis, we looked at overall quality, frequent error types, and the impact of using external resources on quality. The attitude analysis contained questions about perceived usefulness, perceived speed, perceived quality of machine translation and post-editing, and the translation method that was perceived as least tiring. One survey was conducted before the experiment and the other after, so we could detect changes in attitude after participation. In two more detailed analyses, we studied the impact of machine translation quality on various types of post-editing effort indicators, and on the post-editing of multi-word units. We found that post-editing is faster than human translation, and that both translation methods lead to products of comparable overall quality. The more detailed error analysis showed that post-editing leads to somewhat better results regarding adequacy, and human translation leads to better results regarding acceptability. The most common errors for both translation methods are meaning shifts, logical problems, and wrong collocations. Fixation data indicated that post-editing was cognitively less demanding than human translation, and that more attention was devoted to the target text than to the source text. We found that fewer resources are consulted during post-editing than during human translation, although the overall time spent in external resources was comparable. The most frequently used external resources were Google Search, concordancers, and dictionaries. Spending more time in external resources, however, did not lead to an increase in quality. Translators indicated that they found machine translation useful, but they preferred human translation and found it more rewarding. Perceptions about speed and quality were mixed. Most participants believed post-editing to be at least as fast and as good as human translation, but hardly ever better.
We further discovered that different types of post-editing effort indicators were impacted by different types of machine translation errors, with coherence issues, meaning shifts, and grammatical and structural issues having the greatest effect. HTER, though commonly used, does not correlate well with more process-oriented post-editing effort indicators. Regarding the post-editing of multi-word units, we suggest 'contrast with the target language' as a useful new way of classifying multi-word units, as contrastive multi-word units were much harder to post-edit. In addition, we noticed that research strategies for post-editing multi-word units lack efficiency. Consulting external resources did lead to an increased quality of post-edited multi-word units, but a lot of time was spent in external resources when this was not necessary. Interestingly, the differences between human translation and post-editing usually outweighed the differences between students and professionals. Students did cognitively process texts differently, having longer fixation durations on the source text during human translation, and more fixations on the target text during post-editing, whereas professional translators' fixation behaviour remained constant. For the usage of external resources, only the time spent in dictionaries was higher for students than for professional translators; the usage of other resources was comparable. Overall quality was comparable for students and professionals, but professionals made fewer adequacy errors. Deletions were more noticeable for students than for professional translators in both methods of translation, and word sense issues were more noticeable for professional translators than for students when translating from scratch. Surprisingly, professional translators were often more positive about post-editing than students, believing they could produce products of comparable quality with both methods of translation. Students in particular struggled with the cognitive processing of meaning shifts, and they spent more time in pauses than professional translators. Some of the key contributions of this dissertation to the field of translation studies are the fact that we compared students and professional translators, developed a fine-grained translation quality assessment approach, and used a combination of state-of-the-art logging tools and advanced statistical methods. The effects of experience in our study were limited, and we suggest looking at specialisation and translator confidence in future work. Our guidelines for translation quality assessment can be found in the appendix, and contain practical instructions for use with brat, an open-source annotation tool. The experiment described in this dissertation is also the first to integrate Inputlog and CASMACAT, making it possible to include information on external resources in the CASMACAT logging files, which can be added to the CRITT Translation Process Research Database. Moving beyond the methodological contributions, our findings can be integrated into translation teaching, machine translation system development, and translation tool development. Translators need hands-on post-editing experience to get acquainted with common machine translation errors, and students in particular need to be taught successful strategies to spot and solve adequacy issues. Post-editors would greatly benefit from machine translation systems that made fewer coherence errors, meaning shift errors, and grammatical and structural errors.
If visual cues are included in a translation tool (e.g., for potentially problematic passages or polysemous words), these should be added to the target text. Tools could further benefit from integration with commonly used external resources, such as dictionaries. In the future, we wish to study the translation and post-editing process in even more detail, taking pause behaviour and regressions into account, as well as look at the passages participants perceived as the most difficult to translate and post-edit. We further wish to gain an even better understanding of the usage of external resources, by looking at the types of queries and by linking queries back to source and target text words. While our findings are limited to the post-editing and human translation of general text types from English into Dutch, we believe our methodology can be applied to different settings, with different language pairs. It is only by studying both processes in many different situations and by comparing findings that we will be able to develop tools and create courses that better suit translators' needs. This, in turn, will make for better, and happier, future generations of translators.
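
As a side note on the HTER measure discussed above, the sketch below computes a simplified HTER-style score: the word-level edit distance between raw machine translation output and its post-edited version, normalised by the post-edit length. Unlike full TER it ignores block shifts, and the example sentences are invented.

```python
# Simplified HTER-style score (token-level edit distance / post-edit length).
# Full TER additionally counts block shifts, which this sketch omits.
def hter(mt_output: str, post_edit: str) -> float:
    hyp, ref = mt_output.split(), post_edit.split()
    dist = list(range(len(ref) + 1))  # dynamic-programming edit-distance row
    for i, h in enumerate(hyp, start=1):
        prev, dist[0] = dist[0], i
        for j, r in enumerate(ref, start=1):
            prev, dist[j] = dist[j], min(
                dist[j] + 1,        # delete a hypothesis token
                dist[j - 1] + 1,    # insert a reference token
                prev + (h != r),    # substitute (free if tokens match)
            )
    return dist[-1] / max(len(ref), 1)

print(hter("the house green is big", "the green house is big"))  # 0.4
```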

    Doctor of Philosophy

    Get PDF
    This dissertation reports on practitioner inquiry into a year-long curriculum in middle school Spanish II built around Latina/o cultural topics with social justice implications. A comprehensible input approach supported both the development of a core vocabulary in Spanish and discussion of these issues. Translanguaging (accessing linguistic resources across supposedly separate languages) allowed teacher and students to heighten complexity and comprehensibility. The study used a critical sociocultural theory lens and mixed methods to perform case studies of two classes, with a focus on three students of varying positionalities. I asked (1) how the two classes interpreted and applied the central social justice concept used in the course (bilocal culture-crossing, BCC), (2) how much their Spanish improved, (3) how motivating and satisfying they found the experience, and (4) how case study students' positionalities shaped their experiences of the curriculum. The first and last research questions yielded six findings: First, students joined me in translanguaging in order to make themselves understood around the complexity of power relations. Second, students actively used discourses of taking others' perspectives. Third, students increasingly recognized and spoke back to issues of unfairness. Fourth, students also began to use the term BCC as a way to discipline and admonish others. Fifth, student resistance centered on the term I had coined and, in one class more than the other, on perceived one-sidedness in some of the videos we discussed. Sixth, students participated in our BCC discourse community through processes of self-recognition work. For case study students this meant using their positionalities as lenses with which to understand BCC. The one case study student of color appropriated BCC in a way that constituted Walter Mignolo's concept of border thinking. In response to the other research questions, students' Spanish proficiency grew sufficiently to meet state and district objectives, with Fourth Period showing more growth and Sixth Period attaining higher proficiency; and Fourth Period was less satisfied with the experience, citing issues with relevance, comprehensibility, and clarity of language goals. Implications for theory, practice and future research are discussed, including the significance of the study for Latinas/os in world language classrooms.

    Emerging technologies for learning report (volume 3)

    Get PDF