Metrics for Complete Evaluation of OCR Performance
In this paper, we study metrics for evaluating OCR performance both in terms of physical segmentation and in terms of textual content recognition. These metrics rely on the formats of the OCR output (the hypothesis) and of the reference (also called the ground truth). Two evaluation criteria are considered: the quality of segmentation and the character recognition rate. Three pairs of input formats are selected among two types of inputs: text only (text) and text with spatial information (xml). These reference-to-hypothesis pairs are: 1) text-to-text, 2) xml-to-xml and 3) text-to-xml. For the text-to-text pair, we selected the RETAS method to perform experiments and show its limits. Regarding text-to-xml, a new method based on unique word anchors is proposed to solve the problem of aligning texts carrying different information. We define the ZoneMapAltCnt metric for the xml-to-xml approach and show that it offers the most reliable and complete evaluation compared to the other two. Open-source OCR engines, namely Tesseract and OCRopus, are selected to perform the experiments. The datasets used are a collection of documents from the ISTEX document database and from the French newspaper "Le Nouvel Observateur", as well as invoices and administrative documents gathered from different collaborations.
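The character recognition rate mentioned above is conventionally derived from the edit distance between the hypothesis and reference texts. As a minimal illustration of that underlying computation (not the paper's RETAS or ZoneMapAltCnt implementations, which add alignment and segmentation handling on top of this idea), a character error rate sketch in Python:

# Minimal character error rate (CER) sketch: edit distance between
# the OCR hypothesis and the reference, normalised by reference length.
# Illustrative only; it assumes the two texts are already aligned.

def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    return edit_distance(ref, hyp) / max(len(ref), 1)

print(cer("character", "charactor"))  # 1 substitution -> ~0.111

Note that dividing by the reference length means the error rate can exceed 1 when the hypothesis contains many spurious insertions, which is one reason a complete evaluation also reports segmentation quality.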
Optical Character Recognition Using Morphological Attributes
This dissertation addresses a fundamental computational strategy in image processing: recognising handwritten English characters using traditional parallel computers. Image acquisition and processing is becoming a thriving industry because of the wide availability of fax machines, video digitizers, flat-bed scanners, hand scanners, color scanners, and other image input devices that are now accessible to everyone. Optical Character Recognition (OCR) research increased as the technology for a robust OCR system became realistic. There is no commercially effective recognition system able to translate raw digital images of handwritten text into pure ASCII. The reason is that a digital image comprises a vast number of pixels, and the traditional approach of processing this huge collection of pixel information is slow and cumbersome. In this dissertation we developed an approach and theory for a fast, robust OCR system for images of handwritten characters, using the morphological attribute features expected of the alphabet character set. By extracting specific morphological attributes from the scanned image, the dynamic OCR system is able to generalize and approximate similar images. This generalization is achieved using fuzzy logic and neural networks. Since the main requirement for a commercially effective OCR is a fast system with a high recognition rate, the approach taken in this research is to shift the recognition computation into the system's architecture and its learning phase. The recognition process then consists mainly of simple integer computation, a preferred computation on digital computers. In essence, the system maintains the attribute envelope boundary within which each English character should fall. This boundary is based on the extreme attribute values extracted from images introduced to the system beforehand. The theory was implemented both on a SIMD-MC² and a SISD machine. The resultant system proved to be a fast, robust, dynamic system, provided that suitable learning has taken place. The principal contributions of this dissertation are: (1) improving existing thinning algorithms for image preprocessing; (2) development of an on-line cluster partitioning procedure for region-oriented segmentation; (3) expansion of a fuzzy knowledge base theory to maintain morphological attributes on digital computers; and (4) a dynamic fuzzy learning/recognition technique.
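The "attribute envelope" idea described above can be pictured with a small sketch: for each character class, keep per-attribute minimum and maximum bounds learned from training images, then recognise by envelope membership. The class name, data layout and membership rule below are illustrative assumptions, not the dissertation's actual implementation:

# Hypothetical sketch of an attribute-envelope classifier: each class
# keeps per-attribute (min, max) bounds taken from its training images;
# a query is matched to every class whose envelope contains it.

from collections import defaultdict

class EnvelopeClassifier:
    def __init__(self):
        # label -> list of (min, max) bounds, one pair per attribute
        self.bounds = {}

    def fit(self, samples):
        """samples: iterable of (label, attribute_vector)."""
        grouped = defaultdict(list)
        for label, vec in samples:
            grouped[label].append(vec)
        for label, vecs in grouped.items():
            self.bounds[label] = [(min(col), max(col)) for col in zip(*vecs)]

    def predict(self, vec):
        """Return labels whose envelope contains every attribute of vec."""
        return [label for label, bounds in self.bounds.items()
                if all(lo <= v <= hi for v, (lo, hi) in zip(vec, bounds))]

clf = EnvelopeClassifier()
clf.fit([("A", [0.2, 0.9]), ("A", [0.3, 0.8]), ("B", [0.7, 0.1])])
print(clf.predict([0.25, 0.85]))  # ['A']

Since the recognition step is only a set of bound comparisons, it matches the dissertation's emphasis on cheap integer-style computation at recognition time.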
Making digital history: The impact of digitality on public participation and scholarly practices in historical research
This thesis investigates two key questions: first, how do two broad groups - academic, family and local historians, and the public - evaluate, use, and contribute to digital history resources? And consequently, what impact have digital technologies had on public participation and scholarly practices in historical research?
It analyses the impact of design on participant experiences and the reception of digital historiography, demonstrating the value of methods drawn from human-computer interaction, including heuristic evaluation, trace ethnography and semi-structured interviews. This thesis also investigates the relationship between heritage crowdsourcing projects (which ask the public to help with meaningful, inherently rewarding tasks that contribute to a shared, significant goal or research interest related to cultural heritage collections or knowledge) and the development of historical skills and interests. It situates crowdsourcing and citizen history within the broader field of participatory digital history and then focuses on the impact of digitality on the research practices of faculty and community historians.
Chapter 1 provides an overview of over 400 digital history projects aimed at engaging the public or collecting, creating or enhancing records about historical materials for scholarly and general audiences. Chapter 2 discusses design factors that may influence the success of crowdsourcing projects. Following this, Chapter 3 explores the ways in which some crowdsourcing projects encourage deeper engagement with history or science, and the role of communities of practice in citizen history. Chapter 4 shifts the focus from public participation to scholarly practices in historical research, presenting the results of interviews conducted with 29 faculty and community historians. Finally, the Conclusion draws together the threads that link public participation and scholarly practices, teasing out the ways in which the practices of discovering, gathering, creating and sharing historical materials and knowledge have been affected by digital methods, tools and resources.
Crowdsourcing in cultural heritage
The aims of this study, within the framework of the Europeana Common Culture project, are to: 1. Determine current and planned approaches and practices within the Europeana aggregation ecosystem in relation to crowdsourced metadata and content. 2. Investigate, as comprehensively as possible, past and existing DCH crowdsourcing initiatives across Europe, systematically describing their status and gaining a sound understanding of current practices. 3. Assess the feasibility, desirability and challenges faced in any effort to strengthen the pipeline from such initiatives to enable ingestion of their metadata or access to their content through Europeana. 4. Provide recommendations and guidelines for consideration by Europeana, aggregators and Cultural Heritage Institutions. 5. Support the creation of training materials for the Europeana ecosystem in terms of any agreed interaction with Europeana around crowdsourced assets, and deliver this by suitable means (e.g. webinars, Europeana Pro). The work carried out involved a 9-month programme (April-December 2020) consisting of desk research, three online questionnaire surveys (to national aggregators, thematic/domain aggregators and external crowdsourcing initiatives respectively), a series of interviews, and three consultative online events. The survey data are summarised in extensive annexes.
Neural Probabilistic Models for Melody Prediction, Sequence Labelling and Classification
Data-driven sequence models have long played a role in the analysis and generation of musical information. Such models are of interest in computational musicology, computer-aided music composition, and tools for music education, among other applications. This dissertation begins with an experiment to model sequences of musical pitch in melodies with a class of purely data-driven predictive models collectively known as Connectionist models. It was demonstrated that a set of six such models could perform on par with, or better than, state-of-the-art n-gram models previously evaluated in an identical setting. A new model known as the Recurrent Temporal Discriminative Restricted Boltzmann Machine (RTDRBM) was introduced in the process and found to outperform the rest of the models. A generalisation of this modelling task was also explored, which involved extending the set of musical features used as input by the models while still predicting pitch as before. The improvement in predictive performance which resulted from adding these new input features is encouraging for future work in this direction.
Based on the above success of the RTDRBM, its application was extended to a non-musical sequence labelling task, namely Optical Character Recognition. This extension involved a modification to the model's original prediction algorithm as a result of relaxing an assumption specific to the melody modelling task. The generalised model was evaluated on a benchmark dataset and compared against a set of 8 baseline models, where it fared better than all of them. Furthermore, a theoretical extension to an existing model which was also employed in the above pitch prediction task - the Discriminative Restricted Boltzmann Machine (DRBM) - was proposed. This led to three new variants of the DRBM (which originally contained Logistic Sigmoid hidden layer activations), with Hyperbolic Tangent, Binomial and Rectified Linear hidden layer activations respectively. The first two of these have been evaluated here on the benchmark MNIST dataset and shown to perform on par with the original DRBM.
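For context on the DRBM variants mentioned above: in the standard DRBM of Larochelle and Bengio (2008), the class posterior p(y|x) sums a softplus term over the hidden units, and that per-unit term is what corresponds to the Logistic Sigmoid hidden activation. A minimal NumPy sketch of this posterior follows, with shapes and parameter names as illustrative assumptions; the thesis's Tanh, Binomial and Rectified Linear variants replace the softplus term with activation-specific counterparts:

# Minimal NumPy sketch of the discriminative RBM (DRBM) class posterior:
#   p(y|x) proportional to exp(d_y + sum_j softplus(c_j + U[j,y] + W[j]·x))
# The softplus per-hidden-unit term corresponds to logistic-sigmoid
# hidden units. Shapes and parameter names are illustrative assumptions.

import numpy as np

def drbm_posterior(x, W, c, U, d):
    """
    x: (n_visible,) input
    W: (n_hidden, n_visible), c: (n_hidden,) hidden biases
    U: (n_hidden, n_classes), d: (n_classes,) class biases
    Returns p(y|x) as an (n_classes,) vector.
    """
    pre = W @ x + c                      # (n_hidden,)
    # per-class scores: broadcast hidden pre-activations over classes;
    # np.logaddexp(0, z) computes softplus(z) stably
    scores = d + np.logaddexp(0.0, pre[:, None] + U).sum(axis=0)
    scores -= scores.max()               # numerical stability
    p = np.exp(scores)
    return p / p.sum()

rng = np.random.default_rng(0)
n_vis, n_hid, n_cls = 784, 16, 10       # e.g. MNIST-sized input
x = rng.random(n_vis)
W = rng.normal(scale=0.01, size=(n_hid, n_vis))
U = rng.normal(scale=0.01, size=(n_hid, n_cls))
print(drbm_posterior(x, W, rng.normal(size=n_hid), U, np.zeros(n_cls)))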
Policy-Gradient Algorithms for Partially Observable Markov Decision Processes
Partially observable Markov decision processes are interesting because of their ability to model most conceivable real-world learning problems, for example, robot navigation, driving a car, speech recognition, stock trading, and playing games. The downside of this generality is that exact algorithms are computationally intractable. Such computational complexity motivates approximate approaches. One such class of algorithms are the so-called policy-gradient methods from reinforcement learning. They seek to adjust the parameters of an agent in the direction that maximises the long-term average of a reward signal. Policy-gradient methods are attractive as a scalable approach for controlling partially observable Markov decision processes (POMDPs). In the most general case, POMDP policies require some form of internal state, or memory, in order to act optimally. Policy-gradient methods have shown promise for problems admitting memory-less policies but have been less successful when memory is required. This thesis develops several improved algorithms for learning policies with memory in an infinite-horizon setting: directly, when the dynamics of the world are known, and via Monte-Carlo methods otherwise. The algorithms simultaneously learn how to act and what to remember. …
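To make the policy-gradient idea above concrete: the agent's parameters are nudged along the score function (the gradient of the log policy) weighted by reward, and memory can be handled by letting the policy sample an internal state alongside each action. A toy REINFORCE-style sketch under those assumptions; the environment, shapes and per-step update are illustrative, not the thesis's algorithms, which use eligibility traces for the long-term average-reward case:

# Toy sketch of a memory-augmented stochastic policy for a POMDP:
# the agent samples both an action and its next internal-memory state,
# and follows the per-step score-function update
#   theta += alpha * r_t * grad log pi(a_t, m_{t+1} | o_t, m_t).

import numpy as np

rng = np.random.default_rng(1)
n_obs, n_mem, n_act = 4, 3, 2

# One softmax table per (observation, memory) pair, over the joint
# (action, next-memory) choice.
theta = np.zeros((n_obs, n_mem, n_act * n_mem))

def policy(o, m):
    logits = theta[o, m]
    p = np.exp(logits - logits.max())
    return p / p.sum()

def sample_step(o, m):
    p = policy(o, m)
    k = rng.choice(len(p), p=p)
    a, m_next = divmod(k, n_mem)
    grad = -p                 # grad of log softmax: one-hot(k) - p
    grad[k] += 1.0
    return a, m_next, grad

# Exercise the update on a stub environment with random observations.
alpha, m = 0.05, 0
for t in range(100):
    o = rng.integers(n_obs)
    a, m_next, grad = sample_step(o, m)
    r = float(a == o % n_act)            # toy immediate reward
    theta[o, m] += alpha * r * grad      # REINFORCE-style update
    m = m_next

The point of the joint (action, next-memory) sample is that a single gradient signal trains both what to do and what to remember, which mirrors the abstract's claim that the algorithms learn how to act and what to remember simultaneously.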