3,952 research outputs found

    An Arabic Optical Braille Recognition System

    No full text
    Technology has shown great promise in providing access to textual information for visually impaired people. Optical Braille Recognition (OBR) allows people with visual impairments to read volumes of typewritten documents with the help of flatbed scanners and OBR software. This project looks at developing a system that recognizes an image of embossed Arabic Braille and converts it to text, with the particular aim of building a fully functional Optical Arabic Braille Recognition system. The system has two main tasks: first, to recognize printed Braille cells, and second, to convert them to regular text. Converting Braille to text is not simply a one-to-one mapping, because one cell may represent one symbol (alphabet letter, digit, or special character), two or more symbols, or part of a symbol. Moreover, multiple cells may represent a single symbol.
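    The cell-to-text step described above can be illustrated with a minimal, hypothetical decoder: the dot patterns, the Arabic letter assignments, and the `decode_cells` helper below are illustrative assumptions, not the mapping used by the system in the abstract.

```python
# Minimal sketch of context-dependent Braille-cell decoding (hypothetical mapping).
# A cell is a 6-tuple of dot states (1 = raised dot).

NUMBER_SIGN = (0, 0, 1, 1, 1, 1)          # prefix cell that switches following cells to digits
LETTERS = {(1, 0, 0, 0, 0, 0): "ا",       # example Arabic letter assignments (illustrative only)
           (1, 1, 0, 0, 0, 0): "ب"}
DIGITS  = {(1, 0, 0, 0, 0, 0): "1",
           (1, 1, 0, 0, 0, 0): "2"}

def decode_cells(cells):
    """Convert a sequence of recognized cells to text; one cell is not always one symbol."""
    out, digit_mode = [], False
    for cell in cells:
        if cell == NUMBER_SIGN:            # a prefix cell changes the meaning of later cells
            digit_mode = True
            continue
        table = DIGITS if digit_mode else LETTERS
        out.append(table.get(cell, "?"))
        if cell not in DIGITS:             # leaving the digit sequence ends number mode
            digit_mode = False
    return "".join(out)

print(decode_cells([(1, 1, 0, 0, 0, 0), NUMBER_SIGN, (1, 0, 0, 0, 0, 0)]))
```

    Note how the same cell pattern yields a letter or a digit depending on the preceding number sign, which is why the mapping cannot be one-to-one.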

    IMPROVING THE EFFICIENCY OF TESSERACT OCR ENGINE

    Get PDF
    This project investigates the principles of optical character recognition used in the Tesseract OCR engine and techniques to improve its efficiency and runtime. Optical character recognition (OCR) has been used to convert printed text into editable text in various applications across a variety of devices such as scanners, computers, and tablets. Mobile devices are increasingly taking over tasks from computers, yet OCR remains a largely unconquered field on them, so the efficiency of the OCR system must be improved for it to run properly on mobile devices. This paper focuses on improving Tesseract OCR efficiency for the Hindi language on mobile devices, as there are not many applications for this and most of them are either not open source or not built for mobile. Improving Hindi text extraction will increase Tesseract's performance for mobile phone apps and in turn draw developers to contribute towards Hindi OCR. This paper presents a preprocessing technique applied to the Tesseract engine to improve character recognition while keeping the runtime low, so that the system runs as smoothly and efficiently on mobile devices (Android) as it does on larger machines.
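    The abstract does not name the exact preprocessing steps, so the sketch below stands in with common ones (grayscale conversion, light denoising, adaptive thresholding) before handing the image to Tesseract via pytesseract; the file name and parameter values are assumptions.

```python
# Sketch of a preprocessing pass before Tesseract recognition. The specific steps shown
# here are assumptions, not the paper's technique. Requires opencv-python, pytesseract,
# and the Hindi traineddata (hin) installed for Tesseract.
import cv2
import pytesseract

def preprocess(path):
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # drop color information
    gray = cv2.medianBlur(gray, 3)                     # light denoising
    binary = cv2.adaptiveThreshold(gray, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY, 31, 10)
    return binary

text = pytesseract.image_to_string(preprocess("hindi_page.png"), lang="hin")
print(text)
```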

    Visual pattern recognition using neural networks

    Get PDF
    Neural networks have been widely studied in a number of fields, such as neural architectures, neurobiology, statistics of neural networks, and pattern classification. In the field of pattern classification, neural network models are applied to numerous applications, for instance character recognition, speech recognition, and object recognition. Among these, character recognition is commonly used to illustrate the feature and classification characteristics of neural networks. In this dissertation, the theoretical foundations of artificial neural networks are first reviewed and existing neural models are studied. The Adaptive Resonance Theory (ART) model is improved to achieve more reasonable classification results. Experiments in applying the improved model to image enhancement and printed character recognition are discussed and analyzed. We also study the theoretical foundation of the Neocognitron in terms of feature extraction, convergence in training, and shift invariance. We investigate the use of multilayered perceptrons with recurrent connections as general-purpose modules for image operations in parallel architectures. The networks are trained to carry out classification rules in image transformation. The training patterns can be derived from user-defined transformations or from a pair consisting of a sample image and its target image when prior knowledge of the transformation is unavailable. Applications of our model include image smoothing, enhancement, edge detection, noise removal, morphological operations, image filtering, etc. With a number of stages stacked together, we are able to apply a series of operations to the image; that is, by providing various sets of training patterns, the system can adapt itself to the concatenated transformation. We also discuss and experiment with applying existing neural models, such as the multilayered perceptron, to realize morphological operations and other commonly used imaging operations. Some new neural architectures and training algorithms for the implementation of morphological operations are designed and analyzed. The algorithms are proven correct and efficient. The proposed morphological neural architectures are applied to construct the feature extraction module of a personal handwritten character recognition system. The system was trained and tested with scanned images of handwritten characters. The feasibility and efficiency are discussed along with the experimental results.
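    As a small illustration of the idea of training a perceptron-style network to realize a morphological operation, the sketch below fits a scikit-learn MLP to reproduce 3x3 binary dilation from patch/label pairs; it is not the dissertation's custom architecture or training algorithm.

```python
# Sketch: train a small multilayer perceptron to reproduce a morphological operation
# (3x3 binary dilation with a cross structuring element). Illustration with scikit-learn,
# not the architectures designed in the dissertation.
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier

# Enumerate all 512 binary 3x3 patches and label each with the dilation output for its
# center pixel: 1 if any pixel under the cross-shaped structuring element is set.
cross = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool)
patches = np.array(list(product([0, 1], repeat=9)))
labels = patches.reshape(-1, 3, 3)[:, cross].any(axis=1).astype(int)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(patches, labels)
print("training accuracy:", mlp.score(patches, labels))   # should be at or near 1.0
```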

    Comparison between Feature Based and Deep Learning Recognition Systems for Handwriting Arabic Numbers

    Get PDF
    Feature extraction from images is an essential part of a recognition system, and choosing appropriate features is critical to the classification process. However, there is no standard, widely accepted feature set that applies to all applications; features must be application dependent. In contrast, deep learning extracts features from an image without a human hand-coding the feature extraction process. This is very useful for building a classification model that can classify any type of image: once trained with enough labeled images, the model can be used in different recognition applications. This paper presents two techniques for building a recognition system for Arabic handwritten numbers. The feature-based method shows acceptable results; however, the deep learning method gives more accurate results, requires less study of how Arabic numbers are written, and needs no hand-coded feature extraction algorithms for the classification process. Keywords: Handwriting Recognition, Image Processing, Feature Extraction, Machine Learning, Deep Learning, Classification
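    The deep-learning side of the comparison can be sketched as a small convolutional network that learns its own features from raw digit images; the architecture, input size, and placeholder data below are assumptions for illustration, since the abstract does not specify them.

```python
# Minimal sketch of the deep-learning approach: a small CNN learns features directly
# from 28x28 grayscale digit images instead of relying on hand-coded descriptors.
# Requires tensorflow. Data below is a random placeholder, not a real dataset.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, activation="relu"),   # learned filters replace hand-coded features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),    # ten digit classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder arrays; in practice this would be a labeled Arabic handwritten digit set.
x = np.random.rand(64, 28, 28, 1).astype("float32")
y = np.random.randint(0, 10, size=64)
model.fit(x, y, epochs=1, verbose=0)
```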

    Improved Preprocessing Strategy under Different Obscure Weather Conditions for Augmenting Automatic License Plate Recognition

    Get PDF
    Automatic license plate recognition (ALPR) systems are widely used for applications including traffic control, law enforcement, and toll collection. However, the performance of ALPR systems is often compromised in challenging weather and lighting conditions. This research aims to improve the effectiveness of ALPR systems in foggy, low-light, and rainy conditions using a hybrid preprocessing methodology that combines the dark channel prior (DCP), non-local means denoising (NMD), and adaptive histogram equalization (AHE) in the CIELAB color space. The comparisons were implemented in Python and evaluated with SSIM and PSNR. The results show that this hybrid approach is not only robust across a variety of challenging weather and lighting conditions but also significantly improves the accuracy of existing ALPR systems.
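    Two of the three stages can be sketched directly with OpenCV: non-local means denoising followed by adaptive histogram equalization (CLAHE) applied to the L channel in CIELAB space, with SSIM/PSNR computed via scikit-image. The dark channel prior stage and the exact parameter values are not reproduced here and would need to follow the paper; the file name and settings below are assumptions.

```python
# Sketch of the denoising + CIELAB adaptive-equalization stages of the hybrid pipeline.
# Requires opencv-python and scikit-image.
import cv2
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def enhance(bgr):
    denoised = cv2.fastNlMeansDenoisingColored(bgr, None, 10, 10, 7, 21)
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab = cv2.merge((clahe.apply(l), a, b))          # equalize lightness channel only
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

degraded = cv2.imread("foggy_plate.jpg")
enhanced = enhance(degraded)
# Here the enhanced result is compared against the input; with a clean reference
# image available, that reference would be the first argument instead.
print("PSNR:", peak_signal_noise_ratio(degraded, enhanced))
print("SSIM:", structural_similarity(degraded, enhanced, channel_axis=2))
```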

    An Examination of Character Recognition on ID card using Template Matching Approach

    Get PDF
    Identification cards (ID cards) are the main reference for obtaining information about a citizen. Some business sectors require the information contained in the ID card to perform the registration process. In general, the registration process still uses a form filled in according to the data on the ID card, which is then converted into digital data by retyping the information. The purpose of this research is to create a character recognition system for the ID card, where the character recognition process comprises four stages: pre-processing, text-area extraction, segmentation, and recognition. The experiment includes tests of grayscale, binarization, and segmentation algorithms, as well as combinations of those algorithms. The text-area extractor showed satisfactory results in identifying text areas on the ID card, covering the entire region that contains text. In the segmentation stage, approximately 93% of characters were segmented correctly. Each segmented character is mapped to a template character using two algorithms whose grid divisions differ. Nevertheless, the recognition process applying the template matching method still needs further improvement.
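    The recognition stage can be illustrated with a minimal correlation-based template matcher: each segmented character is compared against a stored template set and takes the label of the best match. The grid-division variants from the paper are not reproduced, and the template file names and size below are hypothetical.

```python
# Minimal sketch of template-matching recognition for a segmented character image.
# Requires opencv-python; template files and size are assumptions.
import cv2

TEMPLATE_FILES = {"0": "tpl_0.png", "1": "tpl_1.png", "A": "tpl_A.png"}  # hypothetical set
SIZE = (32, 32)

def load_templates():
    return {label: cv2.resize(cv2.imread(path, cv2.IMREAD_GRAYSCALE), SIZE)
            for label, path in TEMPLATE_FILES.items()}

def recognize(char_img, templates):
    char_img = cv2.resize(char_img, SIZE)
    scores = {label: cv2.matchTemplate(char_img, tpl, cv2.TM_CCOEFF_NORMED).max()
              for label, tpl in templates.items()}
    return max(scores, key=scores.get)                  # label with highest similarity

templates = load_templates()
segment = cv2.imread("segmented_char.png", cv2.IMREAD_GRAYSCALE)
print("recognized as:", recognize(segment, templates))
```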

    Localization and recognition of the scoreboard in sports video based on SIFT point matching

    Get PDF
    In broadcast sports video, the scoreboard is attached at a fixed location and generally appears in all video frames to help viewers quickly understand the match's progression. Based on these observations, this paper presents a new localization and recognition method for scoreboard text in sports video. The method first matches Scale Invariant Feature Transform (SIFT) points between two frames extracted from a video clip using a modified matching technique, and then localizes the scoreboard by computing a robust estimate of the matched point cloud in a two-stage non-scoreboard filtering process based on domain rules. Next, enhancement operations are performed on the localized scoreboard and a multi-frame voting decision is used, both aimed at increasing the OCR rate. Experimental results demonstrate the effectiveness and efficiency of the proposed method.
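    The SIFT-matching step can be sketched with standard OpenCV tools: Lowe's ratio test stands in for the paper's modified matching technique, and a simple bounding box of the matched points stands in for its two-stage, domain-rule filtering. Frame file names are assumptions.

```python
# Sketch of cross-frame SIFT matching to find static scoreboard candidates.
# Requires opencv-python with SIFT available (OpenCV >= 4.4).
import cv2
import numpy as np

f1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("frame_0200.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(f1, None)
kp2, des2 = sift.detectAndCompute(f2, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)

good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:  # Lowe's ratio test
        good.append(pair[0])

# Points that match across temporally distant frames are candidates for the static scoreboard.
pts = np.float32([kp1[m.queryIdx].pt for m in good])
x, y, w, h = cv2.boundingRect(pts)
print("candidate scoreboard region:", (x, y, w, h))
```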