45 research outputs found

    Enhanced Characterness for Text Detection in the Wild

    Text spotting is an interesting research problem, as text may appear at any location and in various forms. Moreover, the ability to detect text opens the horizons for improving many advanced computer vision problems. In this paper, we propose a novel language-agnostic text detection method that utilizes edge-enhanced Maximally Stable Extremal Regions in natural scenes by defining strong characterness measures. We show that a simple combination of characterness cues helps in rejecting non-text regions. These regions are further fine-tuned to reject non-textual neighboring regions. Comprehensive evaluation of the proposed scheme shows that it provides comparable or better generalization performance relative to traditional methods for this task.
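    The cue-combination step described above can be sketched as follows. This is a minimal illustration, not the paper's formulation: the specific cues, the geometric-mean fusion rule, and the threshold are all illustrative assumptions.

```python
# Hedged sketch: fuse per-region "characterness" cue scores and reject
# non-text regions. Cue names, fusion rule, and threshold are assumptions.

def characterness(cues, weights=None):
    """Fuse normalized cue scores (each in [0, 1]) into one score.

    A weighted geometric mean: a region must score reasonably on every
    cue to survive, mimicking the "combination of cues" idea.
    """
    if weights is None:
        weights = [1.0] * len(cues)
    total_w = sum(weights)
    score = 1.0
    for c, w in zip(cues, weights):
        score *= max(c, 1e-9) ** (w / total_w)
    return score

def filter_regions(regions, threshold=0.5):
    """Keep candidate regions whose fused characterness clears the threshold."""
    return [r for r in regions if characterness(r["cues"]) >= threshold]

# Toy candidates: a text-like region scores well on all cues;
# a background region fails badly on one cue and is rejected.
candidates = [
    {"name": "glyph", "cues": [0.9, 0.8, 0.85]},
    {"name": "foliage", "cues": [0.7, 0.1, 0.4]},
]
kept = filter_regions(candidates)
```

    Because the geometric mean is dominated by the weakest cue, a region that looks text-like on only some cues is still rejected, which is the intended filtering behavior.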

    STEFANN: Scene Text Editor using Font Adaptive Neural Network

    Textual information in a captured scene plays an important role in scene interpretation and decision making. Though methods exist that can successfully detect and interpret complex text regions present in a scene, to the best of our knowledge there is no significant prior work that aims to modify the textual information in an image. The ability to edit text directly on images has several advantages, including error correction, text restoration, and image reusability. In this paper, we propose a method to modify text in an image at the character level. We approach the problem in two stages. First, the unobserved (target) character is generated from the observed (source) character being modified. We propose two different neural network architectures: (a) FANnet, to achieve structural consistency with the source font, and (b) Colornet, to preserve the source color. Next, we replace the source character with the generated character, maintaining both geometric and visual consistency with neighboring characters. Our method works as a unified platform for modifying text in images. We present the effectiveness of our method on the COCO-Text and ICDAR datasets, both qualitatively and quantitatively. Comment: Accepted in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 202
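    The color-preservation idea (what Colornet is trained to do) can be reduced to a non-learned baseline for illustration: copy the mean foreground color of the source character onto the binary mask of the generated target character. This is only a sketch of the goal, not the paper's network; the function names and the white-background convention are assumptions.

```python
# Hedged sketch: transfer the source character's color to a generated
# glyph mask. A stand-in for the learned Colornet, not its architecture.

def mean_foreground_color(pixels, background=(255, 255, 255)):
    """Average RGB of pixels that differ from the background color."""
    fg = [p for p in pixels if p != background]
    n = len(fg)
    return tuple(sum(c[i] for c in fg) // n for i in range(3))

def colorize(mask, color, background=(255, 255, 255)):
    """Paint a binary glyph mask (1 = character pixel) with the given color."""
    return [[color if m else background for m in row] for row in mask]

# Source character patch: two red foreground pixels on white background.
source = [(255, 255, 255), (200, 30, 30), (200, 30, 30), (255, 255, 255)]
# Mask of the generated target character (structural output, e.g. from FANnet).
target_mask = [[0, 1], [1, 1]]
result = colorize(target_mask, mean_foreground_color(source))
```

    The learned model replaces the crude mean-color heuristic, letting it preserve gradients and textures rather than a single flat color.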

    Design and development of DrawBot using image processing

    Extracting text from an image and reproducing it can often be a laborious task, and we took it upon ourselves to solve this problem. Our work is aimed at designing a robot that can perceive an image shown to it and reproduce it on any given area as directed. The robot first takes an input image and performs image processing operations on it to improve readability. The text in the image is then recognized by the program. Points are sampled for each letter, inverse kinematics is computed for each point in MATLAB/Simulink, and the angles through which the servo motors should move are determined and stored in the Arduino. Using these angles, the control algorithm runs on the Arduino and the letters are drawn.
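    The per-point inverse kinematics step can be sketched for a planar two-link arm, the simplest geometry consistent with the description. The link lengths, the elbow-down solution choice, and the Python rendering (the paper uses MATLAB/Simulink) are all assumptions for illustration.

```python
import math

# Hedged sketch: planar two-link inverse kinematics of the kind computed
# per letter point. Link lengths and elbow-down choice are assumptions.

def inverse_kinematics(x, y, l1, l2):
    """Return joint angles (theta1, theta2) that reach point (x, y)."""
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_t2 <= 1.0:
        raise ValueError("target point is out of reach")
    t2 = math.acos(cos_t2)  # elbow-down solution
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

def forward_kinematics(t1, t2, l1, l2):
    """Sanity check: end-effector position for the given joint angles."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y
```

    In the described pipeline, each recognized letter yields a sequence of such (x, y) points, and the resulting angle pairs are streamed to the servo controller.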

    Matlab Code For Identification Of Graphics Objects In Aircraft Displays

    This paper is aimed at understanding, utilizing, and improving the existing system by automating the graphics testing process involved in software testing of aircraft displays. Software development comprises several steps, such as requirement analysis, design, implementation, testing, and evolution; this work concerns the testing step, for which we design an automation tool for graphics testing. Graphics testing exercises the display devices. The main motivation of this project is to save time: the current approach requires a user to read each message box that pops up while a script executes. This involves a lot of manual effort, a person has to be physically present to run the test execution, and mistakes are likely when an untrained person works on it. To make the initial test setup as fast as possible, with no errors and no manual effort, we are developing an automation tool. The tool combines image comparison, optical character recognition (OCR), and template matching, and should be able to handle all kinds of text and digits. We implement a design that recognizes the characters in each pop-up window and takes a pass/fail decision based on the recognized characters while testing runs automatically; OCR and template matching are used mainly for character and object recognition, while image comparison is used to capture an image and drive the execution process automatically.
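    The image-comparison pass/fail step can be sketched as counting the pixels that differ between a captured screenshot and a reference image. This is a simplified stand-in for the tool's comparison logic; the tolerance value and the list-of-rows image representation are illustrative assumptions.

```python
# Hedged sketch: pass/fail image comparison between a captured display
# frame and a reference. The 1% pixel tolerance is an assumption.

def images_match(captured, reference, max_diff_fraction=0.01):
    """PASS (True) if the fraction of differing pixels is within tolerance."""
    flat_a = [p for row in captured for p in row]
    flat_b = [p for row in reference for p in row]
    if len(flat_a) != len(flat_b):
        return False  # size mismatch is an automatic FAIL
    diffs = sum(1 for a, b in zip(flat_a, flat_b) if a != b)
    return diffs / len(flat_a) <= max_diff_fraction

# Toy 10x10 grayscale frames.
reference = [[0] * 10 for _ in range(10)]
same = [[0] * 10 for _ in range(10)]
corrupted = [row[:] for row in reference]
corrupted[0][:5] = [255] * 5  # 5 of 100 pixels differ (5%)

verdict_ok = images_match(same, reference)
verdict_bad = images_match(corrupted, reference)
```

    In the real tool this verdict would be combined with the OCR result on the recognized pop-up text before the final pass/fail decision is logged.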

    Backpropagation Neural Network for Book Classification Using the Image Cover

    Artificial Neural Networks are known to provide a good model for classification. The goal of this research is to classify books in Bahasa Indonesia using their covers. The data are scanned images, each 300 cm in height and 130 cm in width, at 96 dpi resolution. The research performed feature extraction using the image processing method MSER (Maximally Stable Extremal Regions) to identify the area of the book title, and Tesseract Optical Character Recognition (OCR) to detect the title. Next, the features extracted from MSER and OCR are converted into a numerical matrix as the input to a backpropagation Artificial Neural Network. The accuracy obtained using one hidden layer with 15 neurons is 63.31%, while evaluation using two hidden layers with 15 and 35 neurons yields an accuracy of 79.89%. The ability of the model to classify books was affected by image quality, variation, and the number of training data.
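    A one-hidden-layer backpropagation network of the kind evaluated above can be sketched in a few dozen lines. The 15-neuron hidden layer matches the abstract; the input/output sizes, toy data, learning rate, and sigmoid activation are illustrative assumptions.

```python
import math
import random

# Hedged sketch: one-hidden-layer backpropagation network. Hidden size 15
# follows the paper; everything else here is an illustrative assumption.

random.seed(0)

N_IN, N_HID, N_OUT = 4, 15, 2
w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    """Hidden and output activations for one input vector."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    o = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in w2]
    return h, o

def train_step(x, target, lr=0.5):
    """One backpropagation update; returns squared error before the update."""
    h, o = forward(x)
    err = sum((t - y) ** 2 for t, y in zip(target, o))
    # Output-layer deltas: squared-error gradient through the sigmoid.
    d_out = [(y - t) * y * (1 - y) for t, y in zip(target, o)]
    # Hidden-layer deltas, backpropagated through w2 before it is updated.
    d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * w2[k][j] for k in range(N_OUT))
             for j in range(N_HID)]
    for k in range(N_OUT):
        for j in range(N_HID):
            w2[k][j] -= lr * d_out[k] * h[j]
    for j in range(N_HID):
        for i in range(N_IN):
            w1[j][i] -= lr * d_hid[j] * x[i]
    return err

# Toy feature vector standing in for the MSER/OCR-derived numeric matrix.
x, target = [0.2, 0.8, 0.1, 0.5], [1.0, 0.0]
losses = [train_step(x, target) for _ in range(50)]
```

    In the paper's pipeline, the input vector is the numeric matrix derived from MSER and OCR rather than a hand-picked toy example, but the update rule is the same.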