
    Parallel processing applied to image mosaic generation

    The automatic construction of large mosaics from high-resolution digital images is important in many fields. In agriculture, the cartographic-accuracy requirements for mosaics of annual or perennial crops are not very high; the speed with which the mosaics can be obtained is instead the most critical factor, since efficient decision making depends on getting fast and accurate information, especially for controlling pests, diseases, or fires. This project proposes a methodology based on the SIFT transform and parallel processing to build mosaics automatically from high-resolution agricultural aerial images. Building mosaics from high-resolution images demands substantial computational effort; to address this, the OpenMP parallel-processing standard was used to accelerate the process, and results are presented for a computer running 2, 4, and 8 threads.
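    The tile-parallel decomposition behind this kind of speedup can be sketched in miniature. The project itself uses OpenMP in a compiled language; the Python analogue below (all function names, the per-tile workload, and the tile sizes are illustrative, not from the paper) shows the same pattern of splitting an image into strips and farming per-strip work out to a pool of workers:

```python
from concurrent.futures import ThreadPoolExecutor

def tile_sum(tile):
    """Stand-in for per-tile work such as SIFT keypoint extraction."""
    return sum(sum(row) for row in tile)

def split_into_tiles(image, tile_rows):
    """Split a 2-D image (a list of rows) into horizontal strips."""
    return [image[i:i + tile_rows] for i in range(0, len(image), tile_rows)]

def parallel_tile_sums(image, tile_rows=2, workers=4):
    tiles = split_into_tiles(image, tile_rows)
    # Like an OpenMP "parallel for" over tiles: each worker handles one strip.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(tile_sum, tiles))

image = [[1] * 4 for _ in range(8)]   # toy 8x4 "image" of ones
print(parallel_tile_sums(image))      # [8, 8, 8, 8]
```

    Note that Python's global interpreter lock means a thread pool only pays off when the per-tile work releases it (as native feature extractors do); the sketch demonstrates the decomposition pattern rather than the measured speedup.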

    Image and Video Analytics for Document Processing and Event Recognition

    The proliferation of handheld devices with cameras is among the many changes of the past several decades that have affected the document image analysis community, providing a far less constrained document imaging experience compared to traditional non-portable flatbed scanners. Although these devices offer more flexibility in capturing, users must now contend with numerous environmental challenges, including 1) a limited field of view that keeps users from acquiring a high-quality image of a large source in a single frame, 2) light reflections on glossy surfaces that result in saturated regions, and 3) crumpled or non-planar documents that cannot be captured effectively from a single pose. Another change is the application of deep neural networks, such as deep convolutional neural networks (CNNs), to text analysis, which show unprecedented performance over classical approaches. Beginning with their success in character recognition, CNNs have shown their strength in many tasks in document analysis as well as computer vision. Researchers have explored the potential applicability of CNNs to tasks such as text detection and segmentation, and have been quite successful. These networks, trained to perform single tasks, have recently evolved to handle multiple tasks. This introduces several important challenges, including imposing multiple tasks on a single network architecture and integrating multiple architectures with different tasks. In this dissertation, we make contributions in both of these areas. First, we propose a novel Graphcut-based document image mosaicking method that seeks to overcome the known limitations of previous approaches. Our method does not require any prior knowledge of the content of the document images, making it more widely applicable and robust. Information regarding the geometric disposition between the overlapping images is exploited to minimize errors in the boundary regions.
    We incorporate a sharpness measure that guides cut generation so that the mosaic includes the sharpest pixels. Our method is shown to outperform previous methods, both quantitatively and qualitatively. Second, we address the problem of removing highlight regions caused by light sources reflecting off glossy surfaces in indoor environments. We devise an efficient method to detect and remove the highlights from the target scene by jointly estimating separate homographies for the target scene and the highlights. Our method is based on the observation that, given two images captured from different viewpoints, the displacement of the target scene differs from that of the highlight regions. We show the effectiveness of our method in removing highlight reflections by comparing it with related state-of-the-art methods. Unlike previous methods, ours can handle saturated and relatively large highlights that completely obscure the content underneath. Third, we address the problem of selecting instances of a planar object in a video or set of images based on an evaluation of their "frontalness". We introduce the idea of "evaluating frontalness" by computing how closely the object's surface normal aligns with the optical axis of the camera. The unique and novel aspect of our method is that, unlike previous planar-object pose estimation methods, it does not require a frontal reference image. The intuition is that a true frontal image can reproduce other non-frontal images by perspective projection, while non-frontal images have only a limited ability to do so. We show that comparing 'frontal' and 'non-frontal' images can be extended to comparing 'more frontal' and 'less frontal' images. Based on this observation, our method estimates the relative frontalness of an image by exploiting the objective-space error.
    We also propose the use of a K-invariant space to evaluate frontalness even when the camera's intrinsic parameters are unknown (e.g., images and videos from the web). Our method improves accuracy over a baseline method. Lastly, we address the problem of integrating multiple deep neural networks (specifically CNNs) with different architectures and different tasks into a unified framework. To demonstrate the end-to-end integration of networks with different tasks and architectures, we select event recognition and object detection. One novel aspect of our approach is that it is the first attempt to exploit the power of deep CNNs to directly integrate relevant object information into a unified network to improve event recognition performance. Our architecture allows the sharing of the convolutional layers and a fully connected layer, which effectively integrates event recognition with rigid and non-rigid object detection.
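    As a toy illustration of the "frontalness" notion (not the dissertation's objective-space-error estimator, which works without any reference geometry), one can measure how closely a known surface normal aligns with the camera's optical axis:

```python
import math

def frontalness(normal, optical_axis=(0.0, 0.0, 1.0)):
    """Cosine of the angle between a planar object's surface normal and the
    camera's optical axis: 1.0 means perfectly frontal, 0.0 means edge-on."""
    dot = sum(n * a for n, a in zip(normal, optical_axis))
    norm = (math.sqrt(sum(n * n for n in normal))
            * math.sqrt(sum(a * a for a in optical_axis)))
    return abs(dot) / norm

print(frontalness((0, 0, 1)))             # 1.0   (frontal view)
print(round(frontalness((1, 0, 1)), 3))   # 0.707 (plane tilted 45 degrees)
```

    The dissertation's contribution is estimating *relative* frontalness without knowing the normal or a frontal reference image; this function only formalizes the geometric definition the abstract states.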

    CleanPage: Fast and Clean Document and Whiteboard Capture

    The move from paper to online is not only necessary for remote working, it is also significantly more sustainable. This trend has created a rising need for high-quality digitization of content from pages and whiteboards into sharable online material. However, capturing this information is not always easy, nor are the results always satisfactory. Available scanning apps vary in their usability and do not always produce clean results, retaining surface imperfections from the page or whiteboard in their output images. CleanPage, a novel smartphone-based document and whiteboard scanning system, is presented. CleanPage requires one button-tap to capture, identify, crop, and clean an image of a page or whiteboard. Unlike equivalent systems, no user intervention is required during processing, and the result is a high-contrast, low-noise image with a clean homogeneous background. Results are presented for a selection of scenarios showing the versatility of the design. CleanPage is compared with two market-leading scanning apps using two testing approaches: real paper scans and ground-truth comparisons. The latter are achieved by a new testing methodology that allows scans to be compared to their unscanned counterparts using synthesized images, while real paper scans are tested using image quality measures. An evaluation of standard image quality assessments is included in this work, and a novel quality measure for scanned images is proposed and validated. The user experience for each scanning app is assessed, showing CleanPage to be faster and easier to use.
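    CleanPage's actual pipeline is not spelled out in this abstract. As a loose, hypothetical illustration of what producing "a high-contrast, low-noise image with a clean homogeneous background" involves, the sketch below rescales a grayscale page by a crude background estimate and pushes near-background pixels to pure white; the background model and the 80% threshold are invented for the example:

```python
def clean_page(gray, white=255):
    """Toy background flattening: treat the brightest value as the paper
    background, rescale so it maps to pure white, then snap any pixel
    above 80% of white to white so the background comes out homogeneous."""
    bg = max(max(row) for row in gray)   # crude global background estimate
    out = []
    for row in gray:
        scaled = [min(white, p * white // bg) for p in row]
        out.append([white if p > white * 8 // 10 else p for p in scaled])
    return out

page = [[200, 205, 40],    # 40 is "ink"; values near 200 are greyish paper
        [198, 210, 202]]
print(clean_page(page))    # paper pixels become 255, the ink pixel survives
```

    A real system would estimate the background locally (illumination varies across a page) rather than with a single global maximum.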

    Document and Text Rectification Using Objective-Function Optimization Based on Text and Feature Points

    Ph.D. dissertation, Department of Electrical and Computer Engineering, Seoul National University, August 2014 (advisor: Nam-Ik Cho). There are many techniques and applications that detect and recognize text information in images, e.g., document retrieval using camera-captured document images, book readers for the visually impaired, and augmented reality based on text recognition. In these applications, the planar surfaces that contain the text are often distorted in the captured image due to perspective view (e.g., road signs), curvature (e.g., unfolded books), and wrinkles (e.g., old documents). Recovering the original document texture by removing these distortions from camera-captured document images is called document rectification. In this dissertation, new text-surface rectification algorithms are proposed for improving text recognition accuracy and visual quality. The proposed methods fall into three types depending on the input, and their contributions can be summarized as follows. The first rectification algorithm employs the dense text-lines in documents to rectify the images. Unlike conventional approaches, the proposed method does not use the text-lines directly. Instead, it uses a discrete representation of text-lines and text-blocks as sets of connected components. The geometric distortions caused by page curl and perspective view are modeled as generalized cylindrical surfaces and camera rotation, respectively. With this distortion model and the discrete feature representation, a cost function is developed whose minimization yields the parameters of the distortion model. The cost function encodes page properties such as text-block alignment, line spacing, and the straightness of text-lines. By describing the text features as sets of discrete points, the cost function can be easily defined and solved well by the Levenberg-Marquardt algorithm.
    Experiments show that the proposed method works well for various layouts and curved surfaces, and compares favorably with conventional methods on a standard dataset. The second algorithm is a unified framework to rectify and stitch multiple document images using visual feature points instead of text-lines. This is similar to general image stitching algorithms; however, such algorithms usually assume a fixed camera center, an assumption that cannot be taken for granted when capturing documents. To deal with the camera motion between images, a new parametric family of motion models is proposed in this dissertation. In addition, to remove the ambiguity in the reference plane, a new cost function is developed that imposes constraints on the reference plane. This enables the estimation of a physically correct reference plane without prior knowledge, and the estimated reference plane can also be used to rectify the stitching result. Furthermore, because it employs general features, the proposed method can be applied not only to camera-captured document images but also to any other planar object, such as building facades or mural paintings. The third rectification method is based on a scene text detection algorithm that is independent of any language model. Conventional methods assume that a character consists of a single connected component (CC), as in the English alphabet. However, this assumption breaks down for Asian scripts such as Korean, Chinese, and Japanese, where a single character may consist of several CCs, making it difficult to group CCs into text lines without a language model. To alleviate this problem, the proposed method clusters candidate regions using a similarity measure that considers inter-character relations. The adjacency measure is trained on a dataset labeled with text-region bounding boxes, and non-text regions that remain after clustering are filtered out in a text/non-text classification step.
    Final text regions are merged or divided into individual text lines considering their orientation and location. The detected text is rectified using the text-line orientation and vertical strokes. In extensive experiments, the proposed method outperforms state-of-the-art algorithms on English as well as Asian characters.
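    The straightness term of the cost function described above can be sketched as follows. This is a deliberate simplification: the dissertation optimizes a full distortion model (cylindrical surface plus camera rotation) with Levenberg-Marquardt, while this toy version scores a single text-line under a pure in-plane rotation and recovers the best angle by grid search; all names are illustrative.

```python
import math

def straightness_cost(points, theta):
    """Sum of squared vertical deviations of a text-line's points after
    rotating the plane by -theta; zero when the rotated line is horizontal."""
    ys = [-x * math.sin(theta) + y * math.cos(theta) for x, y in points]
    mean = sum(ys) / len(ys)
    return sum((y - mean) ** 2 for y in ys)

def best_rotation(points, steps=360):
    """Grid search over the rotation angle in [-pi/2, pi/2]; the dissertation
    instead minimizes the full cost with Levenberg-Marquardt."""
    candidates = [math.pi * k / steps - math.pi / 2 for k in range(steps + 1)]
    return min(candidates, key=lambda t: straightness_cost(points, t))

line = [(t, t) for t in range(5)]       # points on a 45-degree text-line
print(round(best_rotation(line), 3))    # 0.785 (~ pi/4, as expected)
```

    In the real cost function, analogous terms for text-block alignment and line spacing are added, and all terms are minimized jointly over the surface and camera parameters.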

    Forming a Complete Image of a Page of Printed Text by Processing Images Obtained from a Video

    This thesis addresses the design and implementation of an algorithm that forms the complete image of a printed-text document from a video containing fragments of the page in question. The algorithm takes as input a video recorded with a mobile phone camera and returns the image of the complete document; this image can then be fed to an optical character recognition (OCR) algorithm to recover the text in digital form. The aim of this proposal is to offer an alternative image-acquisition solution for existing mobile OCR applications focused on assisting people with partial or total blindness. To lay out and meet the project's objectives, the work is structured in four chapters. Chapter 1 discusses the current situation of people with varying degrees of visual impairment in our country and the various systems that seek to help them regain informational and educational autonomy; it also covers the state of the art in image acquisition for current OCR applications and their shortcomings. Chapter 2 presents the theoretical framework underpinning the proposed algorithm, covering the necessary image processing theory as well as video recording. Chapter 3 deals with the design and implementation of the algorithm on two platforms: initially in Python 3.6, for the parameter calibration stage on a desktop computer, and then in C++ for the final tests on an Android phone. That chapter also presents the considerations behind the creation of the test video set used in Python.
    Finally, Chapter 4 presents the tests and results obtained by applying the algorithm, in Python, to the set of samples created, as well as the final results of using the Android application. The fidelity of the resulting image is estimated using the Levenshtein metric, or edit distance, which indicates how many characters detected in the composed image differ from the characters of the original text.
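    The evaluation metric mentioned at the end, the Levenshtein (edit) distance, counts the minimum number of single-character edits between the OCR output of the composed image and the original text. A minimal implementation:

```python
def levenshtein(a, b):
    """Minimum number of single-character insertions, deletions, and
    substitutions needed to turn string a into string b (two-row DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # delete ca
                           cur[j - 1] + 1,              # insert cb
                           prev[j - 1] + (ca != cb)))   # substitute
        prev = cur
    return prev[-1]

print(levenshtein("kitten", "sitting"))   # 3
print(levenshtein("OCR", "OCR"))          # 0
```

    A distance of zero means every character of the original text was recovered exactly.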

    Processing Camera-captured Document Images: Geometric Rectification, Mosaicing, and Layout Structure Recognition

    This dissertation explores three topics: 1) geometric rectification of camera-captured document images, 2) camera-captured document mosaicking, and 3) layout structure recognition. The first two topics pertain to camera-based document image analysis, a new trend within the OCR community. Compared to typical scanners, cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. The third topic is related to the need for efficient metadata extraction methods, which are critical for managing digitized documents. The kernel of our geometric rectification framework is a novel method for estimating document shape from a single camera-captured image. Our method uses texture flows detected in printed text areas and is insensitive to occlusion. Classification of planar versus curved documents is done automatically. For planar pages, we obtain full metric rectification. For curved pages, we estimate a planar-strip approximation based on properties of developable surfaces. Our method can process any planar or smoothly curved document captured from an arbitrary position, without requiring 3D data, metric data, or camera calibration. For the second topic, we design a novel registration method for document images that produces good results in difficult situations, including large displacements, severe projective distortion, small overlapping areas, and a lack of distinguishable feature points. We implement a selective image composition method that outperforms conventional image blending in overlapping areas: it eliminates double images caused by mis-registration and preserves sharpness. We solve the third topic with a graph-based model matching framework. Layout structures are modeled by graphs, which integrate local and global features and are extensible to new features in the future.
    Our model can handle large variation within a class and subtle differences between classes. Through graph matching, the layout structure of a document is discovered. Our layout structure recognition technique accomplishes document classification and logical component labeling at the same time, and our model learning method enables a model to adapt to changes in classes over time.
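    Metric rectification and mosaicking both ultimately warp images through plane-to-plane homographies. As a generic sketch (not the specific registration method proposed in this dissertation), mapping an image point through a 3x3 homography works like this:

```python
def apply_homography(H, x, y):
    """Map image point (x, y) through the 3x3 homography H (row-major
    nested lists), dividing out the homogeneous coordinate w."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# The identity homography leaves points fixed; a nonzero projective term
# in the bottom row introduces the keystone effect rectification undoes.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H = [[1, 0, 0], [0, 1, 0], [0.001, 0, 1]]
print(apply_homography(I, 100, 50))   # (100.0, 50.0)
print(apply_homography(H, 100, 50))   # points are pulled toward the origin as x grows
```

    Rectification amounts to estimating the H that maps the distorted page plane back to a fronto-parallel one; mosaicking composes such mappings between overlapping frames.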