A Book Reader Design for Persons with Visual Impairment and Blindness
The objective of this dissertation is to provide a new design approach to a fully automated book reader for individuals with visual impairment and blindness that is portable and cost-effective. This approach relies on the geometry of the design setup and provides the mathematical foundation for integrating, in a unique way, a 3-D space surface map from a low-resolution time-of-flight (ToF) device with a high-resolution image as a means of enhancing the reading accuracy of images warped by the page curvature of bound books and magazines. The merits of this low-cost but effective automated book reader design include: (1) a seamless registration process of the two imaging modalities, so that the low-resolution (160 x 120 pixels) height map acquired by an Argos3D-P100 camera accurately covers the entire book spread as captured by the high-resolution image (3072 x 2304 pixels) of a Canon G6 camera; (2) a mathematical framework for overcoming the difficulties associated with the curvature of open bound books, a process referred to as dewarping of the book spread images; and (3) an image correction performance comparison between the uniform and full height maps to determine which map provides the highest possible Optical Character Recognition (OCR) reading accuracy. The design concept could also be applied to the challenging process of book digitization. The method depends on the geometry of the book reader setup for acquiring a 3-D map that yields high reading accuracy once appropriately fused with the high-resolution image. The experiments were performed on a dataset consisting of 200 pages with their corresponding computed and co-registered height maps, which are made available to the research community (cate-book3dmaps.fiu.edu). Improvements in character reading accuracy due to the correction steps were quantified by feeding the corrected images to an OCR engine and tabulating the number of misrecognized characters. Furthermore, the resilience of the book reader was tested by introducing a rotational misalignment to the book spreads and comparing the OCR accuracy to that obtained with the standard alignment. The standard alignment yielded an average reading accuracy of 95.55% with the uniform height map (i.e., the height values of the central row of the 3-D map are replicated to approximate all other rows) and 96.11% with the full height map (i.e., each row has its own height values as obtained from the 3-D camera). When the rotational misalignments were taken into account, the results produced average accuracies of 90.63% and 94.75% for the same respective height maps, demonstrating the added resilience of the full height map method to potential misalignments.
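As a rough illustration of the two height-map variants compared above, the following Python sketch upsamples a low-resolution ToF map to the high-resolution image grid and derives the uniform map by replicating the central row; the function names and the use of OpenCV/NumPy are illustrative assumptions, not taken from the dissertation.

```python
# Illustrative sketch only; not the dissertation's implementation.
import numpy as np
import cv2  # assumed available for resizing


def build_height_maps(tof_map, image_shape):
    """Upsample a low-resolution ToF height map (e.g. 120x160) to the
    high-resolution image grid, and derive the 'uniform' variant by
    replicating the central row across all rows."""
    h, w = image_shape
    # Full height map: every row keeps its own (interpolated) height values.
    full_map = cv2.resize(tof_map.astype(np.float32), (w, h),
                          interpolation=cv2.INTER_LINEAR)
    # Uniform height map: approximate the page curvature by the central row only.
    uniform_map = np.tile(full_map[h // 2, :], (h, 1))
    return full_map, uniform_map


# Example with a synthetic 120x160 map and a 2304x3072 target image.
tof = np.random.rand(120, 160).astype(np.float32)
full_map, uniform_map = build_height_maps(tof, (2304, 3072))
print(full_map.shape, uniform_map.shape)  # (2304, 3072) (2304, 3072)
```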
Development of a text reading system on video images
Since the early days of computer science, researchers have sought to devise machines that could automatically read text to help people with visual impairments. The problem of extracting and recognising text in document images has been largely resolved, but reading text from images of natural scenes remains a challenge. Scene text can present uneven lighting, complex backgrounds, or perspective and lens distortion; it usually appears as short sentences or isolated words and shows a very diverse set of typefaces. However, video sequences of natural scenes provide a temporal redundancy that can be exploited to compensate for some of these deficiencies. Here we present a complete end-to-end, real-time scene text reading system on video images based on perspective-aware text tracking.
The main contribution of this work is a system that automatically detects, recognises and tracks text in videos of natural scenes in real-time. The focus of our method is on large text found in outdoor environments, such as shop signs, street names and billboards. We introduce novel efficient techniques for text detection, text aggregation and text perspective estimation. Furthermore, we propose using a set of Unscented Kalman Filters (UKF) to maintain each text region's identity and to continuously track the homography transformation of the text into a fronto-parallel view, thereby being resilient to erratic camera motion and wide baseline changes in orientation. The orientation of each text line is estimated using a method that relies on the geometry of the characters themselves to estimate a rectifying homography. This is done irrespective of the view of the text over a large range of orientations. We also demonstrate a wearable head-mounted device for text reading that encases a camera for image acquisition and a pair of headphones for synthesized speech output.
Our system is designed for continuous and unsupervised operation over long periods of time. It is completely automatic and features quick failure recovery and interactive text reading. It is also highly parallelised in order to maximize the usage of available processing power and to achieve real-time operation. We show comparative results that improve the current state-of-the-art when correcting perspective deformation of scene text. The end-to-end system performance is demonstrated on sequences recorded in outdoor scenarios. Finally, we also release a dataset of text tracking videos along with the annotated ground-truth of text regions.
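The final rectification step that such a tracker continuously maintains, warping a tracked text quadrilateral into a fronto-parallel view via a homography, could be sketched as below; this is a minimal OpenCV illustration under assumed function and parameter names, not the paper's UKF-based pipeline.

```python
# Illustrative sketch only; not the paper's tracking pipeline.
import numpy as np
import cv2


def rectify_text_region(frame, quad, out_w=320, out_h=64):
    """Warp a tracked text quadrilateral to a fronto-parallel patch.

    frame : BGR image containing the text.
    quad  : 4x2 array of corner points ordered top-left, top-right,
            bottom-right, bottom-left.
    """
    dst = np.float32([[0, 0], [out_w - 1, 0],
                      [out_w - 1, out_h - 1], [0, out_h - 1]])
    H = cv2.getPerspectiveTransform(np.float32(quad), dst)
    return cv2.warpPerspective(frame, H, (out_w, out_h))
```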
Deep Unrestricted Document Image Rectification
In recent years, tremendous efforts have been made on document image rectification, but existing advanced algorithms are limited to processing restricted document images, i.e., the input images must contain a complete document. When the captured image involves only a local text region, rectification quality degrades and becomes unsatisfactory. Our previously proposed DocTr, a transformer-assisted network for document image rectification, also suffers from this limitation. In this work, we present DocTr++, a novel unified framework for document image rectification, without any restrictions on the input distorted images. Our major technical improvements can be summarized in three aspects. Firstly, we upgrade the original architecture by adopting a hierarchical encoder-decoder structure for multi-scale representation extraction and parsing. Secondly, we reformulate the pixel-wise mapping relationship between unrestricted distorted document images and their distortion-free counterparts; the obtained data is used to train our DocTr++ for unrestricted document image rectification. Thirdly, we contribute a real-world test set and metrics applicable for evaluating rectification quality. To the best of our knowledge, this is the first learning-based method for the rectification of unrestricted document images. Extensive experiments are conducted, and the results demonstrate the effectiveness and superiority of our method. We hope DocTr++ will serve as a strong baseline for generic document image rectification, promoting the further advancement and application of learning-based algorithms. The source code and the proposed dataset are publicly available at https://github.com/fh2019ustc/DocTr-Plus.
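Document rectifiers of this kind typically express the pixel-wise mapping as a backward sampling map that is applied to the distorted image to produce the rectified one; the sketch below assumes PyTorch's grid_sample convention and is an illustrative approximation, not code from DocTr++.

```python
# Illustrative sketch only; not DocTr++ code.
import torch
import torch.nn.functional as F


def apply_backward_map(distorted, bm):
    """Apply a predicted backward map to a distorted document image.

    distorted : (N, 3, H, W) distorted input image.
    bm        : (N, H, W, 2) backward map holding, for every rectified pixel,
                the sampling location in the distorted image, normalised to
                [-1, 1] as expected by grid_sample.
    """
    return F.grid_sample(distorted, bm, mode='bilinear', align_corners=True)
```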
MataDoc: Margin and Text Aware Document Dewarping for Arbitrary Boundary
Document dewarping from a distorted camera-captured image is of great value for OCR and document understanding. The document boundary plays an important role in dewarping, providing a more evident cue than the inner region. Current learning-based methods mainly focus on complete-boundary cases, leading to poor correction performance on documents with incomplete boundaries. In contrast to these methods, this paper proposes MataDoc, the first method focusing on arbitrary-boundary document dewarping with margin- and text-aware regularizations. Specifically, we design the margin regularization by explicitly considering background consistency to enhance boundary perception. Moreover, we introduce word position consistency to keep text lines straight in rectified document images. To produce a comprehensive evaluation of MataDoc, we propose a novel benchmark, ArbDoc, mainly consisting of document images with arbitrary boundaries in four typical scenarios. Extensive experiments on ArbDoc confirm the superiority of MataDoc in handling incomplete boundaries, and also demonstrate the effectiveness of the proposed method on the DocUNet, DIR300, and WarpDoc datasets.
HoughNet: neural network architecture for vanishing points detection
In this paper, we introduce a novel neural network architecture based on a Fast Hough Transform layer. A layer of this type allows our neural network to accumulate features from linear areas across the entire image instead of local areas. We demonstrate its potential by solving the problem of vanishing point detection in images of documents. This problem arises when dealing with camera shots of documents in uncontrolled conditions, where the document image can suffer several specific distortions, including projective transform. To train our model, we use the MIDV-500 dataset and provide testing results. The strong generalization ability of the suggested method is demonstrated by applying it to the completely different ICDAR 2011 dewarping contest dataset. In previously published papers considering this dataset, the authors measured the quality of vanishing point detection by counting correctly recognized words using the open-source OCR engine Tesseract. To compare with them, we reproduce this experiment and show that our method outperforms the state-of-the-art result.
DocTr: Document Image Transformer for Geometric Unwarping and Illumination Correction
In this work, we propose a new framework, called Document Image Transformer (DocTr), to address the geometric and illumination distortion of document images. Specifically, DocTr consists of a geometric unwarping transformer and an illumination correction transformer. Using a set of learned query embeddings, the geometric unwarping transformer captures the global context of the document image via a self-attention mechanism and decodes a pixel-wise displacement solution to correct the geometric distortion. After geometric unwarping, our illumination correction transformer further removes shading artifacts to improve visual quality and OCR accuracy. Extensive evaluations are conducted on several datasets, and superior results are reported against the state-of-the-art methods. Remarkably, our DocTr achieves a 20.02% Character Error Rate (CER), a 15% absolute improvement over the state-of-the-art methods. Moreover, it also shows high efficiency in running time and parameter count. The results will be available at https://github.com/fh2019ustc/DocTr for further comparison.
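Character Error Rate, the metric quoted above, is conventionally computed as the edit distance between the OCR output and the ground truth, normalised by the ground-truth length; a minimal self-contained sketch follows (the paper's exact evaluation protocol may differ).

```python
# Conventional CER computation; not the paper's evaluation script.
def character_error_rate(reference, hypothesis):
    """Edit distance between OCR output and ground truth, normalised by the
    length of the ground-truth text."""
    m, n = len(reference), len(hypothesis)
    # Classic dynamic-programming edit distance (insert, delete, substitute).
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # deletion
                        dp[j - 1] + 1,    # insertion
                        prev + (reference[i - 1] != hypothesis[j - 1]))  # substitution
            prev = cur
    return dp[n] / max(m, 1)


print(character_error_rate("document", "docoment"))  # 0.125
```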