58,589 research outputs found
Robust extraction of text from camera images using colour and spatial information simultaneously
The importance and use of text extraction from camera-based coloured scene images is rapidly increasing. Text within a camera-grabbed image can contain a large amount of metadata about that scene, which can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, detection of coloured scene text remains a challenge for camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of text features such as colour, font, size and orientation, as well as the location of probable text regions. In this paper, we document the development of a fully automatic and robust text segmentation technique that can be used for any type of camera-grabbed frame, be it a single image or video. A new algorithm is proposed which overcomes the current problems of text segmentation by exploiting text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera-based images, it was found to outperform existing techniques. The proposed technique also overcomes problems that can arise from an unconstrained complex background. The novelty of the work lies in the fact that this is the first time colour and spatial information have been used simultaneously for the purpose of text extraction.
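The joint colour-and-spatial idea can be illustrated with a toy sketch, not the paper's actual algorithm: cluster pixel colours, then prefer the cluster whose pixels are spatially concentrated, as text strokes of one colour tend to be. All thresholds and the concentration criterion here are illustrative assumptions.

```python
import numpy as np

def kmeans_colors(pixels, k=2, iters=10):
    """Tiny k-means in RGB space with deterministic brightness-spread
    initialisation; returns one cluster label per pixel."""
    idx = np.argsort(pixels.sum(axis=1))
    centers = pixels[idx[np.linspace(0, len(idx) - 1, k).astype(int)]].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :].astype(float) - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return labels

def candidate_text_mask(img, k=2):
    """Cluster colours, then keep the cluster whose pixels are most
    spatially concentrated (lowest row/column variance) -- a crude
    stand-in for a joint colour + spatial criterion."""
    h, w, _ = img.shape
    labels = kmeans_colors(img.reshape(-1, 3), k=k)
    ys, xs = np.mgrid[0:h, 0:w]
    ys, xs = ys.ravel(), xs.ravel()
    best, best_spread = 0, np.inf
    for c in range(k):
        m = labels == c
        if not m.any():
            continue
        spread = ys[m].var() + xs[m].var()
        if spread < best_spread:
            best, best_spread = c, spread
    return (labels == best).reshape(h, w)
```

On a synthetic white image with a small dark patch, the mask selects the compact dark cluster; a real pipeline would follow this with connected-component filtering.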
Arabic cursive text recognition from natural scene images
© 2019 by the authors. This paper presents a comprehensive survey of Arabic cursive scene text recognition. Publications in recent years have witnessed a shift in the interest of document image analysis researchers from recognition of optical characters to recognition of characters appearing in natural images. Scene text recognition is a challenging problem because text varies in font style, size, alignment, orientation, reflection, illumination and blurriness, and appears against complex backgrounds. Among cursive scripts, Arabic scene text recognition is considered a more challenging problem due to joined writing, per-character shape variations, a large number of ligatures, multiple baselines, etc. Surveys of Latin and Chinese script-based scene text recognition systems are available, but the Arabic-like scene text recognition problem is yet to be addressed in detail. In this manuscript, some of the latest techniques presented for text classification are highlighted. The presented techniques, which follow deep learning architectures, are equally suitable for the development of Arabic cursive scene text recognition systems. Issues pertaining to text localization and feature extraction are also presented. Moreover, this article emphasizes the importance of having a benchmark cursive scene text dataset. Based on the discussion, future directions are outlined, some of which may provide researchers with insight into cursive scene text.
SNeL: A Structured Neuro-Symbolic Language for Entity-Based Multimodal Scene Understanding
In the evolving landscape of artificial intelligence, multimodal and
Neuro-Symbolic paradigms stand at the forefront, with a particular emphasis on
the identification and interaction with entities and their relations across
diverse modalities. Addressing the need for complex querying and interaction in
this context, we introduce SNeL (Structured Neuro-symbolic Language), a
versatile query language designed to facilitate nuanced interactions with
neural networks processing multimodal data. SNeL's expressive interface enables
the construction of intricate queries, supporting logical and arithmetic
operators, comparators, nesting, and more. This allows users to target specific
entities, specify their properties, and limit results, thereby efficiently
extracting information from a scene. By aligning high-level symbolic reasoning
with low-level neural processing, SNeL effectively bridges the Neuro-Symbolic
divide. The language's versatility extends to a variety of data types,
including images, audio, and text, making it a powerful tool for multimodal
scene understanding. Our evaluations demonstrate SNeL's potential to reshape
the way we interact with complex neural networks, underscoring its efficacy in
driving targeted information extraction and facilitating a deeper understanding
of the rich semantics encapsulated in multimodal AI models.
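SNeL's own syntax is not reproduced here; purely as an illustration, the following Python sketch shows the kind of entity-level querying such a language expresses: logical operators, property comparators and result limits over entities produced by neural perception. All entity records, field names and thresholds are hypothetical.

```python
# Hypothetical entity records, as a multimodal scene model might emit them.
scene = [
    {"label": "car",    "confidence": 0.91, "area": 5200},
    {"label": "car",    "confidence": 0.55, "area": 800},
    {"label": "person", "confidence": 0.88, "area": 1500},
]

def query(entities, predicate, limit=None):
    """Filter scene entities with an arbitrary predicate and cap the
    result count, mimicking the select/filter/limit shape of a
    structured entity query."""
    hits = [e for e in entities if predicate(e)]
    return hits[:limit] if limit is not None else hits

# "cars with confidence above 0.8" expressed as a predicate
big_cars = query(scene, lambda e: e["label"] == "car" and e["confidence"] > 0.8)
```

A real neuro-symbolic language compiles such predicates into operations on the network's outputs rather than on plain dictionaries, but the query shape is the same.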
Pixel Adapter: A Graph-Based Post-Processing Approach for Scene Text Image Super-Resolution
Current scene text image super-resolution approaches primarily focus on
extracting robust features, acquiring text information, and designing complex
training strategies to generate super-resolution images. However, the upsampling module,
which is crucial in the process of converting low-resolution images to
high-resolution ones, has received little attention in existing works. To
address this issue, we propose the Pixel Adapter Module (PAM) based on graph
attention to address pixel distortion caused by upsampling. The PAM effectively
captures local structural information by allowing each pixel to interact with
its neighbors and update features. Unlike previous graph attention mechanisms,
our approach achieves 2-3 orders of magnitude improvement in efficiency and
memory utilization by eliminating the dependency on sparse adjacency matrices
and introducing a sliding window approach for efficient parallel computation.
Additionally, we introduce the MLP-based Sequential Residual Block (MSRB) for
robust feature extraction from text images, and a Local Contour Awareness loss
to enhance the model's perception of details.
Comprehensive experiments on TextZoom demonstrate that our proposed method
generates high-quality super-resolution images, surpassing existing methods in
recognition accuracy. For single-stage and multi-stage strategies, we achieved
improvements of 0.7% and 2.6%, respectively, increasing the performance from
52.6% and 53.7% to 53.3% and 56.3%. The code is available at
https://github.com/wenyu1009/RTSRN
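The sliding-window attention idea can be sketched in plain NumPy; this is a rough illustration of per-pixel local attention, not the paper's PAM, which adds learned projections, multiple heads and GPU-parallel windows. Each pixel attends to its neighbourhood with softmax weights from dot-product similarity.

```python
import numpy as np

def window_attention(feat, radius=1):
    """Each pixel attends to its (2r+1)^2 neighbourhood: weights are a
    softmax over dot-product similarity with the centre pixel, and the
    output is the weighted mean of neighbour features. A dense sketch
    of sliding-window graph attention over a feature map."""
    h, w, c = feat.shape
    out = np.zeros_like(feat, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            nb = feat[y0:y1, x0:x1].reshape(-1, c).astype(float)
            scores = nb @ feat[y, x].astype(float)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            out[y, x] = weights @ nb
    return out
```

Because the window is fixed, no sparse adjacency matrix is ever built; the neighbourhood is implicit in the slicing, which is the efficiency point the abstract makes.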
Unconstrained Scene Text and Video Text Recognition for Arabic Script
Building robust recognizers for Arabic has always been challenging. We
demonstrate the effectiveness of an end-to-end trainable CNN-RNN hybrid
architecture in recognizing Arabic text in videos and natural scenes. We
outperform previous state-of-the-art on two publicly available video text
datasets - ALIF and ACTIV. For the scene text recognition task, we introduce a
new Arabic scene text dataset and establish baseline results. For scripts like
Arabic, a major challenge in developing robust recognizers is the lack of large
quantity of annotated data. We overcome this by synthesising millions of Arabic
text images from a large vocabulary of Arabic words and phrases. Our
implementation is built on top of the model introduced in [37], which has
proven quite effective for English scene text recognition. The model follows a
segmentation-free, sequence to sequence transcription approach. The network
transcribes a sequence of convolutional features from the input image to a
sequence of target labels. This does away with the need for segmenting input
image into constituent characters/glyphs, which is often difficult for Arabic
script. Further, the ability of RNNs to model contextual dependencies yields
superior recognition results. Comment: 5 pages.
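The segmentation-free transcription step can be illustrated with a greedy CTC-style decode; this is a generic sketch of how per-frame predictions collapse into a label string without character segmentation, not the authors' exact decoder, and the label set is illustrative.

```python
def ctc_greedy_decode(frame_probs, labels, blank=0):
    """Greedy CTC-style decoding: take the argmax label per time step,
    collapse consecutive repeats, then drop blanks. This is how a
    segmentation-free transcription turns a sequence of per-frame
    predictions into a label string."""
    best = [max(range(len(p)), key=p.__getitem__) for p in frame_probs]
    out, prev = [], None
    for b in best:
        if b != prev and b != blank:
            out.append(labels[b])
        prev = b
    return "".join(out)
```

For Arabic the labels would be glyph or character classes; the collapse rule is what removes the need to know where one character ends and the next begins.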
Unsupervised Text Extraction from G-Maps
This paper presents a text extraction method for Google Maps and GIS
maps/images. Because the approach is unsupervised, no prior knowledge or
training set about the textual and non-textual parts is required. Fuzzy
C-Means clustering is used for image segmentation and the Prewitt operator is
used to detect edges. Connected component analysis and a gridding technique
enhance the correctness of the results. The proposed method reaches a 98.5%
accuracy level on the experimental data sets. Comment: Proc. IEEE Conf. #30853, International Conference on Human Computer
Interactions (ICHCI'13), Chennai, India, 23-24 Aug., 201
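The Prewitt edge-detection step is standard and can be sketched directly; this is a plain valid-mode convolution with the two 3x3 Prewitt kernels, not the authors' implementation, and it omits the clustering and gridding stages.

```python
import numpy as np

def prewitt_edges(img):
    """Prewitt gradient magnitude: convolve a greyscale image with the
    horizontal and vertical Prewitt kernels and combine. Valid-mode,
    so the output shape is (h-2, w-2)."""
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3].astype(float)
            out[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out
```

On a vertical step image the response is nonzero only at the step, which is the edge map the connected-component stage would then analyse.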
Extracting textual overlays from social media videos using neural networks
Textual overlays are often used in social media videos as people who watch
them without the sound would otherwise miss essential information conveyed in
the audio stream. This is why extraction of those overlays can serve as an
important meta-data source, e.g. for content classification or retrieval tasks.
In this work, we present a robust method for extracting textual overlays from
videos that builds up on multiple neural network architectures. The proposed
solution relies on several processing steps: keyframe extraction, text
detection and text recognition. The main component of our system, i.e. the text
recognition module, is inspired by a convolutional recurrent neural network
architecture, and we improve its performance using a synthetically generated
dataset of over 600,000 text images prepared by the authors specifically for
this task. We also develop a filtering method that reduces the amount of
overlapping text phrases using Levenshtein distance and further boosts the
system's performance. The final accuracy of our solution reaches over 80% and
is on par with state-of-the-art methods. Comment: International Conference on
Computer Vision and Graphics (ICCVG) 201
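The Levenshtein-based filtering step can be sketched as follows; the distance threshold and sample phrases are illustrative assumptions, not values from the paper. The idea is that per-keyframe OCR produces several near-identical reads of the same overlay, and edit distance merges them.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def dedupe_overlays(phrases, max_dist=2):
    """Keep a phrase only if it is not within max_dist edits of one
    already kept, merging near-duplicate OCR reads of the same overlay."""
    kept = []
    for p in phrases:
        if all(levenshtein(p, q) > max_dist for q in kept):
            kept.append(p)
    return kept
```

A production system would also weigh recognition confidence when choosing which duplicate to keep; this sketch simply keeps the first.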
Rotation-invariant features for multi-oriented text detection in natural images.
Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize only horizontal (or near-horizontal) texts. With the increasing popularity of mobile computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed to capture the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol that is more suitable for benchmarking algorithms for detecting texts of varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of varying orientations in complex natural scenes.
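One common way to make an orientation descriptor rotation-invariant, shown here as a toy stand-in for the paper's features rather than their actual design, is to circularly shift an orientation histogram so its dominant bin comes first; rotating all gradients by the same angle then leaves the descriptor unchanged, up to bin resolution.

```python
import numpy as np

def rotation_invariant_hist(angles, bins=8):
    """Histogram gradient orientations over [0, 2*pi), then circularly
    shift so the dominant bin is first. The result is invariant to a
    global rotation of all angles (up to bin quantisation)."""
    hist, _ = np.histogram(np.mod(angles, 2 * np.pi),
                           bins=bins, range=(0, 2 * np.pi))
    return np.roll(hist, -int(hist.argmax()))
```

Rotating the input angles by a whole number of bin widths produces an identical descriptor, which is the property that lets a classifier treat rotated text candidates uniformly.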