205 research outputs found
Generating Text Sequence Images for Recognition
Recently, methods based on deep learning have dominated the field of text
recognition. Given a large amount of training data, most of them achieve
state-of-the-art performance. However, it is hard to harvest and label
sufficient text sequence images from real scenes. To mitigate this issue,
several methods for synthesizing text sequence images have been proposed, yet
they usually need complicated preceding or follow-up steps. In this work, we
present a method that can generate unlimited training data without any
auxiliary pre- or post-processing. We cast the generation task as
image-to-image translation and use conditional adversarial networks to produce
realistic text sequence images from semantic ones. Several evaluation metrics
are used to assess our method, and the results demonstrate that the quality of
the generated data is satisfactory. The code and dataset will be made publicly
available soon.
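The abstract frames generation as pix2pix-style image-to-image translation, where the discriminator judges (condition, output) pairs. A minimal numpy sketch of that conditioning step, with illustrative shapes and names not taken from the paper:

```python
import numpy as np

def make_discriminator_input(semantic, rendered):
    """Concatenate the semantic (condition) image and the generated text
    image along the channel axis, as in pix2pix-style conditional GANs.
    Both arrays are HxWxC float images. Hypothetical helper, not the
    paper's actual code."""
    assert semantic.shape[:2] == rendered.shape[:2]
    return np.concatenate([semantic, rendered], axis=-1)

# A 32x128 text-line canvas: 3-channel semantic layout + 3-channel output.
semantic = np.zeros((32, 128, 3), dtype=np.float32)
rendered = np.ones((32, 128, 3), dtype=np.float32)
pair = make_discriminator_input(semantic, rendered)
print(pair.shape)  # (32, 128, 6)
```

The real discriminator would consume such 6-channel pairs and score them as real or fake; only the pairing scheme is shown here.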
Fused Text Segmentation Networks for Multi-oriented Scene Text Detection
In this paper, we introduce a novel end-to-end framework for multi-oriented
scene text detection from an instance-aware semantic segmentation perspective.
We present Fused Text Segmentation Networks, which combine multi-level features
during feature extraction, as text instances may rely on finer feature
expression than general objects. The network detects and segments text
instances jointly and simultaneously, leveraging merits of both the semantic
segmentation task and region-proposal-based object detection. Without any extra
pipelines, our approach surpasses the current state of the art on
multi-oriented scene text detection benchmarks, reaching an Hmean of 84.1% on
ICDAR2015 Incidental Scene Text and 82.0% on MSRA-TD500. Moreover, we report a
baseline on Total-Text, which contains curved text, suggesting the
effectiveness of the proposed approach.
Comment: Accepted by ICPR201
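The abstract says multi-level features are combined during extraction so that text benefits from finer feature expression. One common fusion scheme is upsampling a coarse map and concatenating it with a finer one; the sketch below (illustrative only, the exact FTSN operators are not given in the abstract) shows that in numpy:

```python
import numpy as np

def fuse_features(fine, coarse):
    """Fuse a coarse (low-resolution) feature map with a finer one by
    nearest-neighbor upsampling followed by channel concatenation.
    One plausible fusion scheme, not the paper's verified design."""
    fh, fw = fine.shape[:2]
    ch, cw = coarse.shape[:2]
    up = coarse.repeat(fh // ch, axis=0).repeat(fw // cw, axis=1)
    return np.concatenate([fine, up], axis=-1)

fine = np.random.rand(64, 64, 128)    # e.g. stride-4 backbone features
coarse = np.random.rand(16, 16, 256)  # e.g. stride-16 backbone features
fused = fuse_features(fine, coarse)
print(fused.shape)  # (64, 64, 384)
```

The fused map keeps the fine spatial resolution while carrying the coarse map's higher-level channels, which is what lets small text instances benefit from both.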
MTRNet: A Generic Scene Text Eraser
Text removal algorithms have been proposed for uni-lingual scripts with
regular shapes and layouts. However, to the best of our knowledge, no generic
text removal method is available that can remove all or user-specified text
regions regardless of font, script, language, or shape. Developing such a
generic text eraser for real scenes is challenging, since it inherits all the
difficulties of multi-lingual and curved text detection and of inpainting. To
fill this gap, we propose a mask-based text removal network (MTRNet). MTRNet is
a conditional generative adversarial network (cGAN) with an auxiliary mask. The
introduced auxiliary mask not only makes the cGAN a generic text eraser, but
also enables stable training and early convergence on a challenging
large-scale synthetic dataset initially proposed for text detection in real
scenes. Moreover, MTRNet achieves state-of-the-art results on several
real-world datasets, including ICDAR 2013, ICDAR 2017 MLT, and CTW1500,
without being explicitly trained on these data, outperforming previous
state-of-the-art methods trained directly on these datasets.
Comment: Presented at the ICDAR2019 Conference
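The auxiliary mask is what makes MTRNet generic: the user (or a detector) marks which regions to erase, and the generator is conditioned on that mask. A guess at the conditioning scheme, sketched in numpy with hypothetical names, since the abstract does not spell out the input format:

```python
import numpy as np

def make_mtrnet_input(image, mask):
    """Stack a binary text mask as an extra channel on the input image,
    so the generator knows which regions to erase. Hypothetical helper
    illustrating mask conditioning; the paper's exact scheme may differ."""
    assert image.shape[:2] == mask.shape
    return np.concatenate([image, mask[..., None].astype(image.dtype)], axis=-1)

image = np.random.rand(64, 64, 3).astype(np.float32)
mask = np.zeros((64, 64), dtype=np.float32)
mask[20:40, 10:50] = 1.0          # user-specified text region to remove
x = make_mtrnet_input(image, mask)
print(x.shape)  # (64, 64, 4)
```

Because the erasure target is carried by the mask channel rather than learned per script, the same network can, in principle, handle any font, language, or shape the mask covers.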
COCO_TS Dataset: Pixel-level Annotations Based on Weak Supervision for Scene Text Segmentation
The absence of large-scale datasets with pixel-level supervision is a
significant obstacle to training deep convolutional networks for scene
text segmentation. For this reason, synthetic data generation is normally
employed to enlarge the training set. Nonetheless, synthetic data cannot
reproduce the complexity and variability of natural images. In this paper, a
weakly supervised learning approach is used to reduce the shift between
training on real and synthetic data. Pixel-level supervisions are generated
for a text detection dataset (i.e. one where only bounding-box annotations are
available). In particular, the COCO-Text-Segmentation (COCO_TS) dataset, which
provides pixel-level supervisions for the COCO-Text dataset, is created and
released. The generated annotations are used to train a deep convolutional
neural network for semantic segmentation. Experiments show that the proposed
dataset can be used instead of synthetic data, allowing us to use only a
fraction of the training samples while significantly improving performance.
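The starting point for this kind of weak supervision is turning box-level annotations into a coarse pixel mask, which a segmentation network then refines. A minimal numpy sketch of that first step (the COCO_TS refinement pipeline itself is more involved; names here are illustrative):

```python
import numpy as np

def boxes_to_mask(h, w, boxes):
    """Rasterize axis-aligned bounding boxes (x0, y0, x1, y1) into a
    coarse pixel-level text mask. Only the trivial box-to-mask step is
    shown; the actual COCO_TS annotations come from a weakly supervised
    segmentation network, not from this rasterization alone."""
    mask = np.zeros((h, w), dtype=np.uint8)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = 1
    return mask

mask = boxes_to_mask(100, 200, [(10, 20, 60, 40), (80, 50, 150, 70)])
print(mask.sum())  # 50*20 + 70*20 = 2400
```

Such coarse masks over-cover the actual glyphs, which is exactly the gap the weakly supervised refinement is meant to close.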
Cascaded Segmentation-Detection Networks for Word-Level Text Spotting
We introduce an algorithm for word-level text spotting that is able to
accurately and reliably determine the bounding regions of individual words of
text "in the wild". Our system is formed by a cascade of two convolutional
neural networks. The first network is fully convolutional and is in charge of
detecting areas containing text. This results in a very reliable but possibly
inaccurate segmentation of the input image. The second network (inspired by the
popular YOLO architecture) analyzes each segment produced in the first stage
and predicts oriented rectangular regions containing individual words. No
post-processing (e.g. text line grouping) is necessary. With an execution time
of 450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the
highest score to date among published algorithms on the ICDAR 2015 Incidental
Scene Text benchmark.
Comment: 7 pages, 8 figures
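The control flow of such a cascade is simple: stage one proposes coarse text areas, stage two runs on each cropped area. A skeleton of that flow with stubbed-out networks (all functions here are hypothetical placeholders, not the paper's models):

```python
import numpy as np

def stage1_segment(image):
    """Stub for the fully convolutional stage: returns coarse candidate
    areas (x0, y0, x1, y1) thought to contain text."""
    return [(0, 0, 60, 30), (70, 10, 140, 40)]

def stage2_detect(crop):
    """Stub for the YOLO-inspired stage: would predict oriented word
    boxes inside one candidate crop; here it just records the crop size."""
    return [{"w": crop.shape[1], "h": crop.shape[0], "angle": 0.0}]

def spot_words(image):
    """Cascade: segment coarse text areas, then detect words per area,
    with no cross-area post-processing such as line grouping."""
    words = []
    for x0, y0, x1, y1 in stage1_segment(image):
        words.extend(stage2_detect(image[y0:y1, x0:x1]))
    return words

words = spot_words(np.zeros((100, 200, 3), dtype=np.float32))
print(len(words))  # 2
```

The point of the structure is that stage one only needs high recall; precision is recovered per crop by stage two, so no global post-processing is required.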