Feature Enhancement Network: A Refined Scene Text Detector
In this paper, we propose a refined scene text detector with a \textit{novel}
Feature Enhancement Network (FEN) for Region Proposal and Text Detection
Refinement. In retrospect, both region proposal with \textit{only} the
sliding-window feature and text detection refinement with a \textit{single-scale}
high-level feature are insufficient, especially for smaller scene text.
Therefore, we design a new FEN with \textit{task-specific} fusion of
\textit{low}- and \textit{high}-level semantic features to improve the
performance of text detection. Besides, since the \textit{unitary}
position-sensitive RoI pooling used in general object detection is ill-suited
to text regions of variable size, we devise an \textit{adaptively weighted}
position-sensitive RoI pooling layer to further enhance detection accuracy. To
tackle the \textit{sample-imbalance} problem during the refinement stage, we
also propose an effective \textit{positives-mining} strategy for efficiently
training our network. Experiments on ICDAR 2011 and 2013 robust text detection
benchmarks demonstrate that our method can achieve state-of-the-art results,
outperforming all previously reported methods in terms of F-measure.
Comment: 8 pages, 5 figures, 2 tables. This paper is accepted to appear in
AAAI 2018.
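The abstract does not give the exact formulation of the adaptively weighted position-sensitive RoI pooling, but the core idea, pooling score maps from several feature levels and blending the results with learned, normalized weights, can be sketched roughly as below. This is a minimal PyTorch sketch under stated assumptions: the module name, the softmax weighting, and the use of adaptive average pooling in place of true position-sensitive binning are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWeightedRoIPool(nn.Module):
    """Hypothetical sketch: pool score maps from several feature levels and
    blend them with learned, softmax-normalized per-level weights."""
    def __init__(self, num_levels=2, pooled_size=7):
        super().__init__()
        self.pooled_size = pooled_size
        # one learnable logit per feature level; softmax keeps weights normalized
        self.level_logits = nn.Parameter(torch.zeros(num_levels))

    def forward(self, score_maps, roi):
        # score_maps: list of (C, H, W) tensors, one per feature level
        # roi: (x1, y1, x2, y2) in feature-map coordinates (integers, simplified)
        weights = F.softmax(self.level_logits, dim=0)
        x1, y1, x2, y2 = roi
        pooled = 0.0
        for w, maps in zip(weights, score_maps):
            crop = maps[:, y1:y2 + 1, x1:x2 + 1]
            # adaptive average pooling stands in for position-sensitive binning
            pooled = pooled + w * F.adaptive_avg_pool2d(crop, self.pooled_size)
        return pooled  # (C, pooled_size, pooled_size)

# usage: blend low- and high-level score maps for one RoI
pool = AdaptiveWeightedRoIPool(num_levels=2)
low = torch.randn(490, 64, 64)   # e.g. 7*7*10 position-sensitive channels
high = torch.randn(490, 64, 64)
out = pool([low, high], roi=(4, 4, 35, 35))  # -> torch.Size([490, 7, 7])
```

Normalizing the per-level weights with a softmax keeps the blended pooling stable regardless of how many feature levels are fused, which is one plausible way to make the weighting "adaptive" to variable text regions.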
EnsNet: Ensconce Text in the Wild
A new method is proposed for removing text from natural images. The challenge
is to first accurately localize text at the stroke level and then replace it
with a visually plausible background. Unlike previous methods that require
image patches to erase scene text, our method, namely ensconce network
(EnsNet), can operate end-to-end on a single image without any prior knowledge.
The overall structure is an end-to-end trainable FCN-ResNet-18 network with a
conditional generative adversarial network (cGAN). The features of the former are
first enhanced by a novel lateral connection structure and then refined by four
carefully designed losses: a multiscale regression loss and a content loss, which
capture the global discrepancy between different-level features, and a texture
loss and a total variation loss, which primarily target filling the text regions
and preserving the realism of the background. The latter is a novel local-sensitive
GAN, which attentively assesses the local consistency of the text-erased
regions. Both qualitative and quantitative sensitivity experiments on synthetic
images and the ICDAR 2013 dataset demonstrate that each component of EnsNet
is essential to achieving good performance. Moreover, our EnsNet can
significantly outperform previous state-of-the-art methods in terms of all
metrics. In addition, a qualitative experiment conducted on the SMBNet dataset
further demonstrates that the proposed method can also perform well on general
object removal tasks (such as pedestrians). EnsNet is extremely fast, running
at 333 fps on an i5-8600 CPU device.
Comment: 8 pages, 8 figures, 2 tables, accepted to appear in AAAI 2019.
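The abstract names the four generator-side losses but not their definitions or weights. As a rough illustration only, a standard total variation term and a weighted combination of the four losses might look like the following PyTorch sketch; the Gram-matrix form of the texture loss, the single-scale regression term, the feature source (e.g. a frozen VGG), and all coefficients are assumptions, not the paper's values.

```python
import torch
import torch.nn.functional as F

def gram(feat):
    """Gram matrix of a feature map, a common basis for texture/style losses."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_variation_loss(img):
    """Standard TV smoothness prior over the text-erased output (B, C, H, W)."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def generator_loss(pred, target, feats_pred, feats_gt,
                   lambdas=(1.0, 0.05, 120.0, 0.1)):
    """Hypothetical weighted sum of the four losses named in the abstract.
    feats_* are lists of intermediate feature maps; the regression term is
    single-scale here for brevity, and the coefficients are illustrative."""
    l_reg = F.l1_loss(pred, target)                       # regression loss
    l_content = sum(F.l1_loss(p, g)                       # content loss
                    for p, g in zip(feats_pred, feats_gt))
    l_texture = sum(F.mse_loss(gram(p), gram(g))          # texture loss
                    for p, g in zip(feats_pred, feats_gt))
    l_tv = total_variation_loss(pred)                     # total variation loss
    w0, w1, w2, w3 = lambdas
    return w0 * l_reg + w1 * l_content + w2 * l_texture + w3 * l_tv
```

The division of labor mirrors the abstract: the regression and content terms penalize global discrepancies at the pixel and feature levels, while the texture and TV terms push the filled-in regions toward locally plausible, smooth backgrounds.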