Dynamic Low-Resolution Distillation for Cost-Efficient End-to-End Text Spotting
End-to-end text spotting has attracted great attention recently due to its
benefits of global optimization and high maintainability in real applications.
However, the input scale has always been a tough trade-off since recognizing a
small text instance usually requires enlarging the whole image, which brings
high computational costs. In this paper, to address this problem, we propose a
novel cost-efficient Dynamic Low-resolution Distillation (DLD) text spotting
framework, which aims to infer images in different small but recognizable
resolutions and achieve a better balance between accuracy and efficiency.
Concretely, we adopt a resolution selector that dynamically decides the input
resolution for each image, constrained by both inference accuracy and
computational cost. In addition, a sequential knowledge distillation strategy
is applied to the text recognition branch, enabling the low-resolution input
to achieve performance comparable to that of a high-resolution image. The
proposed method can be optimized end-to-end and adopted by any current text
spotting framework to improve its practicality. Extensive experiments on
several text spotting
benchmarks show that the proposed method vastly improves the usability of
low-res models. The code is available at
https://github.com/hikopensource/DAVAR-Lab-OCR/.
Comment: Accepted by ECCV202
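The sequential knowledge distillation described above can be illustrated with a minimal numpy sketch. The exact objective is not given in the abstract; this hypothetical version simply averages, over every decoding step of the recognition branch, the KL divergence between the teacher's (high-resolution) and student's (low-resolution) character distributions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sequential_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Hypothetical sketch of a sequential KD objective: average the KL
    divergence between teacher and student character distributions over
    every time step of the recognition sequence.
    Logit arrays have shape (..., seq_len, vocab_size)."""
    p_t = softmax(teacher_logits / temperature)
    p_s = softmax(student_logits / temperature)
    kl = (p_t * (np.log(p_t + 1e-9) - np.log(p_s + 1e-9))).sum(axis=-1)
    return float(kl.mean())
```

The temperature parameter and the per-step averaging are common KD conventions, assumed here rather than taken from the paper.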
Aggregated Text Transformer for Scene Text Detection
This paper explores a multi-scale aggregation strategy for scene text
detection in natural images. We present the Aggregated Text TRansformer (ATTR),
which is designed to represent texts in scene images with a multi-scale
self-attention mechanism. Starting from the image pyramid with multiple
resolutions, the features are first extracted at different scales with shared
weights and then fed into an encoder-decoder Transformer architecture. The
multi-scale image representations are robust and contain rich information on
text contents of various sizes. The text Transformer aggregates these features
to learn the interaction across different scales and improve text
representation. The proposed method detects scene texts by representing each
text instance as an individual binary mask, which is tolerant of curved texts
and regions with dense instances. Extensive experiments on public scene text
detection datasets demonstrate the effectiveness of the proposed framework.
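The shared-weight pyramid extraction described above can be sketched in a few lines of numpy. The projection, pyramid factors, and token layout below are illustrative assumptions, not the paper's architecture; the point is that the same weights are applied at every resolution and the per-scale features are stacked into one token sequence for the transformer to aggregate:

```python
import numpy as np

def downsample(img, factor):
    # Average-pool a grayscale image by an integer factor (one pyramid level).
    h, w = img.shape[0] // factor, img.shape[1] // factor
    return img[:h * factor, :w * factor].reshape(h, factor, w, factor).mean(axis=(1, 3))

def multiscale_tokens(img, weights, factors=(1, 2, 4)):
    """Hypothetical sketch: extract features at each pyramid level with the
    SAME projection weights, then stack all levels as one token sequence
    that a transformer encoder can attend over across scales."""
    tokens = []
    for f in factors:
        level = downsample(img, f)
        feats = level.reshape(-1, 1) @ weights  # shared-weight projection per pixel
        tokens.append(feats)
    return np.concatenate(tokens, axis=0)  # (sum of pixels over levels, feat_dim)
```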
MANGO: A Mask Attention Guided One-Stage Scene Text Spotter
Recently end-to-end scene text spotting has become a popular research topic
due to its advantages of global optimization and high maintainability in real
applications. Most methods attempt to develop various region of interest (RoI)
operations to concatenate the detection part and the sequence recognition part
into a two-stage text spotting framework. However, in such a framework, the
recognition part is highly sensitive to the detection results (e.g., the
compactness of text contours). To address this problem, in this paper, we
propose a novel Mask AttentioN Guided One-stage text spotting framework named
MANGO, in which character sequences can be directly recognized without RoI
operation. Concretely, a position-aware mask attention module is developed to
generate attention weights on each text instance and its characters. It allows
different text instances in an image to be allocated on different feature map
channels which are further grouped as a batch of instance features. Finally, a
lightweight sequence decoder is applied to generate the character sequences. It
is worth noting that MANGO inherently adapts to arbitrary-shaped text spotting
and can be trained end-to-end with only coarse position information (e.g.,
rectangular bounding boxes) and text annotations. Experimental results show that
the proposed method achieves competitive and even new state-of-the-art
performance on both regular and irregular text spotting benchmarks, i.e., ICDAR
2013, ICDAR 2015, Total-Text, and SCUT-CTW1500.
Comment: Accepted to AAAI2021. Code is available at
https://davar-lab.github.io/publication.html or
https://github.com/hikopensource/DAVAR-Lab-OC
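The mask-attention pooling idea above — allocating each text instance to its own attention map and grouping the results as a batch of instance features — can be sketched as follows. This is a simplified stand-in for the paper's position-aware module, assuming one soft attention map per instance:

```python
import numpy as np

def mask_attention_pool(feature_map, attention_maps):
    """Hypothetical sketch of mask-attention-guided instance pooling:
    each of K attention maps softly selects one text instance, pooling
    the shared feature map (C, H, W) into K instance feature vectors
    that a lightweight sequence decoder could consume."""
    K = attention_maps.shape[0]
    C = feature_map.shape[0]
    inst = np.zeros((K, C))
    for k in range(K):
        w = attention_maps[k] / (attention_maps[k].sum() + 1e-9)
        inst[k] = (feature_map * w).sum(axis=(1, 2))  # attention-weighted average per channel
    return inst  # (K, C): one feature vector per text instance
```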
PBFormer: Capturing Complex Scene Text Shape with Polynomial Band Transformer
We present PBFormer, an efficient yet powerful scene text detector that
unifies the transformer with a novel text shape representation Polynomial Band
(PB). The representation has four polynomial curves to fit a text's top,
bottom, left, and right sides, which can capture a text with a complex shape by
varying polynomial coefficients. PB has appealing features compared with
conventional representations: 1) It can model different curvatures with a fixed
number of parameters, while polygon-points-based methods need to utilize a
different number of points. 2) It can distinguish adjacent or overlapping texts
since they have clearly different curve coefficients, whereas segmentation-based
or points-based methods suffer from adhesion at close spatial positions. PBFormer combines
the PB with the transformer, which can directly generate smooth text contours
sampled from predicted curves without interpolation. A parameter-free
cross-scale pixel attention (CPA) module is employed to highlight the feature
map of a suitable scale while suppressing the other feature maps. This simple
operation helps detect small-scale texts and is compatible with the one-stage
DETR framework, which requires no NMS post-processing. Furthermore,
PBFormer is trained with a shape-contained loss, which not only enforces the
piecewise alignment between the ground truth and the predicted curves but also
makes curves' positions and shapes consistent with each other. Without bells
and whistles such as text pre-training, our method outperforms the previous
state-of-the-art text detectors on arbitrary-shaped text datasets.
Comment: 9 pages, 8 figures, accepted by ACM MM 202
Automating the construction of scene classifiers for content-based video retrieval
This paper introduces a real-time automatic scene classifier within content-based video retrieval. In our envisioned approach, end users such as documentalists, not image processing experts, build classifiers interactively, simply by indicating positive examples of a scene. Classification consists of a two-stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification (e.g., city, portraits, or countryside). The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, an alternative to letting an image processing expert determine features a priori. We present results for experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.
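The two-stage procedure above can be sketched compactly: stage one assigns a class to every patch, stage two classifies the scene from the normalized frequency vector of those patch labels. The nearest-prototype second stage below is a stand-in assumption for whatever learned classifier the paper actually uses:

```python
import numpy as np

def patch_histogram(patch_labels, n_classes):
    """Stage two's input: the normalized frequency vector of the patch
    class labels produced by the stage-one patch classifiers."""
    hist = np.bincount(patch_labels, minlength=n_classes).astype(float)
    return hist / hist.sum()

def classify_scene(freq_vec, prototypes):
    # Nearest-prototype global classifier over frequency vectors
    # (an illustrative stand-in for the learned second-stage classifier).
    dists = np.linalg.norm(prototypes - freq_vec, axis=1)
    return int(np.argmin(dists))
```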