15,055 research outputs found
A Review on Intelligent Scene Text Recognition of Natural Images
This paper presents an algorithm for detecting and reading text in natural images. Scene text recognition has attracted considerable interest from the computer vision community in recent years. We propose a text recognition method that integrates structure-guided character detection for text appearing in natural surroundings. From the dataset, we manually label and extract the text regions. We then perform a statistical analysis of these regions to determine which image features are reliable indicators of text and have low entropy. Finally, we use a part-based tree structure to model each character category, so that characters are detected and recognized simultaneously.
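The screening step described above can be pictured with a small feature-entropy check. A minimal sketch, assuming labeled text regions are available; the feature names are illustrative, not from the paper:

```python
# Sketch of entropy-based feature screening: a feature whose response
# histogram over labeled text regions has low entropy is a more
# reliable text indicator. All names here are illustrative.
import numpy as np

def feature_entropy(responses: np.ndarray, bins: int = 32) -> float:
    """Shannon entropy (bits) of a feature's response histogram."""
    hist, _ = np.histogram(responses, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

# Example: rank features by how predictable their responses are
# over text regions (lower entropy = more reliable indicator).
rng = np.random.default_rng(0)
features = {
    "stroke_width": rng.normal(3.0, 0.2, 1000),   # tightly clustered -> low entropy
    "raw_intensity": rng.uniform(0, 255, 1000),   # spread out -> high entropy
}
ranked = sorted(features, key=lambda k: feature_entropy(features[k]))
print(ranked)  # ['stroke_width', 'raw_intensity']
```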
Symmetrical Linguistic Feature Distillation with CLIP for Scene Text Recognition
In this paper, we explore the potential of the Contrastive Language-Image
Pretraining (CLIP) model in scene text recognition (STR), and establish a novel
Symmetrical Linguistic Feature Distillation framework (named CLIP-OCR) to
leverage both visual and linguistic knowledge in CLIP. Different from previous
CLIP-based methods mainly considering feature generalization on visual
encoding, we propose a symmetrical distillation strategy (SDS) that further
captures the linguistic knowledge in the CLIP text encoder. By cascading the
CLIP image encoder with the reversed CLIP text encoder, a symmetrical structure
is built with an image-to-text feature flow that covers not only visual but
also linguistic information for distillation. Benefiting from the natural
alignment in CLIP, such guidance flow provides a progressive optimization
objective from vision to language, which can supervise the STR feature
forwarding process layer-by-layer. Besides, a new Linguistic Consistency Loss
(LCL) is proposed to enhance the linguistic capability by considering
second-order statistics during the optimization. Overall, CLIP-OCR is the first
to design a smooth transition between image and text for the STR task. Extensive
experiments demonstrate the effectiveness of CLIP-OCR with 93.8% average
accuracy on six popular STR benchmarks. Code will be available at
https://github.com/wzx99/CLIPOCR.
Comment: Accepted by ACM MM 202
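To make the distillation flow concrete, here is a minimal PyTorch-style sketch of the two ingredients named above: layer-by-layer guidance along the image-to-text flow, and a consistency term on second-order statistics. The module names, projection heads, and Gram-matrix form of the loss are assumptions for illustration, not the released CLIP-OCR code:

```python
# Sketch of symmetrical distillation: the CLIP image encoder cascaded
# with the reversed CLIP text encoder provides a guidance feature per
# layer, and each STR layer is pulled toward its matching guide.
import torch
import torch.nn.functional as F

def layerwise_distill(str_feats, guide_feats, proj_heads):
    """L2 distillation between STR features and CLIP guidance features,
    one (student layer, guide layer, projection head) triple at a time."""
    loss = 0.0
    for f_str, f_clip, proj in zip(str_feats, guide_feats, proj_heads):
        loss = loss + F.mse_loss(proj(f_str), f_clip.detach())
    return loss

def linguistic_consistency_loss(f_str, f_clip):
    """Sketch of an LCL-style term: match second-order statistics
    (token Gram matrices) instead of raw activations. Assumes the two
    feature maps share the same token count."""
    def gram(x):                       # x: (batch, tokens, dim)
        x = F.normalize(x, dim=-1)
        return x @ x.transpose(1, 2)   # (batch, tokens, tokens)
    return F.mse_loss(gram(f_str), gram(f_clip).detach())
```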
Image processing for the extraction of nutritional information from food labels
Current techniques for tracking nutritional data require undesirable amounts of either time or man-power. People must choose between tediously recording and updating dietary information or depending on unreliable crowd-sourced or costly maintained databases. Our project aims to overcome these pitfalls by providing a programming interface for image analysis that reads and reports the information present on a nutrition label directly. Our solution is a C++ library that combines image pre-processing, optical character recognition, and post-processing techniques to pull the relevant information from an image of a nutrition label. We apply an understanding of a nutrition label's content and data organization to approach the accuracy of traditional data-entry methods. Our system currently provides around 80% accuracy for most label images, and we will continue to work to improve it.
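A minimal Python sketch of the pipeline stages described above (the project itself is a C++ library; OpenCV, pytesseract, and the label regex here are illustrative stand-ins, not the project's API):

```python
# Sketch of the pre-process -> OCR -> post-process pipeline for a
# nutrition-label image. The regex encodes the typical row layout
# ("Total Fat 8g", "Sodium 160 mg") and is illustrative only.
import re
import cv2
import pytesseract

def read_label(path: str) -> dict:
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Binarize to sharpen the label's high-contrast print before OCR.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary)

    # Post-process: pull "name value unit" rows out of the raw OCR text.
    row = re.compile(r"^([A-Za-z ]+?)\s+(\d+(?:\.\d+)?)\s*(g|mg|mcg|%)?", re.M)
    return {name.strip(): (float(value), unit or "")
            for name, value, unit in row.findall(text)}

# print(read_label("label.jpg"))
```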
Advancing Visual Grounding with Scene Knowledge: Benchmark and Method
Visual grounding (VG) aims to establish fine-grained alignment between vision
and language. Ideally, it can be a testbed for vision-and-language models to
evaluate their understanding of the images and texts and their reasoning
abilities over their joint space. However, most existing VG datasets are
constructed using simple description texts, which do not require sufficient
reasoning over the images and texts. This has been demonstrated in a recent
study [luo2022goes], where a simple LSTM-based text encoder without
pretraining can achieve state-of-the-art performance on mainstream VG datasets.
Therefore, in this paper, we propose a novel benchmark of Scene Knowledge-guided Visual Grounding (SK-VG),
where the image content and referring expressions are not sufficient to ground
the target objects, forcing the models to have a reasoning ability on the
long-form scene knowledge. To perform this task, we propose two approaches that accept the triple-type input: the first embeds knowledge into the image features before the image-query interaction, while the second leverages linguistic structure to assist in computing the image-text matching. We conduct extensive
experiments to analyze the above methods and show that the proposed approaches
achieve promising results but still leave room for improvement, including
performance and interpretability. The dataset and code are available at
https://github.com/zhjohnchan/SK-VG.
Comment: Computer Vision and Natural Language Processing. 21 pages, 14 figures. CVPR-202
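As one way to picture the first approach (injecting scene knowledge into the image features before the image-query interaction), here is a minimal PyTorch sketch; the cross-attention design and dimensions are assumptions for illustration, not the SK-VG implementation:

```python
# Sketch of knowledge injection: image patch features attend over the
# encoded long-form scene knowledge, producing knowledge-conditioned
# image features for the downstream grounding step.
import torch
import torch.nn as nn

class KnowledgeInjector(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_feats, know_feats):
        # img_feats: (B, patches, dim); know_feats: (B, tokens, dim)
        fused, _ = self.attn(query=img_feats, key=know_feats, value=know_feats)
        return self.norm(img_feats + fused)

# The grounding model then matches these fused features against the
# referring expression as usual.
```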
Context Perception Parallel Decoder for Scene Text Recognition
Scene text recognition (STR) methods have struggled to attain high accuracy
and fast inference speed. Autoregressive (AR)-based STR models use the previously recognized characters to decode the next character iteratively, which gives them superior accuracy; however, this same iteration also makes their inference slow. Alternatively, parallel decoding (PD)-based STR models infer all the characters in a single decoding pass. They have an advantage in inference speed but worse accuracy, as it is difficult to build a robust recognition context in such a pass. In this paper, we first present an
empirical study of AR decoding in STR. In addition to constructing a new AR
model with top accuracy, we find that the success of the AR decoder lies
also in providing guidance on visual context perception rather than language
modeling as claimed in existing studies. As a consequence, we propose Context
Perception Parallel Decoder (CPPD) to decode the character sequence in a single
PD pass. CPPD devises a character counting module and a character ordering
module. Given a text instance, the former infers the occurrence count of each
character, while the latter deduces the character reading order and
placeholders. Together with the character prediction task, they construct a
context that robustly tells what the character sequence is and where the
characters appear, well mimicking the context conveyed by AR decoding.
Experiments on both English and Chinese benchmarks demonstrate that CPPD models
achieve highly competitive accuracy. Moreover, they run approximately 7x faster
than their AR counterparts, and are also among the fastest recognizers. The
code will be released soon.
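A minimal PyTorch-style sketch of how the two context modules described above could sit next to the character prediction head; all shapes and layer choices are illustrative assumptions rather than the CPPD release:

```python
# Sketch of the CPPD-style context: a counting head predicts how often
# each character occurs, while learned placeholder queries attend over
# the visual features to recover reading order; character logits are
# then produced for all positions in one parallel pass.
import torch
import torch.nn as nn

class ContextHeads(nn.Module):
    def __init__(self, dim=256, charset=97, max_len=25):
        super().__init__()
        self.count_head = nn.Linear(dim, charset)       # per-character occurrence counts
        self.order_queries = nn.Embedding(max_len, dim)  # one placeholder per position
        self.order_attn = nn.MultiheadAttention(dim, 8, batch_first=True)
        self.char_head = nn.Linear(dim, charset + 1)     # +1 for end-of-sequence

    def forward(self, vis):                              # vis: (B, patches, dim)
        counts = self.count_head(vis.mean(dim=1))        # (B, charset)
        q = self.order_queries.weight.unsqueeze(0).expand(vis.size(0), -1, -1)
        ordered, _ = self.order_attn(q, vis, vis)        # placeholders attend to vision
        logits = self.char_head(ordered)                 # all characters in one pass
        return counts, logits
```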
Towards Unseen Triples: Effective Text-Image-joint Learning for Scene Graph Generation
Scene Graph Generation (SGG) aims to structurally and comprehensively
represent objects and their connections in images, which can significantly benefit
scene understanding and other related downstream tasks. Existing SGG models
often struggle to solve the long-tailed problem caused by biased datasets.
However, even if these models fit specific datasets well, they may struggle to resolve unseen triples that are not included in the training set. Most methods tend to feed a whole triple and learn its overall features via statistical machine learning; such models have difficulty predicting unseen triples because the objects and predicates from the training set are combined differently into novel triples in the test set. In this work, we propose
a Text-Image-joint Scene Graph Generation (TISGG) model to resolve the unseen
triples and improve the generalisation capability of the SGG models. We propose
a Joint Feature Learning (JFL) module and a Factual Knowledge based Refinement
(FKR) module to learn object and predicate categories separately at the feature
level and align them with corresponding visual features so that the model is no
longer limited to triples matching. Besides, since we observe the long-tailed
problem also affects the generalization ability, we design a novel balanced
learning strategy, including a Character Guided Sampling (CGS) and an
Informative Re-weighting (IR) module, to provide tailor-made learning methods
for each predicate according to their characteristics. Extensive experiments show
that our model achieves state-of-the-art performance. In more detail, TISGG
boosts performance by 11.7% in zR@20 (zero-shot recall) on the PredCls sub-task of the Visual Genome dataset.
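To illustrate the balanced-learning direction in isolation, here is a minimal sketch of a per-predicate re-weighted loss; the inverse-frequency scheme is an illustrative stand-in for the paper's Informative Re-weighting module, not its actual formulation:

```python
# Sketch of predicate re-weighting: rare predicates get larger loss
# weights so the long tail is not drowned out by the head classes.
import torch
import torch.nn.functional as F

def reweighted_predicate_loss(logits, targets, predicate_freq):
    """logits: (N, P); targets: (N,); predicate_freq: (P,) training counts."""
    weights = 1.0 / predicate_freq.clamp(min=1).float()   # inverse frequency
    weights = weights / weights.sum() * len(predicate_freq)  # mean weight ~= 1
    return F.cross_entropy(logits, targets, weight=weights)
```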
UniDoc: A Universal Large Multimodal Model for Simultaneous Text Detection, Recognition, Spotting and Understanding
In the era of Large Language Models (LLMs), tremendous strides have been made
in the field of multimodal understanding. However, existing advanced algorithms
are limited in effectively utilizing the immense representation capabilities
and rich world knowledge inherent to these large pre-trained models, and the
beneficial connections among tasks within the context of text-rich scenarios
have not been sufficiently explored. In this work, we introduce UniDoc, a novel
multimodal model equipped with text detection and recognition capabilities,
which are deficient in existing approaches. Moreover, UniDoc capitalizes on the
beneficial interactions among tasks to enhance the performance of each
individual task. To implement UniDoc, we perform unified multimodal instruction tuning on the contributed large-scale instruction-following datasets.
Quantitative and qualitative experimental results show that UniDoc sets
state-of-the-art scores across multiple challenging benchmarks. To the best of
our knowledge, this is the first large multimodal model capable of simultaneous
text detection, recognition, spotting, and understanding.
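At the data level, unified multimodal instruction tuning amounts to casting every task as an instruction/response pair over the same image. A minimal sketch of what such samples could look like; the prompt wording and layout here are assumptions, not the paper's released format:

```python
# Sketch of instruction-tuning samples that unify detection,
# recognition, and understanding over one image; a single model is
# tuned on all such pairs so the tasks share representations.
samples = [
    {"image": "menu.jpg",
     "instruction": "Detect all text regions and return their boxes.",
     "response": "[[12, 40, 180, 64], [12, 80, 150, 102]]"},
    {"image": "menu.jpg",
     "instruction": "Recognize the text in the box [12, 40, 180, 64].",
     "response": "Daily Specials"},
    {"image": "menu.jpg",
     "instruction": "What kind of document is this?",
     "response": "A restaurant menu."},
]
```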
Survey of the State of the Art in Natural Language Generation: Core tasks, applications and evaluation
This paper surveys the current state of the art in Natural Language
Generation (NLG), defined as the task of generating text or speech from
non-linguistic input. A survey of NLG is timely in view of the changes that the
field has undergone over the past decade or so, especially in relation to new
(usually data-driven) methods, as well as new applications of NLG technology.
This survey therefore aims to (a) give an up-to-date synthesis of research on
the core tasks in NLG and the architectures within which such tasks are organised; (b) highlight a number of relatively recent research topics that
have arisen partly as a result of growing synergies between NLG and other areas
of artificial intelligence; (c) draw attention to the challenges in NLG
evaluation, relating them to similar challenges faced in other areas of Natural
Language Processing, with an emphasis on different evaluation methods and the
relationships between them.
Comment: Published in Journal of AI Research (JAIR), volume 61, pp. 75-170. 118 pages, 8 figures, 1 table.