Compressed Video Action Recognition
Training robust deep video representations has proven to be much more
challenging than learning deep image representations. This is in part due to
the enormous size of raw video streams and the high temporal redundancy; the
true and interesting signal is often drowned in too much irrelevant data.
Motivated by the observation that video compression (using H.264, HEVC, etc.)
can reduce this superfluous information by up to two orders of magnitude, we
propose to train a deep network directly on the compressed video.
This representation has a higher information density, and we found the
training to be easier. In addition, the signals in a compressed video provide
free, albeit noisy, motion information. We propose novel techniques to use them
effectively. Our approach is about 4.6 times faster than Res3D and 2.7 times
faster than ResNet-152. On the task of action recognition, our approach
outperforms all the other methods on the UCF-101, HMDB-51, and Charades
datasets.
Comment: CVPR 2018 (selected for spotlight presentation).
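Conceptually, a compressed stream stores a few full I-frames plus, for each P-frame, only motion vectors and a residual image; these signals are what the network consumes directly. A minimal numpy sketch of that decomposition (a single global per-frame motion vector and tiny 8x8 frames are simplifying assumptions here, not the codec's block-level scheme):

```python
import numpy as np

def decode_p_frame(prev_frame, motion, residual):
    """Reconstruct a P-frame from the previous frame, a (dy, dx) motion
    vector, and a residual image (toy model of H.264-style inter coding)."""
    predicted = np.roll(prev_frame, shift=motion, axis=(0, 1))
    return predicted + residual

rng = np.random.default_rng(0)
i_frame = rng.integers(0, 255, size=(8, 8)).astype(np.int16)

# Second frame is the first shifted by (1, 2): high temporal redundancy,
# almost no genuinely new signal.
frame_1 = np.roll(i_frame, shift=(1, 2), axis=(0, 1))

# "Encode" frame 1 as just a motion vector plus a residual.
motion = (1, 2)
residual = frame_1 - np.roll(i_frame, shift=motion, axis=(0, 1))

reconstructed = decode_p_frame(i_frame, motion, residual)
assert np.array_equal(reconstructed, frame_1)
print("non-zero residual entries:", np.count_nonzero(residual))  # prints 0
```

When motion compensation is accurate, the residual is near-zero, which is the redundancy the abstract argues a network should not have to re-learn from raw RGB.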
Multiple-Question Multiple-Answer Text-VQA
We present Multiple-Question Multiple-Answer (MQMA), a novel approach to
text-VQA in encoder-decoder transformer models. The text-VQA task requires a
model to answer a question by understanding multi-modal content: text
(typically from OCR) and an associated image. To the best of our knowledge,
almost all previous approaches for text-VQA process a single question and its
associated content to predict a single answer. In order to answer multiple
questions from the same image, each question and content are fed into the model
multiple times. In contrast, our proposed MQMA approach takes multiple
questions and content as input at the encoder and predicts multiple answers at
the decoder in an auto-regressive manner at the same time. We make several
novel architectural modifications to standard encoder-decoder transformers to
support MQMA. We also propose a novel MQMA denoising pre-training task which is
designed to teach the model to align and delineate multiple questions and
content with associated answers. The MQMA pre-trained model achieves
state-of-the-art results on multiple text-VQA datasets, each with strong
baselines: absolute improvements over the previous state-of-the-art approaches
of +2.5% on OCR-VQA, +1.4% on TextVQA, +0.6% on ST-VQA, and +1.1% on DocVQA.
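The packing idea can be illustrated without a transformer: concatenate all questions with the shared content into one delimited encoder input, and split the single decoded string back into per-question answers. The delimiter tokens (`<q1>`, `<a1>`, ...) below are illustrative assumptions, not the paper's actual vocabulary:

```python
def pack_questions(ocr_text, questions):
    """Build one encoder input carrying the shared OCR content and
    several delimited questions (hypothetical MQMA-style format)."""
    q_part = " ".join(f"<q{i + 1}> {q}" for i, q in enumerate(questions))
    return f"context: {ocr_text} {q_part}"

def unpack_answers(decoded, n_questions):
    """Split one auto-regressively decoded string back into per-question
    answers using the same delimiter convention."""
    answers = {}
    for i in range(n_questions, 0, -1):
        decoded, _, ans = decoded.rpartition(f"<a{i}>")
        answers[i] = ans.strip()
    return [answers[i + 1] for i in range(n_questions)]

enc_in = pack_questions("SALE 50% OFF shoes", ["What is discounted?", "By how much?"])
dec_out = "<a1> shoes <a2> 50%"   # what a trained decoder might emit
print(unpack_answers(dec_out, 2))  # prints ['shoes', '50%']
```

The payoff is amortization: the image and OCR content are encoded once per image rather than once per question.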
A Fast Alignment Scheme for Automatic OCR Evaluation of Books
This paper aims to evaluate the accuracy of optical character recognition (OCR) systems on real scanned books. The ground truth e-texts are obtained from the Project Gutenberg website and aligned with their corresponding OCR output using a fast recursive text alignment scheme (RETAS). First, unique words in the vocabulary of the book are aligned with unique words in the OCR output. This process is recursively applied to each text segment in between matching unique words until the text segments become very small. In the final stage, an edit distance based alignment algorithm is used to align these short chunks of texts to generate the final alignment. The proposed approach effectively segments the alignment problem into small subproblems, which in turn yields dramatic time savings even when there are large pieces of inserted or deleted text and the OCR accuracy is poor. This approach is used to evaluate the OCR accuracy of real scanned books in English, French, German, and Spanish.
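The recursive scheme can be sketched compactly: anchor on words that occur exactly once in both texts, recurse on the segments between anchors, and finish small chunks with a classic sequence alignment. This is a simplified reconstruction, with a greedy order-consistency filter and Python's `difflib` standing in for the paper's edit-distance stage:

```python
from difflib import SequenceMatcher

def unique_positions(tokens):
    """Positions of words that occur exactly once in the token list."""
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    return {t: i for i, t in enumerate(tokens) if counts[t] == 1}

def align(gt, ocr, min_len=8):
    """RETAS-style recursive alignment sketch returning matched word pairs."""
    if min(len(gt), len(ocr)) <= min_len:
        # Base case: edit-distance-style alignment on a short chunk.
        pairs = []
        for tag, i1, i2, j1, j2 in SequenceMatcher(None, gt, ocr).get_opcodes():
            if tag == "equal":
                pairs += [(gt[i1 + k], ocr[j1 + k]) for k in range(i2 - i1)]
        return pairs
    u_gt, u_ocr = unique_positions(gt), unique_positions(ocr)
    # Shared unique words, kept only while their order agrees in both texts
    # (a greedy filter; the paper's scheme is more careful).
    anchors, last_j = [], -1
    for w in sorted((w for w in u_gt if w in u_ocr), key=u_gt.get):
        if u_ocr[w] > last_j:
            anchors.append((u_gt[w], u_ocr[w]))
            last_j = u_ocr[w]
    if not anchors:  # no usable anchors: fall through to the base case
        return align(gt, ocr, min_len=max(len(gt), len(ocr)))
    pairs, pi, pj = [], 0, 0
    for i, j in anchors:
        pairs += align(gt[pi:i], ocr[pj:j], min_len)
        pairs.append((gt[i], ocr[j]))
        pi, pj = i + 1, j + 1
    return pairs + align(gt[pi:], ocr[pj:], min_len)

gt = "the quick brown fox jumps over the lazy dog near the old barn door".split()
ocr = "the qu1ck brown fox jumps ovar the lazy dog near the old barn door".split()
matched = align(gt, ocr)
accuracy = len(matched) / len(gt)  # two OCR errors go unmatched
print(f"word accuracy: {accuracy:.3f}")
```

Because each recursion level splits the problem at every shared unique word, the expensive edit-distance step only ever runs on short chunks, which is the source of the time savings the abstract reports.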