175,597 research outputs found
Word Searching in Scene Image and Video Frame in Multi-Script Scenario using Dynamic Shape Coding
Retrieval of text information from natural scene images and video frames is a
challenging task due to inherent problems such as complex character shapes,
low resolution, and background noise. Available OCR systems often fail to
retrieve such information from scene images and video frames. Keyword spotting,
an alternative way to retrieve information, enables efficient text searching in
such scenarios. However, current word spotting techniques for scene/video
images are script-specific and have mainly been developed for Latin script.
This paper presents a novel word spotting framework using dynamic shape coding
for text retrieval in natural scene images and video frames. The framework is
designed to search for a query keyword across multiple scripts with the help of
on-the-fly script-wise keyword generation. We use a two-stage word spotting
approach based on Hidden Markov Models (HMMs) to detect the translated keyword
in a given text line after identifying the script of the line. A novel
unsupervised dynamic shape coding scheme groups similar-shaped characters to
avoid confusion and to improve text alignment. Next, the hypothesis locations
are verified to improve retrieval performance. To evaluate the proposed system
for keyword searching in natural scene images and video frames, we consider two
popular Indic scripts, Bangla (Bengali) and Devanagari, along with English.
Inspired by the zone-wise recognition approach for Indic scripts [1], zone-wise
text information is used to improve traditional word spotting performance on
these scripts. For our experiments, a dataset consisting of scene images and
video frames in English, Bangla, and Devanagari was used. The results obtained
demonstrate the effectiveness of the proposed word spotting approach.
Comment: Multimedia Tools and Applications, Springer
Real-time automatic multilevel color video thresholding using a novel class-variance criterion
Color image segmentation is a crucial preliminary task in robotic vision systems. This paper presents a novel automatic multilevel color thresholding algorithm to address this task efficiently. The proposed algorithm consists of a learning process and a multi-threshold searching process. The learning process learns the color distribution of an input video sequence in HSV color space, and the multi-threshold searching process automatically determines the optimal multiple thresholds to segment all colors-of-interest in the video based on a novel class-variance criterion. For the learning process, a simple and efficient color-distribution learning algorithm operating with a color-pixel extraction method is proposed to learn a color distribution model of all colors-of-interest in the video images, which simplifies the search for optimal thresholds for the colors-of-interest through a conventional multilevel thresholding method. For the multi-threshold searching process, a nonparametric multilevel color thresholding algorithm with an extended within-class variance criterion is proposed to automatically find the optimal upper-bound and lower-bound threshold values of each color channel. Experimental results validate the performance and computational efficiency of the proposed method by comparison with three existing methods, both visually and quantitatively.
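The within-class variance criterion that this entry extends can be sketched in a few lines: choose thresholds that minimize the summed variance inside each resulting class of a channel histogram. The brute-force two-threshold search below is a toy illustration of that criterion, not the paper's learning-based algorithm.

```python
# Illustrative sketch (not the paper's method): exhaustive search for two
# thresholds (t1 < t2) on one color channel that minimize the total
# within-class variance of the three resulting pixel classes.

def class_variance(values):
    """Sum of squared deviations from the class mean (0 for an empty class)."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def two_level_threshold(pixels, levels=256):
    """Return (t1, t2) splitting `pixels` into classes
    [0, t1), [t1, t2), [t2, levels) with minimal within-class variance."""
    best, best_t = float("inf"), (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            cost = (class_variance([p for p in pixels if p < t1]) +
                    class_variance([p for p in pixels if t1 <= p < t2]) +
                    class_variance([p for p in pixels if p >= t2]))
            if cost < best:
                best, best_t = cost, (t1, t2)
    return best_t
```

On a channel with three well-separated color clusters, any threshold pair that isolates the clusters attains the minimal cost; the paper's contribution is finding such bounds automatically per channel rather than by exhaustive search.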
Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos
Adversarial robustness assessment for video recognition models has raised
concerns owing to their wide application in safety-critical tasks. Compared
with images, videos have much higher dimensionality, which brings huge
computational costs when generating adversarial videos. This is especially
serious for query-based black-box attacks, where gradient estimation for the
threat model is usually employed and high dimensionality leads to a large
number of queries. To mitigate this issue, we propose to simultaneously
eliminate the temporal and spatial redundancy within a video to achieve
effective and efficient gradient estimation on a reduced search space, thereby
decreasing the number of queries. To implement this idea, we design the novel
Adversarial spatial-temporal Focus (AstFocus) attack on videos, which attacks
only the simultaneously focused key frames and key regions selected from the
inter-frame and intra-frame content of the video. The AstFocus attack is based
on a cooperative Multi-Agent Reinforcement Learning (MARL) framework: one agent
is responsible for selecting key frames, and another agent is responsible for
selecting key regions. The two agents are jointly trained with common rewards
received from the black-box threat model to perform a cooperative prediction.
Through continuous querying, the reduced search space composed of key frames
and key regions becomes increasingly precise, and the total number of queries
becomes smaller than that required on the original video. Extensive experiments
on four mainstream video recognition models and three widely used action
recognition datasets demonstrate that the proposed AstFocus attack outperforms
SOTA methods, being superior in fooling rate, query number, time, and
perturbation magnitude simultaneously.
Comment: accepted by TPAMI
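The core efficiency argument, that estimating gradients only on focused key frames costs queries proportional to the reduced space, can be sketched with a toy finite-difference estimator. Everything here is an illustrative assumption: `loss` stands in for the black-box threat model's score, frames are scalars, and the MARL-based frame/region selection is not modeled.

```python
# Hypothetical sketch of query-efficient gradient estimation on a reduced
# search space: one query per *focused* frame instead of one per frame of
# the whole video. `loss` is a stand-in for the black-box threat model.

def estimate_gradient(video, loss, key_frames, eps=1e-3):
    """Finite-difference gradient estimate restricted to `key_frames`.

    `video` maps frame index -> scalar perturbation value; query count is
    1 + len(key_frames), independent of the total number of frames."""
    grad = {f: 0.0 for f in key_frames}
    base = loss(video)                 # one baseline query
    for f in key_frames:
        perturbed = dict(video)
        perturbed[f] += eps            # perturb only the focused frame
        grad[f] = (loss(perturbed) - base) / eps
    return grad
```

With, say, 16 frames but only 2 key frames, an estimation round costs 3 queries rather than 17, which is the redundancy-elimination effect the abstract describes.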
Video Logo Retrieval Based on Local Features
Estimating the frequency and duration of logos in videos is important and
challenging in the advertisement industry as a way of measuring the impact of
ad purchases. Since logos occupy only a small area of the video frames, popular
image retrieval methods can fail. This paper develops an algorithm called Video
Logo Retrieval (VLR), an image-to-video retrieval algorithm based on the
spatial distribution of local image descriptors that measures the distance
between the query image (the logo) and a collection of video frames. VLR uses
local features to overcome the weaknesses of global-feature-based models such
as convolutional neural networks (CNNs). Meanwhile, VLR is flexible and
requires no training beyond setting a few hyper-parameters. The performance of
VLR is evaluated on two challenging open benchmarks (SoccerNet and Stanford
I2V) and compared with other state-of-the-art logo retrieval and detection
algorithms. Overall, VLR shows significantly higher accuracy than the existing
methods.
Comment: Accepted by ICIP 20. Contact author: Bochen Guan.
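The local-feature matching that this kind of image-to-video retrieval relies on can be sketched with a standard nearest-neighbor comparison plus Lowe's ratio test. This is a generic illustration of local-descriptor matching, not the VLR algorithm itself, and the toy 2-D descriptors stand in for real ones such as SIFT vectors.

```python
from math import dist  # Euclidean distance, Python 3.8+

# Generic sketch of local-feature matching between a query image (logo)
# and one video frame: keep a query descriptor only if its nearest frame
# descriptor is clearly closer than the second nearest (Lowe's ratio test).

def good_matches(query_desc, frame_desc, ratio=0.75):
    """Count query descriptors with an unambiguous nearest neighbor."""
    kept = 0
    for q in query_desc:
        d = sorted(dist(q, f) for f in frame_desc)
        if len(d) >= 2 and d[0] < ratio * d[1]:
            kept += 1
    return kept
```

A frame whose `good_matches` count against the logo descriptors is high is then a candidate appearance; ranking frames by such counts is the basic image-to-video retrieval step.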
Content-Based Video Retrieval in Historical Collections of the German Broadcasting Archive
The German Broadcasting Archive (DRA) maintains the cultural heritage of radio
and television broadcasts of the former German Democratic Republic (GDR). The
uniqueness and importance of the video material stimulate large scientific
interest in its content. In this paper, we present an automatic video analysis
and retrieval system for searching historical collections of GDR television
recordings. It consists of video analysis algorithms for shot boundary
detection, concept classification, person recognition, text recognition, and
similarity search. The performance of the system is evaluated from a technical
and an archival perspective on 2,500 hours of GDR television recordings.
Comment: TPDL 2016, Hannover, Germany. Final version is available at Springer
via DOI.
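Shot boundary detection, the first analysis step this entry lists, is classically done by thresholding the histogram difference between consecutive frames. The sketch below is that textbook baseline, not the DRA system's actual detector; frame format, bin count, and threshold are illustrative assumptions.

```python
# Minimal illustration of histogram-based shot boundary detection: a cut is
# declared where the normalized histogram difference between consecutive
# frames exceeds a threshold. Frames are flat lists of grayscale pixels.

def histogram(frame, bins=4, levels=256):
    """Coarse intensity histogram of one frame."""
    h = [0] * bins
    for px in frame:
        h[px * bins // levels] += 1
    return h

def shot_boundaries(frames, threshold=0.5):
    """Indices i where frame i starts a new shot."""
    cuts = []
    for i in range(1, len(frames)):
        a, b = histogram(frames[i - 1]), histogram(frames[i])
        # Normalized L1 histogram distance in [0, 1].
        diff = sum(abs(x - y) for x, y in zip(a, b)) / (2 * len(frames[i]))
        if diff > threshold:
            cuts.append(i)
    return cuts
```

Real archive-scale systems add gradual-transition handling and adaptive thresholds, but the hard-cut case already shows why histogram differences are a useful signal.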