Robust extraction of text from camera images using colour and spatial information simultaneously
The importance and use of text extraction from camera-based colour scene images is increasing rapidly. Text within a camera-grabbed image can contain a huge amount of metadata about that scene. Such metadata can be useful for identification, indexing and retrieval purposes. While the segmentation and recognition of text from document images is quite successful, detection of coloured scene text remains a challenge for camera-based images. Common problems for text extraction from camera-based images are the lack of prior knowledge of text features such as colour, font, size and orientation, as well as of the location of the probable text regions. In this paper, we document the development of a fully automatic and extremely robust text segmentation technique that can be used for any type of camera-grabbed frame, be it a single image or video. A new algorithm is proposed which can overcome the current problems of text segmentation. The algorithm exploits text appearance in terms of colour and spatial distribution. When the new text extraction technique was tested on a variety of camera-based images, it was found to outperform existing techniques. The proposed technique also overcomes problems that can arise from an unconstrained complex background. The novelty of the work arises from the fact that this is the first time colour and spatial information have been used simultaneously for the purpose of text extraction.
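A minimal sketch of how colour and spatial information might be combined for this kind of segmentation, assuming an OpenCV/NumPy environment; the clustering choice, thresholds and grouping heuristic are illustrative and not taken from the paper:

```python
import cv2
import numpy as np

def segment_text_candidates(image_bgr, n_colours=8, min_area=30):
    """Sketch: cluster pixels by colour, then keep colour layers whose
    connected components look spatially text-like. Parameters are
    illustrative assumptions, not the paper's values."""
    h, w = image_bgr.shape[:2]
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)

    # Colour quantisation with k-means (a Lab colour space would also be reasonable).
    _, labels, _ = cv2.kmeans(
        pixels, n_colours, None,
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0),
        3, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(h, w)

    candidates = []
    for k in range(n_colours):
        layer = (labels == k).astype(np.uint8) * 255
        # Connected components of one colour layer supply the spatial information.
        n, _, stats, _ = cv2.connectedComponentsWithStats(layer, connectivity=8)
        boxes = [stats[i, :4] for i in range(1, n)
                 if stats[i, cv2.CC_STAT_AREA] >= min_area]
        heights = np.array([b[3] for b in boxes])
        # Keep layers whose components have similar heights, a crude proxy
        # for characters sharing a text line.
        if len(boxes) >= 3 and heights.std() < 0.4 * heights.mean():
            candidates.extend(boxes)
    return candidates
```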
Implementation of Adaptive Unsharp Masking as a pre-filtering method for watermark detection and extraction
Digital watermarking has been one of the focal points of research interest in providing multimedia security over the last decade. Watermark data belonging to the user are embedded in an original work such as text, audio, image, or video, so that product ownership can be proved. Various robust watermarking algorithms have been developed to extract or detect the watermark under attack. Although watermarking algorithms in the transform domain differ from one another in their combinations of transform techniques, it is difficult to decide on an algorithm for a specific application. Therefore, instead of developing a new watermarking algorithm with yet another combination of transform techniques, we propose a novel and effective watermark extraction and detection method based on pre-filtering, namely Adaptive Unsharp Masking (AUM). Although Unsharp Masking (UM) based pre-filtering has been used for watermark extraction/detection in the literature, by making the details of the watermarked image more manifest, the effectiveness of UM may decrease under some attacks. In this study, AUM is proposed as a pre-filter that remedies the disadvantages of UM. Experimental results show that AUM performs up to 11% better in objective quality metrics than when no pre-filtering is used. Moreover, AUM as a pre-filter in transform-domain image watermarking is as effective as it is in image enhancement, and it can be applied in an algorithm-independent way.
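For reference, a minimal sketch of unsharp masking with a locally adaptive gain, assuming OpenCV; the window size, gains and variance threshold are illustrative assumptions rather than the AUM parameters used in the paper:

```python
import cv2
import numpy as np

def adaptive_unsharp_mask(gray, sigma=1.5, low_gain=0.5, high_gain=3.0, var_thresh=100.0):
    """Classic UM is out = x + g * (x - blur(x)); here the gain g is chosen
    per pixel from the local variance, so flat regions are sharpened less
    than detailed ones. All constants are illustrative assumptions."""
    x = gray.astype(np.float32)
    blurred = cv2.GaussianBlur(x, (0, 0), sigma)
    detail = x - blurred                      # high-pass component

    # Local variance as a simple activity measure.
    mean = cv2.boxFilter(x, -1, (7, 7))
    mean_sq = cv2.boxFilter(x * x, -1, (7, 7))
    local_var = np.maximum(mean_sq - mean * mean, 0.0)

    gain = np.where(local_var > var_thresh, high_gain, low_gain)
    out = x + gain * detail
    return np.clip(out, 0, 255).astype(np.uint8)

# A watermark detector would then run on adaptive_unsharp_mask(watermarked_image)
# instead of the raw image, making embedded details more pronounced.
```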
OmniDataComposer: A Unified Data Structure for Multimodal Data Fusion and Infinite Data Generation
This paper presents OmniDataComposer, an approach for multimodal data fusion and unlimited data generation intended to refine and simplify the interplay among diverse data modalities. At its core, it introduces a cohesive data structure capable of processing and merging multimodal inputs, including video, audio, and text. Our algorithm leverages advances across multiple operations such as video/image caption extraction, dense caption extraction, Automatic Speech Recognition (ASR), Optical Character Recognition (OCR), the Recognize Anything Model (RAM), and object tracking. OmniDataComposer can identify over 6400 categories of objects, substantially broadening the spectrum of visual information. It amalgamates these diverse modalities, promoting reciprocal enhancement among them and facilitating cross-modal data correction. The final output transforms each video input into an elaborate sequential document, effectively turning videos into thorough narratives that are easier for large language models to process. Future prospects include optimizing datasets for each modality to encourage unlimited data generation. This robust base will offer valuable insights to models like ChatGPT, enabling them to create higher-quality datasets for video captioning and easing question-answering tasks based on video content. OmniDataComposer opens a new stage in multimodal learning, with substantial potential for augmenting AI's understanding and generation of complex, real-world data.
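As a rough illustration of the kind of unified, sequential structure described, the following Python sketch shows one possible per-video document; the field names and layout are assumptions, not the paper's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    """One time-aligned slice of a video; field names are illustrative."""
    start: float
    end: float
    caption: str = ""            # video/image caption extraction
    dense_caption: str = ""      # dense caption extraction
    asr_text: str = ""           # Automatic Speech Recognition
    ocr_text: str = ""           # Optical Character Recognition
    tags: List[str] = field(default_factory=list)            # e.g. RAM object tags
    tracked_objects: List[str] = field(default_factory=list)

@dataclass
class VideoDocument:
    video_id: str
    segments: List[Segment] = field(default_factory=list)

    def to_narrative(self) -> str:
        """Flatten all modalities into one sequential text document
        that a large language model can consume."""
        lines = []
        for s in self.segments:
            lines.append(f"[{s.start:.1f}-{s.end:.1f}s] {s.caption}")
            if s.asr_text:
                lines.append(f"  speech: {s.asr_text}")
            if s.ocr_text:
                lines.append(f"  on-screen text: {s.ocr_text}")
            if s.tags:
                lines.append(f"  objects: {', '.join(s.tags)}")
        return "\n".join(lines)
```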
Rotation-invariant features for multi-oriented text detection in natural images
Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms that detect texts of varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of varying orientations in complex natural scenes.
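As an illustration of what a rotation-invariant component descriptor can look like, the sketch below uses Hu moments and a moment-based elongation measure; these are generic rotation-invariant statistics, not the specific feature sets designed in the paper:

```python
import cv2
import numpy as np

def rotation_invariant_descriptor(component_mask):
    """Sketch: describe a candidate character component with statistics that
    do not change when the component is rotated. Hu moments are used here
    purely as a well-known example of such features."""
    m = cv2.moments(component_mask.astype(np.uint8), binaryImage=True)
    hu = cv2.HuMoments(m).flatten()
    # Log-scale the Hu moments so their magnitudes are comparable.
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-12)

    # A simple orientation-independent elongation measure from central moments.
    area = m["m00"] + 1e-12
    mu20, mu02, mu11 = m["mu20"] / area, m["mu02"] / area, m["mu11"] / area
    anisotropy = np.sqrt(4 * mu11 ** 2 + (mu20 - mu02) ** 2) / (mu20 + mu02 + 1e-12)
    return np.append(hu, anisotropy)

# Conceptually, a two-level scheme would apply a first classifier to individual
# components described this way, and a second classifier to grouped candidate
# text lines.
```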
Unconstrained Scene Text and Video Text Recognition for Arabic Script
Building robust recognizers for Arabic has always been challenging. We
demonstrate the effectiveness of an end-to-end trainable CNN-RNN hybrid
architecture in recognizing Arabic text in videos and natural scenes. We
outperform previous state-of-the-art on two publicly available video text
datasets - ALIF and ACTIV. For the scene text recognition task, we introduce a
new Arabic scene text dataset and establish baseline results. For scripts like
Arabic, a major challenge in developing robust recognizers is the lack of a large
quantity of annotated data. We overcome this by synthesising millions of Arabic
text images from a large vocabulary of Arabic words and phrases. Our
implementation is built on top of the model introduced in [37], which has
proven quite effective for English scene text recognition. The model follows a
segmentation-free, sequence to sequence transcription approach. The network
transcribes a sequence of convolutional features from the input image to a
sequence of target labels. This does away with the need for segmenting the input
image into constituent characters/glyphs, which is often difficult for Arabic
script. Further, the ability of RNNs to model contextual dependencies yields
superior recognition results.
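A minimal PyTorch sketch of the segmentation-free CNN-RNN transcription idea described above; the layer sizes and pooling schedule are assumptions, not the exact architecture of [37]:

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Sketch of a segmentation-free transcriber: convolutional features over
    the image width are fed to a bidirectional RNN and trained with CTC.
    All dimensions are illustrative assumptions."""
    def __init__(self, n_classes, img_height=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1), (2, 1)),     # keep width resolution for the sequence
        )
        feat_h = img_height // 8
        self.rnn = nn.LSTM(256 * feat_h, 256, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, n_classes)   # n_classes includes the CTC blank

    def forward(self, x):                     # x: (batch, 1, H, W)
        f = self.cnn(x)                       # (batch, C, H', W')
        b, c, h, w = f.shape
        f = f.permute(0, 3, 1, 2).reshape(b, w, c * h)  # one time step per column
        seq, _ = self.rnn(f)
        return self.fc(seq)                   # (batch, W', n_classes), CTC-ready

# Training would use nn.CTCLoss on the per-column class scores, so the network
# never needs character-level segmentation of the Arabic glyphs.
```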
Extracting textual overlays from social media videos using neural networks
Textual overlays are often used in social media videos as people who watch
them without the sound would otherwise miss essential information conveyed in
the audio stream. This is why extraction of those overlays can serve as an
important meta-data source, e.g. for content classification or retrieval tasks.
In this work, we present a robust method for extracting textual overlays from
videos that builds on multiple neural network architectures. The proposed
solution relies on several processing steps: keyframe extraction, text
detection and text recognition. The main component of our system, i.e. the text
recognition module, is inspired by a convolutional recurrent neural network
architecture, and we improve its performance using a synthetically generated
dataset of over 600,000 images with text prepared by the authors specifically for
this task. We also develop a filtering method that reduces the amount of
overlapping text phrases using Levenshtein distance and further boosts the system's
performance. The final accuracy of our solution reaches over 80% and is on par
with state-of-the-art methods.
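A small sketch of the Levenshtein-based filtering step described above; the relative-distance threshold is an illustrative assumption:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def filter_overlapping_phrases(phrases, max_relative_distance=0.3):
    """Drop a recognised phrase if it lies within a small edit distance of one
    already kept, since overlays usually persist across consecutive keyframes.
    The threshold is an assumption, not the paper's value."""
    kept = []
    for p in phrases:
        duplicate = any(
            levenshtein(p, q) <= max_relative_distance * max(len(p), len(q), 1)
            for q in kept)
        if not duplicate:
            kept.append(p)
    return kept
```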