Rotation-invariant features for multi-oriented text detection in natural images.
Texts in natural scenes carry rich semantic information, which can be used to assist a wide range of applications, such as object recognition, image/video retrieval, mapping/navigation, and human-computer interaction. However, most existing systems are designed to detect and recognize horizontal (or near-horizontal) texts. Due to the increasing popularity of mobile-computing devices and applications, detecting texts of varying orientations from natural images under less controlled conditions has become an important but challenging task. In this paper, we propose a new algorithm to detect texts of varying orientations. Our algorithm is based on a two-level classification scheme and two sets of features specially designed for capturing the intrinsic characteristics of texts. To better evaluate the proposed method and compare it with competing algorithms, we generate a comprehensive dataset with various types of texts in diverse real-world scenes. We also propose a new evaluation protocol, which is more suitable for benchmarking algorithms that detect texts of varying orientations. Experiments on benchmark datasets demonstrate that our system compares favorably with state-of-the-art algorithms when handling horizontal texts and achieves significantly enhanced performance on texts of varying orientations in complex natural scenes.
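The abstract does not specify how its features achieve rotation invariance, but a standard trick for orientation-sensitive descriptors is to circularly shift an orientation histogram so its dominant bin comes first. The sketch below illustrates that idea only; the function name, bin count, and input format are illustrative assumptions, not the paper's actual features.

```python
import math

def rotation_invariant_histogram(angles_deg, bins=8):
    """Build an orientation histogram over gradient angles, then
    circularly shift it so the dominant bin comes first. Any global
    rotation by a multiple of the bin width leaves the result unchanged."""
    hist = [0] * bins
    width = 360 / bins
    for a in angles_deg:
        hist[int((a % 360) // width)] += 1
    k = hist.index(max(hist))      # index of the dominant-orientation bin
    return hist[k:] + hist[:k]     # circular shift to canonical order
```

Rotating all input angles by 90 degrees, for example, produces the same descriptor, which is the property a multi-oriented text detector needs from its features.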
Enhanced Characterness for Text Detection in the Wild
Text spotting is an interesting research problem, as text may appear in any random place and in various forms. Moreover, the ability to detect text opens horizons for improving many advanced computer vision problems. In this paper, we propose a novel language-agnostic text detection method utilizing edge-enhanced Maximally Stable Extremal Regions in natural scenes by defining strong characterness measures. We show that a simple combination of characterness cues helps in rejecting non-text regions. These regions are further fine-tuned to reject non-textual neighbor regions. Comprehensive evaluation of the proposed scheme shows that it provides comparable or better generalization performance than traditional methods for this task.
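The abstract mentions combining characterness cues to reject non-text regions without detailing the fusion rule. A minimal sketch of one simple strategy, assuming each cue is already normalized to [0, 1] and fused by a naive product with a threshold; the cue names and threshold are hypothetical, not taken from the paper.

```python
def characterness_score(cues):
    """Fuse per-region characterness cues (each in [0, 1]) into one
    score via a naive product; any weak cue drags the score down."""
    score = 1.0
    for value in cues.values():
        score *= value
    return score

def filter_regions(regions, threshold=0.1):
    """Keep only candidate regions whose fused cue score clears the
    threshold, rejecting likely non-text regions."""
    return [name for name, cues in regions if characterness_score(cues) >= threshold]
```

A region with uniformly strong cues survives, while one weak cue (e.g. low stroke-width consistency) is enough to reject a candidate under this product rule.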
InLoc: Indoor Visual Localization with Dense Matching and View Synthesis
We seek to predict the 6 degree-of-freedom (6DoF) pose of a query photograph
with respect to a large indoor 3D map. The contributions of this work are
three-fold. First, we develop a new large-scale visual localization method
targeted for indoor environments. The method proceeds along three steps: (i)
efficient retrieval of candidate poses that ensures scalability to large-scale
environments, (ii) pose estimation using dense matching rather than local
features to deal with textureless indoor scenes, and (iii) pose verification by
virtual view synthesis to cope with significant changes in viewpoint, scene
layout, and occluders. Second, we collect a new dataset with reference 6DoF
poses for large-scale indoor localization. Query photographs are captured by
mobile phones at a different time than the reference 3D map, thus presenting a
realistic indoor localization scenario. Third, we demonstrate that our method significantly outperforms current state-of-the-art indoor localization approaches on this new challenging dataset.
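Step (i) of the pipeline above is candidate retrieval at scale. One common realization, sketched here as an assumption since the abstract does not name its retrieval method, is to rank database images by cosine similarity of global image descriptors and keep the top-k as candidate poses for the later dense-matching and verification stages.

```python
import math

def cosine(u, v):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def retrieve_candidates(query_desc, database, k=2):
    """Rank database images by descriptor similarity to the query and
    return the top-k names; these become candidate poses to verify."""
    ranked = sorted(database, key=lambda item: cosine(query_desc, item[1]),
                    reverse=True)
    return [name for name, _ in ranked[:k]]
```

Because only the short candidate list moves on to expensive dense matching and view synthesis, this first stage is what keeps the method scalable to large indoor maps.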
Text Localization in Video Using Multiscale Weber's Local Descriptor
In this paper, we propose a novel approach for detecting the text present in
videos and scene images based on the Multiscale Weber's Local Descriptor
(MWLD). Given an input video, the shots are identified and the key frames are
extracted based on their spatio-temporal relationship. From each key frame, we detect local region information using WLD with different radii and neighborhood relationships of pixel values, obtaining intensity-enhanced key frames at multiple scales. These multiscale WLD key frames are merged together, and the horizontal gradients are then computed using morphological operations. The results are binarized, and false positives are eliminated based on geometrical properties. Finally, we employ connected component analysis and a morphological dilation operation to determine the text regions, which aids text localization. Experimental results obtained on the publicly available standard Hua, Horizontal-1, and Horizontal-2 video datasets illustrate that the proposed method can accurately detect and localize texts of various sizes, fonts, and colors in videos.
Comment: IEEE SPICES, 201
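The core cue behind WLD is differential excitation: the arc-tangent of a pixel's relative intensity change against its neighborhood, which responds strongly at high-contrast structures such as character strokes. The single-scale sketch below shows that computation in isolation; the multiscale radii, merging, and morphological steps of the paper's full MWLD pipeline are not reproduced, and the `alpha` gain is an illustrative assumption.

```python
import math

def differential_excitation(center, neighbors, alpha=3.0):
    """Weber-style differential excitation of a pixel: arctan of the
    scaled relative intensity difference between a center pixel and
    its neighborhood. Near-zero on flat regions, large at strong edges."""
    if center == 0:
        return 0.0  # avoid division by zero on black pixels
    relative_change = sum(n - center for n in neighbors) / center
    return math.atan(alpha * relative_change)
```

On a flat patch the response is zero, while a bright-on-dark stroke edge yields a response near the arctan saturation limit, which is why thresholding this map highlights candidate text pixels.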
Object Referring in Visual Scene with Spoken Language
Object referring has important applications, especially for human-machine
interaction. While having received great attention, the task is mainly attacked
with written language (text) as input rather than spoken language (speech),
which is more natural. This paper investigates Object Referring with Spoken
Language (ORSpoken) by presenting two datasets and one novel approach. Objects
are annotated with their locations in images, text descriptions and speech
descriptions. This makes the datasets ideal for multi-modality learning. The
approach is developed by carefully breaking down the ORSpoken problem into three
sub-problems and introducing task-specific vision-language interactions at the
corresponding levels. Experiments show that our method outperforms competing
methods consistently and significantly. The approach is also evaluated in the
presence of audio noise, showing the efficacy of the proposed vision-language
interaction methods in counteracting background noise.
Comment: 10 pages, Submitted to WACV 201