WordSup: Exploiting Word Annotations for Character based Text Detection
Texts in images are usually organized as a hierarchy of visual elements, i.e., characters, words, text lines and text blocks. Among these elements, the character is the most basic one across various scripts, including Western, Chinese, Japanese, and mathematical expressions. It is therefore natural and convenient to build a common text detection engine on top of character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain. In practice, existing real-world text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we can thus train a robust character detector by exploiting word annotations in rich, large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine and achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and math expression recognition.
Comment: 2017 International Conference on Computer Vision
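As a rough illustration of the weak-supervision idea described above (using word-level annotations to turn a character detector's own predictions into character-level pseudo-labels), here is a minimal Python sketch; the box format, thresholds, and selection rule are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of the weak-supervision idea: word-level annotations filter a
# character detector's own predictions into character-level pseudo-labels for
# the next training round. Thresholds and scoring are illustrative assumptions.

def box_in_word(char_box, word_box, tol=2.0):
    """True if a predicted character box lies (approximately) inside a word box."""
    cx0, cy0, cx1, cy1 = char_box
    wx0, wy0, wx1, wy1 = word_box
    return (cx0 >= wx0 - tol and cy0 >= wy0 - tol and
            cx1 <= wx1 + tol and cy1 <= wy1 + tol)

def pseudo_char_labels(word_box, char_candidates, score_thresh=0.3):
    """Select character detections inside an annotated word box as pseudo ground truth.

    char_candidates: list of (box, score) pairs from the current character detector.
    Returns the selected boxes, ordered left to right along the word.
    """
    kept = [(box, s) for box, s in char_candidates
            if s >= score_thresh and box_in_word(box, word_box)]
    kept.sort(key=lambda bs: bs[0][0])          # order by left x-coordinate
    return [box for box, _ in kept]

# Usage: one annotated word box and a few noisy character detections.
word = (10, 10, 110, 40)
candidates = [((12, 12, 30, 38), 0.9),   # inside the word, confident -> kept
              ((35, 12, 55, 38), 0.8),   # inside the word, confident -> kept
              ((200, 15, 220, 40), 0.7), # outside the word -> discarded
              ((60, 14, 80, 38), 0.2)]   # low confidence -> discarded
print(pseudo_char_labels(word, candidates))
```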
Development and validation of the global surface type data product from S-NPP VIIRS
Accurate representation of actual terrestrial surface types at regional to global scales is an important element for many applications. Based on National Aeronautics and Space Administration Moderate Resolution Imaging Spectroradiometer land cover algorithms, a global surface-type product derived from observations of the Visible Infrared Imaging Radiometer Suite (VIIRS) on board the Suomi National Polar-orbiting Partnership provides a consistent global land cover classification map for a variety of studies, such as land surface modelling for numerical weather prediction, land management, biodiversity and hydrological modelling, and carbon and ecosystem studies. This letter introduces the development and validation of the VIIRS global surface-type product using the land cover classification scheme of the International Geosphere-Biosphere Programme. Surface reflectance data from VIIRS were composited into monthly data and then into annual metrics. The C5.0 decision tree classifier was used to determine the surface type for each pixel in a 1 km grid. To quantitatively evaluate the accuracy of the new surface-type product, a visual-interpretation-based validation was performed in which high-resolution satellite images and other ancillary data were used as the reference. The validation results based on this large validation data set indicated that an overall classification accuracy of (78.64 ± 0.57)% was achieved.
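As a rough illustration of the processing chain just described (monthly composites reduced to annual metrics, then per-pixel decision tree classification), the following Python sketch uses scikit-learn's DecisionTreeClassifier as a stand-in for the C5.0 classifier; the band count, metrics, and labels are illustrative assumptions, not the product's actual configuration.

```python
# Sketch of the classification chain: monthly surface reflectance composites are
# collapsed into per-pixel annual metrics, then a decision tree assigns one of
# the 17 IGBP surface-type classes to each grid cell. DecisionTreeClassifier
# stands in for C5.0; bands, metrics, and labels below are dummy values.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def annual_metrics(monthly_reflectance):
    """Collapse (12, n_bands, n_pixels) monthly composites into per-pixel annual metrics."""
    return np.concatenate([monthly_reflectance.min(axis=0),
                           monthly_reflectance.max(axis=0),
                           monthly_reflectance.mean(axis=0)], axis=0).T  # (n_pixels, 3*n_bands)

rng = np.random.default_rng(0)
monthly = rng.random((12, 5, 1000))           # 12 months, 5 bands, 1000 pixels
X = annual_metrics(monthly)
y = rng.integers(1, 18, size=1000)            # 17 IGBP classes (dummy labels)

clf = DecisionTreeClassifier(max_depth=8).fit(X, y)
print(clf.predict(X[:5]))                     # predicted surface type per pixel
```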
Xanthogranulomatous Inflammation of the Female Genital Tract: Report of Three Cases
Purpose and Methods: This is a series of three cases diagnosed with xanthogranulomatous inflammation of the female genital tract, with emphasis on the etiology, clinicopathologic features and biological behavior. Clinical, pathologic, radiologic and follow-up data are reported.
Decoupling Recognition from Detection: Single Shot Self-Reliant Scene Text Spotter
Typical text spotters follow the two-stage spotting strategy: detect the
precise boundary for a text instance first and then perform text recognition
within the located text region. While such a strategy has achieved substantial
progress, there are two underlying limitations. 1) The performance of text
recognition depends heavily on the precision of text detection, resulting in
the potential error propagation from detection to recognition. 2) The RoI
cropping that bridges detection and recognition introduces background noise
and leads to information loss when pooling or interpolating from feature maps.
In this work, we propose the single shot Self-Reliant Scene Text
Spotter (SRSTS), which circumvents these limitations by decoupling recognition
from detection. Specifically, we conduct text detection and recognition in
parallel and bridge them by the shared positive anchor point. Consequently, our
method is able to recognize the text instances correctly even though the
precise text boundaries are challenging to detect. Additionally, our method
reduces the annotation cost for text detection substantially. Extensive
experiments on regular-shaped and arbitrary-shaped benchmarks
demonstrate that our SRSTS compares favorably to previous state-of-the-art
spotters in terms of both accuracy and efficiency.
Comment: To appear in the Proceedings of the ACM International Conference on Multimedia (ACM MM), 2022
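The following PyTorch sketch illustrates the decoupled design described above, with detection and recognition branches running in parallel on a shared feature map and linked only by a shared positive anchor point; the layer shapes, nearest-pixel sampling, and head structures are illustrative assumptions, not the authors' implementation.

```python
# Sketch of parallel detection and recognition bridged by a shared anchor point,
# rather than by RoI cropping. All sizes and the sampling scheme are assumptions.
import torch
import torch.nn as nn

class ParallelSpotterSketch(nn.Module):
    def __init__(self, channels=64, vocab_size=37, max_chars=25):
        super().__init__()
        self.det_head = nn.Conv2d(channels, 4, kernel_size=1)                 # boundary offsets per location
        self.rec_sampler = nn.Conv2d(channels, max_chars * 2, kernel_size=1)  # per-character sampling offsets
        self.classifier = nn.Linear(channels, vocab_size)                     # per-character classification

    def forward(self, feats, anchor_yx):
        """feats: (1, C, H, W); anchor_yx: (y, x) of the shared positive anchor point."""
        y, x = anchor_yx
        boxes = self.det_head(feats)[0, :, y, x]                   # detection branch
        offsets = self.rec_sampler(feats)[0, :, y, x].view(-1, 2)  # recognition branch: where to sample
        # Gather one feature vector per character position (nearest-pixel sampling for simplicity).
        H, W = feats.shape[-2:]
        ys = (y + offsets[:, 0]).round().clamp(0, H - 1).long()
        xs = (x + offsets[:, 1]).round().clamp(0, W - 1).long()
        char_feats = feats[0, :, ys, xs].T                         # (max_chars, C)
        logits = self.classifier(char_feats)                       # (max_chars, vocab_size)
        return boxes, logits

feats = torch.randn(1, 64, 32, 32)
boxes, logits = ParallelSpotterSketch()(feats, anchor_yx=(16, 16))
print(boxes.shape, logits.shape)   # torch.Size([4]) torch.Size([25, 37])
```

The point of the sketch is that both heads read the same anchor-point features directly, so recognition does not depend on a precisely regressed text boundary.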