Contextual Attention for Hand Detection in the Wild
We present Hand-CNN, a novel convolutional network architecture for detecting hand masks and predicting hand orientations in unconstrained images. Hand-CNN extends MaskRCNN with a novel attention mechanism to incorporate contextual cues in the detection process. This attention mechanism can be implemented as an efficient network module that captures non-local dependencies between features. The module can be inserted at different stages of an object detection network, and the entire detector can be trained end-to-end. We also introduce large-scale annotated hand datasets containing hands in unconstrained images for training and evaluation. We show that Hand-CNN outperforms existing methods on the newly collected datasets and the publicly available PASCAL VOC human layout dataset, and we conduct ablation studies demonstrating the effectiveness of the proposed contextual attention module. Data and code: https://www3.cs.stonybrook.edu/~cvl/projects/hand_det_attention
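The non-local dependency module the abstract describes can be pictured as a self-attention block over flattened spatial features: every position attends to every other position, and the result is added back through a residual connection so the module can be dropped into an existing detector. The following numpy sketch illustrates that idea only; the projection matrices and function name are placeholders, not the authors' implementation.

```python
import numpy as np

def non_local_block(x, w_theta, w_phi, w_g, w_out):
    """Illustrative non-local (self-attention) block.

    x       : (N, C) array of flattened spatial features.
    w_theta, w_phi, w_g : (C, D) query/key/value projections.
    w_out   : (D, C) output projection back to the feature space.

    Hypothetical sketch of a non-local dependency module, not the
    Hand-CNN code.
    """
    theta = x @ w_theta                      # queries, (N, D)
    phi = x @ w_phi                          # keys,    (N, D)
    g = x @ w_g                              # values,  (N, D)
    attn = theta @ phi.T                     # pairwise affinities, (N, N)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # softmax over all positions
    y = attn @ g                             # aggregate non-local context
    return x + y @ w_out                     # residual: insertable anywhere
```

The residual connection is what makes the module insertable at different stages of a detector: initializing `w_out` near zero starts the block as an identity mapping, so pretrained detector weights are not disturbed.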
Cascaded Segmentation-Detection Networks for Word-Level Text Spotting
We introduce an algorithm for word-level text spotting that is able to
accurately and reliably determine the bounding regions of individual words of
text "in the wild". Our system is formed by the cascade of two convolutional
neural networks. The first network is fully convolutional and is in charge of
detecting areas containing text. This results in a very reliable but possibly
inaccurate segmentation of the input image. The second network (inspired by the
popular YOLO architecture) analyzes each segment produced in the first stage,
and predicts oriented rectangular regions containing individual words. No
post-processing (e.g. text line grouping) is necessary. With an execution time of
450 ms for a 1000-by-560 image on a Titan X GPU, our system achieves the
highest score to date among published algorithms on the ICDAR 2015 Incidental
Scene Text benchmark.

Comment: 7 pages, 8 figures
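The hand-off between the two stages of the cascade can be made concrete: the first, fully convolutional network emits a coarse text/non-text mask, and each connected region of that mask becomes a crop for the second, YOLO-inspired network to refine into oriented word boxes. The sketch below shows only that intermediate step, extracting region proposals from a binary mask; the function name and box format are assumptions for illustration, not the paper's code.

```python
import numpy as np

def text_region_proposals(mask):
    """Extract one bounding box per connected text region.

    mask : 2-D boolean array, the coarse segmentation from stage one.
    Returns a list of (row0, col0, row1, col1) inclusive boxes; a
    second-stage detector would refine each crop into oriented word
    rectangles. Illustrative sketch, not the authors' pipeline.
    """
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    boxes = []
    rows, cols = mask.shape
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not seen[r, c]:
                # Flood-fill one 4-connected component, tracking its extent.
                stack = [(r, c)]
                seen[r, c] = True
                r0 = r1 = r
                c0 = c1 = c
                while stack:
                    i, j = stack.pop()
                    r0, r1 = min(r0, i), max(r1, i)
                    c0, c1 = min(c0, j), max(c1, j)
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and mask[ni, nj] and not seen[ni, nj]):
                            seen[ni, nj] = True
                            stack.append((ni, nj))
                boxes.append((r0, c0, r1, c1))
    return boxes
```

Because the first stage only has to be reliable rather than precise, even loose component boxes like these suffice: the second network sees each segment in isolation and localizes the individual words, which is why no text-line grouping post-processing is needed.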