Contextual Action Recognition with R*CNN
There are multiple cues in an image which reveal what action a person is
performing. For example, a jogger has a pose that is characteristic for
jogging, but the scene (e.g. road, trail) and the presence of other joggers can
be an additional source of information. In this work, we exploit the simple
observation that actions are accompanied by contextual cues to build a strong
action recognition system. We adapt RCNN to use more than one region for
classification while still maintaining the ability to localize the action. We
call our system R*CNN. The action-specific models and the feature maps are
trained jointly, allowing for action specific representations to emerge. R*CNN
achieves 90.2% mean AP on the PASCAL VOC Action dataset, outperforming all other
approaches in the field by a significant margin. Finally, we show that R*CNN is
not limited to action recognition. In particular, R*CNN can also be used to
tackle fine-grained tasks such as attribute classification. We validate this
claim by reporting state-of-the-art performance on the Berkeley Attributes of
People dataset.
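The scoring rule sketched in the abstract can be illustrated in plain Python. This is a toy sketch under stated assumptions, not the authors' implementation: the linear scorers and helper names are hypothetical, and in R*CNN the features come from jointly trained CNN feature maps rather than hand-set vectors. Per action, the primary (person) region's score is combined with the best-scoring secondary (context) region:

```python
# Illustrative sketch of an R*CNN-style score (not the authors' code):
# score(action) = w_primary . f(primary) + max over secondary regions of
#                 w_secondary . f(region), so context is selected per action.

def dot(w, f):
    # Linear score of a feature vector under weights w.
    return sum(wi * fi for wi, fi in zip(w, f))

def rstar_cnn_score(primary_feat, secondary_feats, w_primary, w_secondary):
    primary_score = dot(w_primary, primary_feat)
    # The max picks out the most informative contextual region
    # (e.g. another jogger, the road) for this particular action.
    context_score = max(dot(w_secondary, f) for f in secondary_feats)
    return primary_score + context_score

# Toy example: 3-dim features, two candidate context regions.
primary = [1.0, 0.5, -0.2]
contexts = [[0.1, 0.2, 0.0], [0.9, -0.1, 0.3]]
w_p = [1.0, 1.0, 1.0]
w_s = [0.5, 0.5, 0.5]
print(rstar_cnn_score(primary, contexts, w_p, w_s))
```

Because the max is taken per action, different actions are free to attend to different contextual regions, which is what lets action-specific representations emerge.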
Scene Graph Generation by Iterative Message Passing
Understanding a visual scene goes beyond recognizing individual objects in
isolation. Relationships between objects also constitute rich semantic
information about the scene. In this work, we explicitly model the objects and
their relationships using scene graphs, a visually-grounded graphical structure
of an image. We propose a novel end-to-end model that generates such structured
scene representation from an input image. The model solves the scene graph
inference problem using standard RNNs and learns to iteratively improve its
predictions via message passing. Our joint inference model can take advantage
of contextual cues to make better predictions on objects and their
relationships. The experiments show that our model significantly outperforms
previous methods for generating scene graphs on the Visual Genome dataset and
for inferring support relations on the NYU Depth v2 dataset.
Comment: CVPR 201
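The iterative refinement described in the abstract can be sketched on a toy graph. This is a minimal numerical sketch under assumed update rules: the paper uses GRU units over CNN features, whereas here node (object) and edge (relationship) states are scalars and the mixing weights are arbitrary, chosen only to show how messages flow both ways between nodes and edges:

```python
# Toy sketch of one round of message passing on a scene graph
# (illustrative; update rules and weights are assumptions, not the paper's).

def message_passing_step(node_states, edge_states, edges):
    # edges: list of (subject_idx, object_idx) pairs, one per edge state.
    new_nodes = list(node_states)
    new_edges = list(edge_states)
    for e, (s, o) in enumerate(edges):
        # An edge state is refined using its subject and object nodes...
        new_edges[e] = 0.5 * edge_states[e] + 0.25 * (node_states[s] + node_states[o])
        # ...and each incident node is refined using the edge state,
        # so object and relationship predictions inform each other.
        new_nodes[s] += 0.25 * edge_states[e]
        new_nodes[o] += 0.25 * edge_states[e]
    return new_nodes, new_edges

nodes = [1.0, 0.0]    # e.g. states for "person" and "horse"
edges_feat = [0.8]    # e.g. state for a candidate "riding" relationship
topology = [(0, 1)]   # person -> horse
for _ in range(3):    # a few refinement iterations
    nodes, edges_feat = message_passing_step(nodes, edges_feat, topology)
```

The point of the iteration is exactly the joint-inference claim in the abstract: after a few rounds, each prediction has absorbed contextual evidence from its neighbors in the graph.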
Loss Guided Activation for Action Recognition in Still Images
One significant problem of deep-learning based human action recognition is
that it can be easily misled by the presence of irrelevant objects or
backgrounds. Existing methods commonly address this problem by employing
bounding boxes on the target humans as part of the input, in both training and
testing stages. This requirement of bounding boxes as part of the input is
needed to enable the methods to ignore irrelevant contexts and extract only
human features. However, we consider this solution impractical, since bounding
boxes might not be available. Hence, instead of using a person
bounding box as an input, we introduce a human-mask loss to automatically guide
the activations of the feature maps to the target human who is performing the
action, and hence suppress the activations of misleading contexts. We propose a
multi-task deep learning method that jointly predicts the human action class
and human location heatmap. Extensive experiments demonstrate our approach is
more robust compared to the baseline methods under the presence of irrelevant
misleading contexts. Our method achieves 94.06% and 40.65% (in terms of mAP)
on the Stanford40 and MPII datasets respectively, which are 3.14% and 12.6%
relative improvements over the best results reported in the literature, and
thus set new state-of-the-art results. Additionally, unlike some existing
methods, we eliminate the requirement of using a person bounding box as an
input during testing.
Comment: Accepted to appear in ACCV 201
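The joint objective described in the abstract can be sketched as a weighted sum of two terms. This is a hedged sketch, not the authors' formulation: the loss names, the MSE choice for the heatmap term, and the weight `lam` are illustrative assumptions, and the real model operates on 2-D feature maps rather than flat lists:

```python
# Sketch of a multi-task objective in the spirit of the human-mask loss
# (illustrative; term definitions and the weight lam are assumptions):
# total = action classification loss + lam * heatmap-vs-human-mask loss,
# so feature activations are pulled toward the acting person without
# requiring a bounding box at test time.
import math

def cross_entropy(probs, target_idx):
    # Classification loss for the predicted action distribution.
    return -math.log(probs[target_idx])

def mask_loss(pred_heatmap, human_mask):
    # Mean squared error between the predicted location heatmap
    # and the ground-truth human mask (both flattened here).
    n = len(pred_heatmap)
    return sum((p - m) ** 2 for p, m in zip(pred_heatmap, human_mask)) / n

def total_loss(probs, target_idx, pred_heatmap, human_mask, lam=1.0):
    return cross_entropy(probs, target_idx) + lam * mask_loss(pred_heatmap, human_mask)
```

Since only the loss needs the human mask, the mask is consumed during training; at test time the network predicts both the action class and the location heatmap from the image alone, which is how the bounding-box input is eliminated.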