Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation
This paper proposes a new hybrid architecture that consists of a deep
Convolutional Network and a Markov Random Field. We show how this architecture
is successfully applied to the challenging problem of articulated human pose
estimation in monocular images. The architecture can exploit structural domain
constraints such as geometric relationships between body joint locations. We
show that joint training of these two model paradigms improves performance and
allows us to significantly outperform existing state-of-the-art techniques.
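A minimal sketch of the joint-training idea is given below, assuming a PyTorch-style implementation; the detector layers, the number of joints, and the spatial-model kernel size are illustrative stand-ins, not the authors' actual architecture.

```python
import torch
import torch.nn as nn

NUM_JOINTS = 14  # assumed number of body joints (illustrative)

class PartDetector(nn.Module):
    """Convolutional part detector: one unary heat-map per joint."""
    def __init__(self, num_joints=NUM_JOINTS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(128, num_joints, 1),
        )

    def forward(self, x):
        return self.features(x)

class SpatialModel(nn.Module):
    """MRF-like spatial prior approximated by wide convolutions over the heat-maps,
    so geometric relationships between joints can be learned end-to-end."""
    def __init__(self, num_joints=NUM_JOINTS, kernel=15):
        super().__init__()
        # every output joint sees every input joint's heat-map (pairwise terms)
        self.pairwise = nn.Conv2d(num_joints, num_joints, kernel, padding=kernel // 2)

    def forward(self, unaries):
        return self.pairwise(unaries)

detector, spatial = PartDetector(), SpatialModel()
img = torch.randn(2, 3, 256, 256)
refined = spatial(detector(img))   # single forward pass through both stages
loss = refined.mean()              # placeholder loss for the sketch
loss.backward()                    # gradients flow through detector and spatial model jointly
```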
Progressive One-shot Human Parsing
Prior human parsing models are limited to parsing humans into classes
pre-defined in the training data, which is not flexible enough to generalize to
unseen classes, e.g., new clothing in fashion analysis. In this paper, we
propose a new problem named one-shot human parsing (OSHP), which requires
parsing a human into an open set of reference classes defined by any single
reference example. During training, only the base classes defined in the
training set are exposed, and these may overlap with part of the reference
classes. We devise a novel Progressive One-shot Parsing network (POPNet) to
address two critical challenges, i.e., testing bias and small sizes. POPNet consists of two
collaborative metric learning modules named Attention Guidance Module and
Nearest Centroid Module, which can learn representative prototypes for base
classes and quickly transfer the ability to unseen classes during testing,
thereby reducing testing bias. Moreover, POPNet adopts a progressive human
parsing framework that can incorporate the learned knowledge of parent classes
at the coarse granularity to help recognize the descendant classes at the fine
granularity, thereby handling the small-size issue. Experiments on the ATR-OS
benchmark tailored for OSHP demonstrate that POPNet outperforms other representative
one-shot segmentation models by large margins and establishes a strong
baseline. Source code can be found at
https://github.com/Charleshhy/One-shot-Human-Parsing.
Comment: Accepted at AAAI 2021. 9 pages, 4 figures.
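As a rough illustration of the prototype-based idea behind the Nearest Centroid Module, here is a minimal sketch under assumed feature shapes and a cosine-similarity metric; the function names and toy encoder features are hypothetical, and the real POPNet (including the Attention Guidance Module and the progressive parsing framework) is considerably more involved.

```python
import torch
import torch.nn.functional as F

def class_prototypes(support_feat, support_mask, num_classes):
    """Average pixel embeddings of the single reference example per class.
    support_feat: (C, H, W) embeddings; support_mask: (H, W) integer labels."""
    protos = []
    for c in range(num_classes):
        m = (support_mask == c).float()                           # (H, W)
        denom = m.sum().clamp(min=1.0)
        protos.append((support_feat * m).sum(dim=(1, 2)) / denom)  # (C,)
    return torch.stack(protos)                                    # (num_classes, C)

def nearest_centroid_parse(query_feat, prototypes):
    """Label each query pixel by its nearest class prototype (cosine similarity).
    query_feat: (C, H, W) -> (H, W) predicted class map."""
    C, H, W = query_feat.shape
    q = F.normalize(query_feat.reshape(C, -1), dim=0)   # (C, H*W)
    p = F.normalize(prototypes, dim=1)                  # (K, C)
    scores = p @ q                                      # (K, H*W)
    return scores.argmax(dim=0).reshape(H, W)

# toy usage with hypothetical 32-dim encoder features and 5 reference classes
feat_s = torch.randn(32, 64, 64)
mask_s = torch.randint(0, 5, (64, 64))
feat_q = torch.randn(32, 64, 64)
pred = nearest_centroid_parse(feat_q, class_prototypes(feat_s, mask_s, 5))
```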
Joint Inference in Weakly-Annotated Image Datasets via Dense Correspondence
We present a principled framework for inferring pixel labels in weakly-annotated image datasets. Most previous example-based approaches in computer vision rely on a large corpus of densely labeled images. However, for large, modern image datasets, such labels are expensive to obtain and are often unavailable. We establish a large-scale graphical model spanning all labeled and unlabeled images, then solve it to infer pixel labels jointly for all images in the dataset while enforcing consistent annotations over similar visual patterns. This model requires significantly less labeled data and assists in resolving ambiguities by propagating inferred annotations from images with stronger local visual evidence to images with weaker local evidence. We apply our proposed framework to two computer vision problems, namely image annotation with semantic segmentation, and object discovery and co-segmentation (segmenting multiple images containing a common object). Extensive numerical evaluations and comparisons show that our method consistently outperforms the state of the art in automatic annotation and semantic labeling while requiring significantly less labeled data. In contrast to previous co-segmentation techniques, our method manages to discover and segment objects well even in the presence of substantial amounts of noise images (images not containing the common object), as is typical for datasets collected from Internet search.
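The propagation intuition can be sketched as a simple graph-based label-spreading loop over image regions; this is a hedged illustration only, not the paper's actual large-scale graphical-model solver, and the affinity matrix, anchoring weight, and region granularity below are assumptions made for the example.

```python
import numpy as np

def propagate_labels(affinity, seed_labels, num_classes, iters=50, alpha=0.8):
    """Spread weak annotations over a correspondence graph.
    affinity: (N, N) row-normalized similarities between regions across images.
    seed_labels: (N,) integer labels, -1 where no annotation is available."""
    n = affinity.shape[0]
    seeds = np.zeros((n, num_classes))
    known = seed_labels >= 0
    seeds[known, seed_labels[known]] = 1.0
    scores = seeds.copy()
    for _ in range(iters):
        # pull label mass from corresponding regions while staying anchored
        # to the weak annotations (stronger evidence dominates weaker evidence)
        scores = alpha * affinity @ scores + (1 - alpha) * seeds
    return scores.argmax(axis=1)

# toy usage: 4 regions in a chain, regions 0 and 3 weakly labelled, rest inferred
A = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
labels = propagate_labels(A, np.array([0, -1, -1, 1]), num_classes=2)
```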
- …