Collecting The Puzzle Pieces: Disentangled Self-Driven Human Pose Transfer by Permuting Textures
Human pose transfer synthesizes new view(s) of a person for a given pose.
Recent work achieves this via self-reconstruction, which disentangles a
person's pose and texture information by breaking the person down into parts,
then recombines them for reconstruction. However, part-level disentanglement
preserves some pose information that can create unwanted artifacts. In this
paper, we propose Pose Transfer by Permuting Textures (PT), an approach for
self-driven human pose transfer that disentangles pose from texture at the
patch-level. Specifically, we remove pose from an input image by permuting
image patches so only texture information remains. Then we reconstruct the
input image by sampling from the permuted textures for patch-level
disentanglement. To reduce noise and recover clothing shape information from
the permuted patches, we employ encoders with multiple kernel sizes in a triple
branch network. On DeepFashion and Market-1501, PT reports significant
gains on automatic metrics over other self-driven methods, and even outperforms
some fully-supervised methods. A user study also reports images generated by
our method are preferred in 68% of cases over self-driven approaches from prior
work. Code is available at https://github.com/NannanLi999/pt_square.Comment: Accepted to ICCV 202