1 research output found

    Pose Guided Attention for Multi-label Fashion Image Classification

    We propose a compact framework with guided attention for multi-label classification in the fashion domain. Our visual semantic attention model (VSAM) is supervised by automatic pose extraction, creating a discriminative feature space. VSAM outperforms the state of the art on an in-house dataset and performs on par with previous works on the DeepFashion dataset, even without using any landmark annotations. Additionally, we show that our semantic attention module brings robustness to large quantities of wrong annotations and provides more interpretable results. Comment: Published at ICCV 2019 Workshop on Computer Vision for Fashion, Art and Design
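    The abstract describes the core mechanism only at a high level: a spatial attention map over image features, supervised by pose information, feeding a multi-label classifier. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch; the module name, tensor shapes, loss choices, and weighting are assumptions for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of pose-supervised spatial attention (not the authors' code).
# Assumes CNN backbone features and a pose keypoint heatmap from an off-the-shelf
# pose estimator; all names, shapes, and losses are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PoseGuidedAttention(nn.Module):
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        # 1x1 conv predicts a single-channel spatial attention map.
        self.attn_conv = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.classifier = nn.Linear(in_channels, num_classes)

    def forward(self, feats, pose_heatmap=None):
        # feats: (B, C, H, W) backbone features
        # pose_heatmap: (B, 1, H, W) target attention derived from pose keypoints
        attn = torch.sigmoid(self.attn_conv(feats))            # (B, 1, H, W)
        attended = feats * attn                                 # re-weight features spatially
        pooled = F.adaptive_avg_pool2d(attended, 1).flatten(1)  # (B, C)
        logits = self.classifier(pooled)                        # multi-label logits

        attn_loss = None
        if pose_heatmap is not None:
            # Supervise the attention map with the pose-derived heatmap
            # (MSE used here as a stand-in supervision term).
            attn_loss = F.mse_loss(attn, pose_heatmap)
        return logits, attn_loss


# Usage sketch: combine multi-label BCE with the attention supervision term.
if __name__ == "__main__":
    model = PoseGuidedAttention(in_channels=512, num_classes=20)
    feats = torch.randn(2, 512, 14, 14)
    pose = torch.rand(2, 1, 14, 14)
    labels = torch.randint(0, 2, (2, 20)).float()
    logits, attn_loss = model(feats, pose)
    loss = F.binary_cross_entropy_with_logits(logits, labels) + 0.1 * attn_loss
```

    At inference time no pose estimator would be needed under this design, since the heatmap only acts as a training-time supervision signal on the attention map; this is consistent with the abstract's claim that no landmark annotations are required, though the exact training setup is not specified here.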