Creating Capsule Wardrobes from Fashion Images
We propose to automatically create capsule wardrobes. Given an inventory of
candidate garments and accessories, the algorithm must assemble a minimal set
of items that provides maximal mix-and-match outfits. We pose the task as a
subset selection problem. To permit efficient subset selection over the space
of all outfit combinations, we develop submodular objective functions capturing
the key ingredients of visual compatibility, versatility, and user-specific
preference. Since adding garments to a capsule only expands its possible
outfits, we devise an iterative approach to allow near-optimal submodular
function maximization. Finally, we present an unsupervised approach to learn
visual compatibility from "in the wild" full body outfit photos; the
compatibility metric translates well to cleaner catalog photos and improves
over existing methods. Our results on thousands of pieces from popular fashion
websites show that automatic capsule creation has the potential to mimic skilled
fashionistas in assembling flexible wardrobes, while being significantly more
scalable.
Comment: Accepted to CVPR 201
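The near-optimal submodular maximization described in the abstract is in the spirit of the classic greedy algorithm, which adds at each step the item with the largest marginal gain. Below is a minimal sketch under a toy monotone submodular "coverage" objective (garments covering hypothetical outfit attributes); it is a generic illustration, not the paper's actual objective functions.

```python
# Greedy maximization of a monotone submodular set function (toy sketch).
def greedy_capsule(items, value, k):
    """Select up to k items, each step adding the item with the largest marginal gain."""
    selected = []
    for _ in range(k):
        best, best_gain = None, 0.0
        for it in items:
            if it in selected:
                continue
            gain = value(selected + [it]) - value(selected)
            if gain > best_gain:
                best, best_gain = it, gain
        if best is None:  # no item adds positive value; stop early
            break
        selected.append(best)
    return selected

# Toy objective: a capsule's value is the number of distinct attributes it covers.
coverage = {
    "jeans": {"casual", "bottom"},
    "blazer": {"formal", "top"},
    "tee": {"casual", "top"},
    "skirt": {"formal", "bottom", "casual"},
}
def value(capsule):
    return len(set().union(*(coverage[i] for i in capsule)))

print(greedy_capsule(list(coverage), value, 2))  # → ['skirt', 'blazer']
```

Coverage functions like this one are monotone and submodular, which is what gives the greedy strategy its (1 - 1/e) approximation guarantee.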
CLOTH3D: Clothed 3D Humans
This work presents CLOTH3D, the first large-scale synthetic dataset of 3D
clothed human sequences. CLOTH3D contains wide variability in garment type,
topology, shape, size, tightness, and fabric. Clothes are simulated on top of
thousands of different pose sequences and body shapes, generating realistic
cloth dynamics. Alongside the dataset, we provide a generative model for cloth
generation: a Conditional Variational Auto-Encoder (CVAE) based on graph
convolutions (GCVAE) that learns garment latent spaces. This allows for
realistic generation of 3D garments on top of the SMPL model for any pose and
shape.
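A graph convolution of the kind a GCVAE encoder might stack treats garment mesh vertices as graph nodes: each vertex's features are averaged over its neighbors and linearly transformed. The sketch below shows one such layer on a toy 4-vertex mesh patch; the adjacency, features, and weights are illustrative assumptions, not the CLOTH3D architecture.

```python
import numpy as np

def graph_conv(X, A, W):
    """One graph-convolution layer.
    X: (n, f) vertex features; A: (n, n) adjacency; W: (f, f_out) weights."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))   # row-normalize the aggregation
    return np.maximum(D_inv @ A_hat @ X @ W, 0.0)  # aggregate, transform, ReLU

# Toy "garment patch": 4 mesh vertices connected in a square.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.arange(8, dtype=float).reshape(4, 2)  # 2-D features per vertex
W = np.eye(2)                                # identity weights, for clarity
H = graph_conv(X, A, W)
print(H.shape)  # → (4, 2)
```

Because the layer operates on an explicit adjacency rather than a pixel grid, it applies to meshes of varying topology, which is what makes graph convolutions a natural fit for garments of different types and shapes.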
LARGE SCALE VISUAL RECOGNITION OF CLOTHING, PEOPLE AND STYLES
Clothing recognition is a societally and commercially important yet extremely challenging problem, owing to large variations in clothing appearance, layering, style, body shape, and pose. In this dissertation, we propose new computational vision approaches that learn to represent and recognize clothing items in images.

First, we present an effective method for parsing clothing in fashion photographs, labeling the regions of an image with their clothing categories. We then extend our approach to tackle the clothing parsing problem with a data-driven methodology: for a query image, we retrieve similar styles from a large database of tagged fashion images and use these examples to recognize clothing items in the query. Along with our novel large fashion dataset, we also present intriguing initial results on using clothing estimates to improve human pose estimation.

Second, we examine questions related to fashion styles and identifying the clothing elements associated with each style. We first design an online competitive style rating game, Hipster Wars, to crowdsource reliable human judgments of clothing styles, and use it to collect a new dataset of outfits with associated ratings for different styles. We then build visual style descriptors and train models that classify clothing styles and identify the clothing elements that are most discriminative for each style.

Finally, we define a new task, Exact Street to Shop, where the goal is to match a real-world example of a garment item to the same exact garment in an online shop. This is extremely challenging because of the visual differences between street photos, taken of people wearing clothing in everyday uncontrolled settings, and online shop photos, captured by professionals in highly controlled settings.
We introduce a novel large dataset for this application, collected from the web, and present a deep-learning-based similarity network that can compare clothing items across visual domains.
Doctor of Philosophy
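The core idea behind cross-domain matching of this kind is to embed street and shop photos into a shared space and rank shop items by similarity to the query. The sketch below uses random vectors as stand-ins for the output of a learned similarity network; the embedding size and the cosine-similarity ranking are illustrative assumptions, not the dissertation's model.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def rank_shop_items(street_emb, shop_embs):
    """Return shop-item indices sorted from most to least similar to the query."""
    scores = [cosine_sim(street_emb, e) for e in shop_embs]
    return sorted(range(len(shop_embs)), key=lambda i: -scores[i])

rng = np.random.default_rng(0)
street = rng.normal(size=64)                    # query: embedded street photo
shop = [rng.normal(size=64) for _ in range(5)]  # gallery: embedded shop photos
shop[3] = street + 0.05 * rng.normal(size=64)   # a near-duplicate "exact match"
print(rank_shop_items(street, shop)[0])         # the near-duplicate ranks first
```

In a trained system the embeddings would come from the two branches of the similarity network, so that the same garment photographed in uncontrolled street settings and in controlled shop settings lands close together in the shared space.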