Learning context-aware outfit recommendation
With the rapid development and increasing popularity of online shopping for fashion products, fashion recommendation plays an important role in everyday online shopping. Fashion is not only a commodity that is bought and sold but also a visual sign language, a nonverbal medium of communication between wearers and viewers in a community. The key to fashion recommendation is to capture the semantics behind customers’ fit feedback as well as fashion visual style. Existing methods have been built on item similarity inferred from user interactions such as ratings and purchases. By identifying user interests, marketing messages can be delivered efficiently to the right customers. Clothing style carries rich visual information such as color and shape, shapes can be symmetric or asymmetric in structure, and users with different backgrounds perceive clothes differently, which in turn affects how they dress. In this paper, we propose a new method that models user preference jointly from user review information and image region-level features to make more accurate recommendations. Specifically, the proposed method learns compatibility from scene images of fashion or interior design. Extensive experiments have been conducted on several large-scale real-world datasets consisting of millions of users/items and hundreds of millions of interactions. The results indicate that the proposed method effectively improves the performance of both item prediction and outfit matching.
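The joint modeling the abstract describes, combining a review-text signal with attention over image region-level features, can be sketched as follows. All names, dimensions, and the fusion weight `alpha` are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch: score a user-item pair from a review-text embedding
# and per-region image embeddings. The user embedding attends over regions
# (softmax similarity), and the two signals are mixed with a weight alpha.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attend_regions(user_vec, region_vecs):
    """Weight image regions by softmax of their similarity to the user."""
    scores = [dot(user_vec, r) for r in region_vecs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(region_vecs[0])
    # Attention-weighted average of the region embeddings.
    return [sum(w * r[i] for w, r in zip(weights, region_vecs)) for i in range(dim)]

def preference_score(user_vec, review_vec, region_vecs, alpha=0.5):
    """Mix the review-text signal and the attended visual signal."""
    visual = attend_regions(user_vec, region_vecs)
    return alpha * dot(user_vec, review_vec) + (1 - alpha) * dot(user_vec, visual)

user = [0.2, 0.4, 0.1]
review = [0.3, 0.1, 0.5]
regions = [[0.9, 0.0, 0.1], [0.1, 0.8, 0.2]]
print(preference_score(user, review, regions))
```

A real system would learn these embeddings end to end; the sketch only shows how the two evidence sources combine into one score.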
Computational Technologies for Fashion Recommendation: A Survey
Fashion recommendation is a key research field in computational fashion
research and has attracted considerable interest in the computer vision,
multimedia, and information retrieval communities in recent years. Due to the
great demand for applications, various fashion recommendation tasks, such as
personalized fashion product recommendation, complementary (mix-and-match)
recommendation, and outfit recommendation, have been posed and explored in the
literature. The continuing research attention and advances prompt us to look
back at the field in depth for a better understanding. In this paper, we
comprehensively review recent research efforts on fashion recommendation from a
technological perspective. We first introduce fashion recommendation at a macro
level and analyse its characteristics and differences with general
recommendation tasks. We then clearly categorize different fashion
recommendation efforts into several sub-tasks and focus on each sub-task in
terms of its problem formulation, research focus, state-of-the-art methods, and
limitations. We also summarize the datasets proposed in the literature for use
in fashion recommendation studies to give readers a brief overview.
Finally, we discuss several promising directions for future research in this
field. Overall, this survey systematically reviews the development of fashion
recommendation research. It also discusses the current limitations and gaps
between academic research and the real needs of the fashion industry. In the
process, we offer a deep insight into how the fashion industry could benefit
from fashion recommendation technologies.
ICAR: Image-based Complementary Auto Reasoning
Scene-aware Complementary Item Retrieval (CIR) is a challenging task that
requires generating a set of compatible items across domains. Due to its
subjectivity, it is difficult to set up a rigorous standard for both data
collection and learning objectives. To address this task, we propose a visual
compatibility concept composed of similarity (resemblance in color, geometry,
texture, etc.) and complementarity (different items, such as a table and a
chair, completing a group). Based on this notion, we propose a compatibility
learning framework, a category-aware Flexible Bidirectional Transformer (FBT),
for visual scene-based set compatibility reasoning with cross-domain visual
similarity input and auto-regressive complementary item generation. The FBT
consists of an encoder with flexible masking, a category prediction arm, and
an auto-regressive visual embedding prediction arm. Its inputs are
cross-domain visual-similarity-invariant embeddings, which makes the framework
highly generalizable. Furthermore, the FBT learns inter-object compatibility
from a large set of scene images in a self-supervised way. Compared with
state-of-the-art (SOTA) methods, this approach achieves improvements of up to
5.3% and 9.6% in FITB score and 22.3% and 31.8% in SFID on fashion and
furniture, respectively.
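The auto-regressive generation loop the abstract describes, extending a partial scene one compatible item at a time, can be sketched in miniature. The scoring rule below (cosine similarity to the centroid of items chosen so far, skipping categories already filled) is an illustrative stand-in for the FBT transformer, not the paper's model.

```python
# Toy sketch of auto-regressive complementary-item generation: given seed
# items in a scene, repeatedly pick the candidate most "compatible" with
# the current set, never repeating a category.
import math

def cosine(a, b):
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

def centroid(vectors):
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def complete_set(seed_items, candidates, k=2):
    """seed_items / candidates: lists of (category, embedding) pairs."""
    chosen = list(seed_items)
    for _ in range(k):
        used = {cat for cat, _ in chosen}
        c = centroid([emb for _, emb in chosen])
        pool = [(cat, emb) for cat, emb in candidates if cat not in used]
        if not pool:
            break
        # Greedy step: most similar remaining candidate joins the set.
        best = max(pool, key=lambda item: cosine(c, item[1]))
        chosen.append(best)
    return [cat for cat, _ in chosen]

seed = [("sofa", [0.9, 0.1])]
cands = [("table", [0.8, 0.2]), ("lamp", [0.1, 0.9]), ("table", [0.2, 0.8])]
print(complete_set(seed, cands, k=2))  # → ['sofa', 'table', 'lamp']
```

The actual FBT replaces the centroid-plus-cosine rule with a learned category prediction arm and an embedding prediction arm, but the generation loop has this shape.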
Dressing as a Whole: Outfit Compatibility Learning Based on Node-wise Graph Neural Networks
With the rapid development of the fashion market, customers' demand for
fashion recommendation is rising. In this paper, we aim to
investigate a practical problem of fashion recommendation by answering the
question "which item should we select to match with the given fashion items and
form a compatible outfit". The key to this problem is to estimate the outfit
compatibility. Previous works which focus on the compatibility of two items or
represent an outfit as a sequence fail to make full use of the complex
relations among items in an outfit. To remedy this, we propose to represent an
outfit as a graph. In particular, we construct a Fashion Graph, where each node
represents a category and each edge represents interaction between two
categories. Accordingly, each outfit can be represented as a subgraph by
putting items into their corresponding category nodes. To infer the outfit
compatibility from such a graph, we propose Node-wise Graph Neural Networks
(NGNN) which can better model node interactions and learn better node
representations. In NGNN, the node interaction on each edge is different, which
is determined by parameters correlated to the two connected nodes. An attention
mechanism is utilized to calculate the outfit compatibility score with learned
node representations. NGNN can not only be used to model outfit compatibility
from visual or textual modality but also from multiple modalities. We conduct
experiments on two tasks: (1) Fill-in-the-blank: suggesting an item that
matches with existing components of outfit; (2) Compatibility prediction:
predicting the compatibility scores of given outfits. Experimental results
demonstrate the great superiority of our proposed method over others. Comment: 11 pages, accepted by the 2019 World Wide Web Conference (WWW-2019).
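The graph view the abstract describes, items placed on category nodes, messages passed along category edges, and an attention over node states yielding an outfit compatibility score, can be sketched as follows. The fixed edge weights and the norm-based attention head are illustrative assumptions, not NGNN's learned parameters.

```python
# Toy sketch of outfit compatibility on a category graph: each filled
# category node exchanges messages with its neighbors (per-edge weights),
# then a softmax attention over node states produces a single score.
import math

def propagate(nodes, edges, steps=1):
    """nodes: {category: embedding}; edges: {(cat_a, cat_b): weight}."""
    state = {c: list(v) for c, v in nodes.items()}
    for _ in range(steps):
        nxt = {c: list(v) for c, v in state.items()}
        for (a, b), w in edges.items():
            if a in state and b in state:  # edge active only if both nodes are filled
                for i in range(len(state[a])):
                    nxt[a][i] += w * state[b][i]
                    nxt[b][i] += w * state[a][i]
        state = nxt
    return state

def compatibility(nodes, edges):
    state = propagate(nodes, edges)
    # Attention head: softmax over node-state norms, weighted sum of norms.
    norms = {c: math.sqrt(sum(x * x for x in v)) for c, v in state.items()}
    m = max(norms.values())
    exps = {c: math.exp(n - m) for c, n in norms.items()}
    z = sum(exps.values())
    return sum((exps[c] / z) * norms[c] for c in norms)

outfit = {"top": [0.6, 0.2], "bottom": [0.5, 0.3], "shoes": [0.4, 0.4]}
edges = {("top", "bottom"): 0.5, ("bottom", "shoes"): 0.3, ("top", "shoes"): 0.2}
print(compatibility(outfit, edges))
```

In NGNN the per-edge interaction is parameterized by the two connected category nodes and the attention is learned; the sketch only mirrors the subgraph-then-attention structure.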