
    TransNFCM: Translation-Based Neural Fashion Compatibility Modeling

    Identifying mix-and-match relationships between fashion items is an urgent task in fashion e-commerce recommender systems, and solving it can significantly enhance user experience and satisfaction. However, because of the challenge of inferring the rich yet complicated set of compatibility patterns in a large e-commerce corpus of fashion items, this task is still underexplored. Inspired by recent advances in multi-relational knowledge representation learning and deep neural networks, this paper proposes a novel Translation-based Neural Fashion Compatibility Modeling (TransNFCM) framework, which jointly optimizes fashion item embeddings and category-specific complementary relations in a unified space in an end-to-end manner. TransNFCM places items in a unified embedding space where a category-specific relation (category-comp-category) is modeled as a vector translation operating on the embeddings of compatible items from the corresponding categories. In this way, we not only capture the specific notion of compatibility conditioned on a particular pair of complementary categories, but also preserve the global notion of compatibility. We also design a deep fashion item encoder that exploits the complementary nature of visual and textual features to represent fashion products. To the best of our knowledge, this is the first work that uses category-specific complementary relations to model category-aware compatibility between items in a translation-based embedding space. Extensive experiments demonstrate the effectiveness of TransNFCM over state-of-the-art methods on two real-world datasets. Comment: Accepted at the AAAI 2019 conference.
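
    The core scoring idea described above, a learned vector translation between the embeddings of items from a given pair of complementary categories, can be written compactly. The following is a minimal PyTorch sketch with hypothetical class and variable names (it is not the authors' released code), assuming item embeddings come from a separate visual/textual encoder:

    import torch
    import torch.nn as nn

    class TranslationCompatibility(nn.Module):
        def __init__(self, num_categories: int, dim: int = 128):
            super().__init__()
            # One learned relation vector per ordered category pair
            # (category-comp-category).
            self.num_categories = num_categories
            self.relations = nn.Embedding(num_categories * num_categories, dim)

        def forward(self, emb_i, cat_i, emb_j, cat_j):
            # emb_i, emb_j: (batch, dim) item embeddings from an upstream encoder.
            # cat_i, cat_j: (batch,) integer category ids.
            rel = self.relations(cat_i * self.num_categories + cat_j)
            # Compatible pairs should satisfy emb_i + rel ≈ emb_j, so the
            # negative Euclidean distance serves as the compatibility score.
            return -torch.norm(emb_i + rel - emb_j, p=2, dim=-1)

    # Training would typically combine this score with a margin ranking loss
    # between a compatible pair and a sampled incompatible pair, e.g.
    #   loss = torch.relu(margin - score_pos + score_neg).mean()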

    Computational Technologies for Fashion Recommendation: A Survey

    Fashion recommendation is a key research field in computational fashion research and has attracted considerable interest from the computer vision, multimedia, and information retrieval communities in recent years. Driven by the great demand for applications, various fashion recommendation tasks, such as personalized fashion product recommendation, complementary (mix-and-match) recommendation, and outfit recommendation, have been posed and explored in the literature. The continuing research attention and advances impel us to look back and examine the field in depth for a better understanding. In this paper, we comprehensively review recent research efforts on fashion recommendation from a technological perspective. We first introduce fashion recommendation at a macro level and analyse its characteristics and differences from general recommendation tasks. We then categorize fashion recommendation efforts into several sub-tasks and examine each sub-task in terms of its problem formulation, research focus, state-of-the-art methods, and limitations. We also summarize the datasets proposed in the literature for fashion recommendation studies to give readers a brief overview. Finally, we discuss several promising directions for future research in this field. Overall, this survey systematically reviews the development of fashion recommendation research, discusses the current limitations and the gaps between academic research and the real needs of the fashion industry, and offers insight into how the fashion industry could benefit from fashion recommendation technologies.

    Formalizing Multimedia Recommendation through Multimodal Deep Learning

    Recommender systems (RSs) offer personalized navigation experiences on online platforms, but recommendation remains a challenging task, particularly in specific scenarios and domains. Multimodality can help tap into richer information sources and construct more refined user/item profiles for recommendation. However, the existing literature lacks a shared and universal schema for modeling and solving the recommendation problem through the lens of multimodality. This work aims to formalize a general multimodal schema for multimedia recommendation. It provides a comprehensive literature review of multimodal approaches for multimedia recommendation from the last eight years, outlines the theoretical foundations of a multimodal pipeline, and demonstrates its rationale by applying it to selected state-of-the-art approaches. The work also conducts a benchmarking analysis of recent algorithms for multimedia recommendation within Elliot, a rigorous framework for evaluating recommender systems. The main aim is to provide guidelines for designing and implementing the next generation of multimodal approaches in multimedia recommendation.

    Leveraging Multimodal Features and Item-level User Feedback for Bundle Construction

    Automatic bundle construction is a crucial prerequisite for various bundle-aware online services. Previous approaches are mostly designed to model the bundling strategy of existing bundles. However, it is hard to acquire a large-scale, well-curated bundle dataset, especially for platforms that have not offered bundle services before. Even for platforms with mature bundle services, many items are included in few or even zero bundles, which gives rise to sparsity and cold-start challenges in bundle construction models. To tackle these issues, we leverage multimodal features, item-level user feedback signals, and bundle composition information to achieve a comprehensive formulation of bundle construction. Nevertheless, this formulation poses two new technical challenges: 1) how to learn effective representations by optimally unifying multiple features, and 2) how to address the modality missing, noise, and sparsity problems induced by incomplete query bundles. To address these challenges, we propose a Contrastive Learning-enhanced Hierarchical Encoder (CLHE). Specifically, we use self-attention modules to combine the multimodal and multi-item features, and then leverage both item- and bundle-level contrastive learning to enhance representation learning and thus counter the modality missing, noise, and sparsity problems. Extensive experiments on four datasets in two application domains demonstrate that our method outperforms a list of SOTA methods. The code and dataset are available at https://github.com/Xiaohao-Liu/CLHE.
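
    The two mechanisms named in the abstract, self-attention over the items of a (possibly incomplete) bundle and a contrastive objective between corrupted views of the same bundle, can be sketched as follows. This is an illustrative PyTorch sketch with assumed shapes and names, not the released CLHE implementation:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class BundleEncoder(nn.Module):
        def __init__(self, dim: int = 64, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

        def forward(self, item_feats, pad_mask=None):
            # item_feats: (batch, max_items, dim) fused multimodal item features;
            # pad_mask: (batch, max_items) True where an item slot is padding.
            out, _ = self.attn(item_feats, item_feats, item_feats,
                               key_padding_mask=pad_mask)
            # Pool item representations into one bundle vector (a masked mean
            # would exclude padded slots; omitted here for brevity).
            return out.mean(dim=1)

    def info_nce(z1, z2, temperature: float = 0.2):
        # z1, z2: (batch, dim) encodings of two augmented views of the same
        # bundles (e.g. with items or modalities randomly dropped).
        z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
        logits = z1 @ z2.t() / temperature      # similarity of every bundle pair
        labels = torch.arange(z1.size(0))       # positives lie on the diagonal
        return F.cross_entropy(logits, labels)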

    Multi-modal Extreme Classification

    This paper develops the MUFIN technique for extreme classification (XC) tasks with millions of labels, where datapoints and labels are endowed with visual and textual descriptors. Applications of MUFIN to product-to-product recommendation and bid-query prediction over several million products are presented. Contemporary multi-modal methods frequently rely on purely embedding-based approaches. XC methods, on the other hand, utilize classifier architectures to offer higher accuracy than embedding-only methods but mostly focus on text-based categorization tasks. MUFIN bridges this gap by reformulating multi-modal categorization as an XC problem with several million labels. This presents the twin challenges of developing multi-modal architectures that offer embeddings expressive enough to allow accurate categorization over millions of labels, and of designing training and inference routines that scale logarithmically in the number of labels. MUFIN develops an architecture based on cross-modal attention and trains it in a modular fashion using pre-training and positive and negative mining. A novel product-to-product recommendation dataset, MM-AmazonTitles-300K, containing over 300K products was curated from publicly available amazon.com listings, with each product endowed with a title and multiple images. On all datasets, MUFIN offered at least 3% higher accuracy than leading text-based, image-based, and multi-modal techniques. Code for MUFIN is available at https://github.com/Extreme-classification/MUFIN.
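
    One common way to make inference scale sub-linearly in the number of labels is a shortlist-and-rerank scheme. The snippet below is an illustrative sketch of that general idea with hypothetical names (not MUFIN's code): embed the query, shortlist a few hundred candidate labels by embedding similarity, and evaluate per-label classifiers only on that shortlist.

    import numpy as np

    def predict_topk(x_emb, label_embs, label_clfs, shortlist=200, k=5):
        # x_emb: (d,) query embedding; label_embs, label_clfs: (L, d) arrays.
        # Stage 1: cheap candidate shortlist by embedding similarity (exact here;
        # an ANN index such as HNSW would replace this at million-label scale).
        sims = label_embs @ x_emb
        cand = np.argpartition(-sims, shortlist)[:shortlist]
        # Stage 2: score only the shortlisted per-label classifiers.
        scores = label_clfs[cand] @ x_emb
        return cand[np.argsort(-scores)[:k]]

    # Usage with random data (100k labels, 64-dim embeddings):
    #   d, L = 64, 100_000
    #   x, E, W = np.random.randn(d), np.random.randn(L, d), np.random.randn(L, d)
    #   print(predict_topk(x, E, W))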

    A Bibliometric Survey of Fashion Analysis using Artificial Intelligence

    In the 21st century, clothing fashion has become an inevitable part of every individual's life, as it is considered a way to express one's personality to the outside world. Traditional fashion business models are currently experiencing a paradigm shift from experience-based business strategy to data-driven intelligent business improvisation. Artificial Intelligence is acting as a catalyst for infusing data intelligence into the fashion industry, fostering business areas such as supply chain management, trend analysis, fashion recommendation, sales forecasting, and digitized shopping experiences. The field of "Fashion AI" is still an emerging research area because fashion data is a multifaceted entity that can take the form of images, video, text, or numerical values, which makes it a challenging research arena. There is a paucity of a common study that can provide a bird's-eye view of the research efforts and directions. In this paper, the authors present a bibliometric survey of the AI-based fashion analysis domain based on the Scopus database. The study was conducted by retrieving 581 Scopus research papers published from 1975 to 2020 and analysing them for critical insights such as publication volume, co-authorship networks, citation analysis, and demographic research distribution. The study revealed that significant contributions are made via concept propositions at conferences, with some papers published in journals. However, there remains considerable scope for research on improving the fashion industry with AI techniques.
