    Outfit Recommender System

    The online apparel retail market in the United States is worth about seventy-two billion US dollars, and recommendation systems on retail websites generate a large share of that revenue, so improving them can directly increase sales. Traditional clothing recommendation relied on lexical methods, but visual-based recommendation has gained popularity over the past few years. It involves processing a multitude of images with different image-processing techniques, and deep neural networks have been used extensively to handle such volumes; with the help of fast Graphics Processing Units, these networks deliver highly accurate results in a short time. Still, there are ways in which clothing recommendation can be improved. We propose an event-based clothing recommendation system that uses object detection. We train one model to identify nine events/scenarios a user might attend: White Wedding, Indian Wedding, Conference, Funeral, Red Carpet, Pool Party, Birthday, Graduation, and Workout. We train another model to detect clothes, out of fifty-three clothing categories, worn at the event. Object detection achieves a mAP of 84.01. Nearest neighbors of the detected clothes are recommended to the user.
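The final step of the abstract, recommending the nearest neighbors of a detected garment, can be sketched as a simple nearest-neighbor lookup over feature vectors. This is a minimal illustration, not the authors' implementation: the function name `recommend_nearest` and the assumption that garment features are plain Euclidean vectors are hypothetical.

```python
import numpy as np

def recommend_nearest(query_vec, catalog_vecs, k=3):
    """Return indices of the k catalog items closest to the detected garment.

    query_vec:    feature vector of the garment detected in the event image
    catalog_vecs: (n_items, dim) matrix of catalog item features
    """
    # Euclidean distance from the query to every catalog item
    dists = np.linalg.norm(catalog_vecs - query_vec, axis=1)
    return np.argsort(dists)[:k]

# Toy example: four catalog items in a 2-D feature space
catalog = np.array([[0.0, 0.0], [1.0, 1.0], [0.9, 1.1], [5.0, 5.0]])
query = np.array([1.0, 1.0])
top2 = recommend_nearest(query, catalog, k=2)  # the two closest items
```

In practice the feature vectors would come from the detection network's embedding layer rather than raw coordinates, but the ranking logic is the same.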

    A Federated Approach for Fine-Grained Classification of Fashion Apparel

    As online retail services proliferate and become pervasive in modern life, applications that classify fashion apparel features from image data are increasingly indispensable. Online retailers, from leading companies to start-ups, can leverage such applications to increase profit margins and enhance the consumer experience. Many notable schemes have been proposed to classify fashion items, but most focus on basic-level categories such as T-shirts, pants, skirts, shoes, and bags. In contrast to most prior efforts, this paper aims to enable in-depth classification of fashion item attributes within the same category. Beginning with a single dress, we seek to classify the type of dress hem, the hem length, and the sleeve length. The proposed scheme comprises three major stages: (a) localization of a target item from an input image using semantic segmentation, (b) detection of human key points (e.g., point of shoulder) using a pre-trained CNN and a bounding box, and (c) three phases that classify the attributes using a combination of algorithmic approaches and deep neural networks. The experimental results demonstrate that the proposed scheme is highly effective, with every category achieving average precision above 93.02%, and outperforms existing Convolutional Neural Network (CNN)-based schemes. Comment: 11 pages, 4 figures, 5 tables, submitted to IEEE Access (under review).
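Stage (c) combines algorithmic rules with neural networks; one way such a rule could use the key points from stage (b) is to classify sleeve length from where the sleeve ends along the shoulder-to-wrist span. The function name and the 0.33 / 0.75 thresholds below are purely illustrative assumptions, not the paper's actual rules.

```python
def classify_sleeve_length(shoulder_y, wrist_y, sleeve_end_y):
    """Hypothetical keypoint rule: the sleeve end's position along the
    shoulder-to-wrist span (0.0 = shoulder, 1.0 = wrist) picks the class.

    Thresholds are illustrative assumptions, not values from the paper.
    """
    ratio = (sleeve_end_y - shoulder_y) / (wrist_y - shoulder_y)
    if ratio < 0.33:
        return "short"
    elif ratio < 0.75:
        return "three-quarter"
    return "long"

# Sleeve ending 20% of the way down the arm -> short sleeve
label = classify_sleeve_length(shoulder_y=0, wrist_y=100, sleeve_end_y=20)
```

A rule like this is cheap and interpretable, which is presumably why the scheme mixes algorithmic phases with learned classifiers instead of using a CNN for every attribute.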

    Multi-modal joint embedding for fashion product retrieval

    Finding a product in the fashion world can be a daunting task. Every day, e-commerce sites are updated with thousands of images and their associated metadata (textual information), deepening the problem, akin to finding a needle in a haystack. In this paper, we leverage both the images and the textual metadata and propose a joint multi-modal embedding that maps both text and images into a common latent space. Distances in the latent space correspond to similarity between products, allowing us to perform retrieval in this latent space both efficiently and accurately. We train this embedding on large-scale real-world e-commerce data by minimizing the distance between related products and by using auxiliary classification networks that encourage the embedding to have semantic meaning. We compare against existing approaches and show significant improvements in retrieval tasks on a large-scale e-commerce dataset. We also provide an analysis of the different metadata.

    The FASHION Visual Search using Deep Learning Approach

    In recent years, the World Wide Web (WWW) has established itself as a popular source of information, and an effective approach to investigating the vast amount of information available online is essential if we are to make the most of it. Visual data cannot be indexed with text-based indexing algorithms because it is significantly larger and more complex than text; Content-Based Image Retrieval (CBIR) has therefore gained widespread attention in the scientific community. A CBIR system that depends on low-level visible features of the user's input image is difficult for the user to formulate queries for, and such systems do not produce adequate results. To improve task performance, CBIR research relies heavily on effective feature representations and appropriate similarity measures. In particular, the semantic gap between low-level pixels in images and high-level semantics as interpreted by humans has been identified as the root cause of the issue. The study at hand addresses two potentially difficult issues that the e-commerce industry is currently dealing with. First, merchants must contend with manually labeling products and uploading product photographs to the platform for sale; misclassified products then fail to appear in search results. Moreover, customers who do not know the exact keywords but only have a general idea of what they want to buy may hit a bottleneck when placing their orders.
    By allowing buyers to click on a picture of an object and search for related products without typing anything, an image-based search algorithm has the potential to unlock the full potential of e-commerce. Inspired by the recent success of deep learning in computer vision (CV), we set out to test a cutting-edge deep learning method, the Convolutional Neural Network (CNN), for investigating feature representations and similarity measures. The experimental results presented in this study show that a deep machine learning approach can address these issues effectively. A proposed Deep Fashion Convolution Neural Network (DFCNN) model that takes advantage of transfer-learning features is used to classify fashion products and predict their performance. The experimental results for image-based search reveal improved performance on the parameters that were evaluated.
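The transfer-learning step the DFCNN description implies, reusing frozen backbone features and training only a new classification head, can be sketched with a small softmax head. The function names and the toy two-class data are assumptions for illustration; the actual model is a deep CNN trained on fashion images.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def train_head(features, labels, n_classes, lr=0.1, epochs=200):
    """Train only a new softmax head on frozen backbone features
    (the transfer-learning step; gradient descent on cross-entropy)."""
    n, d = features.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[labels]            # one-hot targets
    for _ in range(epochs):
        P = softmax(features @ W)
        W -= lr * features.T @ (P - Y) / n   # cross-entropy gradient
    return W

# Toy "frozen features" for two fashion classes
X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])
W = train_head(X, y, n_classes=2)
preds = softmax(X @ W).argmax(axis=1)
```

Freezing the backbone and fitting only the head is what makes transfer learning cheap: the pretrained features do the heavy lifting, and only a small linear layer is learned on the fashion data.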